Spring Batch Samples
Overview
A batch job may consist of one or more steps, and each step must define a reader and a writer. Spring Batch also has a simple interface called `Tasklet` which can be used for a single operation, for example a web service call. Several samples use a partitioner, a bean class which implements the Spring Batch interface `org.springframework.batch.core.partition.support.Partitioner`. Each worker step then reads a file, processes records and writes to the database using chunk-based processing, where the chunk size is specified by the commit-interval attribute.
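The chunk-based processing mentioned above can be illustrated outside of Spring Batch's actual API. The following plain-Java sketch (class and method names are made up for illustration) buffers items as a reader would hand them out and "commits" every `commitInterval` items:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of chunk-oriented processing: read items one at a
// time, buffer them, and "commit" every commitInterval items. This is
// an illustration of the pattern, not Spring Batch's actual API.
public class ChunkSketch {

    static List<List<String>> process(List<String> input, int commitInterval) {
        List<List<String>> committedChunks = new ArrayList<>();
        List<String> chunk = new ArrayList<>();
        for (String item : input) {          // reader: one item per iteration
            chunk.add(item.toUpperCase());   // processor: trivial transformation
            if (chunk.size() == commitInterval) {
                committedChunks.add(chunk);  // writer + commit of the whole chunk
                chunk = new ArrayList<>();
            }
        }
        if (!chunk.isEmpty()) {
            committedChunks.add(chunk);      // final partial chunk
        }
        return committedChunks;
    }

    public static void main(String[] args) {
        List<List<String>> chunks = process(
                Arrays.asList("a", "b", "c", "d", "e", "f", "g"), 5);
        System.out.println(chunks.size());       // 2 commits: 5 items + 2 items
        System.out.println(chunks.get(0).size()); // 5
    }
}
```

With seven input items and a commit interval of 5, the framework-style loop produces two commits: one full chunk of five items and a final partial chunk of two.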
There is considerable variability in the types of input and output formats in batch jobs. There are also a number of options to consider in terms of the types of strategies that will be used to handle skips, recovery, and statistics. However, when approaching a new batch job there are a few standard questions to answer to help determine how the job will be written and how to use the services offered by the Spring Batch framework. Consider the following:
- How do I configure this batch job? In the samples the pattern is to follow the convention of `[nameOf]Job.xml`. Each sample identifies the XML definition used to configure the job. Job configurations that use a common execution environment have many common items in their respective configurations.
- What is the input source? Each sample batch job identifies its input source.
- What is my output source? Each sample batch job identifies its output source.
- How are records read and validated from the input source? This refers to the input type and its format (e.g. flat file with fixed position, comma separated or XML, etc.)
- What is the policy of the job if an input record fails the validation step? The most important aspect is whether the record can be skipped so that processing can continue.
- How do I process the data and write to the output source? How and what business logic is being applied to the processing of a record?
- How do I recover from an exception while operating on the output source? There are numerous recovery strategies that can be applied to handling errors on transactional targets. The samples provide a feeling for some of the choices.
- Can I restart the job and if so which strategy can I use to restart the job? The samples show some of the options available to jobs and what the decision criteria are for the respective choices.
Here is a list of samples with checks to indicate which features each one demonstrates:
Job/Feature | skip | retry | restart | automatic mapping | asynch launch | validation | delegation | write behind | non-sequential | asynch process | filtering |
---|---|---|---|---|---|---|---|---|---|---|---|
Adhoc Loop and JMX Demo | X | ||||||||||
Amqp Job Sample | X | ||||||||||
BeanWrapperMapper Sample | X | ||||||||||
Composite ItemWriter Sample | X | ||||||||||
Customer Filter Sample | X | ||||||||||
Delegating Sample | X | ||||||||||
Football Job | |||||||||||
Header Footer Sample | |||||||||||
Hibernate Sample | X | X | |||||||||
IO Sample Job | X | X | |||||||||
Infinite Loop Sample | X | X | |||||||||
Loop Flow Sample | |||||||||||
Multiline | X | ||||||||||
Multiline Order Job | X | ||||||||||
Parallel Sample | X | ||||||||||
Partitioning Sample | X | ||||||||||
Remote Chunking Sample | X | ||||||||||
Quartz Sample | X | ||||||||||
Restart Sample | X | ||||||||||
Retry Sample | X | ||||||||||
Skip Sample | X | ||||||||||
Trade Job | X |
The IO Sample Job has a number of special instances that show different IO features using the same job configuration but with different readers and writers:
Job/Feature | delimited input | fixed-length input | xml input | db paging input | db cursor input | delimited output | fixed-length output | xml output | db output | multiple files | multi-line | multi-record |
---|---|---|---|---|---|---|---|---|---|---|---|---|
delimited | x | x | ||||||||||
Fixed Length Import Job | x | x | ||||||||||
Hibernate Sample | x | x | ||||||||||
Jdbc Cursor and Batch Update | x | x | ||||||||||
jpa | x | x | ||||||||||
Multiline | x | x | x | |||||||||
multiRecordtype | x | x | x | |||||||||
multiResource | x | x | x | |||||||||
XML Input Output | x | x |
Common Sample Source Structures
The easiest way to launch a sample job in Spring Batch is to open up a unit test in your IDE and run it directly. Each sample has a separate test case in the `org.springframework.batch.samples` package. The name of the test case is `[JobName]FunctionalTests`. Note: the test cases do not ship in the samples jar file, but they are in the .zip distribution and in the source code, which you can download using Subversion (or browse in a web browser if you need to). See here for a link to the source code repository.

You can also use the same Spring configuration as the unit test to launch the job via a main method in `CommandLineJobRunner`. The samples source code has an Eclipse launch configuration to do this, taking the hassle out of setting up a classpath to run the job.

Adhoc Loop and JMX Demo
This job is simply an infinite loop. It runs forever so it is useful for testing features to do with stopping and starting jobs. It is used, for instance, as one of the jobs that can be run from JMX using the Eclipse launch configuration 'jmxLauncher'.

The JMX launcher uses an additional XML configuration file (adhoc-job-launcher-context.xml) to set up a `JobOperator` for running jobs asynchronously (i.e. in a background thread). This follows the same pattern as the Quartz sample, so see that section for more details of the `JobLauncher` configuration.

The rest of the configuration for this demo consists of exposing some components from the application context as JMX managed beans. The `JobOperator` is exposed so that it can be controlled from a remote client (such as JConsole from the JDK) which does not have Spring Batch on the classpath. See the Spring Core Reference Guide for more details on how to customise the JMX configuration.

Jdbc Cursor and Batch Update
The purpose of this sample is to show the usage of the `JdbcCursorItemReader` and the `JdbcBatchItemWriter` to make efficient updates to a database table.

The `JdbcBatchItemWriter` accepts a special form of `PreparedStatementSetter` as a (mandatory) dependency. This is responsible for copying fields from the item to be written to a `PreparedStatement` matching the SQL query that has been injected. The implementation of the `CustomerCreditUpdatePreparedStatementSetter` shows best practice of keeping all the information needed for the execution in one place, since it contains a static constant value (`QUERY`) which is used to configure the query for the writer.

Amqp Job Sample
This sample shows the use of Spring Batch to write to an `AmqpItemWriter`. The `AmqpItemReader` and Writer were contributed by Chris Schaefer. It is modeled after the `JmsItemReader` / Writer implementations, which are popular models for remote chunking. It leverages the `AmqpTemplate`.

This example requires the environment to have a copy of RabbitMQ installed and running. The standard dashboard can be used to see the traffic from the `MessageProducer` to the `AmqpItemWriter`. Make sure you launch the `MessageProducer` before launching the test.

BeanWrapperMapper Sample
This sample shows the use of automatic mapping from fields in a file to a domain object. The `Trade` and `Person` objects needed by the job are created from the Spring configuration using prototype beans, and then their properties are set using the `BeanWrapperFieldSetMapper`, which sets properties of the prototype according to the field names in the file.

Nested property paths are resolved in the same way as normal Spring binding occurs, but with a little extra leeway in terms of spelling and capitalisation. Thus for instance, the `Trade` object has a property called `customer` (lower case), but the file has been configured to have a column name `CUSTOMER` (upper case), and the mapper will accept the values happily. Underscores instead of camel-casing (e.g. `CREDIT_CARD` instead of `creditCard`) also work.

Composite ItemWriter Sample
This shows a common use case using a composite pattern, composing instances of other framework readers or writers. It is also quite common for business-specific readers or writers to wrap off-the-shelf components in a similar way.

In this job the composite pattern is used just to make duplicate copies of the output data. The delegates for the `CompositeItemWriter` have to be separately registered as streams in the `Step` where they are used, in order for the step to be restartable. This is a common feature of all delegate patterns.

Customer Filter Sample
This shows the use of the `ItemProcessor` to filter out items by returning null. When an item is filtered it leads to an increment in the `filterCount` in the step execution.

Delegating Sample
This sample shows the delegate pattern again, and also the `ItemReaderAdapter` which is used to adapt a POJO to the `ItemReader` interface.

Fixed Length Import Job
The goal is to demonstrate a typical scenario of importing data from a fixed-length file to a database.

This job shows a typical scenario, where reading input data and processing the data are cleanly separated. The data provider is responsible for reading input and mapping each record to a domain object, which is then passed to the module processor. The module processor handles the processing of the domain objects; in this case it only writes them to the database.

In this example we are using a simple fixed-length record structure that can be found in the project at `data/iosample/input`. A considerable amount of thought can go into designing the folder structures for batch file management. The fixed-length records look like this:
Looking back to the configuration file you will see where this is documented in the property of the `FixedLengthTokenizer`. You can infer the following properties:

FieldName | Length |
---|---|
ISIN | 12 |
Quantity | 3 |
Price | 5 |
Customer | 9 |
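The column widths above can be illustrated with a plain-Java sketch of fixed-length tokenizing. This is not the framework's `FixedLengthTokenizer`, and the sample record below is made up for illustration:

```java
public class FixedLengthSketch {
    // Sketch of fixed-length tokenizing using the column widths from the
    // table above (ISIN 12, Quantity 3, Price 5, Customer 9). The record
    // in main() is an invented example, not taken from the sample data.
    static String[] tokenize(String line) {
        int[] lengths = {12, 3, 5, 9};
        String[] fields = new String[lengths.length];
        int pos = 0;
        for (int i = 0; i < lengths.length; i++) {
            fields[i] = line.substring(pos, pos + lengths[i]).trim();
            pos += lengths[i];
        }
        return fields;
    }

    public static void main(String[] args) {
        //              ISIN(12)       Qty(3)  Price(5)  Customer(9)
        String record = "UK21341EAH45" + "978" + "98.34" + "customer1";
        String[] f = tokenize(record);
        System.out.println(f[0]); // UK21341EAH45
        System.out.println(f[3]); // customer1
    }
}
```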
Output target: database - writes the data to database using a DAOobject
Football Job
This is a (American) Football statistics loading job. We gave it theid of
footballJob
in our configuration file. Before divinginto the batch job, we'll examine the two input files that need tobe loaded. First is player.csv
, which can be found in thesamples project undersrc/main/resources/data/footballjob/input/. Each line within thisfile represents a player, with a unique id, the player’s name,position, etc:One of the first noticeable characteristics of the file is that eachdata element is separated by a comma, a format most are familiarwith known as 'CSV'. Other separators such as pipes or semicolonscould just as easily be used to delineate between uniqueelements. In general, it falls into one of two types of flat fileformats: delimited or fixed length. (The fixed length case wascovered in the
fixedLengthImportJob
.The second file, 'games.csv' is formatted the same as the previousexample, and resides in the same directory:
Each line in the file represents an individual player's performance in a particular game, containing such statistics as passing yards, receptions, rushes, and total touchdowns.
Our example batch job is going to load both files into a database, and then combine each to summarise how each player performed for a particular year. Although this example is fairly trivial, it shows multiple types of input, and the general style is a common batch scenario: summarising a very large dataset so that it can be more easily manipulated or viewed by an online web-based application. In an enterprise solution the third step, the reporting step, could be implemented through the use of Eclipse BIRT or one of the many Java reporting engines. Given this description, we can then easily divide our batch job up into 3 'steps': one to load the player data, one to load the game data, and one to produce a summary report:
Note: One of the nice features of Spring is a project called Spring IDE. When you download the project you can install Spring IDE and add the Spring configurations to the IDE project. This is not a tutorial on Spring IDE, but the visual view into Spring beans is helpful in understanding the structure of a job configuration. Spring IDE produces the following diagram:
This corresponds exactly with the `footballJob.xml` job configuration file, which can be found in the jobs folder under `src/main/resources`. When you drill down into the football job you will see that the configuration has a list of steps:

A step is run until there is no more input to process, which in this case would mean that each file has been completely processed. To describe it in a more narrative form: the first step, playerLoad, begins executing by grabbing one line of input from the file, and parsing it into a domain object. That domain object is then passed to a DAO, which writes it out to the PLAYERS table. This action is repeated until there are no more lines in the file, causing the playerLoad step to finish. Next, the gameLoad step does the same for the games input file, inserting into the GAMES table. Once finished, the playerSummarization step can begin. Unlike the first two steps, playerSummarization's input comes from the database, using a SQL statement to combine the GAMES and PLAYERS tables. Each returned row is packaged into a domain object and written out to the PLAYER_SUMMARY table.
Now that we've discussed the entire flow of the batch job, we can dive deeper into the first step: playerLoad:
The root bean in this case is a `SimpleStepFactoryBean`, which can be considered a 'blueprint' of sorts that tells the execution environment basic details about how the batch job should be executed. It contains four properties (others have been removed for greater clarity): commitInterval, startLimit, itemReader and itemWriter. After performing all necessary startup, the framework will periodically delegate to the reader and writer. In this way, the developer can remain solely concerned with their business logic.

- ItemReader – the item reader is the source of the information pipe. At the most basic level input is read in from an input source, parsed into a domain object and returned. In this way, the good batch architecture practice of ensuring all data has been read before beginning processing can be enforced, along with providing a possible avenue for reuse.
- ItemWriter – this is the business logic. At a high level, the item writer takes the item returned from the reader and 'processes' it. In our case it's a data access object that is simply responsible for inserting a record into the PLAYERS table. As you can see, the developer does very little.

The application developer simply provides a job configuration with a configured number of steps, an ItemReader associated to some type of input source, an ItemWriter associated to some type of output source and a little mapping of data from flat records to objects, and the pipe is ready wired for processing.
Another property in the step configuration, the commitInterval, gives the framework vital information about how to control transactions during the batch run. Due to the large amount of data involved in batch processing, it is often advantageous to 'batch' together multiple logical units of work into one transaction, since starting and committing a transaction is extremely expensive. For example, in the playerLoad step, the framework calls read() on the item reader. The item reader reads one record from the file, and returns a domain object representation which is passed to the processor. The writer then writes the one record to the database. It can then be said that one iteration = one call to `ItemReader.read()` = one line of the file. Therefore, setting your commitInterval to 5 would result in the framework committing a transaction after 5 lines have been read from the file, with 5 resultant entries in the PLAYERS table.

Following the general flow of the batch job, the next step is to describe how each line of the file will be parsed from its string representation into a domain object. The first thing the provider will need is an `ItemReader`, which is provided as part of the Spring Batch infrastructure. Because the input is flat-file based, a `FlatFileItemReader` is used:

There are three required dependencies of the item reader; the first is a resource to read in, which is the file to process. The second dependency is a `LineTokenizer`. The interface for a `LineTokenizer` is very simple: given a string, it will return a `FieldSet` that wraps the results from splitting the provided string. A `FieldSet` is Spring Batch's abstraction for flat file data. It allows developers to work with file input in much the same way as they would work with database input. All the developers need to provide is a `FieldSetMapper` (similar to a Spring `RowMapper`) that will map the provided `FieldSet` into an `Object`. Simply by providing the names of each token to the `LineTokenizer`, the `ItemReader` can pass the `FieldSet` into our `PlayerMapper`, which implements the `FieldSetMapper` interface. There is a single method, `mapLine()`, which maps `FieldSet`s the same way that developers are comfortable mapping `ResultSet`s into Java `Object`s, either by index or field name. This behaviour is by intention and design similar to the `RowMapper` passed into a `JdbcTemplate`. You can see this below:

The flow of the `ItemReader`, in this case, starts with a call to read the next line from the file. This is passed into the provided `LineTokenizer`. The `LineTokenizer` splits the line at every comma, and creates a `FieldSet` using the created `String` array and the array of names passed in.

Note: it is only necessary to provide the names to create the `FieldSet` if you wish to access the field by name, rather than by index.

Once the domain representation of the data has been returned by the provider (i.e. a `Player` object in this case), it is passed to the `ItemWriter`, which is essentially a DAO that uses a Spring `JdbcTemplate` to insert a new row in the PLAYERS table. The next step, gameLoad, works almost exactly the same as the playerLoad step, except the games file is used.
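The tokenize-then-map-by-name flow can be sketched in plain Java. This is not the framework's `FieldSet` class; the field names and the sample line below are assumed for illustration and the real player.csv layout may differ:

```java
import java.util.Arrays;
import java.util.List;

public class FieldSetSketch {
    // Sketch of the FieldSet idea: a tokenized line plus the column names,
    // with ResultSet-style access by name.
    private final List<String> names;
    private final String[] values;

    FieldSetSketch(String[] names, String[] values) {
        this.names = Arrays.asList(names);
        this.values = values;
    }

    String readString(String name) {
        return values[names.indexOf(name)]; // access by name rather than index
    }

    public static void main(String[] args) {
        // hypothetical player line and column names
        String line = "AbduKa00,Abdul-Jabbar,Karim,rb,1974,1996";
        String[] names = {"ID", "lastName", "firstName", "position", "birthYear", "debutYear"};
        FieldSetSketch fs = new FieldSetSketch(names, line.split(","));
        System.out.println(fs.readString("lastName")); // Abdul-Jabbar
        System.out.println(fs.readString("position")); // rb
    }
}
```

A mapper in this style reads each named field and sets the corresponding property on the domain object, exactly as a `RowMapper` would with a `ResultSet`.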
The final step, playerSummarization, is much like the previous two steps, in that it reads from a reader and returns a domain object to a writer. However, in this case, the input source is the database, not a file:
The `JdbcCursorItemReader` has three dependencies:

- A `DataSource`
- The `RowMapper` to use for each row.
- The SQL statement used to create the cursor.

When the step is first started, a query will be run against the database to open a cursor, and each call to `itemReader.read()` will move the cursor to the next row, using the provided `RowMapper` to return the correct object. As with the previous two steps, each record returned by the provider will be written out to the database in the PLAYER_SUMMARY table. Finally, to run this sample application you can execute the JUnit test `FootballJobFunctionalTests`, and you'll see an output showing each of the records as they are processed. Please keep in mind that AOP is used to wrap the `ItemWriter` and output each record as it is processed to the logger, which may impact performance.

Header Footer Sample
This sample shows the use of callbacks and listeners to deal with headers and footers in flat files. It uses two custom callbacks:

- `HeaderCopyCallback`: copies the header of a file from the input to the output.
- `SummaryFooterCallback`: creates a summary footer at the end of the output file.
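The two callbacks can be sketched together in plain Java, with in-memory lists standing in for the input and output files (the footer format here is invented for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class HeaderFooterSketch {
    // Sketch of the header/footer callbacks described above: the first
    // input line is copied verbatim to the output (header callback), and
    // a summary footer is appended after all records are written
    // (footer callback).
    static List<String> copyWithHeaderAndFooter(List<String> inputLines) {
        List<String> output = new ArrayList<>();
        output.add(inputLines.get(0));                        // header copy
        List<String> records = inputLines.subList(1, inputLines.size());
        output.addAll(records);                               // normal item writing
        output.add("footer: " + records.size() + " records"); // summary footer
        return output;
    }

    public static void main(String[] args) {
        List<String> out = copyWithHeaderAndFooter(
                Arrays.asList("# header", "rec1", "rec2"));
        System.out.println(out.get(0));              // # header
        System.out.println(out.get(out.size() - 1)); // footer: 2 records
    }
}
```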
Hibernate Sample
The purpose of this sample is to show a typical usage of Hibernate as an ORM tool in the input and output of a job.
The job uses a `HibernateCursorItemReader` for the input, where a simple HQL query is used to supply items. It also uses a non-framework `ItemWriter` wrapping a DAO, which perhaps was written as part of an online system.

The output reliability and robustness are improved by the use of `Session.flush()` inside `ItemWriter.write()`. This 'write-behind' behaviour is provided by Hibernate implicitly, but we need to take control of it so that the skip and retry features provided by Spring Batch can work effectively.

Infinite Loop Sample
This sample has a single step that is an infinite loop, reading and writing fake data. It is used to demonstrate stop signals and restart capabilities.
Loop Flow Sample
Shows how to implement a job that repeats one of its steps up to a limit set by a `JobExecutionDecider`.

Multiline
The goal of this sample is to show some common tricks with multiline records in file input jobs.
The input file in this case consists of two groups of trades delimited by special lines in a file (BEGIN and END):
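The grouping behaviour for this kind of delimited input can be sketched in plain Java (the class and data below are invented for illustration, not the sample's actual components):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AggregateSketch {
    // Sketch of multiline aggregation: lines between BEGIN and END
    // delimiters are collected, and each complete group is handed out
    // as one List item, mirroring what the aggregate reader does.
    static List<List<String>> readGroups(List<String> lines) {
        List<List<String>> groups = new ArrayList<>();
        List<String> current = null;
        for (String line : lines) {
            if (line.equals("BEGIN")) {
                current = new ArrayList<>();       // start a new group
            } else if (line.equals("END")) {
                groups.add(current);               // group complete
                current = null;
            } else if (current != null) {
                current.add(line);                 // a trade record inside a group
            }
        }
        return groups;
    }

    public static void main(String[] args) {
        List<List<String>> groups = readGroups(Arrays.asList(
                "BEGIN", "trade1", "trade2", "END",
                "BEGIN", "trade3", "END"));
        System.out.println(groups.size());        // 2
        System.out.println(groups.get(0).size()); // 2
    }
}
```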
The goal of the job is to operate on the two groups, so the item type is naturally `List<Trade>`. To get these items delivered from an item reader we employ two components from Spring Batch: the `AggregateItemReader` and the `PrefixMatchingCompositeLineTokenizer`. The latter is responsible for recognising the difference between the trade data and the delimiter records. The former is responsible for aggregating the trades from each group into a `List` and handing out the list from its `read()` method. To help these components perform their responsibilities we also provide some business knowledge about the data in the form of a `FieldSetMapper` (`TradeFieldSetMapper`). The `TradeFieldSetMapper` checks its input for the delimiter fields (BEGIN, END) and if it detects them, returns the special tokens that `AggregateItemReader` needs. Otherwise it maps the input into a `Trade` object.

Multiline Order Job
The goal is to demonstrate how to handle a more complex file input format, where a record meant for processing includes nested records and spans multiple lines.

The input source is a file with multiline records. `OrderItemReader` is an example of a non-default programmatic item reader. It reads input until it detects that the multiline record has finished and encapsulates the record in a single domain object.

The output target is a file with multiline records. The concrete `ItemWriter` passes the object to an injected 'delegate writer' which in this case writes the output to a file. The writer in this case demonstrates how to write multiline output using a custom aggregator transformer.

Parallel Sample
The purpose of this sample is to show multi-threaded step execution using the Process Indicator pattern.
The job reads data from the same file as the Fixed Length Import sample, but instead of writing it out directly it goes through a staging table, and the staging table is read in a multi-threaded step. Note that for such a simple example where the item processing was not expensive, there is unlikely to be much if any benefit in using a multi-threaded step.

Multi-threaded step execution is easy to configure using Spring Batch, but there are some limitations. Most of the out-of-the-box `ItemReader` and `ItemWriter` implementations are not designed to work in this scenario because they need to be restartable and they are also stateful. There should be no surprise about this, and reading a file (for instance) is usually fast enough that multi-threading that part of the process is not likely to provide much benefit, compared to the cost of managing the state. The best strategy to cope with restart state from multiple concurrent threads depends on the kind of input source involved:
- For file-based input (and output) restart state is practically impossible to manage. Spring Batch does not provide any features or samples to help with this use case.
- With message middleware input it is trivial to manage restarts, since there is no state to store (if a transaction rolls back the messages are returned to the destination they came from).
- With database input state management is still necessary, but it isn't particularly difficult. The easiest thing to do is rely on a Process Indicator in the input data, which is a column in the data indicating for each row if it has been processed or not. The flag is updated inside the batch transaction, and then in the case of a failure the updates are lost, and the records will show as un-processed on a restart.
This last strategy is implemented in the `StagingItemReader`. Its companion, the `StagingItemWriter`, is responsible for setting up the data in a staging table which contains the process indicator. The reader is then driven by a simple SQL query that includes a where clause for the processed flag. It is then responsible for updating the processed flag (which happens inside the main step transaction).
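The Process Indicator round trip can be sketched in plain Java, with a map standing in for the staging table (the class and method names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProcessIndicatorSketch {
    // Sketch of the Process Indicator pattern described above: each staged
    // row carries a processed flag, the reader selects only unprocessed
    // rows, and the flag is flipped as part of the step's transaction.
    static List<Long> runStep(Map<Long, Boolean> staging) {
        // equivalent of "SELECT id FROM staging WHERE processed = false"
        List<Long> toProcess = new ArrayList<>();
        for (Map.Entry<Long, Boolean> row : staging.entrySet()) {
            if (!row.getValue()) {
                toProcess.add(row.getKey());
            }
        }
        for (Long id : toProcess) {
            // ... process the row, then mark it inside the same transaction
            staging.put(id, true);
        }
        return toProcess;
    }

    public static void main(String[] args) {
        Map<Long, Boolean> staging = new LinkedHashMap<>();
        staging.put(1L, true);  // finished in a previous run
        staging.put(2L, false);
        staging.put(3L, false);
        System.out.println(runStep(staging)); // [2, 3] - only unprocessed rows
        System.out.println(runStep(staging)); // [] - nothing left on restart
    }
}
```

If the step fails before the transaction commits, the flag updates are lost, so the same rows come back on the next run, which is exactly the restart behaviour described above.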
Partitioning Sample
The purpose of this sample is to show multi-threaded step execution using the `PartitionHandler` SPI. The example uses a `TaskExecutorPartitionHandler` to spread the work of reading some files across multiple threads, with one `Step` execution per thread. The key components are the `PartitionStep` and the `MultiResourcePartitioner` which is responsible for dividing up the work. Notice that the readers and writers in the `Step` that is being partitioned are step-scoped, so that their state does not get shared across threads of execution.

Remote Partitioning Sample
This sample shows how to configure a remote partitioning job. The manager step uses a `MessageChannelPartitionHandler` to send partitions to and receive replies from workers. Two examples are shown:

- A manager step that polls the job repository to see if all workers have finished their work
- A manager step that aggregates replies from workers to notify work completion
The sample uses an embedded JMS broker and an embedded database for simplicity, but any option supported via Spring Integration for communication is technically acceptable.
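The core idea shared by the two partitioning samples above, dividing the work into partitions and executing each on its own thread of execution, can be sketched with plain JDK concurrency (the class and file names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionSketch {
    // Sketch of partitioned execution: the work (a list of file names) is
    // divided among workers, and each partition runs on its own thread
    // with its own private state, like a step-scoped reader/writer.
    static int runPartitions(List<String> files, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int worker = w;
                results.add(pool.submit(() -> {
                    int processed = 0;
                    // each worker handles every workers-th file
                    for (int i = worker; i < files.size(); i += workers) {
                        processed++; // stand-in for reading/writing the file
                    }
                    return processed;
                }));
            }
            int total = 0;
            for (Future<Integer> r : results) {
                total += r.get(); // gather per-partition results
            }
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runPartitions(
                Arrays.asList("f1.csv", "f2.csv", "f3.csv", "f4.csv"), 2)); // 4
    }
}
```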
Remote Chunking Sample
This sample shows how to configure a remote chunking job. The manager step will read numbers from 1 to 6 and send two chunks ({1, 2, 3} and {4, 5, 6}) to workers for processing and writing.
This example shows how to use:
- the `RemoteChunkingManagerStepBuilderFactory` to create a manager step
- the `RemoteChunkingWorkerBuilder` to configure an integration flow on the worker side.
The sample uses an embedded JMS broker as the communication middleware between the manager and workers. The usage of an embedded broker is only for simplicity's sake; the communication between the manager and workers is still done through JMS queues and Spring Integration channels, and messages are sent over the wire through a TCP port.
Quartz Sample
The goal is to demonstrate how to schedule job execution using the Quartz scheduler. In this case there is no unit test to launch the sample because it just re-uses the football job. There is a main method in `JobRegistryBackgroundJobRunner` and an Eclipse launch configuration which runs it with arguments to pick up the football job.

The additional XML configuration for this job is in `quartz-job-launcher.xml`, and it also re-uses `footballJob.xml`.

The configuration declares a `JobLauncher` bean. The launcher bean is different from the other samples only in that it uses an asynchronous task executor, so that the jobs are launched in a separate thread to the main method:
Also, a Quartz `JobDetail` is defined using a Spring `JobDetailBean` as a convenience. Finally, a trigger with a scheduler is defined that will launch the job detail every 10 seconds:

The job is thus scheduled to run every 10 seconds. In fact it should be successful on the first attempt, so the second and subsequent attempts should throw a `JobInstanceAlreadyCompleteException`. In a production system, the job detail would probably be modified to account for this exception (e.g. catch it and re-submit with a new set of job parameters). The point here is that Spring Batch guarantees that the job execution is idempotent - you can never inadvertently process the same data twice.

Restart Sample
The goal of this sample is to show how a job can be restarted aftera failure and continue processing where it left off.
To simulate a failure we 'fake' a failure on the fourth record through the use of a sample component `ExceptionThrowingItemReaderProxy`. This is a stateful reader that counts how many records it has processed and throws a planned exception in a specified place. Since we re-use the same instance when we restart the job it will not fail the second time.

Retry Sample
The purpose of this sample is to show how to use the automatic retrycapabilities of Spring Batch.
The retry is configured in the step through the `SkipLimitStepFactoryBean`:

Failed items will cause a rollback for all `Exception` types, up to a limit of 3 attempts. On the 4th attempt, the failed item would be skipped, and there would be a callback to an `ItemSkipListener` if one was provided (via the 'listeners' property of the step factory bean).

An `ItemReader` is provided that will generate unique `Trade` data by just incrementing a counter. Note that it uses the counter in its `mark()` and `reset()` methods so that the same content is returned after a rollback. The same content is returned, but the instance of `Trade` is different, which means that the implementation of `equals()` in the `Trade` object is important. This is because to identify a failed item on retry (so that the number of attempts can be counted) the framework by default uses `Object.equals()` to compare the recently failed item with a cache of previously failed items. Without implementing a field-based `equals()` method for the domain object, our job will spin round the retry for potentially quite a long time before failing, because the default implementation of `equals()` is based on object reference, not on field content.

Skip Sample
The purpose of this sample is to show how to use the skip features of Spring Batch. Since skip is really just a special case of retry (with limit 0), the details are quite similar to the Retry Sample, but the use case is less artificial, since it is based on the Trade Sample.
The failure condition is still artificial, since it is triggered by a special `ItemWriter` wrapper (`ItemTrackingItemWriter`). The plan is that a certain item (the third) will fail business validation in the writer, and the system can then respond by skipping it. We also configure the step so that it will not roll back on the validation exception, since we know that it didn't invalidate the transaction, only the item. This is done through the transaction attribute:

The format for the transaction attribute specification is given in the Spring Core documentation (e.g. see the Javadocs for `TransactionAttributeEditor`).
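The skip-without-rollback behaviour can be sketched in plain Java: a failing item is recorded as skipped and processing continues, without discarding the items already written (class and data are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SkipSketch {
    // Sketch of skip-on-validation-failure: an item that fails validation
    // is recorded as skipped and processing continues with the next item,
    // without rolling back the work already done for earlier items.
    static List<String> writeSkipping(List<String> items, List<String> skipped) {
        List<String> written = new ArrayList<>();
        for (String item : items) {
            try {
                if (item.equals("bad")) { // stand-in for the validation rule
                    throw new IllegalArgumentException("validation failed: " + item);
                }
                written.add(item);
            } catch (IllegalArgumentException e) {
                skipped.add(item); // record the skip and carry on
            }
        }
        return written;
    }

    public static void main(String[] args) {
        List<String> skipped = new ArrayList<>();
        List<String> written = writeSkipping(
                Arrays.asList("t1", "t2", "bad", "t4"), skipped);
        System.out.println(written); // [t1, t2, t4]
        System.out.println(skipped); // [bad]
    }
}
```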
Tasklet Job
The goal is to show the simplest use of the batch framework with a single job with a single step, which cleans up a directory and runs a system command.
Description: The `Job` itself is defined by the bean definition with id='taskletJob'. In this example we have two steps.

- The first step defines a tasklet that is responsible for clearing out a directory through a custom `Tasklet`. Each tasklet has an `execute()` method which is called by the step. All processing of business data should be handled by this method.
- The second step uses another tasklet to execute a system (OS) command line.
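The directory-cleanup idea from the first step can be sketched as a single `execute()` method in plain Java. This mirrors the Tasklet concept only; it is not Spring Batch's actual `Tasklet` interface:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CleanupTaskletSketch {
    // Sketch of a cleanup tasklet: one execute() call deletes every file
    // in the target directory and reports how many were removed.
    static int execute(Path dir) {
        int deleted = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path file : files) {
                Files.delete(file);
                deleted++;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cleanup-sample");
        Files.createFile(dir.resolve("a.tmp"));
        Files.createFile(dir.resolve("b.tmp"));
        System.out.println(execute(dir)); // 2
    }
}
```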
You can visualise the Spring configuration of a job through Spring IDE. See Spring IDE. The source view of the configuration is as follows:

For simplicity we are only displaying the job configuration itself and leaving out the details of the supporting batch execution environment configuration.
Trade Job
The goal is to show a reasonably complex scenario that would resemble the real-life usage of the framework.
This job has 3 steps. First, data about trades is imported from a file to the database. Second, the trades are read from the database and credit on customer accounts is decreased appropriately. Last, a report about customers is exported to a file.
XML Input Output
The goal here is to show the use of XML input and output through streaming and Spring OXM marshallers and unmarshallers.
The job has a single step that copies `Trade` data from one XML file to another. It uses XStream for the object-XML conversion, because this is simple to configure for basic use cases like this one. See the Spring OXM documentation for details of other options.

Batch metrics with Micrometer
This sample shows how to use Micrometer to collect batch metrics in Spring Batch. It uses Prometheus as the metrics back end and Grafana as the front end. The sample consists of two jobs:
- `job1`: composed of two tasklets that print `hello` and `world`
- `job2`: composed of a single chunk-oriented step that reads and writes a random number of items
These two jobs are run repeatedly at regular intervals and might fail randomly for demonstration purposes.
This sample requires Docker Compose to start the monitoring stack. To run the sample, please follow these steps:
This should start the required monitoring stack:
- Prometheus server on port `9090`
- Prometheus push gateway on port `9091`
- Grafana on port `3000`
Once started, you need to configure Prometheus as a data source in Grafana and import the ready-to-use dashboard in `spring-batch-samples/src/grafana/spring-batch-dashboard.json`.
Finally, run the `org.springframework.batch.sample.metrics.BatchMetricsApplication` class without any argument to start the sample.