Configuration manager
Source tables and mapping
This is the most important part of the configuration: it tells the
system where to retrieve data, how to translate it, and where to store it.
Everything you enter in this section is from the point of view of the source:
you specify the record (line, file, ...) to retrieve once from the source, and
then what to do with it. On this page we give a brief explanation of how to
enter the configuration elements. If you want to find out more about what to do
in particular cases, check the knowledge base section.
This section applies whether you use the trigger-based or the batch-oriented
channel.
In the navigation tree-view, under 'Source tables and mapping', select New or
Properties to enter this section of the configuration.
- Source connection name: Select the source connection you previously defined
in the database connection section. You cannot enter a connection name that was
not defined before. The source connection is the connection client the
information is retrieved from.
- Source table name: The table name, without library or user name, to retrieve
the data from. 'Table name' is the general term for anything that can logically
be represented as a database table. FILE and SMS are reserved words when you
use the Record file or SMS connection client.
- Source field name: The name of the field (column) to retrieve the data from.
When you use the record file connection client, this is the name without
leading or trailing characters.
- Source field type: The field type you retrieve. This can only be NUMBER or
STRING. For those more familiar with ANSI SQL, this determines whether the data
is written between quotes (') or not. Time and date stamps go under NUMBER or
STRING (most likely STRING). Although you might not be able to retrieve time
and date stamps in ISO format (yyyy-mm-dd hh:mm:ss), converting them to ISO
format can solve a lot of problems and is supported by most databases. For
booleans it depends on the database you are working with: most databases return
0 (False) or 1 (True) and accept the same values when you write them back. The
sketch below illustrates the effect of the field type.
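As a minimal illustration (plain Python, not PRODBX code; the function name is
made up), this is how a NUMBER/STRING field type typically decides whether an
ANSI SQL literal is written with quotes:

    def sql_literal(value, field_type):
        # NUMBER values are written without quotes
        if field_type == "NUMBER":
            return str(value)
        # STRING values are quoted; embedded quotes are doubled
        return "'" + str(value).replace("'", "''") + "'"

    print(sql_literal(42, "NUMBER"))                      # 42
    print(sql_literal("2024-05-01 13:45:00", "STRING"))   # '2024-05-01 13:45:00'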
- Primary key: Check this when either the source field or the destination field
is part of the primary key. The primary key is the set of fields needed to
identify a unique record in a table. For some connection clients this might be
irrelevant, but for common databases this is the basic rule for accessing
records.
Once you have defined a source field and destination field as part of the
primary key, it will be used in the primary key. This means that when you use a
source field multiple times, distributing it to different destinations, and
this source field is part of the primary key of the source record, you only
need to define it as a key field once.
If you have a source-destination field combination where one of the two fields
is part of the primary key and the other cannot be, you must work out a
solution by using the STACK and splitting the single from-to definition into
two.
The primary key you define in PRODBX need not be the primary key defined in the
database. For example, when you have an incremental, meaningless key in your
database, your primary key definition will most likely differ from this
database key. Remember: the primary key is the combination of fields that gives
you unique records.
It might be that one source record is translated into multiple records on the
destination. Your destination primary key will then most likely have more
fields than your source primary key. You might have this when your source
record contains a number of fields (columns) that should be translated into
multiple records on the destination. Each of those records will have an extra
primary key identifier on the destination that indexes the data type. To
accomplish this, reuse your source primary key field to define the additional
destination primary key, and use functions like DEFAULT_STRING or
DEFAULT_NUMERIC.
Please remember: when you define a batch-oriented exchange and apply a filter
to the records to be exchanged, PRODBX does not work with the filtered
recordset alone. The primary key must give unique records on the entire source
table.
For those familiar with ANSI SQL, PRODBX will use the primary key information
to build the WHERE clause, as the sketch below illustrates.
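A minimal sketch (plain Python, not PRODBX code; the field names and values are
illustrative) of how a WHERE clause can be built from the primary key fields
defined in the mapping:

    # Build a WHERE clause from (field name, field type, value) tuples.
    def build_where(pk_fields):
        parts = []
        for name, ftype, value in pk_fields:
            if ftype == "NUMBER":
                literal = str(value)
            else:
                literal = "'" + str(value).replace("'", "''") + "'"
            parts.append(name + " = " + literal)
        return "WHERE " + " AND ".join(parts)

    # A two-field primary key identifies exactly one record:
    print(build_where([("ORDER_NO", "NUMBER", 1042), ("LINE_ID", "STRING", "A")]))
    # WHERE ORDER_NO = 1042 AND LINE_ID = 'A'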
- Multiple use id: Whenever you reuse a combination of source connection,
source table and source field, you must give it a unique multiple use id.
- Record id: If one source record results in multiple destination records
(independent of the connection), you must distinguish those destination records
by giving them a record name. PRODBX will group all fields having the same
record id and write them as a separate record to the destination. If you don't
specify a record id, PRODBX will likewise group all lines with an empty record
id into one destination record. Destination records are generated in
alphabetically sorted record id order; see the grouping sketch below.
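A small sketch (plain Python, not PRODBX code; the field names are made up) of
this grouping behaviour:

    from itertools import groupby

    # Mapping lines: the record id decides which destination record a field joins.
    lines = [
        {"record_id": "B", "dest_field": "QTY"},
        {"record_id": "",  "dest_field": "NAME"},     # empty ids form one group
        {"record_id": "A", "dest_field": "PRICE"},
        {"record_id": "A", "dest_field": "CURRENCY"},
    ]
    lines.sort(key=lambda l: l["record_id"])          # alphabetical record id order
    for rec_id, group in groupby(lines, key=lambda l: l["record_id"]):
        print(rec_id or "<empty>", [l["dest_field"] for l in group])
    # <empty> ['NAME']
    # A ['PRICE', 'CURRENCY']
    # B ['QTY']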
- Destination connection name: Like the source connection name, but defines the
destination connection.
- Destination table name: Like the source table name, but defines the
destination table.
- Destination field name: Like the source field name, but defines the
destination field. The destination field may also be STACK. The stack is a
single internal PRODBX variable. By writing information to the stack, you may
reuse it later in a function that supports stack input. Using the stack also
implies that you define a correct 'Field handling sequence'.
If you have difficulties with the reserved
word STACK being the stack indicator, you may change it in the registry
under HKEY_LOCAL_MACHINE\Software\Promes\Prodbx\StackName
- Destination field type: As the source field type, but for the destination.
PRODBX will automatically convert between numbers and strings if the source
type differs from the destination type.
- Field handling sequence: When you are working with the STACK, and since you
have only one stack variable, it is very important that PRODBX knows when the
stack is written and when it is read. If PRODBX interpreted a field that uses
the stack before the field that fills it, the result would be incorrect. To
make sure the fields are interpreted in the correct order, you define a field
handling sequence here. What happens is: first PRODBX isolates the destination
record by grouping on record id. The lines that build this record are then
sorted in ascending field handling sequence; this is the order in which PRODBX
handles each line (line = source table and mapping line). The sketch below
illustrates the ordering.
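A minimal sketch (plain Python, not PRODBX code; the sequence numbers, field
names and values are made up) of why the sequence matters:

    # Lines of one destination record, handled in ascending sequence so
    # the line that fills the stack runs before the line that reads it.
    record_lines = [
        {"seq": 20, "dest_field": "FULL_NAME"},   # consumes the stack
        {"seq": 10, "dest_field": "STACK"},       # fills the stack
    ]
    stack = None
    for line in sorted(record_lines, key=lambda l: l["seq"]):
        if line["dest_field"] == "STACK":
            stack = "Smith"                        # value written to the stack
        else:
            print(line["dest_field"], "built using stack value:", stack)
    # FULL_NAME built using stack value: Smith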
- Translate function: This is the script that interprets the source data and
translates it for writing to the destination. See the functions section.
- Function parameter 1: string constant
as input for the translate function.
- Function parameter 2: string constant
as input for the translate function.
- Function parameter 3: numeric
constant as input for the translate function.
- Function parameter 4: numeric
constant as input for the translate function.
- Obsolete check on destination: Only valid for batch-oriented data exchange;
applied to all fields of the same destination connection and table. When
executing a full 'download' from source to destination, there is always the
problem of obsolete records: records that no longer exist in the source but are
still available in the destination. If you switch on this obsolete check,
PRODBX will find the obsolete records and delete them at the end of the
'download'. How? At the start, PRODBX saves all primary key information of the
destination in internal memory. Each record written to the destination deletes
the primary key of that record from internal memory. At the end, the internal
memory holds only the primary key information of records that no longer exist
in the source. These records are then deleted at the destination; see the
sketch below.
When the destination client is a Fixed file client, enabling this function will
group all records between the begin and end of the obsolete check and write
only one file, instead of writing a file per record.
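A sketch of the obsolete check idea (plain Python, not PRODBX code; the key
values are made up):

    # Start with every destination primary key, discard each key as the
    # corresponding record is written, and delete whatever remains.
    destination_keys = {(1042, "A"), (1042, "B"), (1043, "A")}    # read at start
    written_during_download = [(1042, "A"), (1043, "A")]          # records exchanged

    for key in written_during_download:
        destination_keys.discard(key)    # record still exists in the source

    for key in sorted(destination_keys):
        print("DELETE obsolete record with primary key", key)
    # DELETE obsolete record with primary key (1042, 'B')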
- 'Last updated on' field: This is the field (column) name in the destination
table that holds the time and date stamp of when the record was last updated.
If the time and date stamp of the current update is older than the time and
date stamp in the destination record, the transaction is redirected to the
logging manager. This can be very important when you have a distributed setup
of offline databases to be synchronised with one main database.
How does PRODBX determine the time and date stamp of the current update? It
checks, in the following order, whether:
- It is generated by the source application. This means that one of the source
fields contains the time and date stamp of when the source was updated, and it
is written to the destination field having the same name as you defined here.
- It is generated by the trigger. In this case you find the time and date stamp
in the TRB file of the source-supplying database. The trigger must generate the
time and date stamp and write it to the field TDS_UPDATE. Since the format
might differ from the destination format, you must define a function to run
between retrieving TDS_UPDATE from the TRB and using it for comparison with the
field entered here (the 'last updated on' field). Defining this function is
done in the database connections section.
- It is generated by PRODBX. Neither the source application nor the source
trigger can supply the time and date stamp. PRODBX will take the system clock
of the connection client running the source connection. The sketch below shows
this precedence.
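A minimal sketch of this precedence (plain Python, not PRODBX code; the source
field name LAST_UPDATED_ON is an assumption, TDS_UPDATE is the trigger field
named above):

    from datetime import datetime

    def update_timestamp(source_fields, trb_record):
        # 1. Supplied by the source application itself
        if source_fields.get("LAST_UPDATED_ON"):
            return source_fields["LAST_UPDATED_ON"]
        # 2. Supplied by the trigger via the TRB file (field TDS_UPDATE)
        if trb_record.get("TDS_UPDATE"):
            return trb_record["TDS_UPDATE"]
        # 3. Fall back to the system clock of the source connection client
        return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    print(update_timestamp({}, {"TDS_UPDATE": "2024-05-01 13:45:00"}))
    # 2024-05-01 13:45:00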