
Switching Siebel Database Software

Switching an Oracle Siebel CRM instance from one database to another is a major undertaking, but it can bring significant benefits if executed well. Amardeep Parmar (Siebel CRM Consultant) identifies the key bottlenecks in the switch and how to avoid them for a smoother transition.

Switching your Siebel CRM instance from one database provider to another is no easy task, and developers will often find themselves delving into uncharted territory. However, for some organisations there may be an overwhelming need for a change. For example, some firms may decide it is necessary to consolidate database platforms within the organisation. Others may believe that the strongest Oracle database features will better support the organisation’s mission-critical Siebel application.

The two fundamental reasons an organisation might fear taking the plunge into a database switch are the fidelity of the migrated data and the performance of the switch itself. Understandably, organisations want minimal downtime for users during this type of transition. The implications for data accuracy and integrity are much easier to address than the performance implications: if a test migration takes days, that can be the end of the conversation. Yet it is possible to improve performance dramatically and minimise the impact on the business.

In this blog post, I’ll outline some critical bottlenecks that firms can overcome, should they decide to move forward with a Siebel database transition. Addressing these challenges will help speed up the process and make it as headache-free as possible.

Please note: this blog post is aimed at Siebel developers and therefore assumes a basic level of technical knowledge. Furthermore, we have historically used the dataimp/dataexp executables (which ship with Siebel and handle data export and import), but this post does not cover the detailed steps of that approach.

Increase Redo Log Size

Archive logging should always be switched off on the database during a migration; however, redo logs will still be active. Redo log generation can consume a disproportionately large share of CPU capacity, slowing down the import. For example, we have seen instances where the redo logs used a massive 85% of the CPU. This is simple to combat: increase the redo log size, reducing the time spent switching between logs and consequently freeing up the CPU to power the data import.
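As a minimal sketch of the idea (run as a DBA; the group numbers, file paths and the 4 GB size are illustrative and should be adapted to your own storage layout), the following disables archive logging and replaces the small default redo log groups with larger ones:

```bash
#!/bin/sh
# Illustrative only: adjust paths, group numbers and sizes to your environment.
sqlplus -s / as sysdba <<'SQL'
-- Disable archive logging for the duration of the migration
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- Check the current redo log sizes
SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;

-- Add larger groups (4 GB here), then cycle through and drop the small ones
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/SBL/redo04.log') SIZE 4G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/SBL/redo05.log') SIZE 4G;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- A group can only be dropped once its status is INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
SQL
```

Fewer log switches mean the log writer spends less time stalling the import, which is where the CPU headroom comes from.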

Maximise Array Size and Commit Size

The dataimp executable has several parameters, of which the array size and commit size are particularly useful. These set, respectively, the number of records held in memory per insert and the number of records between saves. Examples of the command seen online often have these set to values in the low thousands: an array size of 1,000 means that 1,000 records are inserted at a time, and a commit size of 5,000 means that records are committed to the database every 5,000th record. However, the upper limit for these parameters is simply the maximum value of an unsigned 16-bit integer. Set both the array size and the commit size to 65535 and more records will be imported in each interval, resulting in fewer commits, fewer disruptions and a quicker import. The limit for certain tables must be set lower, a complexity that I will not delve into here but which should be kept in mind.
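For illustration, an invocation might look like the sketch below. The connection flags (/u /p /c /d /f /l) follow the usual convention for the Siebel utilities, but the exact names of the array size and commit size parameters vary between releases, so look them up in the bookshelf for your version rather than trusting the placeholder comment:

```bash
#!/bin/sh
# Sketch only: connection values are examples, and the array/commit size
# flag names are deliberately left as a placeholder -- confirm them in the
# Siebel bookshelf for your release and set both to 65535.
dataimp /u SADMIN /p '<password>' \
        /c SIEBEL_DSN /d SIEBEL \
        /f /stage/exports/batch01.dat \
        /l /stage/logs/batch01_imp.log
        # plus: <array-size-flag> 65535 <commit-size-flag> 65535
```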

Run Multiple Imports Simultaneously

Another way to reduce migration time is to run imports of different tables concurrently, i.e. the ‘parallelism’ method. The number of processes is limited by several factors, including the number of CPUs, dbwriters and dbslaves. Parallelism can be controlled through the use of input files in the dataimp command. The premise is to balance the threads so that each import finishes as close to the others as possible, maximising efficiency. The data should therefore be exported in a way that produces files of similar size before being imported into the Oracle database, because there is a strong (though imperfect) correlation between the size of a data file and the time taken to import it. Often, a small number of tables take up a disproportionately large amount of space on the database; with this method it is possible to have several threads for the same table, introducing further time savings.
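A minimal shell sketch of the pattern, assuming the export has already been split into similarly sized files (one per thread) and that DATAIMP_ARGS holds your usual connection flags:

```bash
#!/bin/sh
# Launch one dataimp process per export file, then wait for all of them.
# DATAIMP_ARGS is a stand-in for the usual /u /p /c /d connection flags.
for f in /stage/exports/thread_*.dat; do
    log="/stage/logs/$(basename "$f" .dat)_imp.log"
    dataimp $DATAIMP_ARGS /f "$f" /l "$log" &
done
wait    # block until every import thread has finished
echo "All parallel imports complete."
```

The balancing matters because the wall-clock time of the whole run is set by the slowest thread; a lopsided split wastes the other CPUs while one giant file finishes alone.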

Remove Table Indexes

A major influence on the time taken to import data is the verification that takes place for each record against the table indexes. The index definitions can be backed up and the indexes dropped from the database, which will significantly speed up the import.
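A sketch of the back-up step, assuming the Siebel schema owner is SIEBEL: DBMS_METADATA captures the full CREATE INDEX DDL so the indexes can be recreated verbatim after the migration. Review the generated DROP statements before running them.

```bash
#!/bin/sh
# Capture the DDL for every index in the Siebel schema (owner assumed
# to be SIEBEL), so the indexes can be recreated after the import.
sqlplus -s / as sysdba <<'SQL' > /stage/ddl/siebel_indexes.sql
SET LONG 2000000 PAGESIZE 0 LINESIZE 32767 HEADING OFF FEEDBACK OFF
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner) || ';'
FROM   dba_indexes
WHERE  owner = 'SIEBEL';
SQL

# Generate the DROP statements for review (do not pipe these straight
# back into sqlplus without checking them first).
sqlplus -s / as sysdba <<'SQL' > /stage/ddl/drop_siebel_indexes.sql
SET PAGESIZE 0 HEADING OFF FEEDBACK OFF
SELECT 'DROP INDEX ' || owner || '.' || index_name || ';'
FROM   dba_indexes
WHERE  owner = 'SIEBEL';
SQL
```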

This option inevitably means that records are not validated against their indexes on import, so any mistakes risk compromising data integrity. For this reason, it is absolutely vital that all records in the source and destination are compared before the import to ensure there is no double entry, as you will not be warned of potential duplicates once the unique indexes have been removed. The indexes will also need to be rebuilt after the migration is complete. Weigh this trade-off against the size of your environment when deciding whether table index removal should be applied.
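Once the import is done, the saved DDL can be replayed to recreate the indexes. A sketch, assuming the file produced earlier and an illustrative index name; building with PARALLEL and NOLOGGING shortens the window, provided you remember to reset both attributes afterwards:

```bash
#!/bin/sh
# Recreate the indexes from the DDL captured before the import.
sqlplus -s / as sysdba @/stage/ddl/siebel_indexes.sql

# Optionally rebuild a large index in parallel without redo logging,
# then reset its attributes (S_CONTACT_U1 is illustrative).
sqlplus -s / as sysdba <<'SQL'
ALTER INDEX SIEBEL.S_CONTACT_U1 REBUILD PARALLEL 8 NOLOGGING;
ALTER INDEX SIEBEL.S_CONTACT_U1 NOPARALLEL LOGGING;
SQL
```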

Early Read-Only Table Import

There are particular tables within Siebel that are read-only by design, which means that, if they are imported before the outage, data integrity is still maintained. This avoids the need to analyse differences between the databases when the source database is switched off. The method is particularly useful when a difference analysis cannot be used, for example when there is an upgrade and a migration in the same window. In that case, read-only records from a custom table can be loaded well in advance of the upgrade or migration, as they are not affected by the upgrade.
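The mechanics are just the ordinary export-then-import pattern run ahead of the outage window. A sketch with illustrative file names, where SRC_ARGS and TGT_ARGS stand in for the connection flags of the source and target databases, and the table list is supplied the usual way through the utility’s input file:

```bash
#!/bin/sh
# Move the read-only tables ahead of the outage window. SRC_ARGS/TGT_ARGS
# stand in for the usual connection flags; file names are illustrative.
dataexp $SRC_ARGS /f /stage/exports/readonly_tables.dat \
        /l /stage/logs/readonly_tables_exp.log
dataimp $TGT_ARGS /f /stage/exports/readonly_tables.dat \
        /l /stage/logs/readonly_tables_imp.log
```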

Conclusion

If your organisation has identified a rationale for doing so, a migration to the latest Oracle database from another database can certainly be achieved. Migrations of production databases are often feared, not only because of the level of complexity involved but also because of the perceived amount of time they can consume. However, take on board the tips above and the journey will be far simpler.