The ETL process – Part 2 – Guidelines & Standards – cont.

Guidelines lead the way

In one of the previous posts I introduced the first two topics of my top four list of guidelines and standards for the ETL process.

Point one in particular, regarding big-scale ETL tools, may have provoked some disagreement, consternation, or confusion. Everything I said only applies when it is possible, and the client is willing, to set up an (at least temporary) team of dedicated developers. This might not be feasible for smaller projects. However, if there are issues a commercial ETL tool cannot cope with, it is often better to develop a new and independent system than to tweak the tool with weird workarounds. I have also heard of clients who abandoned requirements simply because the ETL system could not fulfill them.

Here are topics three and four:

  • Don’t throw away anything that ever entered the data warehouse.

    Psychologists know the syndrome of compulsive hoarding. While it is a serious disorder, the mindset comes in handy in data warehousing. A general goal should be the ability to restore any state of the warehouse that has ever existed. Quite frequently, the client establishes new structures in dimensions or hierarchies. There might even be major paradigm shifts in business models. In this context, clients often decide to forget about the structures and data of the past, just to find out later that a comparison or A/B analysis of present and past would be quite enlightening. It can also be interesting to aggregate and analyze historic data according to the current structures.
    I have also experienced scenarios where the client recognized that the whole restructuring or paradigm shift did not lead to the desired results, so they eventually decided to roll back to the old structures.
    Another scenario is the worst case of a technical failure that leaves the database defunct. To make things worse, an up-to-date backup may no longer be available either.
    In all those cases, a well-designed and well-maintained archive of all the interface data can be a lifesaver.
    There will be a detailed discussion about archiving in one of the upcoming posts; a small shell sketch of the idea also appears further down in this post.

  • Exclusively use human-readable files as input for interfaces.

    First of all: no direct reads from data sources. This is not only a direct consequence of the previous topic; there are also some more things to consider about reading directly from a data source:

    • The data source could be in an inconsistent state.
    • Querying the data source could seriously slow down other attached online systems.
    • The state of the data in the data source might be volatile.
    • The dependency on the structure of the data source requires a higher degree of coordination and management.
    • Auditing of the data transferred is more difficult.
    • Special connectors to specific data sources are necessary.

If at all possible, try to convince the people concerned to deliver all interface data in human-readable flat files. Possible formats are CSV and TXT, or more NoSQL-oriented formats like JSON and XML. External data providers often have their own proprietary formats, but most of them are at least human-readable.
Why human-readable? Because it makes life much easier when audits or lookups are necessary. We can open an interface file in an editor and look directly at, or for, the interesting data. Plus, in case of a data quality problem that needs immediate mending, we can quickly patch the data in the interface file. As mentioned in point two of guidelines & standards, please apply this only as an absolute exception and only to avoid showstoppers, since the responsibility for data quality should remain with the respective data source.
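
As a small illustration of how convenient such audits are, here is a minimal sketch. The file name sales_20240131.csv, the semicolon delimiter, and the column layout are purely hypothetical assumptions for this example:

    # Look up all records for a single customer ID in a delivered interface file
    grep ';4711;' sales_20240131.csv

    # Sum up the amount column (here: column 5) for a quick plausibility check
    awk -F';' 'NR > 1 { total += $5 } END { print total }' sales_20240131.csv
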
Last but not least, it's quite easy to compress, archive, back up, and restore flat files. Conversely, archiving and backing up data obtained via direct reads requires extra steps to export it from the warehouse.
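
To give an idea of how lightweight this is, and as promised in the archiving guideline above, here is a minimal shell sketch. All paths and file names are assumptions made up for the example, not a prescription:

    # Archive a delivered interface file into a dated directory (paths and names are assumptions)
    ARCHIVE=/dwh/archive/$(date +%Y/%m/%d)
    mkdir -p "$ARCHIVE"
    gzip -c sales_20240131.csv > "$ARCHIVE/sales_20240131.csv.gz"

    # Restoring or re-inspecting an archived file later is just as simple
    zcat "$ARCHIVE/sales_20240131.csv.gz" | head
    zgrep ';4711;' "$ARCHIVE/sales_20240131.csv.gz"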

This concludes the top four list of guidelines & standards for the ETL process. The next posts are all about metadata for the ETL process.

Please feel free to register any time. 🙂

 

Use TOAD

The TOAD logo

After the first part of guidelines and standards I’d like to do a little interlude and introduce TOAD.

Whenever I start a new data warehouse or database project, I warmly recommend that the client purchase a handful of TOAD licences for the development team. Sometimes even the free version will do the job.

For me, TOAD is an indispensable tool for developing and debugging stored procedures, SQL statements, and database objects. Compared to TOAD, the built-in tools of the database vendors like SQL Server Management Studio (SSMS) or Oracle SQL Developer appear somewhat ridiculous.

There are versions for many DB systems such as Oracle, SQL Server, DB2, MySQL, SAP, and Hadoop, and there is also an agile and lively community.

Just my two cents for the weekend. And, BTW, I'm in no way associated with Quest; I have simply been a happy and satisfied user for many years.

Please feel free to register any time. 🙂

The ETL process – Part 2 – Guidelines & Standards

Guidelines lead the way

After achieving some results from the analysis of the ETL process (Part 1 – The Analysis), it quickly becomes evident that it is not sensible to aim for an “egg-laying woolly-milk-sow” (as we say in German: “eierlegende Wollmilchsau” :-)), i.e. a single system that is supposed to do everything at once.
However, if the analysis has been complete and painstaking, the requirements for the ETL process should be clear afterwards. Even though there should be room for extensions and new functionality (especially in an agile environment), clear red lines should be drawn.

Here are the first two of my top four list of guidelines:

  • Beware of too much proprietary or closed-source software.

    My best friends, the big-scale ETL tools and frameworks, fall into that category.
    Some people, especially staff members of big consulting companies, would strictly disagree. Usually, their chief argument is that the use of tools or frameworks decreases the degree of dependence on software developers. That is nothing but the truth!
    But what is also true is that the use of tools or frameworks increases the degree of dependence on staff members of big consulting companies or software vendors, and on specialized developers who are proficient in those tools and frameworks.
    Without a doubt, you won’t find people like these around every corner. To top it all off, they are usually significantly more expensive than developers who are not so highly specialized in developing or customizing one very specific product. And finally, there is the cost of the products themselves.
    On the other hand, the chances of finding some really brilliant developers with excellent skills in the fields listed below are much better. Even after most of them have left the team once the system has gone into production, it should not be too difficult to find new developers when necessary. If there has been no knowledge drain and the system is well documented, the integration of new team members should be quite smooth.

    Skills needed or appropriate for developing the entire ETL process (strongly IMHO; a small sketch of how these pieces can work together follows after the next guideline):

    • SQL, T-SQL, PL/SQL for Stored Procedures
    • C# for SQL Server CLR Stored Procedures
    • Java for Oracle Java Stored Procedures
    • bash or PowerShell for shell script programming
    • Shell tools like grep, awk, sed, etc.
    • FTP script programming
    • cron job or scheduler programming

  • Data sources are responsible for their own data quality.

    In all of my data warehouse projects, without exception, we have discovered data quality issues in the source systems. Fortunately, this usually happens at a quite early stage of the project. The start of a data warehouse project can sometimes even take credit for the discovery of serious flaws in source and/or legacy systems.
    The data warehouse process must not become the sweeper that cleans up the slip-ups from earlier stages of the data flow. We all know that nothing lives longer than a quick workaround. There is no doubt about the necessity of providing such workarounds to avoid showstoppers, especially at an early stage of the production phase. But by all means try to get rid of them as quickly as possible; they turn out to be a heavy burden in the long run.
    Sometimes the need to promote data quality on an enterprise-wide level may arise. It can be necessary to escalate those issues through the corporate or department hierarchy, sometimes even up to the CIO and/or the CTO.
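
To make the skills list above a bit more tangible (as announced there), here is a minimal sketch of a plain-shell load step driven by cron. Every path, file name, and the stored procedure etl.load_interface_file are hypothetical; the point is only to show how far standard tools can carry the ETL process:

    #!/bin/bash
    # load_interface.sh -- hypothetical sketch of a plain-shell ETL load step
    set -euo pipefail

    INBOX=/dwh/inbox
    ARCHIVE=/dwh/archive/$(date +%Y/%m/%d)
    mkdir -p "$ARCHIVE"

    for f in "$INBOX"/*.csv; do
        [ -e "$f" ] || continue                  # nothing delivered today

        # quick plausibility check before loading
        if [ $(wc -l < "$f") -lt 2 ]; then
            echo "WARN: $f looks empty, skipping" >&2
            continue
        fi

        # hand the file over to a loader stored procedure
        # (sqlcmd for SQL Server; sqlplus plays the same role on Oracle)
        sqlcmd -S dwh-server -d dwh -Q "EXEC etl.load_interface_file '$f'"

        # keep the original for the archive
        gzip -c "$f" > "$ARCHIVE/$(basename "$f").gz" && rm "$f"
    done

    # crontab entry: run the load every night at 02:30
    # 30 2 * * * /dwh/bin/load_interface.sh >> /dwh/log/load_interface.log 2>&1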

The subject of the next post will be the remaining points of the top four guidelines list.

Please feel free to register any time. 🙂