Saturday, April 17, 2010

Money must roll, information must flow

Just like money put away in an old sock loses its value, information locked away in a database loses its value too. It is more valuable to use the money to improve your position, or at least to put it in a bank account to earn interest. Likewise, information should flow from one business process or department to another, and once its operational use has passed, storing it in a data warehouse still lets you earn interest on it.

When we build systems we tend to build them for a single purpose only, or within the context of a single department of the business. The effect is that the data in these systems doesn't follow the complete sequence of processes, even though a department or process never stands on its own. Even in the odd scenario where a department has little to do with the rest of the business, there is at least the Finance department, which one way or another is involved to get paid for the products and services provided, or to pay contracts and salaries.

When I say that information should flow, I do not mean that the exact same set of data should be passed on from one system to another. It could very well be a subset or an aggregate. The concept is not much different from what happens within the boundary of a single high-level business process supported by one system, where various sub-processes pass information on to other sub-processes.

Within the context of a single system, the information is usually stored in a single database and therefore readily available to all functions within that system. When we want to realise the information flow between systems, things become more complicated. It is not always possible to retrieve data from the other system, for various reasons: the database cannot be accessed directly because it is a closed or remote system, the other system does not provide interfaces to achieve this, or the other system's technical architecture is simply very complex.

The old-fashioned way is to let the user export the data from one system into a file and upload that file into the other system. When interfacing with banking systems this is still common practice. APIs and web services are more modern ways of implementing interfaces between systems. But those are not always available, and if you want to build them yourself you can be confronted with unclear technical documentation and opaque technical structures within the existing systems.
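To make the old-fashioned file interface concrete, here is a minimal sketch in Python. The record layout (invoice id, customer, amount) is a made-up example, not taken from any particular system: one side exports its records to a CSV file, the other side parses that file back into usable records.

```python
import csv
import io

# Hypothetical records in the "sending" system (invented for illustration).
invoices = [
    {"invoice_id": "2010-0001", "customer": "ACME", "amount": "1250.00"},
    {"invoice_id": "2010-0002", "customer": "Globex", "amount": "830.50"},
]

def export_to_csv(records):
    """Write records out as CSV text, the way a manual export would."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["invoice_id", "customer", "amount"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def import_from_csv(text):
    """Parse the uploaded file on the receiving side, restoring numeric amounts."""
    reader = csv.DictReader(io.StringIO(text))
    return [{**row, "amount": float(row["amount"])} for row in reader]

exported = export_to_csv(invoices)
imported = import_from_csv(exported)
```

The fragility of this approach sits exactly where you would expect: both sides must agree on the field names, ordering and formats, and nothing enforces that agreement except documentation and habit.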

Specifically when you want to interface with ERP and COTS systems, you too often need to rely on customisations that become expensive to maintain in the context of upgrades. No wonder that systems integration has become a specialised area of expertise within IT.

It is not always easy to achieve integration between systems and to let the information flow. Given the cost and the risks around maintenance (it is easy to forget to include the implications for the other systems in your projects) and around upgrades, you need to consider carefully how and when you want to integrate systems.

But I have always found it very rewarding once you achieve the end result. So often you see that passing information from one business group or function to another is riddled with complexities: the wrong information is passed on, insufficient information is passed on, or it is passed on too late. Segregation of areas of responsibility and an attitude of "not my responsibility what happens over there" can make these situations very persistent. Even when you as an IT person can identify a solution and the benefit of an automated flow, the distributed ownership can make it difficult to find the funding and business commitment to make the improvement. And let's not even talk about the complexities of achieving automated information flows across organisations.

As an IT department you should always consider carefully how you build your system and information infrastructure. When you implement a system, consider what your next project could be. Will you be able to extend it easily, and can you integrate the solution with other systems? Sometimes a quick solution for the problem at hand can be found in a simple off-the-shelf or externally hosted system. But if that means you will be limited in adding future modules or linking other systems, you might want to consider options that allow for future expansion, or consider in-house development.

For in-house solutions, I try to ensure that data is always stored in only a single place. New systems access the databases of the existing systems, so that although the applications may look to the users like distinct systems, they all work in harmony and operate as a single integrated solution. If this is impractical, you will need to use interfaces such as web services to obtain the 'other' data in real time, without storing a copy in the application's own database (you would need to store a reference to it, though).
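The store-a-reference idea can be sketched as follows. This is an illustrative mock-up, not a real integration: the dictionary stands in for a remote customer-master service, and `fetch_customer` is where an actual web-service call would go. The local order record holds only the customer id, never a copy of the customer data.

```python
# Stand-in for the remote system that owns the customer master data.
# In a real deployment this lookup would be an HTTP/web-service call.
CUSTOMER_SERVICE = {
    "C-042": {"name": "ACME", "country": "AU"},
}

def fetch_customer(customer_id):
    """Resolve a customer reference against the owning system, on demand."""
    record = CUSTOMER_SERVICE.get(customer_id)
    if record is None:
        raise KeyError(f"unknown customer {customer_id}")
    return record

# The local application stores only a reference (customer_id), not a copy.
order = {"order_id": "O-1001", "customer_id": "C-042", "total": 99.95}

# Customer details are fetched in real time whenever they are needed.
customer = fetch_customer(order["customer_id"])
```

The payoff is that there is exactly one place where customer data can be right or wrong; the price is a runtime dependency on the owning system being available.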

With the advancement of technology, we use more and more large data objects for sound, images and video. The requirements for data storage have jumped significantly in recent years, and this will continue for a while. Combined with the increased opportunities for communication, we will see a plethora of copies of those objects spring up within the organisation and around the globe. Just as we should not litter in real life, we should not do so in cyberspace either. In the context of green IT, both users and IT have a responsibility to avoid too many copies of the same data.

Just consider what happens when you send an email with an attachment and how many copies this creates. If you send an image, you probably have it stored in a special environment where you develop and maintain images. Then you have a final copy stored in the location from where you attach it to the email. If you send it to 5 people, they each receive a copy in their mailbox, plus there is your own copy in the "sent items" folder. If they then store it again in their file systems, you have yet another copy each. This can result in 13 copies, and if all of this is backed up daily and you keep backups for at least 30 days, think about how many copies you have created here. We might say that disk space is cheap, but don't forget that the disks still need to spin.
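The arithmetic of the scenario above can be spelled out. This assumes, as a simplification, that every live copy appears in every daily backup and that all 30 backups are retained:

```python
recipients = 5

# Copies created by sending one attachment, per the scenario above.
live_copies = (
    1             # working copy in the image-editing environment
    + 1           # final copy in the folder it was attached from
    + 1           # sender's "sent items" copy
    + recipients  # one copy per recipient's mailbox
    + recipients  # one more copy if each recipient saves it to disk
)

backup_days = 30
backup_copies = live_copies * backup_days  # each live copy in each daily backup

total = live_copies + backup_copies
print(live_copies, total)  # 13 live copies, 403 copies in total
```

Thirteen live copies quietly become more than four hundred once the backup regime is counted in, which is the point about spinning disks.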

In this context, I would not be surprised if email solutions soon emerge where you only add a reference to the object in the email. When the recipient wants to open the attachment, he does so from the sender's location, and then has the option to store a local copy or to leave it remote. The cloud and the Internet contribute significantly to all this littering, and I think the cloud and the Internet should also bring the solutions to minimise it. Information should flow, but use a single copy of the data where possible.

The flow of information is foremost an issue to be addressed during the business analysis phase. That is the best moment to identify in which other business areas the information could or should be used, and what future opportunities there might be. It is also the moment when the difficult issues around business ownership of data should be addressed. When they cannot be resolved, it is best to escalate up the hierarchy to see if you can get a resolution; awareness among senior management of the relevance is a prerequisite. Even when you might not want to implement the integration immediately, this understanding is important for the choice of technology solution.

I always enjoy it very much when I see information flow and systems and business processes working in unison, even though some of those integration points can be a pain to maintain. As an analyst, consultant and architect, I follow the information.
