
Historian's demise greatly exaggerated
November 2010, IT in Manufacturing


Introduction


Today, manufacturers in every industry must understand exactly how to tune and control their manufacturing processes to stay competitive. Modern automation software gives operations personnel real-time visibility of the plant state, enabling them to adjust and tune process parameters toward an optimal operating point, whatever the end product.

The plant may be making bread, gasoline or pharmaceuticals; without constant control and feedback from the process systems, operators can seldom produce optimal output. Real-time control, however, is not the only control that enables optimal production. Analysis of previous production statistics allows engineers to learn what went right or wrong in an earlier run. Knowing how a change in one parameter affects the yield of a process is highly valuable information. Typically, it takes multiple production runs before one is deemed good enough to serve as the model for future production – often termed a ‘golden batch’.

To enable the analysis of a process, it is necessary to record information about operating parameters and states at the time of production. This is where the plant historian becomes a useful tool. A plant historian is a database system designed to record as many parameters of a manufacturing process as needed. Vast amounts of data are created during a production run, and the data often changes at high frequency – especially when something goes wrong. This information must be stored accurately and in a timely manner.

Does any database system meet this challenge? Some proponents suggest that a commercial, off-the-shelf database can be used, but in reality this is not the case.

Just because the New York Stock Exchange trading systems and other high-throughput applications use a relational database does not mean it is the correct database for everything. Take time-series data, for example: can SQL Server or Oracle store time-series data? Certainly. Are there issues you should know about before you try it? Absolutely. This paper reviews some myths about using a commercial database as a plant historian, and explains why these ideas are not as sound as they first seem.

Myth 1: Storage is so cheap that efficiency does not matter

First, understand how much data a typical process actually generates. A modest 5000-tag historian logging data every second generates roughly 157 billion values per year. Stored efficiently at 8 bytes each, that is over a terabyte a year. In tests comparing SQL Server storage requirements for time-series data with those of the Wonderware Historian, the difference was 50:1, including all the necessary indices. Even with storage prices falling, 50-plus terabytes a year is a considerable amount of data. Moreover, having enough disk space to hold that much data is not sufficient: most historian applications also require that the data be protected, multiplying the storage needed for backups or disk mirroring. Some industries have regulatory requirements for several years’ worth of data, further amplifying the required storage.
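As a back-of-the-envelope check (an illustration added here, not a calculation from the original tests), the arithmetic behind these figures is easy to reproduce in Python; the 50:1 ratio is the test result quoted above, used only as a multiplier:

```python
# Back-of-the-envelope storage estimate for a 5000-tag, 1 Hz historian.
TAGS = 5_000                     # instrument tags being logged
BYTES_PER_VALUE = 8              # efficiently packed value
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

values_per_year = TAGS * SECONDS_PER_YEAR
print(f"values per year:  {values_per_year:,}")          # 157,680,000,000

efficient_tb = values_per_year * BYTES_PER_VALUE / 1e12
print(f"purpose-built:    ~{efficient_tb:.1f} TB/year")  # ~1.3 TB

RELATIONAL_RATIO = 50            # the 50:1 test result quoted above
print(f"relational store: ~{efficient_tb * RELATIONAL_RATIO:.0f} TB/year")
```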

Myth 2: Relational databases are fast enough

As hardware price-performance has improved, relational databases have benefited. However, relational databases are designed to protect referential integrity around ‘transactions’ that may update multiple table values in unison, which adds significant overhead. For example, on high-end hardware (running 96 processors), SQL Server 2008 established a record of 2013 transactions per second. Even on such hardware it is not possible to store 5000 values per second while treating each value as a transaction. A front-end buffering application must therefore collect the data and stream numerous values into the database in a single transaction. Databases without full transactional support, such as MySQL’s MyISAM storage engine, can sustain higher throughputs, but still require a front-end buffer to achieve adequate throughput for all but the smallest historian applications.
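To make the batching idea concrete, here is a minimal sketch of that front-end buffering pattern in Python, using the standard-library sqlite3 module purely as a stand-in for whatever relational back end is in use; the table layout, batch size and tag names are hypothetical:

```python
import sqlite3, time

# Minimal sketch of a front-end buffer: samples accumulate in memory and are
# committed to the relational store in large batches, so thousands of values
# cost one transaction instead of thousands of transactions.
db = sqlite3.connect("historian_demo.db")
db.execute("CREATE TABLE IF NOT EXISTS samples (tag TEXT, ts REAL, value REAL)")

BATCH_SIZE = 5_000
buffer = []

def record(tag, value):
    """Queue one sample; flush when the batch is full."""
    buffer.append((tag, time.time(), value))
    if len(buffer) >= BATCH_SIZE:
        flush()

def flush():
    """Write the whole buffer in a single transaction."""
    with db:                                   # one BEGIN/COMMIT per batch
        db.executemany("INSERT INTO samples VALUES (?, ?, ?)", buffer)
    buffer.clear()

# Simulate one scan of 5000 tags, one value each:
for i in range(5_000):
    record(f"TAG_{i:04d}", 21.0 + i * 0.001)
flush()                                        # drain any remainder
```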

Obviously, the main reason to store data is so that it can be retrieved easily, making retrieval performance extremely important. In a general-purpose solution such as a relational database, data can be organised so that it is either efficient to store (high throughput) or efficient to retrieve (fast queries), but not both. Efficient retrieval of time-series data requires a clustered index ordered on time – something the higher-throughput MyISAM storage engine does not support.

In contrast, purpose-built storage engines are designed specifically for time-series data. Leveraging knowledge of how the data is collected and consumed allows it to be stored efficiently for both storage and retrieval – something that is not possible when the data is treated generically.
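The retrieval side of this trade-off can be illustrated with a toy sketch (an illustration, not any vendor's implementation): when records are kept physically ordered by timestamp, a time-range query becomes two binary searches plus one contiguous read, rather than a scan of the whole table:

```python
from bisect import bisect_left, bisect_right

# Toy illustration of time-ordered (clustered) storage: with records sorted
# by timestamp, a time-range query is two binary searches plus a contiguous
# slice -- no full scan of the table.
timestamps = [float(i) for i in range(1_000_000)]          # sorted sample times
values = [20.0 + (i % 100) * 0.01 for i in range(1_000_000)]

def range_query(t_start, t_end):
    lo = bisect_left(timestamps, t_start)                  # O(log n)
    hi = bisect_right(timestamps, t_end)                   # O(log n)
    return list(zip(timestamps[lo:hi], values[lo:hi]))     # contiguous read

rows = range_query(500_000.0, 500_010.0)
print(len(rows))                                           # 11 samples
```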

Myth 3: Managing data in a relational database is trivial

Relational databases are designed to accumulate massive amounts of data. However, as the data grows, so do query execution times, the size of backups and the cost of numerous other routine operations. To alleviate this, database administrators routinely purge data from the system. In any database that protects transactional integrity, a purge must suspend normal updates – a problem for historian applications that run 24/7/365. Keeping the purge operation tolerable means minimising the amount of data maintained in the database.

If purged data is required later (for example, in response to an audit or a regulatory demand), it cannot be easily restored. The generally recommended practice is to restore a full database backup containing the required data to a separate system dedicated to that purpose. This becomes even more problematic when the required data is not contained in a single backup. For example, if the online database holds only the last 30 days of data and an audit requires 90 days, you must either restore three backups – each an isolated 30-day window – to three separate systems and manually merge them into a single database, or serially examine each backup.

True historians, in contrast, are designed to handle rapid data growth and provide simple means of taking subsets of the data offline and bringing them back online.

Myth 4: Retrieving time-series data is no different from any other type of data

With all the power of Structured Query Language (SQL) for querying data, some claim that relational databases are as good at retrieving time-series data as they are at retrieving transactional data. SQL offers great flexibility, but it rests on fundamental assumptions that do not apply to time-series data: a) there is no inherent order in the data records (in fact, time-series data is ordered by time); b) all the data is explicitly stored (in fact, most historian data represents samples from a continuum of the real signal); and c) all data is of equal significance (in fact, exceptions matter far more than steady-state values).

These differences are significant. For example, if an instrument reports a value time stamped at ‘7:59:58.603’ and a user queries a relational database for the value at ‘8:00:00.000’, no data is returned because there is no record stored at that precise time – the database does not recognise that time is a continuum. Similarly, if a temperature was ‘21,0°C’ and two minutes later was ‘23,0°C’, the database has no inherent ability to infer that halfway between these samples the temperature was approximately ‘22,0°C’.
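A minimal sketch of such interpolated retrieval, assuming simple linear interpolation between samples (the function and data layout are illustrative, not any product's API):

```python
from bisect import bisect_right

def value_at(samples, t):
    """samples: (timestamp, value) pairs sorted by timestamp."""
    times = [ts for ts, _ in samples]
    i = bisect_right(times, t)
    if i == 0:
        return samples[0][1]            # before the first sample: hold first value
    if i == len(samples):
        return samples[-1][1]           # after the last sample: hold last value
    (t0, v0), (t1, v1) = samples[i - 1], samples[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)   # linear interpolation

# 21,0 degC, then 23,0 degC two minutes (120 s) later:
samples = [(0.0, 21.0), (120.0, 23.0)]
print(value_at(samples, 60.0))          # 22.0 -- halfway between the samples
```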

In historian applications, steady-state operation is rarely what matters. If the only way for a client application to find exceptions is to query all of the data for a measurement, this places a heavy load on the whole system: server, network and client. Historians, in contrast, generally provide means of filtering out insignificant data (by comparing sequential records) to reduce the volume that must be delivered to client applications.
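One common form of this filtering is a deadband, or exception, test. The sketch below is a simplified illustration with an arbitrary threshold; real products typically make the threshold configurable per tag:

```python
# Sketch of exception ("deadband") filtering: suppress samples that differ
# from the last delivered value by less than a significance threshold, so
# steady-state readings never reach the client.
def filter_exceptions(samples, deadband):
    last = None
    for ts, value in samples:
        if last is None or abs(value - last) >= deadband:
            yield ts, value
            last = value

steady = [(t, 21.0 + 0.01 * (t % 2)) for t in range(100)]   # noise only
upset = [(100, 25.0), (101, 30.0)]                          # real excursion
delivered = list(filter_exceptions(steady + upset, deadband=0.5))
print(len(delivered))   # 3: the first sample plus the two excursion values
```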

Myth 5: All data is equal in importance and quality

In collecting thousands of data points from around a process, it is inevitable that some information is incorrect. There may be problems with physical equipment: an instrument out of range, or simply not working. To a standard database, a stored value is precisely that – a value. In a plant historian, a stored data point has not only an associated value and time, but also an indication of data quality. Storing a data point from an instrument operating outside its normal range, for example, causes a specific set of quality indicators to be stored with the value. When these are retrieved, they can alert operations or engineering personnel to the potential anomaly.

Quality also propagates into summary points: when the historian calculates an aggregate (for example, the average of a temperature over the last hour), the resulting value carries a quality factor derived from its inputs. Managing and propagating data quality within a process historian allows any report or analysis performed with that data to be flagged as suspect, alerting the consumer of the data.
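The idea can be sketched as a record carrying value, time and quality, with quality propagated into aggregates; the class and the three quality levels below are simplified illustrations, not the Wonderware data model:

```python
from dataclasses import dataclass
from statistics import mean

GOOD, UNCERTAIN, BAD = "good", "uncertain", "bad"   # simplified quality levels

@dataclass
class Sample:
    """A historian data point: value plus timestamp plus quality indicator."""
    timestamp: float
    value: float
    quality: str = GOOD

def hourly_average(samples):
    """Average only the good samples; flag the result if any input was suspect."""
    good = [s for s in samples if s.quality == GOOD]
    quality = GOOD if len(good) == len(samples) else UNCERTAIN
    return Sample(max(s.timestamp for s in samples),
                  mean(s.value for s in good), quality)

readings = [Sample(0.0, 21.0), Sample(1.0, 22.0),
            Sample(2.0, 999.9, BAD)]                # out-of-range instrument
avg = hourly_average(readings)
print(avg.value, avg.quality)                       # 21.5 uncertain
```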

Myth 6: The only options are fully relational or fully proprietary historian solutions

While it is true that most historian solutions either use fully proprietary technology to address the inherent limitations of relational databases, or fully leverage a relational database to reduce their own engineering costs, Wonderware Historian delivers the best of both worlds. It relies on a solid relational schema for managing all the relatively static configuration data, but extends the native transactional storage engine and query processor of Microsoft SQL Server with proprietary extensions that address its limitations for time-series data.

Building on Microsoft SQL Server delivers a solution that is easier to secure and manage than fully proprietary solutions, but without compromising on the fundamental capabilities required in a historian.

Summary

This white paper has discussed several reasons why a process historian is better suited than a relational database system to the task of plant information acquisition and retrieval. That is not to say commercial software has no place in an industrial environment. Today, process information is often needed outside the plant, in the business systems of the enterprise, and there is no better way to provide this interface between plant data and enterprise systems than a commercially accepted, standard interface. As pointed out in the last myth, the Wonderware Historian integrates a commercially available product (Microsoft SQL Server) and its open, standard query interface (SQL) to provide open access to plant historical data. This interface is easily understood by the IT department for reporting or for integration into the enterprise ERP system.

Wonderware’s Historian offers all the capability discussed in this paper and more. Trusted in over 25 000 installations worldwide, Wonderware Historian empowers plant operations and enterprise business users alike, delivering the right information to the right person and leaving database management where it belongs – in the enterprise IT department, not on the plant floor.

For more information contact Jaco Markwat, Wonderware Southern Africa, 0861 WONDER, jaco.markwat@wonderware.co.za, www.wonderware.co.za


Credit(s)
Supplied By: IS³ - Industry Software, Solutions & Support
Tel: +27 11 607 8100
Fax: +27 11 607 8478
Email: contact@is3.co.za
www: www.is3.co.za

