
Facility Configuration Management

10/30/2020

Over the last eight articles we have covered different aspects of facility information management in a real-world project. In this article I will focus on a facility’s configuration management.
If you would like to read previous chapters first before we take a deeper dive into facility configuration management, you can find them all here: Archive
 
In my view there are two important parts to facility configuration management. The first is the management of changes, with traceability and control of the individual information structures we’ve talked about in previous chapters: how is the information in the Functional Breakdown Structure, the Location Breakdown Structure, the product designs (EBOM) and the installed asset records managed when changes need to be made?

As an example, a system design for a water-cooling system in the Functional Breakdown Structure will most likely undergo several design changes during its lifetime, even after it has been taken into operation. Such design changes will lead to work performed on the already installed assets in the facility. Either in the form of re-calibration of existing assets, replacement of assets or even new installations and subsequent commissioning of those installations.

This leads us nicely to the second part of facility configuration management, because as the Functional Breakdown Structure (the facility design) evolves during the lifetime of a facility, we need to be able to identify at least the following:
  • What the facility’s design requirements originally were (As-Designed)
  • What we installed in the facility to fulfill the design requirements (assets) including operational configuration information (As-Built)
  • What has been done to the installed assets after commissioning of the facility (As-Maintained or As-Operated)

All of this means that it is not enough to have control of all changes. It must also be possible to state exactly what the initially agreed design requirements were, what the As-Built was like (the combination of design requirements and physically installed asset information, showing that what was installed is in accordance with the design requirements) AND how the assets have evolved as a result of operations and maintenance work since initial commissioning (As-Maintained or As-Operated).
Having control of this is extremely important, both to comply with regulatory requirements if an accident were to occur or in the event of an audit, and for analysis and tracking of the facility’s well-being and effective maintenance.


The figure below is loosely borrowed from an excellent publication from the IAEA (International Atomic Energy Agency) on configuration management in nuclear plants (IAEA-TECDOC-1335).


Figure 1: Image used in the infographic is courtesy of European Spallation Source ERIC

Although the publication is old (from 2003), it explains very well what information needs to be controlled (even if it was document-centric back then). I have translated it into what it means from a structured data management perspective. In essence: design requirements must conform to what we say is there, and what we say is there includes the information stored in the functional breakdown structure, the location breakdown structure, associated product data and all data regarding the installed assets, including operational configuration information and maintenance information.

But hang on, that is only the right-hand part of the picture!
Exactly, because we also need to be able to prove that what we say is there conforms to what is actually there physically on site in the facility, and that what is physically there on site conforms to the design requirements.

This means that work processes must be in place from the facility owner side to assure that:
  • Design data, installed assets and all associated information conform at all times
  • All changes are authorized
  • Conformance can be verified
  
How can this be achieved?
If facility data is structured and connected as described in the previous article series “PLM tales from a true mega-project” and in Plant Information Management (see Archive), configuration management becomes a lot easier, but it is still by no means trivial.
Figure 2: Important structured information for Configuration Management

As the facility design (Functional Breakdown Structure) evolves over time, it must be possible to make “snapshots”, or rather baselines, of each individual system design within the functional breakdown structure when it reaches sufficient maturity (released from a design perspective to manufacturing). These “snapshots”, when performed on each and every system in the facility, will gradually populate a complete As-Designed baseline of the full facility design.
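The article describes the mechanism, not an implementation, but as a minimal sketch of the idea (all class and attribute names below are hypothetical), a baseline can be modeled as an immutable deep copy of a system design taken at the moment of release:

```python
from copy import deepcopy
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SystemDesign:
    """One system within the Functional Breakdown Structure."""
    tag: str
    revision: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)  # sub-functions / tags

@dataclass(frozen=True)
class Baseline:
    """An immutable snapshot of one system design at a point in time."""
    label: str                # e.g. "As-Designed" or "As-Built"
    taken: date
    content: SystemDesign

def take_baseline(system: SystemDesign, label: str) -> Baseline:
    # Deep-copy so that later design changes cannot alter the snapshot.
    return Baseline(label=label, taken=date.today(), content=deepcopy(system))

# As each system is released from design, it is snapshotted; the union of
# these snapshots gradually forms the As-Designed baseline of the facility.
cooling = SystemDesign(tag="=WCS01", revision="A", attributes={"duty_kW": 250})
as_designed = take_baseline(cooling, "As-Designed")
```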

The exact same mechanism must be in place to create an As-Built, but here the installed asset information needs to be included as well, together with any “red-line drawings” or deviations from the As-Designed information. Such deviations, if managed correctly, have already introduced changes in the functional breakdown structure via a change order, which renders it different from the As-Designed baseline.

The new incremental baselines of designed systems, including design changes made during installation, together with the actually installed asset information, calibration records, certificates and traceability of performed work, together form the As-Built baseline.
When such capabilities are in place, we can say exactly what the As-Designed looked like and what the As-Built was like, and we can compare the two to determine what the differences were, why they occurred and who authorized them.
With such capabilities one could also create other forms of baselines if needed, like As-Installed, As-Commissioned etc.

The As-Maintained or As-Operated would not really be a baseline, but rather the current state of the connected information structures at any given time during operations. However, it must be possible to compare this current state with both the As-Built and the As-Designed baselines. It would also be advisable to baseline or snapshot systems at intervals, to be able to say something about how the facility has evolved, and especially prior to any large modifications to the facility.
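Comparing the current state of the connected information with a baseline then amounts to a diff over the structures. A continuation of the hypothetical sketch above:

```python
def diff_attributes(baselined: dict, current: dict) -> dict:
    """Attribute-level differences between a baseline and the current state."""
    keys = baselined.keys() | current.keys()
    return {k: (baselined.get(k), current.get(k))
            for k in keys if baselined.get(k) != current.get(k)}

# If the pump duty was changed after commissioning:
cooling.attributes["duty_kW"] = 280
print(diff_attributes(as_designed.content.attributes, cooling.attributes))
# {'duty_kW': (250, 280)} -- a difference to be explained and authorized
```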
 
Bjorn Fidjeland

PLM Tales from a true mega project Ch. 8 – Digital Twin

6/29/2020

Image courtesy of European Spallation Source ERIC

So why this strong focus on structured and connected data? Throughout the different chapters we’ve looked into the details of how the European Spallation Source have defined data structures needed throughout the lifecycle of the facility, and how interoperability between connected objects across those data structures is achieved by utilizing governed and shared master data.

If you would like to read previous chapters first before we take a deeper dive, you can find them all here: Archive
​

The figure below shows an overview of where ESS have put their focus in terms of structured data.
Figure 1

But why? What is the overall objective?

The main objective is to support the evolution from a project to a sustainable facility enabling world-leading science for more than 40 years, and to establish the foundation needed for cost-efficient operation and maintenance. High up-time and tough reliability requirements, together with tight budgets, foster a need to re-use and utilize data from all stakeholders in the project across the full facility lifecycle.

​
Figure 2.

By structuring and connecting data in the way described in this article series, ESS obtains traceability and control of all the facility data, which is vital from a regulatory perspective, as we saw in chapter 1, but will also be crucial for effective operations and maintenance.

But where does the digital twin fit into all of this?

In my view, the digital twin is in fact all the things we’ve been looking into in this article series, plus the fact that the data is all linked together, navigable and comparable. This means that if I’m in operations I can interrogate any function (tag) in the facility for its physical location, the system it is part of, the design of that particular system, the asset that implements the function in the facility and all its data (the actual physical product in the facility), the full maintenance history of that asset and when it is next scheduled for maintenance, the part the asset was sourced from including its design data, the manufacturer of the part, and so forth.

Another example would be going to a part and seeing all the functions (tags) in the overall facility where this part is used to fulfill a function from a facility design perspective, or how many physical assets sourced from this part there are in the facility and in the warehouse.

A third example would be to interrogate a physical asset to see if there are similar ones as spare parts in the warehouse, how long the asset has served at that particular functional location, whether there are any abnormal readings from any of its sensors, when it’s scheduled for maintenance or if it has served at any other functional locations during its lifetime.
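All three examples are traversals of the same linked data. A toy sketch of the idea (the object model and names below are invented for illustration, not taken from ESS):

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    number: str
    manufacturer: str

@dataclass
class Asset:
    serial: str
    part: Part                                   # what it was sourced from
    maintenance_log: list = field(default_factory=list)

@dataclass
class Tag:
    name: str
    system: str
    location: str
    installed_asset: Asset | None = None         # what implements the function

def interrogate(tag: Tag) -> dict:
    """Follow the links from a function (tag) to everything connected to it."""
    a = tag.installed_asset
    return {
        "physical_location": tag.location,
        "part_of_system": tag.system,
        "implementing_asset": a.serial if a else None,
        "sourced_from_part": a.part.number if a else None,
        "part_manufacturer": a.part.manufacturer if a else None,
        "maintenance_history": a.maintenance_log if a else [],
    }

pump = Asset("S/N 0042", Part("P-4711", "AcmePumps"),
             ["2021-03: bearing replaced"])
print(interrogate(Tag("=WCS01.PU001", "=WCS01", "Building G02, Room 113", pump)))
```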

It is not strictly necessary that the digital twin has a glossy three-dimensional representation. At least, I sometimes get the feeling that some companies tend to focus a lot on this aspect. And that is exactly what it is: only one aspect of the digital twin. Most of the other aspects are covered in this article series, and yes, there are further aspects as well depending on what kind of company you are and what needs you have.
​
The common denominator, however, is that data must be linked, navigable and comparable.
Figure 3. The kind of information a digital twin can consist of

Figure 3 shows what kind of data the digital twin can consist of, provided that the data is structured and connected. A three-dimensional representation is in itself of limited value, but connected to the underlying data structures it becomes a tremendously good information carrier, allowing an end user to quickly orient herself or himself in vast amounts of information. However, it is not a prerequisite.

I once, with another client, came across an absolutely fantastic 3D model of a facility to be used for operations. It was portrayed as a digital twin, but the associated data (the design together with the actually installed and commissioned assets) were all PDFs. My question was: if all the data is in PDFs, not data objects with real attribute values, how can it be utilized by computer systems for predictive maintenance? For instance, how can data harvested from sensors in the field via the integrated control and safety system be compared with design criteria and historical asset data to determine whether the readings are good or bad?

It could not.

In their defense, there were initiatives in place to look into other aspects and to start structuring data, but they focused on the 3D first.

In my view, the problem with such an approach is that it gives a false sense of being done once the 3D representation is in place. Basically, this would only represent the location aspect we discussed in chapter 3, only in three-dimensional space. You might argue that it could also include the spatial integration discussed in chapter 5, but I would respond that a lot more structured data, and consolidation of such data, is needed to achieve this.

The thing is… in order to achieve what is described in this article series, most companies would have to change the way they currently think and work across their different departments, which means that real business transformation would also be required. That is usually a much larger and more time-consuming obstacle than the technical ones, because it involves cultural change.

If you would like to read even more about my thoughts around the Digital Twin, please read:

Digital Twin – What needs to be under the hood?

It is my hope that this article series can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.
​
Bjorn Fidjeland

PLM Tales from a true mega project ch. 7 - Reference Data

2/28/2020

Image courtesy of European Spallation Source ERIC

In previous chapters we’ve discussed the different data and information structures that need to be in place in order to support a capital facilities project like the European Spallation Source from engineering through operations, maintenance and, ultimately, decommissioning.
Structured data is excellent, but wouldn’t it be even better to also have aligned definitions across data structures and tools?
It certainly would, so in this chapter we’re going to look into what has been done at ESS to achieve interoperability across both data structures and software tools.
​
If you would like to read previous chapters first before we take a deeper dive, you can find them all here:
Archive
​
​
The figure above shows that Tags, Parts (in EBOMs) and Installed Assets all use the same reference data library to obtain class names and attributes, meaning that a centrifugal pump is called exactly that across all structures, as well as in the different authoring tools, the PLM system and the Enterprise Asset Management system. Furthermore, they share the same attribute names and definitions, including units of measure.
​
Several years ago it was decided to use ISO 15926 as the reference data library. We were able to obtain an export of the RDL (Reference Data Library) with the excellent help of the then POSC Caesar Services, and imported ISO 15926 part 4 into the PLM platform. Easy, right? Well, not quite. We discovered that we now had more than 7000 classes, beautifully structured, and about 1700 attributes. However, none of the attributes were assigned to the main classes, as the attributes were all defined as subclasses of a class called property.

​
Figure 2. Image courtesy of European Spallation Source ERIC

Figure 2 shows a small portion of the reference data library.
What this meant, in essence, was that you could select a class to be used for a Tag, Part or Asset, and it would be clear that it was a Choke Valve across all entities, but the entities would not have any attributes defined.
​
The solution to this problem was to form a cross-discipline reference data group whose mandate is to assign the attributes ESS needs to the classes. It was soon discovered that the standard did not contain everything needed to describe a research facility, so the group also received a mandate to define new classes and attributes whenever needed. The reference data group met every week for the first two years; now it meets every second week.
The group is also tasked with defining letter codes for all classes used at ESS according to ISO 81346, the chosen tagging standard.

After every meeting, any new classes, attributes and letter codes are deployed to the master reference data library in the PLM platform. The library serves as a common data contract across tags, parts and assets, meaning that every entity gets the same set of attributes and, more importantly, identical names and definitions. This is also enforced across the different software tools, rendering integration between the tools a lot easier, as the need for complex mapping files disappears.
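As a sketch of what such a data contract amounts to (the classes and attributes below are invented; ISO 15926 part 4 defines the real ones), one class definition in the library, with inherited attributes resolved up the chain, gives every Tag, Part and Asset identical attribute names, definitions and units:

```python
# A tiny stand-in for the reference data library (RDL)
CLASS_LIBRARY = {
    "Pump": {"parent": None,
             "attributes": {"design_pressure": "bar"}},
    "CentrifugalPump": {"parent": "Pump",
                        "attributes": {"capacity": "m3/h",
                                       "impeller_diameter": "mm"}},
}

def resolve_attributes(rdl_class: str) -> dict:
    """Collect attributes up the inheritance chain, as the PLM platform does."""
    attrs: dict = {}
    while rdl_class is not None:
        definition = CLASS_LIBRARY[rdl_class]
        attrs.update(definition["attributes"])
        rdl_class = definition["parent"]
    return attrs

def instantiate(kind: str, name: str, rdl_class: str) -> dict:
    """Create a Tag, Part or Asset carrying the class's full attribute set."""
    units = resolve_attributes(rdl_class)
    return {"kind": kind, "name": name, "class": rdl_class,
            "units": units, "values": {a: None for a in units}}

tag   = instantiate("Tag",   "=WCS01.PU001", "CentrifugalPump")
asset = instantiate("Asset", "S/N 0042",     "CentrifugalPump")
assert tag["units"] == asset["units"]   # identical names, definitions, units
```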

​
Figure 3. Image courtesy of European Spallation Source ERIC

Figure 3 shows some of the attributes that have been associated with the class Valve. As the PLM platform supports inheritance of attributes in class libraries, special care is taken to add attributes at an appropriate level, so that they are valid for all subclasses.
​
Figure 4. Image courtesy of European Spallation Source ERIC

Figure 4 is from the functional breakdown structure, where some of the functional objects (tags) are listed. Please note the classification column, indicating which class from the reference data library has been used to define attributes for each specific tag.
​
Figure 5. Image courtesy of European Spallation Source ERIC
​
Let’s examine one of the tags a bit more closely. Figure 5 shows some of the attributes for the selected pressure transmitter (but without production data).
The same kind of information is available on any part selected to realize the tag requirements, and ultimately on the delivered asset itself, installed in the facility to fulfill the tag’s requirements.

The challenges described in this article are of course not unique to ESS, and several companies have done similar exercises or defined their own proprietary master data. The problem with all of them is that the resulting reference data libraries are unique to a specific project or a single company, and thereby do not solve the interoperability problems between companies participating in the value chain of a capital facilities project.

I’m happy to see that initiatives like CFIHOS (Capital Facilities Information HandOver Specification, now that’s a tongue twister) seem promising; they are worth checking out for anybody thinking about embarking on a similar journey. For ESS, however, it was never an option, as we needed usable reference data fast.

It is my hope that this article can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.

Bjorn Fidjeland

From data silos to data flow - part 1

10/19/2018

In these two articles I’ll try to explain why and how a data flow approach between the main systems during a plant’s lifecycle is far more effective than a document-based handover process between project phases. I have earlier discussed the various information structures that need to be in place across the same lifecycle; if you’re interested, the list of articles is found at the end of this article.
​
During design and engineering, the different plant and product design disciplines’ authoring tools play a major role, as they are feeder systems to the Plant PLM platform. All the information coming from these tools needs to be consolidated, managed and put under change control. The Plant PLM platform also plays a major role in documenting the technical baselines of the plant, such as As-Designed, As-Built, As-Commissioned and As-Maintained. See figure 1.
​
Figure 1.

When moving into the procurement phase, a lot of information needs to flow to the ERP system for the purchasing of everything needed to construct the plant. The first information that must be transferred is the released product designs, i.e. the Engineering Bills of Materials. This is the traditional Product Lifecycle Management domain. A released EBOM says that, seen from product engineering, everything is ready for manufacturing, and ERP can start procuring parts and materials to manufacture the product. Depending on the level of product engineering done in the plant project, this can be a lot, or just individual parts representing standard components or standard parts.

The next information that needs to go to ERP is released tag information, where the tag is connected to a released part. A typical example: a piping system is released with, let’s say, 8 pump tags, and the pumps’ individual requirements in the system can all be satisfied by a generic part from a manufacturer. This would mean that in the Plant PLM system there are 8 released pump tag objects, all connected to the same generic released part. This constitutes a validated, project-specific demand for 8 pumps. At this stage an As-Designed baseline can be created in the Plant PLM platform for that particular system.
​
This information must be transferred to ERP, where it now means that procurement should place an order for 8 pumps and manage the logistics around this. However, seen from project planning and execution, it might be identified that, according to the project execution plan, several other systems are scheduled for release shortly, which would make the order 50 pumps instead of 8. After communicating with the affected stakeholders, it may be decided to defer the order.
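In data terms, every released tag-to-part relationship is one unit of validated demand, and the demand can be aggregated per part before ordering. A small sketch of that aggregation (invented identifiers, not any particular ERP’s API):

```python
from collections import Counter

# Released (tag, part) pairs coming out of the Plant PLM platform
released_demand = [
    ("=WCS01.PU001", "PUMP-P-4711"),
    ("=WCS01.PU002", "PUMP-P-4711"),
    # ... the remaining six pump tags of the first system, and more
    # pairs as further systems are released
]

def aggregate_demand(pairs: list[tuple[str, str]]) -> Counter:
    """Count validated, project-specific demand per part."""
    return Counter(part for _tag, part in pairs)

print(aggregate_demand(released_demand))
# Counter({'PUMP-P-4711': 2}) -- procurement may defer the order until more
# systems are released and the quantity grows from 8 towards 50
```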
Figure 2.

As the order is placed, together with information regarding each specific tag requirement, preparations must be made for goods receipt, intermediate storage and work orders for installation. This is normally done in an Enterprise Asset Management (EAM) system, which also needs to be aware of the tags and their requirements, the physical locations where the arriving pumps are to be installed, and the part definition each received physical asset represents. All of this information is fed to the EAM system from the Plant PLM platform. As the physical assets are received, each of our now 50 pumps needs to be inspected and logged in the EAM system, together with the information provided by the vendor, and associated with the common part definition. If the pumps are scheduled for immediate installation, each delivered physical asset is tagged as it is installed to fulfill a dedicated function in the plant.
​
At this stage, the information about the physical asset and its relations to tag, physical location and corresponding part is sent back to the Plant PLM platform for consolidation. This step is crucial if a consolidated As-Built baseline is needed and there is a need to compare As-Designed with As-Built. Alternatively, the EAM system needs to “own” the baselines.
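A sketch of this round trip, with invented record structures: the EAM side registers each serialized individual against its part, tag and location, and the resulting links flow back for the As-Built consolidation:

```python
from dataclasses import dataclass, asdict

@dataclass
class InstalledAssetRecord:
    serial: str        # the physical individual received at the warehouse
    part: str          # the common part definition it realizes
    tag: str           # the function it is installed to fulfill
    location: str      # where in the facility it was installed
    inspection_ok: bool

def receive_and_install(serial: str, part: str, tag: str,
                        location: str) -> InstalledAssetRecord:
    """EAM side: inspect, log and tag one received pump."""
    return InstalledAssetRecord(serial, part, tag, location, inspection_ok=True)

def consolidate_in_plm(record: InstalledAssetRecord) -> dict:
    """Feed the asset-tag-part-location links back to the Plant PLM platform."""
    return asdict(record)

plm_update = consolidate_in_plm(receive_and_install(
    "S/N 0042", "PUMP-P-4711", "=WCS01.PU001", "Building G02, Room 113"))
```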
Figure 3.

The next step is to make the Integrated Control & Safety system aware of the installed assets, and this will be among the topics for the next article.

If you want to know more about the kinds of information structures and data that need to be consolidated and flow between the systems, you can find more information here:


Plant Information Management - Information Structures
Archive of articles


Bjorn Fidjeland


The header image used in this post is by Wavebreakmedia Ltd and purchased at dreamstime.com


Data Integration – Why Dictionaries…..?

8/19/2017

Most companies of more than medium size that do any engineering soon find themselves in a situation similar to the figure below.
The names of the applications might be different, but the underlying problem remains the same: engineering data is created by the different best-of-breed design tools used by the different engineering disciplines, and the data must at some point be consolidated across disciplines and communicated onwards, whether to procurement, project execution, manufacturing and/or supply chain.
​
This article is a continuation of the thoughts discussed in an earlier post called “Integration Strategies”, if the full background is wanted.

​

As the first figure indicates, this has often ended up in a lot of application-specific point-to-point integrations. For roughly the last 15 years, so-called Enterprise Service Buses have been available to organizations and to enterprise and software architects. The Enterprise Service Bus, often referred to as an ESB, is a common framework for integration that allows different applications to subscribe to published data from other applications, thereby creating a standardized “information highway” between different company domains and their software of choice.
By implementing such an Enterprise Service Bus, the situation in the company would look somewhat like the figure above. From an enterprise architecture point of view this looks fine, but what I often see in the organizations I work with is more depressing. Let’s dive in and see what often goes on behind the scenes.
Modern ESBs have graphical user interfaces that can interpret the publishing application’s data format, usually by means of XML, or rather the XSD. The same is true for the subscribing applications.
This makes it easy to create integrations by simply dragging and dropping data sources from one to the other. Often one will have to combine several attributes from one application into one specific attribute in another application, but this is also usually supported.
​
So far everything is just fine, and integration projects have become a lot easier than before. BUT, and it is a big but: what happens when you have multiple applications integrated?
The problems of point-to-point integrations have effectively been re-created inside the Enterprise Service Bus, because if I change the name of an attribute in a publishing application’s connector, all the subscribing applications’ connectors must be changed as well.
How can this be avoided? Several ESBs support the use of so-called dictionaries, and chances are that the Enterprise Service Bus, ironically, is already using one in the background.

So, what is a dictionary in this context?
Think of it as a Rosetta stone. Well, what is a Rosetta stone, you might ask. The find of the Rosetta stone was the breakthrough in understanding Egyptian hieroglyphs: the stone contains a decree with the same text in hieroglyphs, Demotic script and ancient Greek, which allowed us to decipher the hieroglyphs.
Imagine the frustration before this happened. A vast repository of information carved in stone all over the magnificent finds of an earlier civilization… and nobody could make sense of it… Sounds vaguely familiar in another context.
​
Back to our more modern integration issues.
If a dictionary, or Rosetta stone, is placed in the middle, serving as an interpretation layer, it won’t matter if the names of some attributes in one of the publishing applications change. None of the other applications’ connectors will be affected, since only the mapping to the dictionary must be changed, and that is the responsibility of the publishing application.
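A sketch of the mechanism (the attribute names are invented; the ERP-side names are merely SAP-style examples): each application maps only to and from the dictionary, so a rename in one publisher touches one mapping instead of every subscriber’s connector:

```python
# Publisher side: CAD tool attribute names -> canonical dictionary terms
TO_DICTIONARY = {
    "PartNo": "part_number",
    "Descr": "description",
}
# Subscriber side: canonical dictionary terms -> ERP attribute names
FROM_DICTIONARY = {
    "part_number": "MATNR",
    "description": "MAKTX",
}

def publish(message: dict) -> dict:
    """Translate a publisher's payload into canonical dictionary terms."""
    return {TO_DICTIONARY[k]: v for k, v in message.items()}

def subscribe(canonical: dict) -> dict:
    """Translate canonical terms into the subscriber's own attribute names."""
    return {FROM_DICTIONARY[k]: v for k, v in canonical.items()}

erp_message = subscribe(publish({"PartNo": "P-4711",
                                 "Descr": "Centrifugal pump"}))
# {'MATNR': 'P-4711', 'MAKTX': 'Centrifugal pump'}
# If the CAD tool renames "PartNo", only TO_DICTIONARY changes; no
# subscriber is affected.
```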
If such a dictionary is based on an industry standard, it will also have some very beneficial side effects.
Why?
Because if your company’s internal integration dictionary is standards-based, then the effort of generating the information sent to clients and suppliers, traditionally referred to as transmittals or submittals, becomes very small indeed.

If we expand our line of thought to the interpretation of data from operational systems (harvesting data from physical equipment in the field), commonly referred to as IoT, or to the acquisition of data through SCADA systems, then the opportunities become even greater.

In this case it really is possible to kill two birds with one stone, thereby creating a competitive advantage!

Bjorn Fidjeland


The header image used in this post is by Bartkowski and purchased at dreamstime.com

Digitalization - sure, but on what foundation?

4/7/2017

Over the last couple of years I’ve been working with some companies on digitalization projects and strategies. Digitalization is of course very attractive in a number of industries:

  • Equipment manufacturers, where digitalization can be merged with Internet Of Things to create completely new service offerings and relationships with the customers
  • Capital project EPCs and operators, where a digital representation of the delivery can be handed over as a “digital twin” to the operator, who can hook it up to EAM or MRO solutions to monitor the physical asset in real time in a virtual world. The real value for the operator here is increased up-time and lower operational costs, whereas EPCs can offer new kinds of services and, in addition, mitigate project risks better.
  • Construction industry, where the use of VDC (Virtual Design & Construction) technology can be extended to help the facility owner minimize operational costs and optimize comfort for tenants by connecting all kinds of sensors in a modern building and adjusting accordingly.
But hang on a second. Let’s look at the definition of digitalization, at least the way Gartner views it:

“Digitalization is the use of digital technologies to change a business model and provide new revenue and value-producing opportunities; it is the process of moving to a digital business.” (Source: Gartner)

…The process of moving to a digital business….

The digitalization strategies of most of the companies I’ve been working with focus on the creation of new services and revenue possibilities on the service side of the lifecycle of a product or facility, so AFTER the product has been delivered or the plant is in operation.
There is nothing wrong with that, but if the process from design through engineering and manufacturing is not fully digitalized (by which I do not mean documents in digital format, but data as information structures linked together), then it becomes very difficult to capitalize on the promises of the digitalization strategy.
​
Consider two examples.
Figure 1.
​
Figure 1 describes a scenario where the design and engineering tools work more or less independently, and where the result is consolidated in documents or Excel before being communicated to ERP. This is the extreme scenario, chosen to illustrate the point; most companies have some sort of PDM/PLM system or engineering register to perform at least partial consolidation of data before sending it to ERP. However, I often find some design or engineering tools operating as “islands” outside the consolidation layer.

So if we switch viewpoint to the new digital service offering promoted to end customers: what happens when a sensor reports back a fault in the delivered product? The service organization must know exactly what has been delivered, where the nearest spare parts are, how the product is calibrated etc. in order to quickly fix the problem with a minimum use of resources, make a profit, and exceed customer expectations to gain a good reputation.
​
How likely is that to happen with the setup in figure 1?

​
Figure 2.
​
The setup in figure 2 describes a situation where design and engineering information is consolidated together with information about the actually delivered physical products. This approach does not necessarily dictate that the information is available in one and only one software platform; the essence is that the data must be structured and consolidated.

Again, let’s switch viewpoint to the new digital service offering promoted to end customers. What happens when a sensor reports back a fault in the delivered product?
When the data is available as structured and linked data, it is instantly available to the service organization, and appropriate measures can be taken while informing the customer with accurate data.
​
My clear recommendation is that if you are embarking on a digitalization journey to enhance your service offering and offer new service models, make sure you have a solid digital foundation to build those offerings on. Because if you don’t, it will be very difficult to achieve the margins you are dreaming of.
​
Bjorn Fidjeland


The header image used in this post is by kurhan and purchased at dreamstime.com

Plant Information Management – Operations and Maintenance

1/29/2017

This post is a continuation of the posts in the Plant Information Management series of:
“Plant Information Management - Installation and Commissioning”
“Handover to logistics and supply chain in capital projects”
“Plant Engineering meets Product Engineering in capital projects”
 “Plant Information Management - What to manage?”

During operations and maintenance, the two main structures of information needed in order to operate the plant in a safe and reliable manner are the functional (tag) structure and the physically installed structure.
The functional tag structure is a multidiscipline, consolidated view of all design requirements and criteria, whereas the physically installed structure is a representation of what was actually installed and commissioned, together with its associated data. It is important to note that the physically installed structure evolves over time during operations and maintenance, so it is vital to baseline both structures together to obtain “As-Installed” and “As-Commissioned” documentation.
​
Figure 1.
​

Let’s zoom in on some of the typical use cases of the two structures.
Figure 2.
​

The requirements in the blue tag structure are fulfilled by the physical installation, the yellow structure. In a previous post I promised to get back to why they are represented as separate objects. The reason is that during operations one would often like to replace a physical individual on site with another physical individual. The new physical individual still has to fulfill the tag requirements, as the tag requirements (the system design) have not changed. In addition, we need full traceability not only of what is currently installed, but also of what used to be installed at that functional location (see figure 3).
Figure 3.

Here we have replaced the vacuum pump during operations with another vacuum pump from another vendor. The new vacuum pump must comply with the same functional requirements as the old one, even if the two might have different product designs.
This is a very common use case, where a product manufacturing company comes up with a new design a few years later. The new product might be a lot cheaper and still fulfill the requirements, so if the operator of the plant has 500 instances of such products in the facility, it makes perfect sense to replace them as the old products near end of life or require extensive maintenance.
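A toy sketch of why the two objects must be separate: the functional location keeps its requirements unchanged and accumulates an installation history as physical individuals come and go (all names below are invented):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FunctionalTag:
    name: str
    requirements: dict                 # the unchanged system design
    installed: str | None = None       # serial currently fulfilling the tag
    history: list = field(default_factory=list)  # (serial, removed_on)

    def replace_asset(self, new_serial: str, on: date) -> None:
        """Swap the physical individual; requirements and history are kept."""
        if self.installed is not None:
            self.history.append((self.installed, on))
        self.installed = new_serial

tag = FunctionalTag("=VAC01.PU004", {"capacity_m3h": 500})
tag.replace_asset("S/N A-100", date(2015, 6, 1))    # original vacuum pump
tag.replace_asset("S/N B-377", date(2022, 9, 14))   # new vendor, same reqs
print(tag.history)   # what used to be installed at this functional location
```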
 
Another very important reason to keep the tag requirements and the physically installed items as separate objects is if, or rather when, the operator wishes to execute a modification or extension project on the plant.
In such cases one must still manage and record the day-to-day operation of the plant (work requests and work orders performed on physical equipment in the plant) while at the same time running a plant design and execution project. This entails design, engineering, procurement, construction and commissioning all over again.
Figure 4.
​

The figure shows that when the blue functional tag structure is kept separate from the yellow physically installed structure, we can still operate the current plant on a day-to-day basis while performing new design work on the revised system (revision B).
This allows us to execute all the processes right up until commissioning on the new revision, and when it is successfully commissioned, revision B becomes operational.
​
This all sounds very good in theory, but in practice it is a bit more challenging, as change orders resulting from operations might in the meantime have affected the design of the previous revision. This is one of the use cases where structured or linked data, instead of a document-centric approach, really pays off, because such a change order would immediately indicate that it affects the new design, and appropriate measures can be taken at an early stage instead of nasty surprises popping up during installation and commissioning of the new system.

Bjorn Fidjeland

The header image used in this post is by nightman1965 and purchased at dreamstime.com

Handover to logistics and supply chain in capital projects

12/12/2016

This post is a continuation of the posts “Plant Engineering meets Product Engineering in capital projects” and “Plant Information Management - What to manage?”.
​
As the last post dwelt on how EPCs and product companies try to promote re-use in very Engineer-To-Order (ETO) intensive projects, this post will focus on the handover to supply chain and logistics.

The relationship between the tag, containing the project-specific requirements, and the article or part, containing the generic product design, constitutes a project-specific demand that supply chain and logistics should know about. If both the tag and the connected part are released, a “signal” is sent with information regarding both the tag’s requirements and the part’s requirements.
An exception to this rule is typically Long Lead Items (LLI). I’ve seen these handled via a special process that allows transfer of the information to supply chain and logistics even if the specific tag has not been released.
Figure 1.
As the project-specific information regarding all three tags and the intended use of the product design is sent to logistics and supply chain, it is possible to distinguish which tags need special attention and which tags can be ordered “off the shelf”.

Let’s say that tag =AB.ACC01.IS01.VS04.EP03 is in a safety-classed area and the other two are not. The purchase order for the safety-classed tag must then inform the manufacturer that documentation of the manufacturing process must follow the produced individual that will be used to implement this specific tag, whereas the other two deliveries can have standard documentation.
​
Figure 2.
Figure 2 depicts that all three manufactured products, physical items with serial numbers, come from the same Engineering Bill of Materials, but that the individual with serial number S/N: AL11234-12-15 has some extra information attached.
Because it is to be used in a safety-classed environment, the manufacturer must provide proof that the product fulfills the safety-class requirements given on the tag. This could for instance be X-ray documentation showing that all welds are up to spec, or that the alloy used is of sufficient quality.
As you can see, if the information is kept as information structures, with relationships between the different data sets detailing the context in which each piece of information is used, it becomes possible to trace and manage it all in project-specific processes.
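In structure terms, the documentation requirement lives on the tag and flows into the purchase order line for whichever serialized individual will implement that tag. A sketch with invented fields (only the EP03 tag name comes from the text; the other two are hypothetical):

```python
def purchase_order_line(tag: dict, part_number: str) -> dict:
    """Derive manufacturer documentation requirements from the tag's context."""
    safety_classed = tag["safety_class"] is not None
    return {
        "part": part_number,
        "for_tag": tag["name"],
        "documentation": (
            # follows the produced individual, e.g. S/N: AL11234-12-15
            ["weld X-ray reports", "material certificates"]
            if safety_classed else ["standard documentation"]
        ),
    }

tags = [
    {"name": "=AB.ACC01.IS01.VS04.EP03", "safety_class": "SC"},   # safety area
    {"name": "=AB.ACC01.IS01.VS04.EP04", "safety_class": None},
    {"name": "=AB.ACC01.IS01.VS04.EP05", "safety_class": None},
]
order_lines = [purchase_order_line(t, "PUMP-P-4711") for t in tags]
```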
There are some other very important information structures that I mentioned in the post “Plant Information Management - What to manage?”, like the Sales BOM (similar to the manufacturing industry’s Manufacturing BOM), the Supply BOM and warehouse management; however, I would like to cover those in more detail in later posts.
​
For now, let’s follow the journey of the manufactured products as they move into installation and commissioning.


Figure 3.
Provided that the information from the different structures, and its context in relation to the other structures, is kept, it is possible to trace exactly which physical items should be installed where, corresponding to the tag requirements in the project (note: I’ve removed the connections from tag to EBOM in this figure for clarity).

We are now able to connect the information from tag =AB.ACC01.IS01.VS04.EP03, the one in the safety-classed area, to the physical item with serial number S/N: AL11234-12-15, which carries the documentation proving that it is fit for purpose in a safety-classed area.
As the other two tags are not in a safety-classed area and have no special requirements, either of the two remaining physical pumps can be used to fulfill their tag requirements; however, we still want full traceability for commissioning, operations and maintenance.
​
Figure 4.
Since we now have a connection between the tag requirements and the physically installed individuals, we can commence with the various commissioning tests and verify that what we actually installed works as intended in relation to what we designed (the plant system); furthermore, we can associate certificates and commissioning documentation with the physical individuals.
The reason for this split between the tag object and the physical item object is something I’d like to come back to in a future post on operations and maintenance.

Bjorn Fidjeland


The header image used in this post is by Nostal6ie and purchased at dreamstime.com

Plant Engineering meets Product Engineering in capital projects

9/30/2016

This post is a follow-up to “Plant Information Management - What to manage?”.

It focuses on the necessary collaboration between Plant Engineering (highly project-intensive) and Product Engineering, which ideally should be “off the shelf” or at least Configure-To-Order (CTO), but which in reality is, more often than not, Engineer-To-Order (ETO) or one-offs.

More and more EPCs (Engineering, Procurement and Construction companies) and product companies exposed to project-intensive industries are focusing hard on ways to re-use product designs from one project to the next, or even internally in the same project, through various forms of configuration and clever use of master data; see “Engineering Master Data - Why is it different?”.
​
However, we will never get away from the fact that the product delivery in a capital project will always have to fulfill specific requirements from plant engineering, especially in safety-classed areas of the plant.
If you look at the blue object structure, it represents a consolidated view of multi-discipline plant engineering. The system might consist of several pumps, heat exchangers, sensors, instrumentation and pipes, but we are going to focus on a specific tag and its requirements, namely one of the pumps in the system.
At some point in the plant engineering process, the design is deemed fit for project procurement to start investigating product designs that might fulfill the requirements stated in the plant system design.
If the plant design is made by an EPC that does not own any product companies, the representing product is typically a single article or part with associated preferred vendors/manufacturers who might be able to produce such a product or have it in stock. If the EPC does own product companies, the representing product might be a full product design; in other words, a full Engineering Bill of Materials (EBOM) for the product.
 
This is where it becomes very interesting indeed, because the product design (EBOM) is generic in nature. It represents a blueprint, or a mold if you will, used to produce many physical products, or instances, of the product design. The physical products typically have serial numbers, and you are able to touch them. However, due to requirements from the owner/operator, the EPC will very often demand both project-specific and tag-specific documentation from the product company supplying to the project, which in turn often leads to replication of the product design X number of times to achieve compliance with the documentation requirements in the project (Documentation For Installation and Operations).
​
So, even if it is exactly the same product design, it ends up being copied for each project-specific delivery. This often happens even when, let’s say, 40 pumps are supplied by the same vendor to the same project, as responses to the requirements on 40 different tags in the plant design…
Needless to say, this results in a lot of Engineering Bills of Materials just to comply with the documentation requirements in capital projects. Even worse, for the product companies it becomes virtually impossible to determine exactly what they have delivered each time, since it is a different Engineering Bill of Materials every time, yet 97% of the information might be the same. The standardized product has now become an Engineer-To-Order product.
So how is it possible to avoid this monstrous duplication of work?
More and more companies are looking into ways to use the same data structures in different contexts. The contexts might be different deliveries to the same project, or deliveries across multiple projects, but if one is able to identify and separate the generic information from the information that needs to be project specific, it is also possible to facilitate re-use.
​
The image above shows how a generic product design (EBOM) is able to fulfill three different project-specific tags, or functional locations, in a plant. Naturally, three physical instances with serial numbers must then be manufactured based on the generic product design, but since we have the link or relationship between the project-specific requirements (the tags) and the generic design (the EBOM), one can generate project-specific data and documentation without making changes to the generic representation of the product (the EBOM).
This approach even enables the product company to identify and manufacture the one pump that happens to be in a safety-classed area of the plant design according to regulatory requirements, without having to change or duplicate the product design; more on that next time.
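A sketch of the re-use pattern (toy structures, invented names): one generic EBOM, three tag relationships, and project-specific documentation generated from the combination rather than from three copied designs:

```python
GENERIC_EBOM = {
    "part": "PUMP-P-4711",
    "revision": "C",
    "components": ["casing", "impeller", "motor"],
}

# Project-specific tags, all pointing at the same generic design
TAG_LINKS = {
    "=AB.ACC01.IS01.VS04.EP03": {"safety_classed": True},
    "=AB.ACC01.IS01.VS04.EP04": {"safety_classed": False},
    "=AB.ACC01.IS01.VS04.EP05": {"safety_classed": False},
}

def project_documentation(tag: str, context: dict) -> dict:
    """Generate tag-specific documentation without copying the EBOM."""
    return {
        "tag": tag,
        "design": GENERIC_EBOM["part"],
        "revision": GENERIC_EBOM["revision"],
        "extra_manufacturing_docs": context["safety_classed"],
    }

docs = [project_documentation(t, c) for t, c in TAG_LINKS.items()]
# Three deliveries and three serial numbers to manufacture -- but one EBOM.
```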
 
Bjorn Fidjeland


The header image used in this post is by Nostal6ie and purchased at dreamstime.com

Managing Documentation For Installation and Operations

5/1/2016

In one of my previous articles, “Plant Information Management - What to manage?”, I wrote about the different information structures needed in a plant project from early design through commissioning and operations.
​
The article left some questions hanging. One of them was: how can all this information be consolidated, managed and distributed to the various stakeholders in a plant project, at the right time and with the right quality?

Traditionally this has been called LCI, or LifeCycle Information, at least in the Norwegian oil & gas industry, and DFI/DFO (Documentation For Installation / Documentation For Operations) internationally. In short, it is the operator’s requirements and needs for information from early design through engineering, procurement, construction, up to and including commissioning. The requirements cover safety and regulatory matters, as well as information the operator finds important in order to control, monitor and guide the progress of the project executed by the EPC.
As the figure describes, the operator drives the expectations for deliveries in terms of standardization, safety & regulatory compliance and the documentation needed to operate and maintain the plant after commissioning. All stakeholders in the value chain must abide by these requirements, and it is usually the EPC who has the task of coordinating and consolidating this mountain of information. A successful commissioning includes the operator confirming that it has received all documentation and information required to operate the plant in a safe and regulatory-compliant manner. At this point the EPC is released from the project.
In theory, the documentation handover would look like the figure above; however, operators’ experience has told them that this seldom works well. Therefore, a much more frequent information exchange is required between EPC and operator leading up to commissioning. The main reason for this is that it enables the operator to monitor, check and verify progress in the project. It also makes for a more gradual build-up and maturing of documentation in the project. For the EPC it means frantic activity at each milestone to secure all required documentation from its own engineering disciplines and from all the external companies in the project’s value chain (see the pyramid in figure 1).

Traditionally, a whole host of LCI coordinators have been needed, both on the EPC side and on the operator side, to make sure that all documentation is present, and if not, to make sure it is created… The very “best” LCI coordinators on the EPC side manage to produce the information without “bothering” engineering too much. It has largely been a document-centric process, separated from the plant and product engineering process.
 
As long as EPCs are only active in one country, this approach is manageable for them. However, once they go global, they find themselves having to deal with many different safety standards, regulatory standards and, last but not least, varying requirements and formats from different operators. Even product companies and module suppliers delivering to projects in different parts of the world experience the same thing.

In recent years I’ve seen more and more interest in leaving the document-centric approach for a more data-centric one. This means that data is created and consolidated from the various disciplines in data structures, as described in the article “Plant Information Management - What to manage?”, and that the LCI process becomes an integral part of the engineering, procurement, construction and commissioning processes instead of being a largely separate one.

Of course there are varying strategies among companies when it comes to how much to manage, and how to hand it over.
  • Some create data structures in PLM-like platforms, consolidate them, manage changes and transfer data to the other stakeholders in the projects via generated transmittals. This is similar to the document-centric approach, only more automated.
  • Some companies target re-use from project to project, in addition to the aspects mentioned above, by creating data structures in catalogs that can be selected in other projects as well. The selected data structure is then replicated in a project-specific context and gets auto-generated project-specific information like tags and documentation.
  • Others again remove or reduce the need for transmittals and document handovers by letting the project stakeholders directly into their platform to work and deliver information there, instead of handing over documents.
  • One approach was to not hand over documents at all, but simply give the operator access to the platform, link the information from the data structures as deliverables to the milestones the operator required, and then hand over the entire platform to the operator as Documentation For Operations after successful commissioning.

Bjorn Fidjeland


The header image used in this post is by Norbert Buchholz and purchased at dreamstime.com