
An advisor’s most important abilities: Knowing when to keep your mouth shut

4/22/2022

I realized through a question from a young consultant the other day that I’ve never really written about the main part of what I do as an advisor. Sure, I write about strategies and their outcomes, but not really about the advisory role itself.

Coming back to the question from the young consultant, she asked me “what would you say is the single most important ability for you as an advisor?” I thought for a while before responding: “the ability to keep my mouth shut”. She looked at me like I just fell out of the sky, so some further explanation was needed.

In my experience, management in particular has fewer people to confide in. As the common phrase goes, “the closer to the top you get, the lonelier it gets”. And it is true in many respects: these people have a lot of responsibility and power to influence the everyday working lives of people in the organization, so most of them are very careful about speaking their mind too early for fear of either being misinterpreted or causing rumors that could potentially be damaging.

I’m going to use Tom as an example. Tom is CIO of a large company, and Tom is not his real name, by the way.
We were reviewing a vision and potential strategies to support it, and what each strategy would mean for internal business processes (ways of working) etc. I soon realized that Tom was in desperate need of a sounding board, and sometimes, not even that. Just the opportunity to formulate his thoughts calmly into the spoken word was enough to unlock several strategies and business opportunities.
My task would then be to simply probe and interrogate those strategies from different angles, and offer possible consequences to both ways of working and viable implementations.
If I had not kept my mouth shut to give Tom time to arrange his thoughts, his creative thinking process would have been interrupted, and the sessions would have been nowhere near as valuable.

I’ve seen this pattern repeat itself with several clients.

Another essential rule is to never, ever repeat anything from a conversation like the one described above outside the room, unless explicitly told to. Such conversations must always be treated as confidential to foster an environment of candidness and openness. If that rule is not adhered to, rumors about new directions start flying, they get tweaked along the way, and before you know it the organization is in turmoil.

Outcome: You will have failed as an advisor, your client will never trust you again, you will not work with that company anymore, and the grapevine will ensure that you get fewer opportunities to work with other companies as well.

So, yeah, I would definitely say that an advisor’s most important ability is knowing when to keep your mouth shut.
 
Bjorn Fidjeland

Digital Twin - What’s in it for the facility owner

4/1/2022

All facility owners want a facility that produces 24/7, 365 days a year; however, nobody has one.

Production needs to be halted for all sorts of reasons, both scheduled, like planned maintenance, and unscheduled, like interventions due to failures in critical equipment. The digital twin promises that if you have an exact data-based replica of your physical facility, one that actually “talks to” and “understands” your physical facility, you will be able to greatly optimize operations, maintenance, risk management, safety etc.

How?
Well, if data is acquired in real time from sensors on equipment in the operating facility and fed to the digital twin for analysis against the design and engineering specifications of both the equipment itself and the facility system in which it operates, then one can learn really interesting things: Is the equipment operating within the thresholds set by the system design? Is it nearing the end of its useful life? Is it behaving oddly, and do we have historical knowledge to predict what happens when it behaves like this? Can we reduce throughput and expect to reach the next scheduled maintenance window before it breaks, thereby limiting production downtime?

These are the kinds of insights that facilitate proactive actions and predictive maintenance instead of reactive and corrective maintenance, thereby increasing operational time.
In addition, if you do have such a data-based, up-to-date replica of the facility, it becomes a lot easier to simulate and test everything from hazardous operations to regular work with Lock-Out Tag-Out procedures, installation planning and execution of new equipment, inspection routes etc., because you can train for it in the virtual world of the digital replica. Of course, this is only true if the digital replica is kept up to date.
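To make the comparison against design thresholds a bit more concrete, here is a minimal sketch in plain Python. The equipment limits and the 80 % early-warning rule are hypothetical assumptions for illustration, not values from any particular facility, standard or vendor tool.

```python
# A minimal sketch of a digital-twin style check: compare live sensor readings
# against thresholds taken from the equipment/system design specification.
# The limit values and the 80 % early-warning rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DesignLimits:
    max_vibration_mm_s: float
    max_bearing_temp_c: float

@dataclass
class SensorReading:
    vibration_mm_s: float
    bearing_temp_c: float

def assess(reading: SensorReading, limits: DesignLimits) -> str:
    """Classify a reading relative to the design envelope."""
    if (reading.vibration_mm_s > limits.max_vibration_mm_s
            or reading.bearing_temp_c > limits.max_bearing_temp_c):
        return "outside design limits - plan corrective action"
    if (reading.vibration_mm_s > 0.8 * limits.max_vibration_mm_s
            or reading.bearing_temp_c > 0.8 * limits.max_bearing_temp_c):
        return "approaching design limits - schedule predictive maintenance"
    return "within design envelope"

limits = DesignLimits(max_vibration_mm_s=4.5, max_bearing_temp_c=85.0)
print(assess(SensorReading(vibration_mm_s=4.1, bearing_temp_c=71.0), limits))
# -> approaching design limits - schedule predictive maintenance
```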
So how can we go about actually getting such a digital replica of our facility?

There are essentially two ways to get there:
​
Facility owners can start specifying that a digital replica of the facility is a part of the EPC (I-C) contract, and that this delivery is just as important as the actual physical facility.
I have seen some facility owners moving in this direction; however, they then have to specify exactly what constitutes a successful digital replica delivery, and then make sure that it is updated continuously during operations.


Figure 1: "Digital Twin" created by EPC and handed over as part of commissioning
​
Or, if the facility is already built and operational, laser scans can be performed to gain an As-Operated model of the facility. However, this will only give you a pretty model. Data about initial design requirements, installed assets and their design, data from installation and commissioning, and what has happened to the equipment and systems since then must be reverse engineered and connected to achieve a digital twin fit for purpose.
Figure 2: "Digital Twin" created during operations by laser scanning and reverse engineering

Both of these approaches have one important thing in common. They both heavily depend on a properly defined information model that can handle information exchange across multiple disciplines, software tools and even companies throughout the lifecycle of the facility.

To achieve that, interoperability is vital.

What does that mean then, and how can it be done?
The “PLM tales from a true mega-project” series and "Digital Twin - What needs to be under the hood?" offer ways of doing it.
 
A facility owner that owns multiple facilities would benefit even more from having such a defined information model, as it can be shared across facilities and digital twins. This would allow for a new level of insight across facilities. As an example: if a certain type of equipment keeps failing under certain circumstances, it can immediately be analyzed whether that type of equipment is used the same way not only in one facility, but in all other owned facilities as well. This would enable much more effective knowledge sharing across all owned sites and prevent unnecessary downtime.
​
​In my view, any facility owner embarking on a “digital twin” journey should pay great attention to the information model behind the “digital twin”, and devise strategies for how benefits can be utilized across a portfolio of twins as well as within a single twin.
​

After all, it does not make sense to make the same mistakes at every facility when the knowledge to prevent them is there.
 
Bjorn Fidjeland

The two facility images used in this post are by Narmada Gharat and were purchased at Dreamstime.com

Opportunities and strategies - Product Configuration Lifecycle Management

4/25/2021


This time, an article aimed at the more traditional Product Lifecycle Management domain, and especially at configurable products, or so-called Configure To Order (CTO) products. This article is a direct result of discussions I’ve had with Henrik Hulgaard, the CTO of Configit, on Configuration Management in general and Product Configuration Management in particular. Configit specializes in Product Configuration Management, or, as they prefer to call it, Configuration Lifecycle Management.
 
Most businesses that design, manufacture and sell products have a system landscape in place to support key areas during the lifecycle of a product pretty much as in the image below (there are of course differences from company to company).

​
Figure 1.

This works well as long as the product lifecycle is linear, as it has mostly been in the past. However, more and more companies strive to let customers “personalize” their products (that is, configure them to suit their individual needs), to harvest data and behavior from “the field” through sensors to detect trends in usage, and to offer new services while the product is in use (operational). In my view, the lifecycle then cannot be linear anymore, because all phases of the lifecycle need feedback and information from the other phases to some degree. You may call this “a digital thread”, “digital twin” or “digital continuity” if you will (figure 2).
Figure 2.

Such a shift puts enormous requirements on traceability and change management of data all the way from how the product was designed, through to how it is used, how it is serviced and ultimately how it is recycled. If the product is highly configurable, the number of variants of the product that can be sold and used is downright staggering.
Needless to say, it will be difficult to offer a customer good service if you do not know what variant of the product the customer has purchased, and how that particular instance of the product has been maintained or upgraded in the past.
 
So, what can a company do to address these challenges and also the vast opportunities that such feedback loops offer?

If we consider the three system domains that are normally present (there are often more), they are more often than not quite siloed. In my experience that is not because the systems cannot be integrated, but more a result of organizations still working in quite a silo-oriented way (Figure 3).

​


Figure 3.

All companies I’ve worked with want to break down these silos and internally become more transparent and agile, but which domain should take on the responsibility of managing the different aspects of product configuration data? I mean, there is the design & engineering aspect, the procurement aspect, the manufacturing aspect, the sales aspect, the usage/operation aspect, the service/maintenance aspect and ultimately the recycling aspect.
 
Several PLM systems today have configuration management capabilities, and for many companies it would make sense to at least manage product engineering configurations here, but where do you stop? I mean, sooner or later you will have to evaluate whether more transaction-oriented data should be incorporated in the PLM platform, which is not a PLM system’s strong point (figure 4).
Figure 4.

On the other hand, several ERP systems also offer forms of configuration management, either as an add-on or as part of their core offering. The same question needs to be answered here: where does it make the most sense to stop, given that ERP systems are transaction-oriented, while PLM systems are much more process- and iteration-oriented (figure 5)?


Figure 5.

The same questions need to be asked and answered for the CRM scenario: where does it make sense to draw the boundaries towards ERP or PLM, as in figure 6?

​
Figure 6.

I have seen examples of companies wanting to address all aspects with a single software vendor’s portfolio, but in my experience, this only masks the same questions within that one portfolio of software solutions. Who does what, where, with responsibility for what type of data, and when, is not tackled by using a single vendor’s software. Those are organizational and work-process questions, not software questions.
 
Another possible solution is to utilize what ERP, PLM and CRM systems are good at in their respective domains, and implement the adjoining business processes there. Full Product Configuration Management, or Configuration Lifecycle Management, needs aspects of data from all the other domains to effectively manage the full product configuration, so a more domain-specific Configuration Management platform could be introduced.


Figure 7.

Such a platform will have to be able to reconcile information from the other platforms and tie it together correctly, hence it would need a form of dictionary to do that. In addition, it needs to define or at least master the ruleset defining what information from PLM can go together with what information in ERP and CRM to form a valid product configuration that can legally be sold in the customer’s region.

As an example, consider: which product design variant that meets the customer requirements can be manufactured most cost-effectively and nearest the customer, with minimal use of resources, while still fulfilling regulatory requirements in that customer’s country or region?
These are some of the questions that must be answered.
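As an illustration only, the sketch below shows the shape of such a cross-domain rule. The entities and the rule itself are hypothetical and heavily simplified; this is not Configit’s model or any specific vendor’s API, just a way to show that a valid configuration depends on data mastered in PLM, ERP and CRM at the same time.

```python
# Hypothetical sketch: a configuration is only valid if constraints from all
# three domains hold at the same time - engineering (PLM), manufacturing (ERP)
# and the sales/regulatory context (CRM).
from dataclasses import dataclass

@dataclass(frozen=True)
class EngineeringVariant:          # aspect mastered in PLM
    variant_id: str
    features: frozenset

@dataclass(frozen=True)
class PlantCapability:             # aspect mastered in ERP
    plant: str
    producible_variants: frozenset

@dataclass(frozen=True)
class SalesContext:                # aspect mastered in CRM
    region: str
    banned_features: frozenset     # e.g. features not certified in that region

def is_valid_configuration(variant: EngineeringVariant,
                           plant: PlantCapability,
                           sale: SalesContext) -> bool:
    """Sellable only if it can be built at the plant and is legal in the region."""
    can_be_built = variant.variant_id in plant.producible_variants
    is_legal = variant.features.isdisjoint(sale.banned_features)
    return can_be_built and is_legal

variant = EngineeringVariant("V-042", frozenset({"turbo", "lead_solder"}))
plant = PlantCapability("Plant-EU-1", frozenset({"V-042", "V-043"}))
sale = SalesContext("EU", banned_features=frozenset({"lead_solder"}))
print(is_valid_configuration(variant, plant, sale))   # False: banned feature in region
```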

More strategic reasons to evaluate a setup like in figure 7 could be:
  • As the departmental silos in an organization are often closely linked to the software platform domains, it might be easier to ensure collaboration and acceptance by key stakeholders across the organization with a “cross-cutting” platform that thrives on quality information supplied by the other platforms.
  • It poses an opportunity for companies with a strategy of not putting too many eggs in the basket of one particular software system vendor.
  • It could foster quality control of information coming from each of the other domains as such a CLM solution is utterly dependent on the quality of information from the other systems.
  • Disconnects in the information from the different aspects can be easily identified.

I would very much like to hear your thoughts on this subject.
​
Bjorn Fidjeland 


​The header image used in this post is by plmPartner

Facility Configuration Management

10/30/2020

During the last eight articles we have covered different aspects of facility information management in a real-world project. In this article I will focus on a facility’s configuration management.
If you would like to read previous chapters first before we take a deeper dive into facility configuration management, you can find them all here: Archive
 
In my view there are two important parts to facility configuration management. One is the management of changes, traceability and control of the individual information structures we’ve talked about in previous chapters: how is the information in the Functional Breakdown Structure, the Location Breakdown Structure, product designs (EBOM) and installed assets managed when changes need to be made?

As an example, a system design for a water-cooling system in the Functional Breakdown Structure will most likely undergo several design changes during its lifetime, even after it has been taken into operation. Such design changes will lead to work performed on the already installed assets in the facility. Either in the form of re-calibration of existing assets, replacement of assets or even new installations and subsequent commissioning of those installations.

This leads us nicely to the second part of facility configuration management, because as the Functional Breakdown Structure (the facility design) evolves during the lifetime of a facility, we need to be able to identify at least the following:
  • What the facility’s design requirements originally were (As-Designed)
  • What we installed in the facility to fulfill the design requirements (assets) including operational configuration information (As-Built)
  • What has been done to the installed assets after commissioning of the facility (As-Maintained or As-Operated)

This all means that it is not enough to have control of all changes. It must also be possible to say what the exact initially agreed design requirements were, what the As-Built was like (a combination of design requirements and physically installed asset information, to verify that what was installed is in accordance with the design requirements) AND how the assets have evolved as a result of operations and maintenance work since initial commissioning (As-Maintained or As-Operated).
Having control of this is extremely important, both to be in compliance with regulatory requirements if an accident were to occur or in the event of an audit, and for analysis and tracking of the facility’s well-being and effective maintenance.


The figure below is loosely borrowed from an excellent publication from IAEA (International Atomic Energy Agency) on configuration management in nuclear plants (IAEA-TECDOC-1335).


Figure 1: Image used in infographic is courtesy of European Spallation Source ERIC

Although the publication is old (from 2003), it explains very well what information needs to be controlled (even if it was document centric back then). I have translated it to what it will mean from a structured data management perspective. In essence: Design requirements must conform to what we say is there, and what we say is there includes the information stored in a functional breakdown structure, location breakdown structure, associated product data and all data regarding the installed asset including operational configuration information and maintenance information.

But hang on, that is only the right-hand part of the picture!
Exactly, because we also need to be able to prove that what we say is there conforms to what is actually there physically on site in the facility, and that what is physically there on site conforms to the design requirements.

This means that work processes must be in place from the facility owner side to assure that:
  • Design data, installed assets and all associated information conform all of the time
  • All changes are authorized
  • Conformance can be verified
  
How can this be achieved?
If facility data is structured and connected as in the previous article series “PLM tales from a true mega-project” and Plant Information Management (see Archive), configuration management becomes a lot easier, but it is still by no means trivial.
Figure 2: Important structured information for Configuration Management

As the facility design (Functional Breakdown Structure) evolves over time, it must be possible to make “snapshots”, or rather baselines, of each individual system design within the functional breakdown structure when it reaches sufficient maturity (released from a design perspective to manufacturing). These “snapshots”, when performed on each and every system in the facility, will gradually populate a complete As-Designed baseline of the full facility design.

The exact same mechanism must be in place to create an As-Built, but here the installed asset information also needs to be included, as well as any “red-line drawings” or deviations from the As-Designed information. Such deviations, if managed correctly, have already introduced changes in the functional breakdown structure via a change order, which renders it different from the As-Designed baseline.

The new incremental baselines of designed systems, including design changes made during installation, together with actually installed asset information, calibration, certificates and traceability of performed work, altogether form the As-Built baseline.
When such capabilities are in place, we are able to say exactly what the As-Designed looked like and what the As-Built was like, and it allows for comparison between the two in order to determine what the differences were, why they occurred and who authorized them.
With such capabilities one could also create other forms of baselines if needed, like As-Installed, As-Commissioned etc.
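A minimal sketch of the baseline mechanism, assuming each tag or system design can be flattened into a set of attribute values, could look like the Python below: a baseline is an immutable snapshot, and comparing two baselines surfaces exactly what changed, which a real implementation would then trace back to authorized change orders.

```python
# Minimal sketch of baselining and comparing facility configurations.
# Assumes each tag/system can be flattened into a dict of attribute values.
from copy import deepcopy

def take_baseline(structure: dict) -> dict:
    """Freeze the current state of a breakdown structure (e.g. As-Designed)."""
    return deepcopy(structure)

def compare_baselines(old: dict, new: dict) -> dict:
    """Report tags that were added, removed or changed between two baselines."""
    return {
        "added":   {t: new[t] for t in new.keys() - old.keys()},
        "removed": {t: old[t] for t in old.keys() - new.keys()},
        "changed": {t: (old[t], new[t])
                    for t in old.keys() & new.keys() if old[t] != new[t]},
    }

# Hypothetical water-cooling system: design released -> As-Designed baseline
as_designed = take_baseline({"=WCS01.PU001": {"flow_m3_h": 120},
                             "=WCS01.VA002": {"dn": 100}})
# A deviation during installation, introduced via an authorized change order
current_state = {"=WCS01.PU001": {"flow_m3_h": 150},
                 "=WCS01.VA002": {"dn": 100}}
as_built = take_baseline(current_state)
print(compare_baselines(as_designed, as_built)["changed"])
# -> {'=WCS01.PU001': ({'flow_m3_h': 120}, {'flow_m3_h': 150})}
```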
​
The As-Maintained or As-Operated would not really be a baseline, but rather the current state of the connected information structures at any given time during operations. However, it must be possible to compare the current state of the connected information with both the As-Built and the As-Designed baselines. It would also be advisable to perform baselines or snapshots of systems at intervals to be able to say something about how the facility has evolved, and especially prior to any large modifications to the facility.
 
Bjorn Fidjeland

PLM Tales from a true mega project Ch. 8 – Digital Twin

6/29/2020

Image courtesy of European Spallation Source ERIC

So why this strong focus on structured and connected data? Throughout the different chapters we’ve looked into the details of how the European Spallation Source has defined the data structures needed throughout the lifecycle of the facility, and how interoperability between connected objects across those data structures is achieved by utilizing governed and shared master data.

If you would like to read previous chapters first before we take a deeper dive, you can find them all here: Archive
​

The figure below shows an overview of where ESS have put their focus in terms of structured data.
Figure 1

But why? What is the overall objective?

The main objective is to support the evolution from project to a sustainable facility enabling world-leading science for more than 40 years, and to establish the foundation needed for cost-efficient operation and maintenance. High up-time and tough reliability requirements, together with tight budgets, foster a need to re-use and utilize data from all stakeholders in the project across the full facility lifecycle.

​
Figure 2.

By structuring and connecting data in the way described in this article series, ESS obtains traceability and control of all the facility data, which is vital from a regulatory perspective as we saw in chapter 1, but will also be crucial to obtain effective operations and maintenance.

But where does the digital twin fit into all of this?

In my view, the digital twin is in fact all the things we’ve been looking into in this article series, and the fact that the data is all linked together, navigable and comparable. This means that if I’m in operations I can interrogate any function (tag) in the facility for its physical location, what system it is part of, the design of that particular system, the asset that is implementing the function in the facility and all its data (the actual physical product in the facility), the full maintenance history of that asset and when it is next scheduled for maintenance, what part the asset was sourced from including its design data, the manufacturer of the part, and so forth.

Another example would be going to a part and seeing all the functions (tags) in the overall facility where this part is used to fulfill a function from a facility design perspective, or how many physical assets there are in the facility and in the warehouse sourced from this part.

A third example would be to interrogate a physical asset to see if there are similar ones as spare parts in the warehouse, how long the asset has served at that particular functional location, whether there are any abnormal readings from any of its sensors, when it’s scheduled for maintenance or if it has served at any other functional locations during its lifetime.
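The examples above all boil down to following links between connected objects. Below is a toy sketch of such a navigable structure in Python; the class names, attributes and values are hypothetical illustrations, not the ESS data model or any particular PLM system’s schema.

```python
# Toy sketch of linked objects that make the interrogations above possible:
# a functional location (tag) points to the asset fulfilling it, the asset
# points to the part (design) it was manufactured from, and so on.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Part:
    part_number: str
    manufacturer: str

@dataclass
class Asset:
    serial_number: str
    part: Part
    maintenance_history: List[str] = field(default_factory=list)

@dataclass
class Tag:
    tag_id: str
    system: str
    location: str
    installed_asset: Optional[Asset] = None

pump_part = Part("P-CENT-010", "ACME Pumps")
pump_asset = Asset("SN-99812", pump_part,
                   ["2021-05: commissioned", "2023-02: bearing replaced"])
tag = Tag("=WCS01.PU001", system="Water cooling", location="+ESS.G02.100",
          installed_asset=pump_asset)

# From the tag we can reach everything else without leaving the data model:
print(tag.location, tag.installed_asset.serial_number, tag.installed_asset.part.manufacturer)
```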

It is not strictly necessary that the digital twin has a glossy three-dimensional representation. At least I sometimes get the feeling that some companies tend to focus a lot on this aspect. And that’s exactly what it is, only one aspect of the digital twin. Most of the other aspects are covered in this article series, and yes there are other aspects as well depending on what kind of company you are, and what needs you have.
​
The common denominator however is that data must be linked, navigable and comparable.
Figure 3. The kind of information a digital twin can consist of

Figure 3 shows what kind of data the digital twin can consist of, provided that the data is structured and connected. A three-dimensional representation is in itself of limited value, but if connected to underlying data structures it would be a tremendously good information carrier, allowing an end user to quickly orientate herself or himself in vast amounts of information. However, it is not a pre-requisite.

I once, with another client, came across an absolutely fantastic 3D model of a facility to be used for operations. It was portrayed as a digital twin, but the associated data (design together with actual installed and commissioned assets) were all PDFs. My question was: if all the data is in PDFs and not data objects with real attribute values, how can it be utilized by computer systems for predictive maintenance? For instance, how can data harvested from sensors in the field via the integrated control and safety system be compared to design criteria and historical asset data to determine whether the readings are good or bad?

It could not.

To their defense, there were initiatives in place to look into other aspects and to start structuring data, but they focused on the 3D first.

In my view, the problem with such an approach is that it gives a false sense of being done when the 3D representation is in place. Basically, this would only represent the location aspect we discussed in chapter 3, just in three-dimensional space. You might argue that it could also include the spatial integration discussed in chapter 5, but I would respond that a lot more structured data, and consolidation of such data, is needed to achieve this.

The thing is…. In order to achieve what is described in this article series, most companies would have to change the way they are currently thinking and working across their different departments, which brings us to the fact that real business transformation would also be required. The latter is most of the time a much larger and more time-consuming obstacle than the technical ones because it involves a cultural change.

If you would like to read even more about my thoughts around the Digital Twin, please read:

Digital Twin – What needs to be under the hood?

It is my hope that this article series can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.
​
Bjorn Fidjeland

PLM Tales from a true mega project ch. 7 - Reference Data

2/28/2020

Image courtesy of European Spallation Source ERIC

In previous chapters we’ve discussed the different data and information structures that need to be in place in order to support a capital facilities project like the European Spallation Source from engineering through operations, maintenance and ultimately decommissioning.
Structured data is excellent, but wouldn’t it be even better to also have aligned definitions across data-structures and tools?
It certainly would, so in this chapter we’re going to look into what has been done at ESS to achieve interoperability across both data structures and software tools.
​
If you would like to read previous chapters first before we take a deeper dive, you can find them all here:
Archive
​
​
Picture
The figure above shows that Tags, Parts (in EBOMs) and Installed Assets all use the same reference data library to obtain class names and attributes, meaning that a centrifugal pump is called exactly that across all structures as well as in the different authoring tools, the PLM system and the Enterprise Asset Management system. Furthermore, they share the same attribute names and definitions, including units of measure.
​
Several years ago, it was decided to use ISO 15926 as a reference data library. We were able to obtain an export of the RDL (Reference Data Library) with the excellent help of the then POSC Caesar Services, and imported ISO 15926 part 4 into the PLM platform. Easy, right? Well, not quite. We discovered that we now had more than 7000 beautifully structured classes and about 1700 attributes. However, none of the attributes were assigned to the main classes, as the attributes were all defined as subclasses of a class called property.

​
Figure 2. Image courtesy of European Spallation Source ERIC

Figure 2 shows a small portion of the reference data library.
What this in essence meant was that you could select a class to be used for a Tag, Part or an Asset and it would be clear that it was a Choke Valve across all entities, but the entities would not have any attributes defined.
​
The solution to this problem was to form a cross-discipline reference data group whose mandate is to assign the attributes ESS needs to the classes. It was soon discovered that the standard did not contain everything needed to describe a research facility, so the group also received a mandate to define new classes and attributes whenever needed. The reference data group met every week for the first two years, but now it meets every second week.
The group is also tasked with defining letter codes for all classes to be used at ESS according to ISO 81346, which is the chosen tagging standard.

After every meeting, any new classes, attributes and letter codes are deployed to the master reference data library in the PLM platform. The library serves as a common data contract across tags, parts and assets, meaning that every entity gets the same set of attributes and, more importantly, identical names and definitions. This is also enforced across different software tools, rendering integration between the tools a lot easier as the need for complex mapping files disappears.

​
Figure 3. Image courtesy of European Spallation Source ERIC

Figure 3 shows some of the attributes that have been associated with the class Valve. As the PLM platform supports inheritance of attributes in class libraries, special care is taken to add attributes at an appropriate level, so that the attributes are valid for all subclasses.
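As a small illustration of that inheritance idea (illustrative only, not the actual ISO 15926 RDL content or the PLM platform’s implementation), attributes placed at the right level in a class library automatically become valid for every subclass:

```python
# Illustration of attribute inheritance in a class library: attributes added
# at the right level automatically become available on every subclass.
class RDLClass:
    attributes: set = set()

    @classmethod
    def all_attributes(cls) -> set:
        """Collect attributes defined on this class and all of its parents."""
        collected = set()
        for klass in cls.__mro__:
            collected |= getattr(klass, "attributes", set())
        return collected

class Valve(RDLClass):
    attributes = {"design pressure", "design temperature", "nominal diameter"}

class ChokeValve(Valve):
    attributes = {"choke characteristic"}   # only what is specific to the subclass

print(sorted(ChokeValve.all_attributes()))
# ['choke characteristic', 'design pressure', 'design temperature', 'nominal diameter']
```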
​
Figure 4. Image courtesy of European Spallation Source ERIC

Figure 4 is from the functional breakdown structure where some of the functional objects (tags) are listed. Please note the classification column indicating which class from the reference data library has been used to define attributes to each specific tag.
​
Figure 5. Image courtesy of European Spallation Source ERIC
​
Let’s examine one of the tags a bit closer. Figure 5 shows some of the attributes for the specific pressure transmitter selected (but without production data).
The same kind of information is available on any part selected to realize the tag requirements and ultimately on the delivered asset itself that was installed in the facility to fulfill the tag’s requirements.

The challenges described in this article are of course not unique to ESS, and several companies have done similar exercises or defined their own proprietary master data. The problem with all of them is that the result is a reference data library unique to a specific project or a single company, which does not solve the interoperability problems between companies participating in the value chain of a capital facilities project.

I’m happy to see that initiatives like CFIHOS (Capital Facilities Information HandOver Specification, now that’s a tongue twister) seem promising, and they are worth checking out for anybody thinking about embarking on a similar journey; however, for ESS it was never an option, as we needed usable reference data fast.

It is my hope that this article can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.

Bjorn Fidjeland

PLM tales from a true megaproject Ch. 6 – Asset Management

12/15/2019

Image courtesy of European Spallation Source ERIC
​

In this chapter we’re going to take a look at how physically installed assets are treated from an information management perspective, how assets are related to their specifying tag information, physical location and work performed on the assets themselves from arrival on site to installation and commissioning.
If you would like to read previous chapters first before we take a deeper dive, you can find them all here: Archive


​
Figure 1.
​
As physical assets arrive at ESS they are registered in the Enterprise Asset Management (EAM) system through a goods receival process, and work orders are then required to install the asset to fulfill the tag requirements stated in the Functional Breakdown structure during design and engineering.
​
Figure 2. Image courtesy of European Spallation Source ERIC

Figure 2 is from the Enterprise Asset Management system and shows a subset of installed assets. Note that the tags they fulfil are called Positions. Information regarding tag/position, location etc. comes from the plant PLM system via integration whereas the asset information is registered in the EAM system and managed there. All asset documentation is then fed back to the plant PLM system for consolidation across all information structures. 
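As a rough sketch of that flow (the payloads and field names below are hypothetical, not the actual ESS integration or any specific EAM system’s API): tag/position and location master data is pushed from the plant PLM system, the asset is registered against it in the EAM system at goods receival, and the resulting asset record can then be fed back for consolidation.

```python
# Rough sketch of the PLM <-> EAM flow: tag/position and location master data
# comes from the plant PLM system via integration; the asset is registered
# against it in the EAM system at goods receival, and the asset record is
# then available to feed back for consolidation. Field names are hypothetical.
position_from_plm = {
    "position": "=WCS01.PU001",            # tag from the Functional Breakdown Structure
    "location": "+ESS.G02.100.1001.102",   # from the Location Breakdown Structure
    "design_requirements": {"flow_m3_h": 120},
}

def register_asset(goods_receipt: dict, position: dict) -> dict:
    """Create the asset record in the EAM system at goods receival."""
    return {
        "serial_number": goods_receipt["serial_number"],
        "fulfils_position": position["position"],
        "located_at": position["location"],
        "status": "Received",
    }

asset_record = register_asset({"serial_number": "SN-99812"}, position_from_plm)
print(asset_record)   # fed back to the plant PLM system for consolidation
```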
Figure 3. Image courtesy of European Spallation Source ERIC

The EAM system governs work performed on assets in the facility from preparation and rigging to installation work orders, commissioning and maintenance work orders. Figure 3 shows a chart of different types of work orders executed over a short period of time.
​
The information from the plant PLM system entered during design and engineering is now put to good use as it provides all information about what function the asset is supposed to fulfil in the facility, how it should be calibrated and where it is to be installed. All this information is accessible directly from the EAM system for the people performing the work.
Figure 4. Image courtesy of European Spallation Source ERIC

Figure 4 shows detailed information about the asset. Through the Position/Tag and Location, all information and documentation from engineering is available. Furthermore, we can see that the asset is installed and that commissioning has been performed.
​
All documentation including needed certification regarding the Asset, together with design documentation is available through one screen for maintenance personnel. To make access and input of relevant information easier for persons working in the field, a simple user interface for rugged hand held devices has been put in place as an overlay to the EAM system.
Figure 5. Image courtesy of European Spallation Source ERIC

So with this, the “information circle” is complete with structured data all the way from design and engineering through installation, commissioning, operations and maintenance.
Figure 6. Image courtesy of European Spallation Source ERIC
 
Using this principle has allowed the European Spallation Source to move from document-centric handovers between lifecycle phases to data-centric transitions, where the handover is in terms of responsibility for the data needed.

But hold on a second. This all explains the different data structures needed throughout the plant lifecycle. However, it does not explain how data on the different entities across the structures can be interpreted and compared to gain meaning and insight. So, how to achieve interoperability of data across disciplines and software tools?

That will be the topic of my next article in this series.

It is my hope that this article can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.
​
Bjorn Fidjeland

PLM tales from a true megaproject ch. 5 - Spatial Integration

9/20/2019

Image courtesy of ESS Spatial Integration Team

In the past chapters I’ve talked an awful lot about structured data and information structures, and yes, in my view this is very important, as it is the very essence of effective Plant Lifecycle Management. In this chapter, however, let’s take a breather from the data structures and have a look at how ESS manages the aspect of space management (which is also a structure, of course, I almost hear you say, but it looks a lot shinier, and by the way, yes, it is connected to the tag structure (FBS) and the other information structures).

At ESS there is a team, headed by Fabien Rey, responsible for Spatial Integration, which includes an all-discipline 3D master model of the entire facility called the EPL (ESS Plant Layout).
​
So, what is this Spatial Integration?
​
It is defined as configuration management of the available space. This means that everything that is designed, that will go into the facility and that will occupy space must have received an initial space claim, which is then refined throughout the engineering process. This is true for all disciplines, from conventional building, machine systems, product engineering and plant & process to electrical.
Image courtesy of ESS Spatial Integration Team

When examining the EPL from afar, it looks pretty much like what you would expect from any architectural model, but when focusing on the machine aspects in the facility it gets more interesting. However, as ESS is a huge facility, there is still not much detail.
Image courtesy of ESS Spatial Integration Team

Let’s zoom in on the tiny little area at the bottom right corner, which is where the proton beam starts its journey towards the target to create spallation of neutrons.
Image courtesy of ESS Spatial Integration Team

Here we get a taste of the enormous level of detail we are talking about.
The person to the right is included to give a feel for the scale. This picture only shows the first few meters of the 650-meter-long accelerator.
The image is from the Virtual Reality room at ESS. The VR room is used for several different purposes, among them multi-discipline reviews of everything from design to installation and commissioning activities.
​
​Let's look the other way
Image courtesy of ESS Spatial Integration Team

The next picture is not taken from the VR room, but it is still the same EPL.
This time it is in a different software tool with a slightly different purpose. What is unique in my experience is that it is the same model, under configuration control, loaded into different environments for different purposes.
Image courtesy of Piero Valente, Group Leader Plant & Process at European Spallation Source ERIC
 
So how does ESS control all of this from a process point of view?
​
If you look at the picture below, you’ll see the actual engineering process (high level) together with the evolution of a space claim and refinement of design space, or rather the space allocation.
Image courtesy of ESS Spatial Integration

BUT WAIT!

Why does the process continue from as-designed into as-built and as-scanned??
​
Well, I never said that the EPL was purely design space configuration management. ESS has taken it a huge step further to also incorporate not only As-Built models, but As-Scanned models as well, which means there is a huge infrastructure in place to capture detailed 3D scans that can be imported into the EPL and placed as an “overlay” on the design model, as in the picture of the models below.
Image courtesy of ESS Spatial Integration Team

In such a model, inaccuracies between the design model and what is actually installed become painfully apparent. I chose this image because I wanted to commend the extreme accuracy of this piping section; however, there are numerous examples where errors have been caught that would have posed problems for other installation disciplines afterwards. Early correction of such mistakes is vital to avoid cascading effects for installation, and therefore scans are performed regularly and compared with the design model.
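Conceptually, the comparison boils down to measuring how far the scanned geometry deviates from the design geometry and flagging anything outside an agreed tolerance. The sketch below is a deliberately simplified point-to-point illustration with made-up coordinates and an assumed tolerance; real spatial-integration tools work on full point clouds and CAD geometry with far more sophistication.

```python
# Simplified illustration: flag scanned points that deviate from the design
# model by more than an installation tolerance. Real spatial-integration tools
# do this on full point clouds and CAD geometry, not tiny arrays.
import numpy as np

design_points = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [2.0, 0.0, 1.0]])
scanned_points = np.array([[0.01, 0.0, 1.0], [1.0, 0.05, 1.0], [2.0, 0.0, 1.3]])
tolerance_m = 0.05   # assumed installation tolerance

# Distance from each scanned point to its nearest design point
diffs = scanned_points[:, None, :] - design_points[None, :, :]
nearest = np.linalg.norm(diffs, axis=2).min(axis=1)

for point, deviation in zip(scanned_points, nearest):
    status = "OK" if deviation <= tolerance_m else "DEVIATION - review before further installation"
    print(point, f"{deviation:.3f} m", status)
```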

Below you can see an example of an As-Scanned Colored 3D Point Cloud only…. Remember the pipe from the previous picture….
Image courtesy of ESS Spatial Integration Team

As we have now visually seen design requirements compared to what is actually installed from a spatial integration perspective, I will show the same for tag requirements and installed physical assets in the next chapter. I know I promised this in the last chapter, but I could not resist showing it from a spatial integration perspective first.

It is my hope that this article can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.
​
Bjorn Fidjeland

PLM tales from a true megaproject Ch. 4

6/20/2019

Image courtesy of European Spallation Source ERIC

In chapter four we will enter familiar and traditional PLM territory as we take a closer look at product designs and EBOMs (Engineering Bills Of Materials). The European Spallation Source faces the complexities of pure Engineer To Order (ETO), meaning product designs of which only a single unit will ever be manufactured for the facility, as well as product designs that will be manufactured in series.
It is important to note that some of the products going into the facility were not even invented at the time the decision was made to build the European Spallation Source.

If you would like to read the previous chapters first before we take a deeper dive, you can find them here:
PLM tales from a true megaproject Ch. 1
PLM tales from a true megaproject Ch. 2 – Functional Breakdown Structure
PLM tales from a true megaproject Ch. 3 – Location Breakdown Structure

If you’d like to familiarize yourself more with the concepts of the different structures, please visit:
Plant Information Management - Information Structures


Figure 1.

As the management of product designs and their data is the home turf of any PLM system (Product Lifecycle Management), this area of the plant PLM platform has been left as much out of the box as possible, but I’ll go through some examples all the same.
The EBOM consists of Parts ordered in a hierarchical structure usually largely defined by mechanical product engineering and their design model. The structure is in itself multidiscipline, meaning that it contains mechanical parts, electrical parts and sometimes parts representing other things like drops of glue, software etc.
Based on an EBOM, one or many products can be manufactured. In other words, it is generic in nature.

​
Figure 2. Image courtesy of European Spallation Source ERIC

The image above is from the plant PLM system and shows a simple EBOM, which we can see is released. So what does released mean? Well, it means that it is ready as seen from the product engineering aspect. Such a released product design can be selected to fulfill one or many functional locations (tags) in the overall facility, as we discussed in chapter 1.
A part is specified by a specification, so it has specifying documentation connected in the form of a 3D model, a drawing or a document.

​
Figure 3. Image courtesy of European Spallation Source ERIC

In our example in figure 3 there is an associated 3D model, which specifies the mechanical aspects of the part (note: I have masked owner and released date).
In order to release a part, and ultimately an EBOM consisting of parts, a few PLM principles must be observed. Specifying information must always be released prior to the release of the part. So, bottom-up.
The same is true for the EBOM. Child parts must be released before the parent part can be released. (This is the opposite of the release of the functional structure, but we’ll discuss that in a later chapter.)
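As a sketch of that bottom-up rule in generic Python (not the actual PLM platform’s API or the ESS configuration), a part can only be promoted to Released once all of its specifications and all of its child parts are already Released:

```python
# Sketch of the bottom-up release rule for an EBOM: specifications first,
# then child parts, and only then the parent part.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    name: str
    state: str = "In Work"                 # In Work -> Approved -> Released
    specifications: List["Item"] = field(default_factory=list)
    children: List["Item"] = field(default_factory=list)

def can_release(part: Item) -> bool:
    """A part may be released only if everything below it is already released."""
    specs_ok = all(s.state == "Released" for s in part.specifications)
    children_ok = all(c.state == "Released" for c in part.children)
    return specs_ok and children_ok

def release(part: Item) -> None:
    """Release depth-first so specifications and child parts precede parents."""
    for spec in part.specifications:
        spec.state = "Released"
    for child in part.children:
        release(child)
    assert can_release(part), f"Cannot release {part.name} yet"
    part.state = "Released"

valve = Item("Plug valve assembly",
             specifications=[Item("3D model")],
             children=[Item("Body", specifications=[Item("Body drawing")]),
                       Item("Stem")])
release(valve)
print(valve.state, valve.children[0].state)   # Released Released
```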


To govern the release process, a Change Order is used (in PLM also referred to as ECO or Engineering Change Order). In many serial manufacturing companies, it is common to have a process prior to deciding if a change should be implemented. This is because they want to make very sure that they understand all possible impacts a design change might have before they manufacture millions of their products based on the new design.
Such a process, in PLM often referred to as ECR or Engineering Change Request, is omitted at ESS; however, the same analysis is performed early on in the change order process.
The release process is one of the areas where ESS has deviated from the out-of-the-box solution in order to streamline it as much as possible for their needs.
Let’s have a look at the process with another example.
 
​
Figure 4. Image courtesy of European Spallation Source ERIC

Figure 4 shows an EBOM structure used for training at ESS (it is not an ESS design, but merely an example I’ve created in their plant PLM system). Please observe that the EBOM of this plug valve contains a few parts, is three levels deep and is currently in a lifecycle state called In Work (there are more lifecycle states than shown in the images of this article). All parts and specifications have individual lifecycle states.
​​
Figure 5. Image courtesy of European Spallation Source ERIC

The image above is seen from the Change Order governing the move of both parts and specifications through their lifecycle states. We can see at the top left of the image that the CO (Change Order) is in the “In Work” state. I’ve chosen to let one CO be responsible for the release of the full EBOM and all associated specifications, but I could have split the responsibility across multiple COs if I’d wanted to.
In figure 5 we can also see that all parts and their specifications are in the state Approved. This means that the responsible engineering discipline feels that they are ready and have done their part of the work.

​​
Figure 6. Image courtesy of European Spallation Source ERIC

The last stretch of the release process is to move all the parts and their specifications from the Approved state to the Released state.
A workflow with electronic signatures is responsible for doing this. The workflow above states that Bjorn Fidjeland… (yes, me) is responsible for reviewing the entire EBOM and all specifications. In a real, live process, the members of a CDR (Critical Design Review) are listed as reviewers, and one or more final approvers assume responsibility for the release. At ESS the CDR is a multi-discipline review with both internal and external stakeholders.
Normally it is not allowed to have the same person as both reviewer and approver, but since I’ve got admin rights to this environment, and did not want to show the names of ESS reviewers and approvers, the example is as it is.

When the last person in the workflow sequence has approved, all specifications and parts governed by the Change Order are automatically promoted from Approved state to Released state, and the Change Order itself is marked complete. The system itself takes care of the bottom up release rules of the EBOM.

​
Figure 7. Image courtesy of European Spallation Source ERIC

Figure 7 shows the fully released EBOM, including all specifications governed by this one Change Order.

The next chapter will be about how ESS manages information about their physical assets: how physically installed assets are linked to the facility’s tag requirements in the Functional Breakdown Structure, where they are located in the Location Breakdown Structure, and what product design they originate from.

It is my hope that this article can serve as inspiration for other companies as well as software vendors.
I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.
​
Bjorn Fidjeland

PLM tales from a true megaproject Ch. 3

5/23/2019

Image courtesy of European Spallation Source ERIC

In this chapter we are going to take a look at how the location breakdown structure is implemented at the European Spallation Source.
The location structure is a decomposition of physical locations of areas into buildings, levels, rooms, cells and sub-cells. The Location Breakdown Structure contains a consolidated view of all data from a physical location perspective.
​
This chapter is built up much the same way as the previous one about the functional breakdown structure, due to their similar look and feel, even though they describe different aspects of the facility. 

If you would like to read previous chapters first before we take a deeper dive, you can find it here:
PLM tales from a true megaproject Ch. 1
PLM tales from a true megaproject Ch. 2 – Functional Breakdown Structure
If you’d like to familiarize yourself more with the concepts of the different structures, please visit:
Plant Information Management - Information Structures



Figure 1.

When examining the location breakdown structure, you’ll notice that it looks like it also has a form of tag.
This is entirely correct: the standard used at ESS, EN/ISO 81346, was selected, among other things, for its ability to name multiple aspects, where the functional aspect is indicated with an equal sign and the location aspect with a plus sign.
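A tiny illustration of the aspect prefixes (the functional tag value below is made up; only the '=' and '+' convention and the location example are as described in this series):

```python
# Illustration of ISO/EN 81346 aspect prefixes: the same object can be
# referenced by its functional aspect ('=') and by its location aspect ('+').
def functional_tag(path: str) -> str:
    return f"={path}"        # what function the object fulfils

def location_tag(path: str) -> str:
    return f"+{path}"        # where the object physically is

print(functional_tag("WCS01.PU001"))          # =WCS01.PU001
print(location_tag("ESS.G02.100.1001.102"))   # +ESS.G02.100.1001.102
```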
​
Figure 2. Image courtesy of European Spallation Source ERIC

The image above is from the plant PLM system and shows a small part of the location breakdown structure at ESS.
Let’s go through what we see in the image, and use the TS2 Area (Test Stand 2) row 15 – +ESS.G02.100.1001.102 as an example.
​
Figure 3. Image courtesy of European Spallation Source ERIC

The first column shows the location name of the individual location object.
The second column with the little green icon gives you the option to zoom in on the location if further details are needed, for instance attribute values of the location such as owner of the location, status, specifications, reference documents, history etc.

The third column shows a paperclip if there is specifying documentation associated with the location. In figure 4 we can see that the Tunnel has 308 associated documents, of which 271 are considered specifying documentation for the location and 7 are requirement specifications (the green check mark means that it has the lifecycle state released). We can also see that 37 documents are regarded as reference documents. This means that they describe the location, but are not regarded as specifying to the location.
​
Figure 4. Image courtesy of European Spallation Source ERIC

The Tag column shows the full location tag, and the description column gives a description of the location.
The type column indicates the type of area. At ESS this can be area, building, level (where 100 is floor level), room, cell and sub-cell.
The FBS (Functional Breakdown Structure) column in figure 3 allows you to see which functional locations, i.e. Tags, the physical location actually contains. The functional locations are displayed in the split view as shown in figure 5.
​
Figure 5. Image courtesy of European Spallation Source ERIC

We can clearly see that the TS2 Area currently contains 258 functional tags (master tags), and all information regarding each functional tag is directly available in this view.

The IS column in figure 3 refers to the actually installed assets in the plant that are used to implement the functional object requirements located in the physical location of the TS2 Area (an asset is a physical thing that typically has a serial number). See figure 6.
​
Figure 6. Image courtesy of European Spallation Source ERIC

The released part column in figure 3 gives an overview of which released product designs (Engineering Bills Of Materials) or standard parts can fulfill the functional object requirements physically located in the TS2 Area (there might be several options prior to procurement; however, the installed asset will only have an association to one part, as it was manufactured based on that particular product design).

The last column in figure 3, called Change Order, displays a link to the Change Order responsible for releasing the physical location together with all specifying documentation.

So, from one view in the plant PLM system, the European Spallation Source is able to access all related data to all physical locations in their location breakdown structure from engineering and design through installation, commissioning, operations, maintenance and ultimately decommissioning.

The next chapter will be about how ESS manages product design (Engineering Bill Of Materials), both from an Engineer To Order perspective and a serial manufacturing perspective.

It is my hope that this article can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source and to Peter Rådahl, Head of Engineering and Integration department in particular for allowing me to share this with you.

Bjorn Fidjeland