
PLM tales from a true megaproject Ch. 2

4/12/2019

Image courtesy of Fabien Rey, Group Leader Machine Engineering Service Group at European Spallation Source ERIC

In this chapter we will take a look at how the functional breakdown structure is implemented at the European Spallation Source. The functional structure is a decomposition of systems and subsystems all the way down to individual functions, or as ESS calls them, components. The Functional Breakdown Structure contains a consolidated view of data from all plant engineering disciplines, including electrical, plant & process and mechanical.
If you would like to read chapter one first before we take a deeper dive, you can find it here:

PLM tales from a true megaproject Ch. 1

If you’d like to familiarize yourself more with the concepts of the different structures, please visit:
Plant Information Management - Information Structures and
Plant Engineering meets Product Engineering in capital projects


Figure 1.

The first thing you’ll notice is the tagging. It was decided to use the EN/ISO 81346 standard as a common master tag at the European Spallation Source. The equals sign means that it is the functional aspect; however, anybody familiar with the standard will notice something a bit odd: the first two levels are not quite according to the standard. It was decided that the first level would be ESS, and the second level ACC (Accelerator), TS (Target Station), NSS (Neutron Scattering Systems) or INFR (Infrastructure). Everything below the first two levels follows the guidelines of the standard.


Figure 2. Image courtesy of European Spallation Source ERIC

The image above is from the plant PLM system and shows, as an example, the functional breakdown structure of the Test Stand 2 piping system. At the time of writing, the functional breakdown structure contains about 50,000 tags, but it is expected to grow to well over 1 million.
Let’s go through what we see in the image, using the first row, W02 (the Test Stand 2 piping system), as the initial example.

Figure 3. Image courtesy of European Spallation Source ERIC

The first column shows the tag name of the individual functional object (W02). The second column, with the little green icon, gives you the option to zoom in on the object if further details are needed, for instance all the attribute values the object gets from its object type or class (the European Spallation Source uses ISO 15926-4 as the basis for its master reference data).
The third column shows a paperclip if there is specifying documentation associated with the tag. In figure 4 we can see that the Test Stand 2 piping system has one released P&ID (the green check mark means that it has the lifecycle state Released), and that there are 15 other reference documents associated with it.

Figure 4. Image courtesy of European Spallation Source ERIC

The Tag column shows the full functional master tag, and the Description column gives a description of the functional object.
The Classification column shows what kind of functional object it is. It refers to the master reference data class that defines the properties, or attributes, this specific tag has. To explain this better, we need to take a step back.
I mentioned that the European Spallation Source opted to use ISO 15926-4 as the basis for its master reference data. This means there is a vast library of classes that defines what attributes, say, a Temperature Sensor should have, and also what letter codes (defined based on EN 81346) it should have. So, when a functional object is first created, it only has the basic attributes shared across all functional objects. When the system is told that it is a Temperature Sensor, it gets all the attributes defined for the class Temperature Sensor, in addition to its tag, which is computed from the parent object’s tag, the letter code, and the number of other Temperature Sensors at this level in the functional breakdown structure plus one.
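To make that mechanism concrete, here is a minimal sketch in Python of the classification step. The class library, the "BT" letter code and the tag format are all invented for illustration; at ESS these come from the ISO 15926-4 based reference data and the EN 81346 letter codes.

```python
# A minimal sketch (invented names throughout) of class-driven tag assignment.

CLASS_LIBRARY = {
    "Temperature Sensor": {
        "letter_code": "BT",  # hypothetical EN 81346-style letter code
        "attributes": ["measuring_range", "accuracy", "output_signal"],
    },
}

BASIC_ATTRIBUTES = ["description", "lifecycle_state"]  # shared by all functional objects


def classify(obj: dict, class_name: str, siblings: list[dict]) -> dict:
    """Give a functional object its class attributes and compute its tag."""
    cls = CLASS_LIBRARY[class_name]
    code = cls["letter_code"]
    # Sequence number = existing siblings with the same letter code, plus one.
    sequence = sum(1 for s in siblings if s.get("letter_code") == code) + 1
    obj["letter_code"] = code
    obj["attributes"] = BASIC_ATTRIBUTES + cls["attributes"]
    # Tag = parent tag + letter code + sequence number.
    obj["tag"] = f"{obj['parent_tag']}.{code}{sequence:03d}"
    return obj


# A third temperature sensor under a hypothetical parent system tag:
siblings = [{"letter_code": "BT"}, {"letter_code": "BT"}, {"letter_code": "QM"}]
sensor = classify({"parent_tag": "=ESS.ACC.W02"}, "Temperature Sensor", siblings)
print(sensor["tag"])  # -> =ESS.ACC.W02.BT003
```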

Figure 5. Image courtesy of European Spallation Source ERIC

The image above shows some of the attributes for the selected Temperature Sensor tag, but without operational data.
The LBS (Location Breakdown Structure) column in figure 3 shows the physical location where the functional object resides.
If the functional object is a pipe or cable that spans multiple locations, several physical locations are displayed in the split view, as shown in figure 6.

Figure 6. Image courtesy of European Spallation Source ERIC

The IS column in figure 3 refers to the actually installed asset in the plant that implements the functional object’s requirements (the physical item with a serial number). See figure 7.

Figure 7. Image courtesy of European Spallation Source ERIC
The Released Part column in figure 2 gives an overview of which released product designs (Engineering Bills of Materials) or standard parts can fulfill the functional object’s requirements. There may be several options prior to procurement; the installed asset, however, will only have an association to one part, as it was manufactured based on that particular product design.
 
So, from one view in the plant PLM system, the European Spallation Source is able to access all data related to every functional object in its functional breakdown structure, from design and engineering through installation, commissioning, operations and maintenance to, ultimately, decommissioning.
The next chapter will be about the Location Breakdown Structure.

It is my hope that this article can serve as inspiration for other companies as well as software vendors. I also want to express my gratitude to the European Spallation Source, and to Peter Rådahl, Head of the Engineering and Integration department, in particular, for allowing me to share this with you.


Bjorn Fidjeland

PLM tales from a true mega-project Ch. 1

3/27/2019

Image courtesy of European Spallation Source ERIC

I’ve been asked on several occasions if I can share some more details from any of the projects I’ve been involved with, especially the ones addressing plant lifecycle management and the use of structured data.
Naturally, most commercial companies facing fierce competition every day are reluctant to do so, as this is deemed highly important to their competitiveness. Or, as one client put it: “This is truly our backbone, while our master data is our lifeblood.”

However, there is one very special company I’ve been fortunate to be involved with for several years now that has agreed to share some of their details: the European Spallation Source ERIC, or ESS for short, an organization tasked with executing a true mega-project to design, build and operate the world’s brightest neutron source for scientific use.

So what is the European Spallation Source?
In short, it is a 750-meter-long and 250-meter-wide facility that houses a huge linear proton accelerator, or LINAC. The accelerator is responsible for accelerating protons produced by an ion source up to 96% of the speed of light. The protons are then collided with the target, a 2.6-meter-diameter stainless steel disk containing bricks of a neutron-rich heavy metal called tungsten. This is where spallation occurs: neutrons are flung out from the target wheel. These neutrons are the main product of the European Spallation Source, and they are guided through neutron guides to the instruments that allow researchers to do their research. It is anticipated that 22 instruments will be installed in total.
For more information, check out europeanspallationsource.se

Image courtesy of Fabien Rey, Group Leader Machine Engineering Service Group at European Spallation Source ERIC
 
What kind of research will be conducted?
Some examples are: chemistry of materials, magnetic & electronic phenomena, life science & soft condensed matter, engineering materials, geosciences, archeology & heritage conservation, fast neutron applications and particle physics.

Well, back to the real question at hand: what have they done with respect to plant lifecycle management and technical information management?
If you want to freshen up on my views on the information structures needed, you may do so here:
Plant Information Management - Information Structures
Archive of articles

What ESS has put in place is truly remarkable: a Product Lifecycle Management (PLM) system extended to also manage the following structures (a sketch of how they relate follows the list):
  • Functional Breakdown Structure (tags)
  • Location Breakdown Structure (physical locations)
  • Engineering Bill of Material (EBOM for product designs, traditionally the home turf of a PLM system)
  • All installed assets, i.e. physically installed items (the front end for asset management and warehouse management is an Enterprise Asset Management system, whereas assets installed in the facility are also created and consolidated in the PLM system, together with relationships to their corresponding tag, location, part and common reference data class with attributes)
  • Reference data: a class library of common reference data used across the Functional Breakdown Structure, the Engineering Bill of Material and assets, where each class has the attributes defined that ESS deems important
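Below is a minimal sketch of how these five structures can hang together. It is not ESS’s actual schema; all class and instance names are invented for illustration. The key point is that tag, part and asset all point at a shared reference data class, and the installed asset ties tag (function), location (place) and part (design) together.

```python
# Minimal sketch of the linked structures (invented names, not ESS's schema).
from dataclasses import dataclass, field


@dataclass
class RefClass:
    """Reference data: a class with its defined attributes."""
    name: str
    attributes: list[str] = field(default_factory=list)


@dataclass
class Location:
    """A node in the Location Breakdown Structure."""
    code: str


@dataclass
class Tag:
    """A node in the Functional Breakdown Structure."""
    tag: str
    ref_class: RefClass


@dataclass
class Part:
    """A released product design (EBOM head) or standard part."""
    number: str
    ref_class: RefClass


@dataclass
class Asset:
    """A physically installed item with a serial number."""
    serial: str
    implements: Tag          # the functional requirements it fulfills
    installed_at: Location   # where it physically sits
    realizes: Part           # the product design it was manufactured from


pump_class = RefClass("Centrifugal Pump", ["capacity", "head", "power"])
pump_tag = Tag("=ESS.ACC.W02.QM001", pump_class)
pump = Asset("SN-4711", pump_tag, Location("A01.R012"), Part("P-100234", pump_class))
```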

In addition to the actual data structures, each object in any structure is governed by revision control and change management, covering not only the objects themselves but also their associated specifications in the form of 3D design models, drawings, certificates, reports etc.

Why has the European Spallation Source done this when they are only building one such facility?
The main reason is to support the European Spallation Source’s evolution from a project to a sustainable facility enabling world-leading science for 40 years or more, and to establish the foundation needed for future cost-efficient operation and maintenance.
 
A second reason is the fact that parts of the facility produce radiation in the form of radioactivity. This means that those parts fall under the regulatory requirements of the Swedish Radiation Safety Authority, which in turn means rigorous control of all technical information as well as configuration management of that information.
In the coming articles I will address each information structure, as well as the topics of digital twin, master reference data, change management and revision control with live examples from the European Spallation Source.

In this regard I would like to offer a special thanks to Peter Rådahl, Head of the Engineering and Integration department at the European Spallation Source, for whom I’ve had the privilege of serving as an advisor for several years now. Peter had a clear vision from the start of how best to serve the European Spallation Source with respect to managing technical information, formed a strategy for how to get there, and stuck to that strategy through many an obstacle.
 
Bjorn Fidjeland

From data silos to data flow - part 2

12/16/2018

This article is a continuation of "From data silos to data flow - part 1", where the data flows needed between main systems during the engineering, procurement and construction phases were discussed.
The next step is to make the Integrated Control & Safety System aware of the installed assets, and to make the connections with all signals that need to be monitored and controlled. This information can be fed from the consolidated data set in the plant PLM platform into, for instance, the cable registry of the Integrated Control & Safety System, where the physical cables with their tags are “mated”, checked and tested against the programmed signals going through each cable.
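Conceptually, the “mating” is a join of two data sets on the cable tag. Here is a minimal sketch, with invented tags and signal names, of what such a check can look like:

```python
# Minimal sketch: match cables consolidated in the plant PLM platform
# against signals programmed in the control system's cable registry.
# All tags and signal names are invented.

plm_cables = {
    "SYS1-WC-001": {"from": "TT-0101 sensor", "to": "cabinet K04"},
}
ics_signals = [
    {"signal": "TT-0101", "cable_tag": "SYS1-WC-001"},
    {"signal": "TT-0102", "cable_tag": "SYS1-WC-099"},
]

for sig in ics_signals:
    cable = plm_cables.get(sig["cable_tag"])
    status = "mated" if cable else "UNMATCHED - verify tag or installation"
    print(f'{sig["signal"]} on {sig["cable_tag"]}: {status}')
```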
Commissioning is easier to perform by comparing data sets between the tag requirements and the installed asset in the EAM system than through a traditional document-centric process. There is, however, one important prerequisite: the data must share the same definitions. What do I mean by that? Well, that “stuff” is called the same thing on both the tag side and the physical asset side. In other words, there needs to be interoperability across the information sources. In operations the real benefits can be harvested, as the Integrated Control & Safety System can provide data to the EAM system to enable predictive maintenance based on historical data stored there or in the plant PLM platform.
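The commissioning check then reduces to a data-set comparison, and the prerequisite is visible in the code: it only works because both sides use the same attribute names. A minimal sketch with invented attributes and values:

```python
# Minimal sketch of data-set commissioning: compare the tag's required
# attribute values with the values recorded on the installed asset in the
# EAM system. Shared attribute naming is the interoperability prerequisite.

tag_requirements = {"capacity_m3h": 120, "head_m": 35, "voltage_V": 400}
installed_asset  = {"capacity_m3h": 120, "head_m": 32, "voltage_V": 400}

deviations = {
    name: {"required": required, "installed": installed_asset.get(name)}
    for name, required in tag_requirements.items()
    if installed_asset.get(name) != required
}
print(deviations or "installed asset matches tag requirements")
# -> {'head_m': {'required': 35, 'installed': 32}}
```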
Regular maintenance is handled by the EAM system, and information that affects the physical assets is synchronized back to the plant PLM system. However, maintenance can also mean larger feats, like re-design of parts of the plant. In such a case it becomes very important to have full configuration control of the data in the plant PLM platform, because the operator needs to operate and maintain based on the released tag and physical asset information in the platform, whereas engineering needs to work on a new revision of the same tag information. Both information sets need to be monitored to check how the new design will influence future operations, and whether changes in operations due to ongoing maintenance affect the design. As engineering progresses with the new design, the information needs to follow the same cycle as described in this article, until the new and upgraded part of the plant is commissioned and handed over to operations.
During decommissioning planning and execution, the EAM system still serves as the “front end”, as it allows detailed planning, execution and logging of the work breakdown structure for the decommissioning project through all work orders performed. The plant PLM platform, on the other hand, can greatly aid the decommissioning project with detailed historical data from the As-Designed, As-Built, As-Commissioned and As-Maintained baselines. This data is an important basis for determining what measures need to be taken with respect to the levels of hazardous materials currently in the plant, and how previously used materials have been dealt with. Having such traceability, and such a foundation for analysis and simulation, is vital both for safety and for proving to regulatory bodies that the decommissioning can and will be executed safely.

If you want to know more about what kind of information structures and data that needs to be consolidated and flow between the systems, you can find more information here:

Plant Information Management - Information Structures
Archive of articles

Bjorn Fidjeland

The header image used in this post is by Cherezoff and purchased at dreamstime.com.

From data silos to data flow - part 1

10/19/2018

In these two articles I’ll try to explain why and how a data flow approach between main systems during a plant’s lifecycle is far more effective than a document-based handover process between project phases. I have earlier discussed the various information structures that need to be in place across the same lifecycle; if you’re interested, the list of articles is found at the end of this article.
During design and engineering, the different plant and product design disciplines’ authoring tools play a major role, as they are feeder systems to the plant PLM platform. All the information coming from these tools needs to be consolidated, managed and put under change control. The plant PLM platform also plays a major role in documenting the technical baselines of the plant, such as As-Designed, As-Built, As-Commissioned and As-Maintained. See figure 1.
Figure 1.

When moving into the procurement phase, a lot of information needs to flow to the ERP system for the purchasing of everything needed to construct the plant. The first information that must be transferred is released product designs, i.e. Engineering Bills of Materials. This is the traditional Product Lifecycle Management domain. A released EBOM says that, seen from product engineering, everything is ready for manufacturing, and ERP can start procuring parts and materials to manufacture the product. Depending on the level of product engineering done in the plant project, this can be a lot, or just individual parts representing standard components or standard parts.

The next information that needs to go to ERP is released tag information where the tag is connected to a released part. A typical example: a piping system is released with, let’s say, 8 pump tags, and the pumps’ individual requirements in the system can all be satisfied by a generic part from a manufacturer. This means that in the plant PLM system there are 8 released pump tag objects, all connected to the same generic released part. This constitutes a validated, project-specific demand for 8 pumps. At this stage an As-Designed baseline can be created in the plant PLM platform for that particular system.
This information must be transferred to ERP, where it now means that procurement should place an order for 8 pumps and manage the logistics around it. However, seen from project planning and execution, it might be identified that, according to the project execution plan, several other systems are scheduled for release shortly, which would make the order 50 pumps instead of 8. After communicating with the affected stakeholders, it may be decided to defer the order.
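Because tags and parts are connected data, this consolidation is a simple aggregation. A minimal sketch with invented tag and part numbers:

```python
# Minimal sketch: released tags referencing the same generic part are
# summed into one procurement demand, so 8 pumps from one system and 42
# from systems released shortly after become a single order line for 50.
from collections import Counter

released_tags = (
    [{"tag": f"SYS1-PA-{i:03d}", "part": "P-100234"} for i in range(1, 9)]     # 8 pumps
    + [{"tag": f"SYS2-PA-{i:03d}", "part": "P-100234"} for i in range(1, 43)]  # 42 pumps
)

demand = Counter(t["part"] for t in released_tags)
print(demand)  # Counter({'P-100234': 50}) -> one ERP order line for 50 pumps
```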
Figure 2

As the order is placed, together with information regarding each specific tag requirement, preparations must be made for goods receipt, intermediate storage and work orders for installation. This is normally done in an Enterprise Asset Management (EAM) system, which also needs to be aware of the tags and their requirements, the physical locations where the arrived pumps will be installed, and the part definition each received physical asset represents. All of this information is fed to the EAM system from the plant PLM platform. As the physical assets are received, each of our now 50 pumps needs to be inspected, logged in the EAM system together with the information provided by the vendor, and associated with the common part definition. If the pumps are scheduled for immediate installation, each delivered physical asset is tagged as it is installed to fulfill a dedicated function in the plant.
At this stage, the information about the physical asset and its relations to tag, physical location and corresponding part is sent back to the plant PLM platform for consolidation. This step is crucial if a consolidated As-Built baseline is needed and there is a need to compare As-Designed with As-Built. Alternatively, the EAM system needs to “own” the baselines.
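A minimal sketch of what such a baseline comparison can look like once both baselines live in the PLM platform (tags, parts and serial numbers invented):

```python
# Minimal sketch: As-Designed maps each tag to its designed part; As-Built
# maps each tag to the installed asset (serial number, part). Differences
# show what was installed otherwise than designed, or not installed at all.

as_designed = {"PA-001": "P-100234", "PA-002": "P-100234", "TT-001": "P-200110"}
as_built = {
    "PA-001": {"serial": "SN-4711", "part": "P-100234"},
    "PA-002": {"serial": "SN-4712", "part": "P-100567"},  # deviation
}

for tag, designed_part in as_designed.items():
    built = as_built.get(tag)
    if built is None:
        print(f"{tag}: designed but not installed")
    elif built["part"] != designed_part:
        print(f"{tag}: installed {built['part']} ({built['serial']}), designed {designed_part}")
```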
Figure 3.

The next step is to make the Integrated Control & Safety system aware of the installed assets, and this will be among the topics for the next article.

If you want to know more about what kind of information structures and data that needs to be consolidated and flow between the systems, you can find more information here:


Plant Information Management - Information Structures
Archive of articles


Bjorn Fidjeland


The header image used in this post is by Wavebreakmedia Ltd and purchased at dreamstime.com.


PLM Benchmark 3 – EPC 2: What did they do and why?

8/2/2018

This is an article in the series on PLM benchmarking among operators, EPCs and product companies.
The articles cover the motivation for doing as they did, and where their main focus was put in order to achieve their goals.

I will continue to use my information structure map, or the “circle of life” as a client jokingly called it, to explain where the different companies put their focus in terms of information management and why.
EPC 2 had a slightly different focus from the EPC in my previous article, as they were in an industry with a less clear split between Engineering, Procurement and Construction companies and product companies in the capital project value chain.
This company had the challenges of both EPCs and product companies in ETO (Engineer To Order) projects, as they owned several product companies and naturally used a lot of their products in their EPC projects.
Figure 2 shows the different data structures that EPC 2 focused on.

Their first objective was to respond to fearsome competition from other parts of the world that had suddenly emerged on the global scene. To do so, it was considered crucial to limit the number of engineering hours used to win projects. They therefore decided to build a catalog of re-usable data structures with different perspectives (plant, product, execution) to promote controlled re-use of both plant and product engineering data. As with EPC 1, they recognized that standardization across disciplines would be necessary to make it all work. The reference/master data put in place for all disciplines to share was a proprietary company standard.

Secondly, they needed to replace a homegrown engineering data hub. This homegrown solution was very impressive indeed, and contained a lot of functionality that commercial systems lack even today; however, its architecture was built around processes that no longer worked as EPC 2 entered new markets.

Thirdly, they wanted to connect their plant engineering disciplines with the various product engineering disciplines throughout their own product companies worldwide. Naturally, this meant run-time sharing and consolidation of data on a large scale. The emergence of the catalog with different aspects meant that plant engineering could pick systems and products from the catalog and have project-specific tag information auto-generated in the functional structure of their projects. It also meant that product engineering could either generate a unique Engineer To Order bill of materials if needed or, if plant engineering had not made any major modifications, link to an already existing Engineering Bill of Materials for the full product definition.
Their fourth objective was to obtain full traceability of changes across both plant and product engineering disciplines, from FEED (Front End Engineering & Design) to delivered project. The reason for this objective was twofold: partly to be able to prove to clients (operators) where changes originated (largely from the client itself), and partly to be able to measure which changes originated from their own engineering disciplines without project planning and execution knowing about it… Does it sound familiar?
To achieve this, engineering data change management was enforced both on the FEED functional design structures (yes, there could be several different design options for a project) and on the functional structure in the actually executed EPC project. The agreed FEED functional structure was even locked and copied to serve as the starting point for the EPC project. At that point, all data in the functional structure was released, subject to full change management (meaning traceable change orders would be needed to change it) and made available to project planning and execution via integration.
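The enforcement rule itself is simple to express. Here is a minimal sketch, with invented object and change order identifiers, of a released object rejecting modifications that do not carry a change order:

```python
# Minimal sketch of change management on released data: once an object is
# in the Released state, any update must reference a change order.

class ChangeControlError(Exception):
    pass


def update_object(obj: dict, new_values: dict, change_order: str | None = None) -> dict:
    if obj.get("state") == "Released" and change_order is None:
        raise ChangeControlError(f"{obj['tag']} is released; a change order is required")
    obj.update(new_values)
    obj["last_change_order"] = change_order  # traceability: what changed, under which CO
    return obj


pump_tag = {"tag": "SYS1-PA-001", "state": "Released", "capacity_m3h": 120}
update_object(pump_tag, {"capacity_m3h": 130}, change_order="CO-0042")  # allowed
# update_object(pump_tag, {"capacity_m3h": 140})  # raises ChangeControlError
```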
Figure 3 shows the sequence in which the data structures were addressed in the implementation project.

Since product design and delivery made up a large portion of their projects, the Engineering Bill of Materials (EBOM) and variant management (the catalog structures) received much more focus than at EPC 1 in my previous article. This was natural because, as mentioned, EPC 2 owned product companies and wanted to shift from Engineer To Order (ETO) towards more Configure To Order (CTO).
It was, however, decided to defer the catalog structures towards the end, because they wanted to gain experience across the other aspects before starting to create the catalog itself.


The Functional Structure, with the consolidated plant design, project-specific data and associated documentation, came next, together with the establishment of structures for project execution (WBS), estimation and control (sales structure), and logistics (supply structure).

Once the various data structures were in place, the focus turned to “gluing it all together” with the re-usable catalog structures and the reference data that enabled interoperability across disciplines.

A more comprehensive overview explaining the different structures can be found in the article:
Plant Information Management - Information Structures, and further details regarding each information structure are discussed in:
Plant Engineering meets Product Engineering in capital projects
Handover to logistics and supply chain in capital projects
Plant Information Management - Installation and Commissioning
Plant Information Management – Operations and Maintenance

​Bjorn Fidjeland


The header image used in this post is by 8vfand and purchased at dreamstime.com

PLM for Engineer To Order (ETO) businesses - Free online course

6/10/2018

What needs to be considered when making a powerful platform for full lifecycle support in capital projects?

This FREE online course from plmPartner, powered by SharePLM, explains.
No subscription or registration required, but please let us know what you think.





PLM Benchmark 2 – EPC 1: What did they do and why?

4/27/2018

This is the second article in the series on PLM benchmarking among operators, EPCs and product companies, where I share some experiences originating from different companies.
The articles cover the motivation for doing as they did, and where their main focus was put in order to achieve their goals.
In this series I use my information structure map, or the “circle of life” as a client jokingly called it, to explain where the different companies put their focus in terms of information management, and why.
EPC 1’s first objective was to replace an in-house-built engineering data hub. The reason was that, over the years, needs and requirements had changed, both from customers (operators) as the EPC went global and internally within the organization. This situation led to more and more customizations of the engineering data hub, resulting in skyrocketing cost of ownership and, ironically, less and less flexibility.

This is by no means a unique situation, as many EPCs were forced to build such hubs in the late nineties for consolidation and control of multidiscipline plant information, since no software vendor at the time could support their needs.

Secondly, it was considered crucial to enable standardization and re-use of previously delivered designs and engineering data.
A huge effort was put into building reference data for sharing and alignment across plant engineering disciplines, procurement and, ultimately, client handover of Documentation For Installation & Operations (DFI/DFO). An ISO 15926 ontology was put in place for this purpose.
The main reason for enabling standardization and re-use of engineering data, however, was to reduce the gigantic number of engineering hours spent in the early phases of each project delivery, especially during the FEED (Front End Engineering and Design) phase. Another important reason was to connect engineering with procurement and the wider supply chain more seamlessly.
Figure 2 shows which information structures EPC 1 put most emphasis on. Quite naturally, the Functional Location structure (tag structure, multidiscipline plant design requirements) received a lot of focus. To enable re-use and efficient transfer of data, both the reference data and a library of re-usable design structures using that reference data were built.

Extensive analysis of previously executed projects revealed that even though the EPC had a lot of engineering concepts and data that could be re-used across projects, they more often than not created everything from scratch in the next project. To capitalize on and manage the collective know-how of the organization, the re-usable design structures therefore received a lot of focus.

EPC 1 also faced different requirements from operators with respect to tagging standards, depending on which parts of the world they delivered projects to; as a consequence, multiple tagging standards needed to be supported. It was decided that no matter what format the operator wanted to receive, all tags in all projects would be governed by an internal “master tag” in the EPC’s own system and communicated to the customer in the customer’s specified format.
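A minimal sketch of the master-tag idea: one internal tag object, with customer-specific formatting applied only at the communication boundary. Both output formats below are invented for illustration:

```python
# Minimal sketch: the internal master tag is the single source of truth;
# each operator's tagging format is just a rendering of it.

master_tag = {"plant": "P1", "system": "420", "type": "PA", "seq": 3}

CUSTOMER_FORMATS = {
    "operator_A": lambda t: f"{t['system']}-{t['type']}-{t['seq']:03d}",
    "operator_B": lambda t: f"={t['plant']}.{t['system']}.{t['type']}{t['seq']:02d}",
}

for customer, render in CUSTOMER_FORMATS.items():
    print(customer, "->", render(master_tag))
# operator_A -> 420-PA-003
# operator_B -> =P1.420.PA03
```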

The third focus area was an extensive part (or article) library with internal part numbers and characteristics, showing which kinds of products could fulfill the tag requirements in the functional structure. Each part was then linked via relationships to objects representing preferred suppliers of that product in different regions of the world. This concept greatly aided engineering procurement when performing Material Take-Off (MTO), since each tag would be linked to a part for which a preferred supplier could be selected.
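A minimal sketch of how such an MTO lookup can work (part numbers, suppliers and regions invented):

```python
# Minimal sketch of Material Take-Off: each tag resolves to a part, and the
# part resolves to the preferred supplier for the project's region.

parts = {
    "P-100234": {
        "description": "centrifugal pump, 120 m3/h",
        "preferred_suppliers": {"EU": "PumpCo GmbH", "ASIA": "PumpWorks Ltd"},
    },
}
tags = [
    {"tag": "20-PA-001", "part": "P-100234"},
    {"tag": "20-PA-002", "part": "P-100234"},
]


def material_take_off(tags: list[dict], region: str):
    for t in tags:
        part = parts[t["part"]]
        yield t["tag"], t["part"], part["preferred_suppliers"][region]


for tag, part, supplier in material_take_off(tags, "EU"):
    print(tag, part, supplier)
```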
EPC 1 chose to focus on the reference data first, in order to reach a common agreement on the data needed across their disciplines during the EPC project lifecycle. Next in line was the catalog of re-usable engineering structures, which could be selected as a starting point in any EPC project.
The third delivery in the project centered on the capabilities to create and use the different plant engineering structures (the functional structure and tags, with connected parts, where both entities used the same reference data).
 
An overview explaining the different structures can be found in the article:
Plant Information Management - Information Structures, and further details regarding each information structure are discussed in:
Plant Engineering meets Product Engineering in capital projects
Handover to logistics and supply chain in capital projects
Plant Information Management - Installation and Commissioning
Plant Information Management – Operations and Maintenance

Bjorn Fidjeland

The header image used in this post is by Viacheslav Iacobchuk and purchased at dreamstime.com

PLM Benchmark – Operator 1: What did they do and why?

3/9/2018

This is the first in a series of articles where I share some experiences with you from different product companies, EPCs and operators.
The articles will cover the motivation for doing as they did, and where their main focus was put in order to achieve their goals.

There is a span of almost 20 years across these experiences… I would like you to reflect a bit on that and keep in mind some of the buzzwords of today, especially digital twin, IoT and Big Data analytics.
In this series I will use my information structure map, or the “circle of life” as a client jokingly called it, to explain where the different companies put their focus in terms of information management strategy and why.

An overview explaining the different structures can be found in the article:
Plant Information Management - Information Structures, and further details regarding each information structure are discussed in:
Plant Engineering meets Product Engineering in capital projects
Handover to logistics and supply chain in capital projects
Plant Information Management - Installation and Commissioning
Plant Information Management – Operations and Maintenance
Operator 1’s first objective was to shorten the project execution time from design through installation and commissioning by letting the project’s information model be gradually built up through all project phases, and by all stakeholders, in one common platform.
This way there would be no handover of documentation, but rather a handover of access to, and responsibility for, data. A large focus was put on standardizing information exchange, both between stakeholders in the capital projects and between computer systems. The entry point to all information was a 3D representation of the data structures!

Makes you think of digital twin… However, this initiative came before anybody had heard of the term. The 3D representation was NOT a design model, but rather a three-dimensional representation of the asset linked to all the information structures, creating different dimensions, or information layers if you will.

So this operator must have been dealing with quite small assets, you might think?

Actually, no: one of the assets managed comprised about a million tags. Concepts from the gaming industry, like Level Of Detail and back-face culling, were used to achieve the level of performance needed on the 3D side.
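For illustration, Level Of Detail boils down to choosing a cheaper representation the further an object is from the viewer. A minimal sketch with invented distance thresholds:

```python
# Minimal sketch of Level Of Detail selection: only nearby equipment is
# drawn with its full mesh, so a million-tag model stays responsive.

def select_lod(distance_m: float) -> str:
    if distance_m < 10:
        return "full mesh"
    if distance_m < 50:
        return "simplified mesh"
    if distance_m < 200:
        return "bounding box"
    return "culled"  # not drawn at all

for d in (5, 30, 120, 500):
    print(f"{d:>3} m -> {select_lod(d)}")
```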
So why this enormous effort by an operator to streamline just the initial stages of an asset’s lifecycle?
I mean, the operator’s real benefit comes from operating the asset in order to produce whatever it needs to produce, right?
Because it was seen as a prerequisite for capitalizing on plant information in training, simulation, operations, maintenance and decommissioning. Two words summarize the motivation: maximum uptime. How to achieve it: operational run-time data from sensors, linked to and compared with accurate and parametric As-Designed, As-Built and As-Maintained data.

Figure 2 shows which information structures the operator put most emphasis on. Quite naturally, the Functional structure (tag structure and design requirements) and the corresponding physically installed asset information were highly important, and this is what they started with (see figure 3). Reference data, needed to compare and consolidate data from the different structures, was next in line, together with an extensive parts (article) catalog of what could be supplied by whom in different regions of the world.
There was an understanding that a highly document-oriented industry could not shift completely to structured data and information structures overnight, so document management was also included as an intermediate step. The last type of structure they focused on was project execution structures (Work Breakdown Structures). This was not because they were regarded as less important; on the contrary, they were regarded as highly important, since they introduce the time dimension, with traceability and control of who should do what, or did what, when. The reasoning was that since work breakdown structures tie into absolutely everything, they wanted to test and roll out the “base model” of data structures in the three-dimensional world (the 3D database) before introducing the fourth dimension.

​Bjorn Fidjeland

The header image used in this post is by Jacek Jędrzejowski and purchased at dreamstime.com

Big Data and PLM, what’s the connection?

1/3/2018

I was challenged the other day by a former colleague to explain the connection between Big Data and PLM. The connection might not be immediately apparent if your viewpoint is that of traditional Product Lifecycle Management systems, which primarily have to do with managing the design and engineering data of a product or plant/facility.

However, if we first take a look at a definition of Product Lifecycle Management from Wikipedia:

“In industry, product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from inception, through engineering design and manufacture, to service and disposal of manufactured products. PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprise.”
Traditionally, it has looked much like this:
Then let’s look at a definition of Big Data:

“Big data is data sets that are so voluminous and complex that traditional data processing application software are inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating and information privacy. There are three dimensions to big data known as Volume, Variety and Velocity.
Lately, the term "big data" tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem." Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on.”

Included in Big Data you’ll find data sets harvested from sensors in all sorts of equipment and products, as well as data fed back from software running within products. One can say that a portion of Big Data is the resulting feedback from the Internet of Things. Data in itself is not of any value whatsoever, but if the data can be analyzed to reveal meaning, trends or knowledge about how a product is used by different customer segments, then it has tremendous value to product manufacturers.
If we take a look at the operational phase of a product, and by that I mean everything that happens from manufactured product to disposal, then any manufacturer would like to get their hands on such data, either to improve the product itself or to sell services associated with it. Such services could be anything from utilizing the product as a platform for an ecosystem of connected products, to new business models where the product itself is not the key but rather the service it provides. You might, for instance, sell guaranteed uptime or availability, provided that the customer also buys into your service program.

 
The resulting analysis of the data should, in my view, be managed by, or at least serve as input to, the product definition, because the knowledge gleaned from all the analytics of Big Data sets ultimately impacts the product definition itself: it should lead to revised product designs that fulfill customer needs better. It might also lead to the revelation that it would be better to split a product into two different designs, targeting two distinct end-user behavior categories found through data analysis from the operational phase of the products.
Connected products, Big Data and analysis will, to a far greater extent than before, allow us to do the following instead:
It will mean that experience from throughout the full lifecycle can be made available to develop better products, tailor to new end-user behavior trends and create new business models.

Note: the image above focuses on the feedback loops to product engineering, but such feedback loops should also be made available from, for instance, service and operation to manufacturing.

Most companies I work with tell me that the feedback loops described in the image above are either too poor or virtually nonexistent. Furthermore, they all say that such feedback loops are becoming vital to their survival, as more and more of their revenue comes from services after a product sale and not from the product sale itself. This makes it imperative for them to have as much reliable, analyzed data as possible about their products’ performance in the field, how their customers actually use them and how they are maintained.

For these companies at least, the connection between Big Data analysis and its impact on Product Lifecycle Management is becoming clearer and clearer.


Bjorn Fidjeland


The header image used in this post is by garrykillian and purchased at dreamstime.com


Digital Twin - What needs to be under the hood?

10/22/2017

In the article Plant Information Management – Information Structures, and the following posts regarding Plant Information Management (see Archive), I explained in more detail the various information structures and the importance of structuring the data as object structures with interconnecting relationships, creating context between the different information sources.

What does all of this have to do with the digital twin? Let’s have a look.

Information structures and their interconnecting relationships can be described by one of the major fashion words these days: the digital thread, or digital twin.
The term and concept of a digital twin was first coined by Michael Grieves at the University of Michigan in 2002, but it has since taken on a life of its own in different companies.
 
Below is an example of what information can be accessed from a digital twin, or rather what the digital twin can serve as an entry point for:
If your data is structured in such a way, with connected objects, attributes and properties, an associated three-dimensional representation of the physically delivered instance is a tremendously valuable asset as a carrier of information. It is, however, not a prerequisite that it is a 3D model; a simple dashboard giving access to the individual physical items might be enough. The 3D stuff is always promoted in the glossy sales presentations of various companies, but it is not needed for every possible use case. In a plant or an aircraft it makes a lot of sense, since the volume of information and the number of possible entry points to the full data set are staggering, but it might not be necessary to have individual three-dimensional representations of every mobile phone ever sold. It might suffice to have each data set associated with each serial number.
 
On the other hand, if you have a 3D representation, it can become a front end for end users to find, search and analyze all connected information from the data structures described in my previous blog posts. Such insights take us to a whole new level of understanding of each delivered product’s life, its challenges and opportunities in different environments, and the way it is actually being used by end customers.
 
Let’s say that, via the digital twin in the figure above, we select a pump. The tag of that pump uniquely identifies its functional location in the facility. An end user can pull information from the system the pump belongs to in the form of a parametric Piping & Instrumentation Diagram (P&ID), the functional specification for the pump in the designed system, information about the actually installed pump with serial number, manufacturing information, supplier, certificates, performed installation & commissioning procedures, and actual operational data from the pump itself.
 
The real power in the operational phase becomes evident when operational data is associated with each delivered pump. In such a case the operational data can be compared with the environmental conditions the physical equipment operates in. Let’s say that the fluid being pumped contains more and more sediment, and our historical records of similar conditions tell us that the pump will likely fail during the next ten days due to wear and tear of critical components. However, the records also indicate that if we reduce the power by 5 percent, we will be able to operate the full system until the next scheduled maintenance window in 15 days. Information like that gives real business value in terms of increased uptime.
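A minimal sketch of that decision logic; every threshold and factor below is invented for illustration:

```python
# Minimal sketch: compare a live condition reading with historical failure
# records and decide whether a derating gets the pump to the next
# scheduled maintenance window.

def advise(sediment_ppm: float, days_to_maintenance: int) -> str:
    # From (invented) historical records of similar operating conditions:
    predicted_failure_days = 10 if sediment_ppm > 800 else 90
    if predicted_failure_days >= days_to_maintenance:
        return "operate normally"
    # History also says a 5 % derating stretches remaining life ~1.5x here.
    if predicted_failure_days * 1.5 >= days_to_maintenance:
        return "reduce power by 5 % and run to the maintenance window"
    return "schedule immediate corrective maintenance"


print(advise(sediment_ppm=950, days_to_maintenance=15))
# -> reduce power by 5 % and run to the maintenance window
```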
 
Let’s look at some other possibilities.
If we now consider a full facility with a three-dimensional representation:
During the EPC phase it is possible to associate the 3D model with a fourth dimension, time, turning it into a 4D model. By doing so, the model can be used to analyze and validate different installation execution plans, or to monitor the actual ongoing installation of the facility. We can actually see the individual parts of the model appear as time progresses.
 
A fifth dimension can also be added, namely cost. Here the cost development over time, according to one or several proposed installation execution plans or the actual installation itself, can be analyzed or monitored.
This is already being done by some early movers in the construction industry, where it is referred to as 5D or Virtual Design & Construction.
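A minimal sketch of the 4D/5D idea: each 3D object carries a planned installation date and cost, so the model can be filtered by date (4D) and the cost curve accumulated over time (5D). All dates and costs are invented:

```python
# Minimal sketch: filtering the model by date gives the 4D view; summing
# cost up to a date gives the 5D curve.
from datetime import date

model = [
    {"tag": "PA-001", "installed": date(2020, 3, 1),  "cost": 12_000},
    {"tag": "PA-002", "installed": date(2020, 5, 15), "cost": 12_000},
    {"tag": "TT-001", "installed": date(2020, 6, 1),  "cost": 800},
]


def visible_at(when: date) -> list[str]:
    """4D: the parts of the model that exist at a given date."""
    return [o["tag"] for o in model if o["installed"] <= when]


def cost_to_date(when: date) -> int:
    """5D: accumulated installation cost at a given date."""
    return sum(o["cost"] for o in model if o["installed"] <= when)


print(visible_at(date(2020, 5, 20)))    # ['PA-001', 'PA-002']
print(cost_to_date(date(2020, 5, 20)))  # 24000
```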
 
The model can also serve as an important asset when planning and coordinating space claims made by different disciplines, during design as well as during the actual installation. It can easily give visual feedback if there is a conflict between space claims made by electrical engineering and mechanical engineering, or if there is a conflict in the installation execution plan in terms of planned access by different work crews.
More and more companies are also making use of laser scanning in order to get an accurate 3D model of what has actually been installed so far. This model can easily be compared with the design model to see if there are any deviations. If deviations are found, they can be acted upon: how will the deviation impact the overall system if it is left as it is, or will it require re-design? Does the decision to leave it as it is change the performance of the overall system? Are we still able to perform the rest of the installation, given the reduced available space?
The answers to these questions might entail dismantling the parts of the system that have deviations. It is, however, far better and more cost-effective to identify such problems as early as possible.
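A minimal sketch of such a deviation check, comparing designed positions with positions extracted from the laser scan against a tolerance (all coordinates invented):

```python
# Minimal sketch: flag objects whose scanned position deviates from the
# design position by more than the allowed tolerance.
import math

TOLERANCE_M = 0.05

design  = {"PA-001": (10.00, 4.00, 1.20), "PA-002": (12.50, 4.00, 1.20)}
scanned = {"PA-001": (10.02, 4.01, 1.20), "PA-002": (12.50, 4.35, 1.20)}

for tag, designed_pos in design.items():
    deviation = math.dist(designed_pos, scanned[tag])
    if deviation > TOLERANCE_M:
        print(f"{tag}: deviation {deviation:.2f} m - assess impact or re-design")
# -> PA-002: deviation 0.35 m - assess impact or re-design
```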
 
This is just great, right? Such insights would have huge impacts on how EPCs manage their projects, how operators run their plants and how product vendors can operate or service their equipment in the field, as well as on feeding information back to engineering to make better products.
New business models can be created along the lines of: “We sell power by the hour, dear customer; you don’t even have to buy the asset itself!”
(Power-by-the-Hour is a trademark of Rolls-Royce; although the concept itself is 50 years old, you can read about a more recent development here)
 
So why haven’t more companies already done it?
 
Because in order to get there, the underlying data must be connected, and in the form of… yes, data as in objects, attributes and relationships. It requires a massive shift from document orientation to connected data orientation to be at its most effective.
 
On the bright side, several companies in very diverse industries have started this journey, and some are already starting to harvest the fruits of their adventure.
My advice to any company thinking about doing the same would be along these lines:
When eating this particular elephant, do it one bite at a time, remember to swallow, and let your organization digest between each bite.

Bjorn Fidjeland

The header image used in this post is by Elnur and purchased at dreamstime.com

