
From data silos to data flow - part 1

10/19/2018

In these two articles I’ll try to explain why and how a data flow approach between the main systems across a plant’s lifecycle is far more effective than a document-based handover process between project phases. I have earlier discussed the various information structures that need to be in place across the same lifecycle; if you’re interested, the list of articles can be found at the end of this article.
During design and engineering, the authoring tools of the different plant and product design disciplines play a major role as feeder systems to the Plant PLM platform. All the information coming from these tools needs to be consolidated, managed and put under change control. The Plant PLM platform also plays a major role in documenting the technical baselines of the plant, such as As-Designed, As-Built, As-Commissioned and As-Maintained. See figure 1.
Figure 1.

When moving into the procurement phase, a lot of information needs to flow to the ERP system for purchasing everything needed to construct the plant. The first information that must be transferred is released product designs, that is, Engineering Bills of Materials (EBOMs). This is the traditional Product Lifecycle Management domain. A released EBOM signals that, seen from product engineering, everything is ready for manufacturing and that ERP can start procuring parts and materials to manufacture the product. Depending on the level of product engineering done in the plant project, this can be a lot of information or just individual parts representing standard components.

The next information that needs to go to ERP is released tag information, where each tag is connected to a released part. A typical example would be a piping system released with, let’s say, 8 pump tags, where the pumps’ individual requirements in the system can all be satisfied by a generic part from a manufacturer. This means that in the Plant PLM system there are 8 released pump tag objects, all connected to the same generic released part. This constitutes a validated, project-specific demand for 8 pumps. At this stage an As-Designed baseline can be created in the Plant PLM platform for that particular system.
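To make the tag-to-part relationship concrete, here is a minimal sketch in Python of how released tags that share one generic part can be rolled up into a procurement demand. The class names, tag numbers and part numbers are invented for illustration and do not reflect any particular PLM platform’s data model.

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative objects only -- names and fields are assumptions,
# not the data model of any specific Plant PLM platform.

@dataclass(frozen=True)
class Part:
    part_number: str
    description: str

@dataclass(frozen=True)
class Tag:
    tag_number: str
    system: str
    part: Part            # released tag connected to a released, generic part
    released: bool = True

def procurement_demand(tags):
    """Aggregate released tags per connected part into a purchase quantity."""
    return Counter(t.part for t in tags if t.released)

generic_pump = Part("P-100-GEN", "Centrifugal pump, generic")
piping_system_tags = [Tag(f"20-PA-{i:03d}", "Piping system 20", generic_pump) for i in range(1, 9)]

for part, qty in procurement_demand(piping_system_tags).items():
    print(f"Order {qty} x {part.part_number} ({part.description})")
# -> Order 8 x P-100-GEN (Centrifugal pump, generic)
```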
​
This information must be transferred to ERP, where it means that procurement should place an order for 8 pumps and manage the logistics around this. However, seen from project planning and execution, it might be identified that, according to the project execution plan, several other systems are scheduled for release shortly, which would make the order 50 pumps instead of 8. After communicating with the affected stakeholders, it may be decided to defer the order.
Figure 2

As the order is placed together with information regarding each specific tag requirement, preparations for goods receipt, intermediate storage and work orders for installation must be made. This is normally done in an Enterprise Asset Management (EAM) system, which also needs to be aware of the tags and their requirements, the physical locations where the arriving pumps will be installed, and which part definition each received physical asset represents. All of this information is fed to the EAM system from the Plant PLM platform. As the physical assets are received, each of our now 50 pumps needs to be inspected, logged in the EAM system together with the information provided by the vendor, and associated with the common part definition. If the pumps are scheduled for immediate installation, each delivered physical asset is tagged as it is installed to fulfill a dedicated function in the plant.
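A rough sketch of the goods receipt and tagging steps described above, assuming hypothetical record types and field names rather than any real EAM system’s schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record type -- real EAM systems have their own schemas.

@dataclass
class PhysicalAsset:
    serial_number: str
    part_number: str                  # common part definition the asset represents
    vendor_data: dict = field(default_factory=dict)
    inspected: bool = False
    tag_number: Optional[str] = None  # set when the asset is installed to fulfill a function
    location: Optional[str] = None

def receive_asset(asset: PhysicalAsset, inspection_ok: bool, vendor_docs: dict) -> PhysicalAsset:
    """Log goods receipt: inspection result plus vendor-supplied information."""
    asset.inspected = inspection_ok
    asset.vendor_data.update(vendor_docs)
    return asset

def install_asset(asset: PhysicalAsset, tag_number: str, location: str) -> PhysicalAsset:
    """Tag the physical asset against a functional location at installation."""
    if not asset.inspected:
        raise ValueError("Asset must pass goods receipt inspection before installation")
    asset.tag_number = tag_number
    asset.location = location
    return asset

pump = PhysicalAsset(serial_number="SN-4711", part_number="P-100-GEN")
receive_asset(pump, inspection_ok=True, vendor_docs={"certificate": "EN 10204 3.1"})
install_asset(pump, tag_number="20-PA-001", location="Module M-12, deck 2")
```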
​
At this stage the information about the physical asset and its relations to tag, physical location and corresponding part is sent back to the Plant PLM platform for consolidation. This step is crucial if a consolidated As-Built baseline is needed and there is a need to compare As-Designed with As-Built. Alternatively, the EAM system needs to “own” the baselines.
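The comparison itself can be thought of as a simple diff between two baselines. The sketch below assumes illustrative dictionary structures; a real Plant PLM platform would of course hold richer objects and relationships.

```python
# As-Designed baseline: tag -> planned part
# As-Built baseline:    tag -> (serial number, installed part)
# Data is invented for the example.

as_designed = {"20-PA-001": "P-100-GEN", "20-PA-002": "P-100-GEN"}
as_built    = {"20-PA-001": ("SN-4711", "P-100-GEN"),
               "20-PA-002": ("SN-4712", "P-200-HD")}   # deviation: heavier-duty pump installed

def compare_baselines(designed, built):
    """Return tags where the installed reality deviates from the design."""
    deviations = []
    for tag, planned_part in designed.items():
        installed = built.get(tag)
        if installed is None:
            deviations.append((tag, "not installed"))
        elif installed[1] != planned_part:
            deviations.append((tag, f"planned {planned_part}, installed {installed[1]}"))
    return deviations

for tag, issue in compare_baselines(as_designed, as_built):
    print(tag, "->", issue)
# 20-PA-002 -> planned P-100-GEN, installed P-200-HD
```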
Figure 3.

The next step is to make the Integrated Control & Safety system aware of the installed assets, and this will be among the topics for the next article.

If you want to know more about what kind of information structures and data need to be consolidated and flow between the systems, you can find more information here:


Plant Information Management - Information Structures
Archive of articles


Bjorn Fidjeland


The header image used in this post is by Wavebreakmedia Ltd  and purchased at dreamstime.com


PLM Benchmark 3 – EPC 2  What did they do and why?

8/2/2018

This is an article in the series regarding PLM benchmarking among operators, EPC’s and product companies.
The articles cover the motivation for doing as they did, and where their main focus was put in order to achieve their goals.

I will continue to use my information structure map, or the “circle of life” as a client jokingly called it, to explain where the different companies put their focus in terms of information management and why.
EPC 2 had a slightly different focus from the EPC in my previous article, as they were in an industry with a less clear split between Engineering, Procurement and Construction companies and product companies in the capital project value chain.
This company had the challenges of both EPCs and product companies in ETO (Engineer To Order) projects, as they owned several product companies and naturally used a lot of their own products in their EPC projects.
​
Figure 2 shows the different data structures that EPC 2 focused on

Their first objective was to respond to fierce competition from players in other parts of the world that had suddenly emerged on the global scene. To do so, it was considered crucial to limit the number of engineering hours spent on winning projects. They therefore decided to build a catalog of re-usable data structures with different perspectives (plant, product, execution) in order to promote controlled re-use of both plant and product engineering data. As with EPC 1, they recognized that standardization across disciplines would be necessary to make it all work. The reference/master data put in place for all disciplines to share was a proprietary company standard.

Secondly, they needed to replace a homegrown engineering data hub. This homegrown solution was impressive indeed and contained a lot of functionality that commercial systems lack even today; however, its architecture was built around processes that no longer worked as EPC 2 entered new markets.

Thirdly, they wanted to connect their plant engineering disciplines with the various product engineering disciplines throughout their own product companies worldwide. Naturally this meant run-time sharing and consolidation of data on a large scale. The emergence of the catalog with different aspects meant that plant engineering could pick systems and products from the catalog and have project-specific tag information auto-generated in the functional structure of their projects. It also meant that product engineering could either generate a unique Engineer To Order bill of materials if needed or, if plant engineering had not made any major modifications, link to an already existing Engineering Bill of Materials for the full product definition.
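As a hedged illustration of what “picking from the catalog” could look like, the sketch below generates project-specific tags from a re-usable system template. The template contents, numbering scheme and field names are assumptions made for the example, not EPC 2’s actual implementation.

```python
import itertools

# Hypothetical catalog entry: a re-usable system template that, when picked for a
# project, generates project-specific tag numbers in the functional structure.

catalog_system = {
    "template_id": "SYS-COOLING-STD",
    "components": [("pump", 2), ("heat_exchanger", 1), ("valve", 6)],
}

_counter = itertools.count(1)

def instantiate_system(template, project_code, system_code):
    """Create project-specific tags from a catalog template."""
    tags = []
    for component_type, quantity in template["components"]:
        for _ in range(quantity):
            tags.append({
                "tag_number": f"{project_code}-{system_code}-{next(_counter):03d}",
                "type": component_type,
                "source_template": template["template_id"],
            })
    return tags

for tag in instantiate_system(catalog_system, "PRJ42", "50"):
    print(tag["tag_number"], tag["type"])
```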
​
Their fourth objective was to obtain full traceability of changes across both plant and product engineering disciplines, from FEED (Front End Engineering & Design) to delivered project. The reason for this objective was twofold: partly to be able to prove to clients (operators) where changes originated (largely from the client itself), and partly to be able to measure which changes originated from their own engineering disciplines without project planning and execution knowing about it... Does it sound familiar?
In order to achieve this, engineering data change management was enforced both on the FEED functional design structures (yes, there could be several different design options for a project) and on the functional structure in the actually executed EPC project. The agreed FEED functional structure was even locked and copied to serve as the starting point for the EPC project. At this point all data in the functional structure was released, subjected to full change management (meaning traceable change orders would be needed to change it) and made available to project planning and execution via integration.
Figure 3 shows the sequence of data structures that were focused on in the implementation project.

Since product design and delivery made up a large portion of their projects, the Engineering Bill of Materials (EBOM) and variant management (the catalog structures) received much more focus than at EPC 1 in my previous article. This was natural because, as mentioned, EPC 2 owned product companies and wanted to shift from Engineer To Order (ETO) towards more Configure To Order (CTO).
It was, however, decided to defer the catalog structures towards the end, because they wanted to gain experience across the other aspects before starting to create the catalog itself.


The Functional Structure, with the consolidated plant design, project-specific data and associated documentation, came next, together with the establishment of structures for project execution (WBS), estimation and control (sales structure), and logistics (supply structure).

Once the various data structures were in place, the focus was turned to “gluing it all together” with the re-usable catalog structures and the reference data which enabled interoperability across disciplines.

A more comprehensive overview explaining the different structures can be found in the article:
Plant Information Management - Information Structures, and further details regarding each information structure are discussed in:
Plant Engineering meets Product Engineering in capital projects
Handover to logistics and supply chain in capital projects
Plant Information Management - Installation and Commissioning
Plant Information Management – Operations and Maintenance

​Bjorn Fidjeland


The header image used in this post is by 8vfand and purchased at dreamstime.com

PLM for Engineer To Order (ETO) businesses - Free online course

6/10/2018

What needs to be considered when making a powerful platform for full lifecycle support in Capital Projects?

This FREE online course from plmPartner, powered by SharePLM, explains. 
​No subscription or registration required, but please let us know what you think.





PLM Benchmark 2 – EPC 1 What did they do and why?

4/27/2018

This is the second article in the series regarding PLM benchmarking among operators, EPC’s and product companies where I share some experiences with you originating from different companies.
The articles cover the motivation for doing as they did, and where their main focus was put in order to achieve their goals.
In this series I use my information structure map, or the “circle of life” as a client jokingly called it, to explain where the different companies put their focus in terms of information management and why. 
EPC 1’s first objective was to replace an in-house-built engineering data hub. The reason was that over the years, needs and requirements had changed, both from customers (operators) as the EPC went global, and internally within the organization. This situation led to more and more customizations of the engineering data hub, resulting in skyrocketing cost of ownership and, ironically, less and less flexibility.

This is by no means a unique situation, as many EPCs were forced to build such hubs in the late nineties for consolidation and control of multidiscipline plant information, since no software vendor at the time could support their needs.

Secondly, it was considered crucial to enable standardization and re-use of previously delivered designs and engineering data.
A huge effort was put into building reference data for sharing and alignment across plant engineering disciplines, procurement and, ultimately, client handover of Documentation For Installation & Operations (DFI/DFO). An ISO 15926 ontology was put in place for this purpose.
The main reason for enabling standardization and re-use of engineering data, however, was to reduce the gigantic number of engineering hours spent in the early phases of each project delivery, especially during the FEED phase (Front End Engineering and Design). Another important reason was to connect engineering with procurement and the wider supply chain more seamlessly.
Figure 2 shows which information structures EPC 1 put most emphasis on. Quite naturally, the Functional Location structure (tag structure, multi-discipline plant design requirements) received a lot of focus. To enable re-use and efficient transfer of data, both the reference data and a library of re-usable design structures based on that reference data were built.

Extensive analysis of previously executed projects revealed that even though the EPC had a lot of engineering concepts and data that could be re-used across projects, they more often than not created everything from scratch in the next project. In order to capitalize on and manage the collective know-how of the organization, the re-usable design structures received a lot of focus.

EPC 1 also faced different requirements from operators with respect to tagging standards, depending on which parts of the world they delivered projects to; as a consequence, multiple tagging standards needed to be supported. It was decided that no matter what format the operator wanted to receive, all tags in all projects would be governed by an internal “master tag” in the EPC’s own system and communicated to the customer in the customer’s specified format.
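A minimal sketch of the master-tag idea: one internal tag governs the object, while client-specific tag numbers are derived views of it. The format rules below are invented examples, not real operator standards.

```python
# The internal master tag is assumed to follow a "system-type-sequence" pattern.
# Client format rules are purely illustrative.

def to_client_format(master_tag: str, client: str) -> str:
    system, type_code, seq = master_tag.split("-")
    formats = {
        "client_A": f"{system}{type_code}{seq}",      # compact, no separators
        "client_B": f"{type_code}-{seq}/{system}",    # type first, system as suffix
        "client_C": master_tag,                       # accepts the internal format as-is
    }
    return formats[client]

master = "20-PA-001"
for client in ("client_A", "client_B", "client_C"):
    print(client, "->", to_client_format(master, client))
```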

The third focus area was an extensive part (or article) library with internal part numbers and characteristics showing what kinds of products could fulfill the tag requirements in the functional structure. Each part was then linked via relationships to objects representing preferred suppliers of that product in different regions of the world. This concept greatly aided engineering procurement when performing Material Take-Off (MTO), since each tag would be linked to a part for which a preferred supplier could be selected.
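The sketch below illustrates that tag-part-supplier linkage during a Material Take-Off. All identifiers and the simple dictionary layout are assumptions for the example; EPC 1’s actual part library was of course far richer.

```python
# Illustrative part/supplier linkage used during Material Take-Off:
# each tag points to a part, and each part carries preferred suppliers per region.

parts = {
    "P-100-GEN": {
        "description": "Centrifugal pump, generic",
        "preferred_suppliers": {"EU": "Supplier-A", "NA": "Supplier-B", "APAC": "Supplier-C"},
    }
}

tag_to_part = {"20-PA-001": "P-100-GEN", "20-PA-002": "P-100-GEN"}

def material_take_off(tags, region):
    """Resolve each tag to its part and the preferred supplier for the project region."""
    mto = []
    for tag, part_number in tags.items():
        part = parts[part_number]
        supplier = part["preferred_suppliers"].get(region, "no preferred supplier")
        mto.append((tag, part_number, supplier))
    return mto

for line in material_take_off(tag_to_part, "EU"):
    print(line)
```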
EPC 1 chose to focus on the reference data first in order to reach a common agreement on the needed data across their disciplines during the EPC project lifecycle. Next in line was the catalog of re-usable engineering structures. These structures could be selected and used as a starting point in any EPC project.
The third delivery in the project centered on the capabilities to create and use the different plant engineering structures (functional structure and tags, with connected parts, where both entities used the same reference data).
 
An overview explaining the different structures can be found in the article:
Plant Information Management - Information Structures, and further details regarding each information structure are discussed in:
Plant Engineering meets Product Engineering in capital projects
Handover to logistics and supply chain in capital projects
Plant Information Management - Installation and Commissioning
Plant Information Management – Operations and Maintenance

Bjorn Fidjeland

The header image used in this post is by Viacheslav Iacobchuk and purchased at dreamstime.com

PLM Benchmark – Operator 1 What did they do and why?

3/9/2018

This is the first in a series of articles where I share some experiences with you from different product companies, EPCs and operators.
The articles cover the motivation for doing as they did, and where their main focus was put in order to achieve their goals.

The different experiences span almost 20 years. I would like you to reflect a bit on that, and to keep in mind some of today’s buzzwords, especially digital twin, IoT and Big Data analytics.
In this series I will use my information structure map, or the “circle of life” as a client jokingly called it, to explain where the different companies put their focus in terms of information management strategy and why.

An overview explaining the different structures can be found in the article:
Plant Information Management - Information Structures, and further details regarding each information structure are discussed in:
Plant Engineering meets Product Engineering in capital projects
Handover to logistics and supply chain in capital projects
Plant Information Management - Installation and Commissioning
Plant Information Management – Operations and Maintenance
​
Operator 1’s first objective was to shorten project execution time from design through installation and commissioning by letting the project’s information model be built up gradually through all project phases, and by all stakeholders, in one common platform.
Done this way, there would be no handover of documentation, but rather a handover of access to, and responsibility for, data. A large focus was put on standardizing information exchange, both between the stakeholders in the capital projects and between computer systems. The entry point to all information was a 3D representation of the data structures!

Makes you think of digital twin... However, this initiative came before anybody had heard of the term. The 3D representation was NOT a design model, but rather a three-dimensional representation of the asset, linked to all the information structures, creating different dimensions, or information layers if you will.

So this operator must have been dealing with quite small assets, you might think?

Actually no: one of the assets managed comprised about a million tags. Concepts from the gaming industry, such as level of detail and back-face culling, were used to achieve the performance needed on the 3D side.
So why this enormous effort by an operator to streamline just the initial stages of an asset’s lifecycle?
After all, the operator’s real benefit comes from operating the asset in order to produce whatever it needs to produce, right?
​
Because it was seen as a prerequisite for capitalizing on plant information in training, simulation, operations, maintenance and decommissioning. Two words summarize the motivation: maximum up-time. How to achieve it: operational run-time data from sensors, linked to and compared with accurate, parametric as-designed, as-built and as-maintained data.

​
Figure 2 shows which information structures the operator put most emphasis on. Quite naturally, the Functional structure (tag structure and design requirements) and the corresponding physically installed asset information were highly important, and this is what they started with (see figure 3). Reference data, needed to compare and consolidate data from the different structures, was next in line, together with an extensive parts (article) catalog of what could be supplied by whom in different regions of the world.
There was an understanding that a highly document-oriented industry could not shift completely to structured data and information structures overnight, so document management was also included as an intermediate step. The last type of structure they focused on was project execution structures (Work Breakdown Structures). This was not because they were regarded as less important; on the contrary, they were regarded as highly important, since they introduced the time dimension, with traceability and control of who should do what, or did what, when. The reasoning was that since work breakdown structures tie into absolutely everything, they wanted to test and roll out the “base model” of data structures in the three-dimensional world (the 3D database) before introducing the fourth dimension.

​Bjorn Fidjeland

​
The header image used in this post is by Jacek Jędrzejowski and purchased at dreamstime.com

Big Data and PLM, what’s the connection?

1/3/2018

I was challenged the other day by a former colleague to explain the connection between Big Data and PLM. The connection might not be immediately apparent if your viewpoint is that of traditional Product Lifecycle Management systems, which primarily deal with managing the design and engineering data of a product or plant/facility.

However, if we first take a look at a definition of Product Lifecycle Management from Wikipedia:

“In industry, product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from inception, through engineering design and manufacture, to service and disposal of manufactured products. PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprise.”
​
Traditionally it has looked much like this
Then let’s look at a definition of Big Data
​

“Big data is data sets that are so voluminous and complex that traditional data processing application software are inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating and information privacy. There are three dimensions to big data known as Volume, Variety and Velocity.
Lately, the term "big data" tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem." Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on.”

Included in Big Data you’ll find data sets harvested from sensors within all sorts of equipment and products, as well as data fed back from software running within products. One can say that a portion of Big Data is the feedback resulting from the Internet of Things. Data in itself is not of any value whatsoever, but if the data can be analyzed to reveal meaning, trends or knowledge about how a product is used by different customer segments, then it has tremendous value to product manufacturers.
If we take a look at the operational phase of a product, and by that I mean everything that happens from manufactured product to disposal, then any manufacturer would like to get their hands on such data, either to improve the product itself or to sell services associated with it. Such services could be anything from utilizing the product as a platform for an ecosystem of connected products, to new business models where the product itself is not the key but rather the service it provides. You might, for instance, sell guaranteed uptime or availability, provided that the customer also buys into your service program.

 
The resulting analysis of the data should, in my view, be managed by, or at least serve as input to, the product definition, because the knowledge gleaned from all the analytics of Big Data sets ultimately impacts the product definition itself: it should lead to revised product designs that fulfill customer needs better. It might also lead to the revelation that it would be better to split a product into two different designs targeting two distinct end-user behavior categories identified through data analysis from the operational phase of the products.
​
Connected products, Big Data and analysis will to a far greater extent than before allow us to do the following instead:
It will mean that experience throughout the full lifecycle can be made available to develop better products, tailor to new end user behavior trends and create new business models.

Note: the image above focuses on the feedback loops to product engineering, but such feedback loops should also be made available from for instance service and operation to manufacturing.

Most companies I work with tell me that the feedback loops described in the image above are either too poor or virtually nonexistent. Furthermore, they all say that such feedback loops are becoming vital for their survival, as more and more of their revenue comes from services after a product sale and not from the product sale itself. This means that it is imperative for them to have as much reliable, analyzed data as possible about their products’ performance in the field, how their customers are actually using them, and how they are maintained.

For these companies at least, the connection between Big Data analysis and its impact on Product Lifecycle Management is becoming clearer and clearer.


Bjorn Fidjeland


The header image used in this post is by garrykillian and purchased at dreamstime.com


Digital Twin - What needs to be under the hood?

10/22/2017

In the article Plant Information Management – Information Structures, and the following posts regarding Plant Information Management (see Archive), I explained in more detail the various information structures and the importance of structuring the data as object structures with interconnecting relationships, creating context between the different information sources.

​What does all of this have to do with the digital twin? - Let's have a look.

Information structures and their interconnecting relationships can be described by one of the big buzzwords these days: the digital thread, or digital twin.
The term and concept of a digital twin was first coined by Michael Grieves at the University of Michigan in 2002, but it has since taken on a life of its own in different companies.
 
Below is an example of what information can be accessed from a digital twin or rather what the digital twin can serve as an entry point for:
If your data is structured in such a way, with connected objects, attributes and properties, an associated three-dimensional representation of the physically delivered instance is a tremendously valuable asset as a carrier of information. It is, however, not a prerequisite that it is a 3D model; a simple dashboard giving access to the individual physical items might be enough. The 3D aspect is always promoted in the glossy sales presentations by various companies, but it’s not needed for every possible use case. In a plant or an aircraft it makes a lot of sense, since the volume of information and the number of possible entry points to the full data set are staggering, but it might not be necessary to have individual three-dimensional representations of every mobile phone ever sold. It might suffice to have each data set associated with each serial number.
 
On the other hand, if you do have a 3D representation, it can become a front end used by end users for finding, searching and analyzing all the connected information from the data structures described in my previous blog posts. Such insights take us to a whole new level of understanding of each delivered product’s life, its challenges and opportunities in different environments, and the way it is actually being used by end customers.
 
Let’s say that, via the digital twin in the figure above, we select a pump. The tag of that pump uniquely identifies its functional location in the facility. An end user can pull information from the system the pump belongs to in the form of a parametric Piping & Instrumentation Diagram (P&ID), the functional specification for the pump in the designed system, information about the actually installed pump with serial number, manufacturing information, supplier, certificates, performed installation & commissioning procedures, and actual operational data of the pump itself.
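One way to picture this is as a traversal of a small relationship graph, starting from the selected tag. The sketch below is purely illustrative; the node names, relationship types and two-level traversal depth are assumptions, not how any specific platform stores its data.

```python
from pprint import pprint

# Invented graph of connected objects: tag, system, specification, physical asset,
# part, supplier and operational data, linked by named relationships.

graph = {
    "TAG:20-PA-001": {
        "belongs_to_system": "SYS:Cooling water",
        "specified_by": "SPEC:FS-20-PA-001",
        "fulfilled_by": "ASSET:SN-4711",
        "documented_in": "DOC:P&ID-20-001",
    },
    "ASSET:SN-4711": {
        "instance_of": "PART:P-100-GEN",
        "supplied_by": "SUPPLIER:Supplier-A",
        "operational_data": "TIMESERIES:pump-4711",
    },
}

def pull_linked_information(node, depth=2):
    """Recursively collect everything reachable from an entry point, e.g. a selected pump tag."""
    if depth == 0 or node not in graph:
        return {}
    return {rel: (target, pull_linked_information(target, depth - 1))
            for rel, target in graph[node].items()}

pprint(pull_linked_information("TAG:20-PA-001"))
```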
 
The real power in the operational phase becomes evident when operational data is associated with each delivered pump. In such a case the operational data can be compared with the environmental conditions the physical equipment operates in. Let’s say that the fluid being pumped contains more and more sediment, and our historical records from similar conditions tell us that the pump will likely fail during the next ten days due to wear and tear on critical components. However, it is also indicated that if we reduce the power by 5 percent, we will be able to operate the full system until the next scheduled maintenance window in 15 days. Information like that gives real business value in terms of increased uptime.
 
Let’s look at some other possibilities.
If we now consider a full facility with a three-dimensional representation:
During the EPC phase it is possible to associate the 3D model with a fourth dimension, time, turning it into a 4D model. By doing so, the model can be used to analyze and validate different installation execution plans, or monitor the actual ongoing installation of the Facility. We can actually see the individual parts of the model appearing as time progresses.
 
A fifth dimension can also be added, namely cost. Here the cost development over time according to one or several proposed installation execution plans or the actual installation itself can be analyzed or monitored.
This is already being done by some early movers in the construction industry where it is referred to as 5D or Virtual Design & Construction.
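To make the 4D/5D idea concrete, here is a minimal sketch where each model object carries planned installation dates (the time dimension) and a cost (the fifth dimension), so the model can be queried for any point in time. The data, field names and simple accumulation rule are invented for illustration.

```python
from datetime import date

# Illustrative model objects with planned installation dates and cost.
model_objects = [
    {"tag": "20-PA-001", "install_start": date(2024, 3, 1),  "install_end": date(2024, 3, 5),  "cost": 12000},
    {"tag": "20-PA-002", "install_start": date(2024, 3, 10), "install_end": date(2024, 3, 12), "cost": 12000},
    {"tag": "20-HX-001", "install_start": date(2024, 4, 1),  "install_end": date(2024, 4, 20), "cost": 85000},
]

def installed_at(objects, as_of):
    """4D view: which objects are fully installed at a given date."""
    return [o["tag"] for o in objects if o["install_end"] <= as_of]

def committed_cost_at(objects, as_of):
    """5D view: accumulated cost of objects whose installation has started."""
    return sum(o["cost"] for o in objects if o["install_start"] <= as_of)

print(installed_at(model_objects, date(2024, 3, 15)))       # ['20-PA-001', '20-PA-002']
print(committed_cost_at(model_objects, date(2024, 3, 15)))  # 24000
```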
 
The model can also serve as an important asset when planning and coordinating space claims made by different disciplines, during design as well as during the actual installation. It can easily give visual feedback if there is a conflict between space claims made by electrical engineering and mechanical engineering, or if there is a conflict in the installation execution plan in terms of planned access by different work crews.
More and more companies are also making use of laser scanning in order to get an accurate 3D model of what has actually been installed so far. This model can easily be compared with the design model to see if there are any deviations. If deviations are found, they can be acted upon by analyzing how they will impact the overall system: can they be left as they are, or do they require re-design? Does the decision to leave them as they are change the performance of the overall system? Are we still able to perform the rest of the installation with less available space?
Answers to these questions might entail that we have to dismantle the parts of the system that have deviations. It is, however, a lot better and more cost-effective to identify such problems as early as possible.
 
This is just great, right? Such insights would have huge impacts on how EPCs manage their projects, how operators run their plants, and how product vendors can operate or service their equipment in the field, as well as on feeding information back to engineering to make better products.

New business models can be created along the lines of: “We sell power by the hour, dear customer; you don’t even have to buy the asset itself!”
(Power-by-the-Hour is a trademark of Rolls-Royce; although the concept itself is 50 years old, you can read about a more recent development here.)
 
So why haven’t more companies already done it?
 
Because in order to get there, the underlying data must be connected, and in the form of... yes, data as objects, attributes and relationships. It requires a massive shift from document orientation to connected data orientation to be at its most effective.
 
On the bright side, several companies in very diverse industries have started this journey, and some are already starting to harvest the fruits of their adventure.
​
My advice to any company thinking about doing the same would be along the lines of:
When eating this particular elephant, do it one bite at a time, remember to swallow, and let your organization digest between each bite.

Bjorn Fidjeland

The header image used in this post is by Elnur and purchased at dreamstime.com

​

Data Integration – Why Dictionaries…..?

8/19/2017

Most companies of more than medium size that do any engineering soon find themselves in a situation similar to the figure below.
The names of the applications might be different, but the underlying problem remains the same: engineering data is created by different best-of-breed design tools used by different engineering disciplines, and the data must at some point be consolidated across disciplines and communicated to another discipline, be it procurement, project execution, manufacturing and/or the supply chain.
​
This article is a continuation of the thoughts discussed in an earlier post called Integration Strategies, in case the full background is wanted.

​

As the first figure indicates, this has often ended up in a lot of application-specific point-to-point integrations. For roughly the last 15 years, so-called Enterprise Service Buses have been available to organizations, enterprise architects and software architects. The Enterprise Service Bus, often referred to as an ESB, is a common framework for integration that allows different applications to subscribe to data published by other applications, thereby creating a standardized “information highway” between different company domains and their software of choice.
By implementing such an Enterprise Service Bus, the situation in the company would look somewhat like the figure above. From an enterprise architecture point of view this looks fine, but what I often see in the organizations I work with is more depressing. Let’s dive in and see what often goes on behind the scenes.
Modern ESBs have graphical user interfaces that can interpret the publishing application’s data format, usually by means of XML, or rather the XSD. The same is true for the subscribing applications.
This makes it easy to create integrations by simply dragging and dropping data sources from one to the other. Of course, one will often have to combine several attributes from one application into one specific attribute in another application, but this is also usually supported.
​
So far everything is fine, and integration projects have become a lot easier than before. BUT, and there is a big but: what happens when you have multiple applications integrated?
The problems of point-to-point integrations have effectively been re-created inside the Enterprise Service Bus, because if I change the name of an attribute in a publishing application’s connector, all the subscribing applications’ connectors must be changed as well.
How can this be avoided? Several ESBs support the use of so-called dictionaries, and chances are that the Enterprise Service Bus, ironically, is already using one in the background.

So, what is a dictionary in this context?
Think of it as a Rosetta stone. And what is a Rosetta stone, you might ask? The find of the Rosetta stone was the breakthrough in understanding Egyptian hieroglyphs. The stone contained a decree with the same text in hieroglyphs, Demotic script and ancient Greek, allowing us to decipher Egyptian hieroglyphs.
Imagine the frustration before this happened: a vast repository of information carved in stone all over the magnificent finds of an earlier civilization, and nobody could make sense of it. Sounds vaguely familiar in another context.
​
Back to our more modern integration issues.
If a dictionary, or Rosetta stone, is placed in the middle, serving as an interpretation layer, it won’t matter if the names of some attributes in one of the publishing applications change. None of the other applications’ connectors will be affected, since only the mapping to the dictionary must be changed, and that is the responsibility of the publishing application.
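As a minimal sketch of the pattern, assuming invented attribute names on both sides: each application maps only to and from the neutral dictionary terms, never directly to another application’s attributes.

```python
# Publishing application -> dictionary term (owned by the publisher)
publisher_to_dictionary = {
    "CAD_TagNo": "tag_number",
    "CAD_DesignPressure": "design_pressure",
}

# Dictionary term -> subscribing application (owned by the subscriber)
dictionary_to_subscriber = {
    "tag_number": "ERP_FunctionalLocation",
    "design_pressure": "ERP_PressureRating",
}

def translate(payload, to_dictionary, from_dictionary):
    """Translate a publisher's payload to a subscriber's format via the dictionary."""
    neutral = {to_dictionary[k]: v for k, v in payload.items() if k in to_dictionary}
    return {from_dictionary[k]: v for k, v in neutral.items() if k in from_dictionary}

cad_message = {"CAD_TagNo": "20-PA-001", "CAD_DesignPressure": "16 bar"}
print(translate(cad_message, publisher_to_dictionary, dictionary_to_subscriber))
# {'ERP_FunctionalLocation': '20-PA-001', 'ERP_PressureRating': '16 bar'}
```

If the publishing application renames CAD_TagNo, only its own mapping to the dictionary changes; the subscribers remain untouched.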
If such a dictionary is based on an industry standard, it will also have some very beneficial side effects.
Why?
Because if your company’s internal integration dictionary is standards-based, then generating the information sent to clients and suppliers, traditionally referred to as transmittals or submittals, becomes very easy indeed.

If we expand this line of thought to the interpretation of data from operational systems (harvesting data from physical equipment in the field), commonly referred to as IoT, or to the acquisition of data through SCADA systems, then the opportunities become even greater.

In this case it really is possible to kill two birds with one stone, thereby creating a competitive advantage!

Bjorn Fidjeland


The header image used in this post is by Bartkowski  and purchased at dreamstime.com

Who owns what data when…..?

7/7/2017

A vital question when looking at cross-departmental process optimization and integration is, in my view: who owns what data when in the overall process?

Usually this question will spark quite a discussion between process owners, company departments, data owners and the different enterprise architects. The main reason is that, depending on where the stakeholders have their main investment, they tend to look at “their” part of the process as the most important and as the “master” for their data.

Just think about sales with their product configurators, engineering with CAD/PLM, and supply chain, manufacturing & logistics with ERP and MES. Further along the lifecycle you encounter operations and service with EAM (Enterprise Asset Management) systems, sometimes including MRO (Maintenance, Repair and Operations/Overhaul), the latter being for products in operational use. Operations and service is really on the move right now due to the ability to receive valuable feedback from all products used in the field (commonly referred to as the Internet of Things), even for consumer products, but hold your horses on that last one for just a little while.
​
The different departments and process owners will typically have claimed ownership of their particular parts of the process, making it look something like this:
This would typically be a traditional, linear product engineering, manufacturing and distribution process. Each department has also selected IT tools that suit their particular needs in the process.
This in turn leads to information handovers both between company departments and between IT tools, and due to the complexity of IT system integration, usually as little data as possible is handed from one system to the next.

So far it has been quite straightforward to answer “who owns what data”, especially for the data that is actually created in a department’s own IT system. The tricky part is the when in “who owns what data when”, because the when implies that ownership of certain data is transferred from one department and/or IT system to the next one in the process. In a traditional linear process, such information would be “hurled over the wall” like this:
Now, since as little information as possible flowed from one department / IT system to the next, each department would make do as best they could, and create or re-create in their own system all the information that did not come directly through integration.
Only in cases where there were really big problems with lacking or clearly faulty data would an initiative be launched to look at the process and any affected system integrations.

The end result is that the accumulated information throughout the process that can be associated with the end product, that is to say the physical product sold to the consumer, is only a fraction of the actual sum of information generated in the different departments’ processes and systems.
​
Now, what happens when operations & service gets more and more detailed information from each individual product in the field, and starts feeding that information back to the various departments and systems in the process?
The process ceases to be linear; it becomes circular, with constant feedback of analyzed information flowing back to the different departments and IT systems.

Well, what’s the problem, you might ask?

The first thing that becomes clear is that each department, with its systems, does not have enough information to make effective use of all the information coming from operations, because each has a quite limited set of data, concerning mainly its own discipline.

Secondly, the feedback loop is potentially constant, or near real-time, which opens up completely new service offerings. However, the current process and infrastructure going from design through engineering and manufacturing was never built to handle this kind of speed and agility.

Ironically, from a Product Lifecycle Management perspective, we’ve been talking about breaking down information and departmental silos in companies to utilize the L in PLM for as long as I can remember. The way it looks now, however, it is probably going to be operations and the enablement of the Internet of Things and Big Data analytics that will force companies to go from strictly linear to circular processes.

And when you ultimately do, please always ask yourself “who should own what data when”, because ownership of data is not synonymous with the creation of data. Ownership is transferred along the process and accumulates into a full data set of the physically manufactured product, until it is handed back again as a result of a fault in the product or of possible optimization opportunities for the product.

And it will happen faster and faster.
​
Bjorn Fidjeland


The header image used in this post is by Bacho12345 and purchased at dreamstime.com

Digitalization - sure, but on what foundation?

4/7/2017

Over the last couple of years I’ve been working with some companies on digitalization projects and strategies. Digitalization is of course very attractive in a number of industries:

  • Equipment manufacturers, where digitalization can be merged with the Internet of Things to create completely new service offerings and relationships with customers
  • Capital project EPCs and operators, where a digital representation of the delivery can be handed over as a “digital twin” to the operator, who can hook it up to EAM or MRO solutions to monitor the physical asset in real time in a virtual world. The real value for the operator is increased up-time and lower operational costs, whereas EPCs can offer new kinds of services and, in addition, mitigate project risks better.
  • The construction industry, where the use of VDC (Virtual Design & Construction) technology can be extended to help the facility owner minimize operational costs and optimize comfort for tenants by connecting all kinds of sensors in a modern building and adjusting accordingly.
But hang on a second: if we look at the definition of digitalization, at least the way Gartner views it:

“Digitalization is the use of digital technologies to change a business model and provide new revenue and value-producing opportunities; it is the process of moving to a digital business.” (Source: Gartner)

…The process of moving to a digital business….

The digitalization strategies of most of the companies I’ve been working with focus on the creation of new services and revenue possibilities on the service side of the lifecycle of a product or facility, so AFTER the product has been delivered or the plant is in operation.
There is nothing wrong with that, but if the process from design through engineering and manufacturing is not fully digitalized (and by that I do not mean documents in digital format, but data as information structures linked together), then it becomes very difficult to capitalize on the promises of the digitalization strategy.
​
Consider two examples:
Figure 1.
​
Figure 1 describes a scenario where design and engineering tools work more or less independently and where the result is consolidated in documents or Excel before being communicated to ERP. This is the extreme scenario, used to illustrate the point; most companies have some sort of PDM/PLM system or engineering register that performs at least a partial consolidation of data before sending it to ERP. However, I often find some design or engineering tools operating as “islands” outside the consolidation layer.

So if we switch viewpoint to the new digital service offering promoted to end customers: what happens when a sensor reports back a fault in the delivered product? The service organization must know exactly what has been delivered, where the nearest spare parts are, how the product is calibrated and so on, in order to fix the problem quickly with a minimum use of resources, make a profit, and exceed customer expectations to gain a good reputation.
​
How likely is that to happen with the setup in figure 1?

​
Figure 2.
​
The setup in figure 2 describes a situation where design and engineering information is consolidated together with information about the actually delivered physical products. This approach does not necessarily dictate that the information is available in one, and only one, software platform; the essence is that the data must be structured and consolidated.

Again, let’s switch viewpoint to the new digital service offering promoted to end customers. What happens when a sensor reports back a fault in the delivered product?
When data is available as structured and linked data, it is instantly available to the service organization, and appropriate measures can be taken while the customer is informed with accurate data.
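A hedged sketch of the figure 2 situation: a fault event carrying a serial number is resolved against consolidated, linked data, so the service organization immediately sees the as-built configuration, the calibration, and where the nearest spare part is. All records and field names below are invented for illustration.

```python
# Consolidated, linked data about delivered products and spare part stock (illustrative only).
delivered_products = {
    "SN-4711": {
        "part_number": "P-100-GEN",
        "as_built_config": {"impeller": "rev C", "seal": "viton"},
        "calibration": {"flow_offset": 0.02},
        "site": "Plant North",
    }
}

spare_part_stock = {
    "P-100-GEN": [("Warehouse Oslo", 3), ("Warehouse Hamburg", 0)],
}

def handle_fault_event(event):
    """Resolve a sensor fault event to everything the service organization needs to act."""
    product = delivered_products[event["serial_number"]]
    available = [(loc, qty) for loc, qty in spare_part_stock[product["part_number"]] if qty > 0]
    return {
        "site": product["site"],
        "as_built_config": product["as_built_config"],
        "calibration": product["calibration"],
        "nearest_stock": available[0] if available else None,
    }

print(handle_fault_event({"serial_number": "SN-4711", "code": "VIBRATION_HIGH"}))
```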
​
My clear recommendation is: if you are embarking on a digitalization journey to enhance your service offering and offer new service models, make sure you have a solid digital foundation to build those offerings on. Because if you don’t, it will be very difficult to achieve the margins you are dreaming of.
​
Bjorn Fidjeland


The header image used in this post is by kurhan and purchased at dreamstime.com