
The Digital Enterprise - Business Software

4/7/2025

When business needs have been identified and processes defined, it becomes important to choose a business software strategy that supports business process execution.
Implementing the business processes in software allows the enterprise to make sure the process is followed, to measure it, and to gather feedback on both the process itself and its outcomes.
Effective implementation of the business processes in software is crucial for a digital enterprise to be able to continuously improve, detect changes, transform, and respond to new business opportunities as well as threats.

In recent times I have encountered some interesting points for debate. Some startup companies only roughly define their business processes before scanning the market for business software. The idea is that such software solutions generally come with pre-defined processes, at least at a lower discipline level, and that a startup can adopt them straight away since there are no legacy processes or legacy data.

I see both pros and cons to this approach, but would love to hear your opinion first.


If we examine some different business software strategy approaches, I would like to focus on the three most common ones.
 
The monolithic approach:
In this approach, one software solution, or at least software from only one vendor, is selected. It does have the advantage that, at least in theory, there is a one-stop shop for Product Lifecycle Management, Enterprise Resource Planning, Manufacturing Execution System, after sales and services, etc.
The downside is that all eggs are in one basket, and it will be difficult to ever change systems.
And trust me, the software provider knows this.

​​ 

A few core enterprise business systems:

The strategy here is to identify best-of-breed platforms for major “chunks” of the business process. Examples could be one platform for design and engineering, another for procurement and manufacturing, and a third for aftermarket and service. For this setup to work, it becomes necessary to spend quite a bit of time and money on integration strategies to ensure that sufficient information flows back and forth between the software platforms. A key enabler here is to define a common language across the enterprise, meaning a Reference Data Library (RDL) of master data classes to ensure interoperability between the software platforms. This greatly aids integrations, as cumbersome data mapping tables can be eliminated from the integrations (see Data Integration – Why Dictionaries…..? )
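To make the RDL idea concrete, here is a minimal sketch. The class, attribute, unit and platform names are purely illustrative assumptions, not taken from any specific standard or product: one shared master data class with canonical attribute names and units, and a per-platform mapping onto it, so each platform maps to the RDL once instead of maintaining pairwise mapping tables.

```python
# Minimal sketch of a Reference Data Library (RDL) class. All names and units
# below are illustrative assumptions, not taken from a real standard.
from dataclasses import dataclass


@dataclass(frozen=True)
class RdlAttribute:
    name: str   # canonical attribute name in the RDL
    unit: str   # canonical unit of measure


# One shared master data class, used as the common language by all platforms.
CENTRIFUGAL_PUMP = {
    "design_pressure": RdlAttribute("design_pressure", "bar"),
    "rated_power": RdlAttribute("rated_power", "kW"),
    "dry_weight": RdlAttribute("dry_weight", "kg"),
}

# Each platform maps its local attribute names to the RDL exactly once,
# instead of maintaining one mapping table per integration partner.
PLM_TO_RDL = {"DesignPress": "design_pressure", "PowerRating": "rated_power", "Weight": "dry_weight"}
ERP_TO_RDL = {"PRESS_DSGN": "design_pressure", "PWR_KW": "rated_power", "WGHT_KG": "dry_weight"}


def to_rdl(record: dict, local_to_rdl: dict) -> dict:
    """Translate a platform-local record into canonical RDL terms."""
    return {local_to_rdl[key]: value for key, value in record.items() if key in local_to_rdl}


# Example: an engineering record expressed in RDL terms, ready for any subscriber.
print(to_rdl({"DesignPress": 16.0, "PowerRating": 75.0}, PLM_TO_RDL))
```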

​
Orchestrated microservices:
The idea here is to utilize central orchestration, which manages the interactions and workflows between the different microservices, while the services themselves are developed to perform the activities.
This approach is flexible and allows for using tools like Kubernetes for container orchestration and workflow engines like Camunda and Apache Airflow for managing business processes. The downside is that it requires considerable development effort to implement your business processes.
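As a minimal sketch of what such orchestration can look like, here is a small Airflow DAG that coordinates two hypothetical microservices in sequence. The service URLs, task names and payloads are assumptions for illustration only, and the example assumes a recent Airflow 2.x installation.

```python
# Minimal orchestration sketch: an Airflow DAG coordinating two hypothetical
# microservices. Endpoints and payloads are illustrative assumptions.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def release_design():
    # Call a hypothetical "design release" microservice.
    response = requests.post(
        "http://design-service.internal/api/release", json={"ebom_id": "EBOM-100"}, timeout=30
    )
    response.raise_for_status()


def create_procurement_demand():
    # Call a hypothetical procurement microservice once the release has succeeded.
    response = requests.post(
        "http://procurement-service.internal/api/demands", json={"ebom_id": "EBOM-100"}, timeout=30
    )
    response.raise_for_status()


with DAG(
    dag_id="release_to_procurement",
    start_date=datetime(2025, 1, 1),
    schedule=None,   # triggered on demand rather than on a timetable
    catchup=False,
) as dag:
    release = PythonOperator(task_id="release_design", python_callable=release_design)
    demand = PythonOperator(task_id="create_procurement_demand", python_callable=create_procurement_demand)

    # The orchestrator, not the services, enforces the business process order.
    release >> demand
```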

​
One should always carefully consider the amount of energy and resources put into in-house developed solutions, as they will have to be maintained and technically upgraded to stand the test of time, both in terms of functionality and from a software security perspective.

​Bjorn Fidjeland



Opportunities and strategies - Product Configuration Lifecycle Management

4/25/2021


This time, an article aimed at the more traditional Product Lifecycle Management domain, and especially at configurable products, so-called Configure To Order (CTO) products. This article is a direct result of discussions I’ve had with Henrik Hulgaard, the CTO of Configit, on Configuration Management in general and Product Configuration Management in particular. Configit specializes in Product Configuration Management, or as they prefer to call it, Configuration Lifecycle Management.
 
Most businesses that design, manufacture and sell products have a system landscape in place to support key areas during the lifecycle of a product, much as in the image below (there are of course differences from company to company).

​
Figure 1.

This works well as long as the product lifecycle is linear, as it has mostly been in the past. However, as more and more companies strive to let customers “personalize” their products (that is, configure them to support their individual needs), to harvest data and behavior from the field through sensors to detect trends in usage, and to offer new services while the product is in use (operational), the lifecycle cannot be linear anymore in my view. This is because all phases of the lifecycle need feedback and information from the other phases to some degree. You may call this “a digital thread”, “digital twin” or “digital continuity” if you will (figure 2).
Figure 2.

Such a shift puts enormous requirements on traceability and change management of data all the way from how the product was designed, through to how it is used, how it is serviced and ultimately how it is recycled. If the product is highly configurable, the number of variants that can be sold and used is downright staggering.
Needless to say, it will be difficult to offer a customer good service if you do not know which variant of the product the customer has purchased, and how that particular instance of the product has been maintained or upgraded in the past.
 
So, what can a company do to address these challenges and also the vast opportunities that such feedback loops offer?

If we consider the three system domains that are normally present (there are often more), they are more often than not quite siloed. In my experience that is not because the systems cannot be integrated, but rather a result of organizations still working in a quite silo-oriented way (Figure 3).

​


Figure 3.

All companies I’ve worked with want to break down these silos and internally become more transparent and agile, but which domain should take on the responsibility for managing the different aspects of product configuration data? I mean, there is the design & engineering aspect, the procurement aspect, the manufacturing aspect, the sales aspect, the usage/operation aspect, the service/maintenance aspect and ultimately the recycling aspect.
 
Several PLM systems today have configuration management capabilities, and for many companies it would make sense to at least manage product engineering configurations here, but where do you stop? Sooner or later you will have to evaluate whether more transaction-oriented data should be incorporated into the PLM platform, which is not a PLM system’s strong point (figure 4).
Figure 4.

On the other hand, several ERP systems also offer forms of configuration management, either as an add-on or as part of their offering. The same question needs to be answered here: where does it make most sense to stop, given that ERP systems are transaction-oriented, while PLM systems are much more oriented towards processes and iterative work (figure 5)?


Figure 5.

The same questions need to be asked and answered for the scenario regarding CRM. Where does it make sense to draw the boundaries towards ERP or PLM, as in figure 6?

​
Figure 6.

I have seen examples of companies wanting to address all aspects with a single software vendor’s portfolio, but in my experience that only masks the same questions within a single portfolio of software solutions. Who does what, where, and with responsibility for what type of data when, is not resolved by using a single vendor’s software. Those are organizational and work-process questions, not software questions.
 
Another possible solution is to utilize what ERP, PLM and CRM systems are good at in their respective domains, and implement the adjoining business processes there. Full Product Configuration Management, or Configuration Lifecycle Management, needs aspects of data from all the other domains to effectively manage the full product configuration, so a more domain-specific Configuration Management platform could be introduced.


Figure 7.

Such a platform will have to be able to reconcile information from the other platforms and tie it together correctly, hence it would need a form of dictionary to do that. In addition, it needs to define or at least master the ruleset defining what information from PLM can go together with what information in ERP and CRM to form a valid product configuration that can legally be sold in the customer’s region.

As an example, consider: which product design variant that meets the customer requirements can be manufactured most cost-effectively and closest to the customer, with minimal use of resources, while still fulfilling the regulatory requirements in that customer’s country or region?
These are some of the questions that must be answered.
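As a minimal sketch of what such a cross-domain ruleset could look like (all option, plant and region names are illustrative assumptions, and real configuration engines, including Configit’s, use far richer constraint models than this), consider a check that combines engineering options, manufacturing capabilities and sales regions into a single validity decision:

```python
# Illustrative ruleset combining data that would typically live in PLM
# (engineering options), ERP (plant capabilities) and CRM (sales regions).
# All names and rules are assumptions made up for this sketch.

ALLOWED_DRIVETRAIN_PER_REGION = {
    "EU": {"electric", "hybrid"},
    "US": {"electric", "hybrid", "combustion"},
}

PLANT_CAPABILITIES = {
    "plant_a": {"electric"},
    "plant_b": {"hybrid", "combustion"},
}


def is_valid_configuration(drivetrain: str, plant: str, region: str) -> bool:
    """True if the variant can be built at the plant and legally sold in the region."""
    buildable = drivetrain in PLANT_CAPABILITIES.get(plant, set())
    sellable = drivetrain in ALLOWED_DRIVETRAIN_PER_REGION.get(region, set())
    return buildable and sellable


# An electric variant built at plant_a may be sold in the EU...
assert is_valid_configuration("electric", "plant_a", "EU")
# ...while a combustion variant built at plant_b may not.
assert not is_valid_configuration("combustion", "plant_b", "EU")
```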

More strategic reasons to evaluate a setup like in figure 7 could be:
  • As the departmental silos in an organization are often closely linked to the software platform domains, it might be easier to ensure collaboration and acceptance by key stakeholders across the organization with a “cross-cutting” platform that thrives on quality information supplied by the other platforms.
  • It poses an opportunity for companies with a strategy of not putting too many eggs in the basket of one particular software system vendor.
  • It could foster quality control of information coming from each of the other domains as such a CLM solution is utterly dependent on the quality of information from the other systems.
  • Disconnects in the information from the different aspects can be easily identified.

I would very much like to hear your thoughts on this subject.
​
Bjorn Fidjeland 


​The header image used in this post is by plmPartner

From data silos to data flow - part 1

10/19/2018

In these two articles I’ll try to explain why and how a data flow approach between the main systems during a plant’s lifecycle is far more effective than a document-based handover process between project phases. I have earlier discussed the various information structures that need to be in place across the same lifecycle. If you’re interested, the list of articles can be found at the end of this article.
​
During design and engineering, the different plant and product design disciplines’ authoring tools play a major role as feeder systems to the Plant PLM platform. All the information coming from these tools needs to be consolidated, managed and put under change control. The Plant PLM platform also plays a major role in documenting the technical baselines of the plant, such as As-Designed, As-Built, As-Commissioned and As-Maintained. See figure 1.
​
Figure 1.

When moving into the procurement phase, a lot of information needs to flow to the ERP system for purchasing everything needed to construct the plant. The first information that must be transferred is the released product designs, i.e. the Engineering Bill of Materials. This is the traditional Product Lifecycle Management domain. The released EBOM states that, seen from product engineering, everything is ready for manufacturing, and ERP can start procuring parts and materials to manufacture the product. Depending on the level of product engineering done in the plant project, this can be a lot, or just individual parts representing standard components or standard parts.

The next information that needs to go to ERP is released tag information, where the tag is connected to a released part. A typical example would be that a piping system is released with, let’s say, 8 pump tags, and the pumps’ individual requirements in the system can all be satisfied by a generic part from a manufacturer. This would mean that in the Plant PLM system there are 8 released pump tag objects, all connected to the same generic released part. This constitutes a validated, project-specific demand for 8 pumps. At this stage an As-Designed baseline can be created in the Plant PLM platform for that particular system.
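As a minimal sketch of that relationship (tag and part identifiers are illustrative assumptions), the released tags all point to one generic part, and aggregating them yields the project-specific demand that is handed to ERP:

```python
# Illustrative sketch: released tags in the Plant PLM platform referencing one
# generic released part, aggregated into a procurement demand for ERP.
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Tag:
    tag_id: str      # functional position in the plant, e.g. "P-1001"
    part_id: str     # released generic part satisfying the tag requirements
    released: bool


# Eight released pump tags, all satisfied by the same generic part.
tags = [Tag(f"P-10{i:02d}", "PUMP-GEN-200", released=True) for i in range(1, 9)]


def procurement_demand(released_tags: list[Tag]) -> Counter:
    """Aggregate released tags into a quantity per part number that ERP can order."""
    return Counter(tag.part_id for tag in released_tags if tag.released)


print(procurement_demand(tags))   # Counter({'PUMP-GEN-200': 8})
```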
​
This information must be transferred to ERP, where it now means that procurement should place an order for 8 pumps and manage the logistics around this. However, seen from project planning and execution, it might be identified that, according to the project execution plan, several other systems are scheduled for release shortly, which would make the order 50 pumps instead of 8. After communicating with the affected stakeholders, it may be decided to defer the order.
Figure 2.

As the order is placed together with information regarding each specific tag requirement, preparations for goods receipt, intermediate storage and work orders for installation must be made. This is normally done in an Enterprise Asset Management (EAM) system, which also needs to be aware of the tags and their requirements, the physical locations where the arrived pumps are to be installed, and which part definition each received physical asset represents. All of this information is fed to the EAM system from the Plant PLM platform. As the physical assets are received, each of our now 50 pumps needs to be inspected, logged in the EAM system together with the information provided by the vendor, and associated with the common part definition. If the pumps are scheduled for immediate installation, each delivered physical asset is tagged as it is installed to fulfill a dedicated function in the plant.
​
At this stage the information about the physical asset and its relations to tag, physical location and corresponding part is sent back to the Plant PLM platform for consolidation. This step is crucial if a consolidated As-Built baseline is needed and there is a need to compare As-Designed with As-Built. Alternatively, the EAM system needs to “own” the baselines.
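To illustrate that comparison (the baseline structures, tag and part identifiers are illustrative assumptions), a consolidated As-Built baseline fed back from EAM can be checked against the As-Designed baseline to spot deviations:

```python
# Illustrative sketch: comparing an As-Designed baseline (tag -> designed part)
# with an As-Built baseline fed back from EAM (tag -> installed serial number
# and part) to find tags where the installed part deviates from the design.

as_designed = {"P-1001": "PUMP-GEN-200", "P-1002": "PUMP-GEN-200"}

as_built = {
    "P-1001": {"serial": "SN-4711", "part": "PUMP-GEN-200"},
    "P-1002": {"serial": "SN-4712", "part": "PUMP-GEN-300"},
}


def deviations(designed: dict, built: dict) -> dict:
    """Return the tags whose installed part differs from the designed part."""
    return {
        tag: {"designed": part, "built": built[tag]["part"]}
        for tag, part in designed.items()
        if tag in built and built[tag]["part"] != part
    }


print(deviations(as_designed, as_built))
# {'P-1002': {'designed': 'PUMP-GEN-200', 'built': 'PUMP-GEN-300'}}
```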
Figure 3.

The next step is to make the Integrated Control & Safety system aware of the installed assets, and this will be among the topics for the next article.

If you want to know more about which information structures and data need to be consolidated and flow between the systems, you can find more information here:


Plant Information Management - Information Structures
Archive of articles


Bjorn Fidjeland


The header image used in this post is by Wavebreakmedia Ltd  and purchased at dreamstime.com


Data Integration – Why Dictionaries…..?

8/19/2017

Most companies of more than medium size that do any engineering soon find themselves in a situation similar to the figure below.
The names of the applications might be different, but the underlying problem remains the same: engineering data is created by different best-of-breed design tools used by different engineering disciplines, and the data must at some point be consolidated across disciplines and communicated to another discipline, be it procurement, project execution, manufacturing and/or supply chain.
​
This article is a continuation of the thoughts discussed in an earlier post called Integration Strategies, if the full background is wanted.

​

As the first figure indicates, this has often ended up in a lot of application-specific point-to-point integrations. For the last 15 years, more or less, so-called Enterprise Service Buses have been available to organizations and to enterprise and software architects. The Enterprise Service Bus, often referred to as an ESB, is a common framework for integration that allows different applications to subscribe to published data from other applications, thereby creating a standardized “information highway” between different company domains and their software of choice.
By implementing such an Enterprise Service Bus, the situation in the company would look somewhat like the figure above. From an enterprise architecture point of view this looks fine, but what I often see in the organizations I work with is more depressing. Let’s dive in and see what often goes on behind the scenes.
Modern ESBs have graphical user interfaces that can interpret the publishing application’s data format, usually by means of XML, or rather the XSD. The same is true for the subscribing applications.
This makes it easy to create integrations by simply dragging and dropping data sources from one to the other. Of course, one will often have to combine several attributes from one application into one specific attribute in another application, but this is also usually supported.
​
So far everything is just fine, and integration projects have become a lot easier than before. BUT, and there is a big but: what happens when you have multiple applications integrated?
The problems of point-to-point integrations have effectively been re-created inside the Enterprise Service Bus, because if I change the name of an attribute in a publishing application’s connector, all the subscribing applications’ connectors must be changed as well.
How can this be avoided? Well, several ESBs support the use of so-called dictionaries, and the chances are that the Enterprise Service Bus, ironically, is already using one in the background.

So, what is a dictionary in this context?
Think of it as a Rosetta stone. Well, what is a Rosetta stone, you might ask. The find of the Rosetta Stone was the breakthrough in understanding Egyptian hieroglyphs. The stone contained a decree with the same text in hieroglyphs, Demotic script and ancient Greek, allowing us to decipher Egyptian hieroglyphs.
Imagine the frustration before this happened: a vast repository of information carved in stone all over the magnificent finds from an earlier civilization, and nobody could make sense of it. Sounds vaguely familiar in another context.
​
Back to our more modern integration issues.
If a dictionary or Rosetta stone is placed in the middle, serving as an interpretation layer, it won’t matter if the name of some of the attributes in one of the publishing applications changes. None of the other applications’ connectors will be affected, since it is only the mapping to the dictionary that must be changed, and that is the responsibility of the publishing application.
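A minimal sketch of that interpretation layer (attribute and term names are illustrative assumptions): each application owns exactly one mapping, between its local attribute names and the dictionary terms, so renaming an attribute on the publishing side only touches the publisher’s own mapping.

```python
# Illustrative dictionary-based integration: publishers translate into shared
# dictionary terms, subscribers translate out of them. All names are assumptions.

# Per-application mappings, each owned and maintained by that application's team.
publisher_map = {"DesignPress": "design_pressure", "PowerRating": "rated_power"}
subscriber_map = {"PRESS_DSGN": "design_pressure", "PWR_KW": "rated_power"}


def publish(record: dict, local_to_dictionary: dict) -> dict:
    """Translate a publisher's local record into dictionary terms."""
    return {local_to_dictionary[k]: v for k, v in record.items() if k in local_to_dictionary}


def subscribe(canonical: dict, local_to_dictionary: dict) -> dict:
    """Translate a dictionary-term record into a subscriber's local names."""
    dictionary_to_local = {term: local for local, term in local_to_dictionary.items()}
    return {dictionary_to_local[k]: v for k, v in canonical.items() if k in dictionary_to_local}


# If the publisher renames "DesignPress" to "DesignPressure", only publisher_map
# changes; the canonical record and every subscriber mapping stay untouched.
canonical = publish({"DesignPress": 16.0, "PowerRating": 75.0}, publisher_map)
print(subscribe(canonical, subscriber_map))   # {'PRESS_DSGN': 16.0, 'PWR_KW': 75.0}
```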
If such a dictionary is based on an industry standard, it will also have some very beneficial side effects.
Why?
Because if your company’s internal integration dictionary is standards-based, then generating the information sent to clients and suppliers, traditionally referred to as transmittals or submittals, becomes very easy indeed.

If we expand our line of thought to the interpretation of data from operational systems (harvesting data from physical equipment in the field, commonly referred to as IoT, or acquiring data through SCADA systems), the opportunities become even greater.

In this case it really is possible to kill two birds with one stone, and thereby create a competitive advantage!

Bjorn Fidjeland


The header image used in this post is by Bartkowski  and purchased at dreamstime.com

Who owns what data when…..?

7/7/2017

A vital question when looking at cross-departmental process optimization and integration is, in my view: who owns what data when in the overall process?
Usually this question will spark quite a discussion between the process owners, company departments, data owners and the different enterprise architects. The main reason is that, depending on where the stakeholders have their main investment, they tend to look at “their” part of the process as the most important and as the “master” for their data.

Just think about sales with their product configurators, engineering with CAD/PLM, and supply chain, manufacturing & logistics with ERP and MES. Further along the lifecycle you encounter operations and service with EAM (Enterprise Asset Management) systems, sometimes including MRO (Maintenance, Repair and Operations/Overhaul), the last part being for products in operational use. Operations and service are really on the move right now due to the ability to receive valuable feedback from all products used in the field (commonly referred to as the Internet of Things), even for consumer products, but hold your horses on that last one for just a little while.
​
The different departments and process owners will typically have claimed ownership of their particular parts of the process, making it look something like this:
This would typically be a traditional linear product engineering, manufacturing and distribution process. Each department has also selected IT tools that suit their particular needs in the process.
This in turn leads to information handovers both between company departments and between IT tools, and due to the complexity of IT system integration, usually as little data as possible is handed from one system to the next.
​
So far it has been quite straightforward to answer “who owns what data”, especially for the data that is actually created in the department’s own IT system. The tricky part, however, is the when in “who owns what data when”, because the when implies that ownership of certain data is transferred from one department and/or IT system to the next one in the process. In a traditional linear process, such information would be “hurled over the wall” like this:
Now, since as little information as possible flowed from one department / IT system to the next, each department would make it work as best they could, and create or re-create information in their own system for everything that did not come directly through integration.
Only in cases where there were really big problems with missing or clearly faulty data would an initiative be launched to look at the process and any system integrations that would be affected.

The end result is that the accumulated information throughout the process that can be associated with the end product, that is to say the physical product sold to the consumer, is only a fraction of the actual sum of information generated in the different departments’ processes and systems.
​
Now what happens when operations & services get more and more detailed information from each individual product in the field, and start feeding that information back to the various departments and systems in the process?
The process will cease to be a linear one; it becomes circular, with constant feedback of analyzed information flowing back to the different departments and IT systems.

Well what’s the problem you might ask.

The first thing that becomes clear is that each department, with its systems, does not have enough information to make effective use of all the information coming from operations, because each has a quite limited set of data concerning mainly its own discipline.

Secondly, the feedback loop is potentially constant or near real-time, which opens up completely new service offerings. However, the current process and infrastructure going from design through engineering and manufacturing was never built to handle this kind of speed and agility.

Ironically, from a Product Lifecycle Management perspective, we’ve been talking about breaking down information and departmental silos in companies to utilize the L in PLM for as long as I can remember. The way it looks now, however, it is probably going to be operations and the enablement of the Internet of Things and Big Data analytics that will force companies to go from strictly linear to circular processes.

And when you ultimately do, please always ask yourself “who should own what data when”, because ownership of data is not synonymous with the creation of data. Ownership is transferred along the process and accumulates into a full data set for the physically manufactured product, until it is handed back again as a result of a fault in the product or of possible optimization opportunities for the product.

 – And it will happen faster and faster
​
Bjorn Fidjeland


The header image used in this post is by Bacho12345 and purchased at dreamstime.com

ERP integrations - valuable knowledge lost in translation?

9/8/2015

When working with engineering, whether it is product engineering, plant engineering or construction, sooner or later the topic of ERP (Enterprise Resource Planning) integration comes up. It is of course vital that the engineering knowledge (how the product or project is designed) is transferred to manufacturing and supply chain (how we will manufacture it).


Traditionally this transfer of knowledge has been an exercise in “hurling it over the wall”. The engineering information is submitted to another department, and another system. Manufacturing in turn scratches its head and wonders how on earth engineering intended the product or project to be manufactured. Manufacturing then “hurls a lot of information back over the wall” containing redlined drawings or models and a lot of requests for clarification.

Ironically, this process was smoother in the past, when engineering and manufacturing were often co-located. Manufacturing engineers could just bring the drawings to engineering and explain why it was impossible to manufacture the product the way it was designed. As a result, a collaboration process started at the human level between engineering and manufacturing, which resulted in either a revised product design or perhaps a new way of manufacturing the product.

Nowadays, in these global times, manufacturing is often far away from engineering, and in addition there might be huge cultural differences between the locations where engineering and manufacturing take place. This adds a whole new dimension of complexity.

The engineering tools of today focus a lot on a virtual model, often backed by object structures that facilitate multi-discipline collaboration within engineering, but what about collaboration between engineering and manufacturing?

A real-life example sheds some light on the topic:

During a large PLM (Product Lifecycle Management) implementation, we analyzed the current practice of transferring information from the PLM system to the ERP system. This was a global company with both engineering and manufacturing all over the world. The current system had a quite impressive multi-discipline Engineering Bill of Materials (EBOM, the design intent data structure) that was multiple levels deep. I asked how this was transferred to manufacturing and the ERP system, and the answer was: “As a flat list”.

I bit my lip and asked the next question: “Doesn’t that mean that a lot of information that would be valuable for manufacturing gets lost in translation between the two systems and departments?”

Answer: “Very much so, and especially now that we have become a truly global company.  Even worse, we struggle with cultural differences between the two departments which lead to very limited collaboration between the two”
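To make concrete what can get lost in such a translation, here is a minimal sketch (the part numbers and structure are illustrative assumptions): flattening a multi-level EBOM into the kind of flat list that was handed to ERP discards the assembly structure, i.e. which parts belong to which sub-assembly and in what quantity per parent.

```python
# Illustrative three-level EBOM. Part numbers and quantities are assumptions.
ebom = {
    "part": "PUMP-ASSY", "qty": 1, "children": [
        {"part": "MOTOR", "qty": 1, "children": [
            {"part": "STATOR", "qty": 1, "children": []},
            {"part": "BOLT-M8", "qty": 8, "children": []},
        ]},
        {"part": "IMPELLER", "qty": 1, "children": []},
        {"part": "BOLT-M8", "qty": 4, "children": []},
    ],
}


def flatten(node: dict, multiplier: int = 1, totals: dict | None = None) -> dict:
    """Collapse the BOM structure into total quantities per part number."""
    totals = {} if totals is None else totals
    qty = node["qty"] * multiplier
    totals[node["part"]] = totals.get(node["part"], 0) + qty
    for child in node["children"]:
        flatten(child, qty, totals)
    return totals


print(flatten(ebom))
# {'PUMP-ASSY': 1, 'MOTOR': 1, 'STATOR': 1, 'BOLT-M8': 12, 'IMPELLER': 1}
# The flat list no longer tells manufacturing that 8 of the bolts belong to the
# motor sub-assembly and 4 to the top-level pump assembly.
```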

There are, however, some examples of companies that have taken radical steps to mitigate these problems, but they are, I’m afraid, in the minority.



So what are your thoughts? Am I too negative and pessimistic, or is this a real business problem?

Some points to ponder
Bjorn Fidjeland


The image used in this post is by Adempercem and purchased at dreamstime.com 

Integration strategies

3/7/2015

One of the things that still strikes me, after having been part of big Product and Plant Lifecycle Management projects during the last decade, is how little focus there is on integration strategies. By integration strategy I mean decisions on how information should flow between authoring tools, the PLM platform, procurement and supply chain; in other words, between different departments within the company as well as between external companies in the value chain.

In my view you may have the perfect platform for managing engineering information across engineering disciplines, but it still isn’t worth much if the information flow to and from project execution, procurement and supply chain is severely hampered.

Essentially, there are three main strategies for integration:
  • Point-to-point integrations: Each system integrates through an adaptor to whatever system needs information from it. Traditionally this has led to so-called spaghetti issues: lots of integrations that are hard to change, since it is very difficult to foresee how a change in one system will affect the processes in other systems.




  • Data warehouse (Enterprise Service Bus): Solves the point-to-point mapping issues by converting all data flows to a common, neutral format and storing them in a data warehouse. When a system publishes information, it publishes it in its own structure to its own adaptor, and the adaptor converts it to the structure of the data warehouse. Each system acts as if it were the only one in the world.



  • Dictionary approach: If a common dictionary (or Rosetta stone, if you will) is built on an industry standard, or even as a proprietary company dictionary, then changes in one system only need to be mapped to the dictionary, not to attributes in other systems. Changes in one system will not affect any of the other systems in terms of their integration, since everyone maps to the dictionary. This is the approach promoted by standards like ISO 15926 to solve interoperability issues.

I’ve often heard the following: “Of course we’re not doing point-to-point anymore. We’ve got an Enterprise Service Bus that takes care of it.” But then, what goes on behind the scenes?
The Enterprise Service Bus has a nice graphical user interface for creating integrations, where you simply drag and drop attributes from one system’s adaptor and map them to attributes from another system’s adaptor.

Consequence: the point-to-point issues are re-created in the Enterprise Service Bus, even if the exchange format is completely neutral.

A clear integration strategy could also yield considerable business benefits beyond solving internal integration issues. What would happen if a dictionary approach was selected, and the dictionary was an industry standard?
Well, then information could be supplied to other companies, such as operators, customers or suppliers, in that industry-standard format, without having to develop special integrations for interoperability with other companies in the value chain.


Some points to ponder
Bjorn Fidjeland

