
PLM - the KPI's, the ROI and the simple

8/20/2016

The ROI of PLM is often hard to measure, since PLM affects so many different and interacting aspects of an engineering and manufacturing company.

However, I would like to share a story about a multi-billion dollar company that was about to embark on a global PLM implementation journey.
They were required to come up with a series of KPIs and to report progress to top management. The PLM project spent considerable effort in doing so.
One of the 10 KPIs they identified was their engineering change process. They measured the time spent from when an engineering change was requested until it was released, and furthermore the impact it had on project execution schedules in their customer projects (the company is heavily engaged in project-intensive industries).

All measurements were done on a monthly basis before and after the PLM platform went live.
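Measuring such a KPI is straightforward once the change records carry request and release dates. A minimal sketch in Python of the monthly cycle-time measurement (the records and their layout are invented for illustration, not taken from the company in the story):

```python
from datetime import date
from collections import defaultdict

# Hypothetical engineering change records: (requested, released)
changes = [
    (date(2016, 1, 4),  date(2016, 1, 29)),
    (date(2016, 1, 11), date(2016, 2, 15)),
    (date(2016, 2, 2),  date(2016, 2, 22)),
]

def monthly_cycle_time(records):
    """Average days from change request to release, grouped by release month."""
    buckets = defaultdict(list)
    for requested, released in records:
        buckets[(released.year, released.month)].append((released - requested).days)
    return {month: sum(d) / len(d) for month, d in sorted(buckets.items())}

print(monthly_cycle_time(changes))
```

Tracked month over month before and after go-live, a falling average is exactly the trend the company in the story observed.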

Before the introduction of the platform it was largely a manual process where the project and product managers would gather and have so-called “sign fests”, where engineering changes were simply signed without much review, since the magnitude of changes was staggering, and so they had to rely on the judgement of their junior managers. This sounds like a rather risky approach, but the fact was that the cost of not approving was far greater than that of just approving. Needless to say, this manual process still took quite some time, since the junior managers spent a lot of time evaluating before submitting to the “sign fest”.

After the introduction of the PLM platform, they continued measuring. Top management was still eager to get reports on all the KPIs, but then they started to notice something. The engineering change process showed a relatively modest drop in time spent globally in the beginning, but the calculation of impact on time and money saved in the projects showed impressive results. As the organization became more and more familiar with the new engineering change process, the time spent from change request to release fell rapidly, and the effect in ongoing customer projects was even greater.

After less than a year they concluded that the improvements in throughput of engineering changes, and the time spent on them, alone paid for the entire project.

In my view, yes, there are a lot of important benefits of a PLM platform, like “increased collaboration”, that are hard to measure, but a lot of the time it makes even more sense to measure smaller but more identifiable benefits and then trace the impact they have further down the lifecycle of the product.

Bjorn Fidjeland

The header image used in this post is by Andrewgenn and purchased at dreamstime.com



PLM platforms, the difficult organizational rollout

3/6/2016

What is PLM really about? In my view it is about tying relevant information to business processes (you know, the stuff that makes your company truly unique), and then tying your employees to those very same processes throughout the life of a product.

So it’s about information, processes, people and an IT platform, in this case a PLM platform.

To be successful, ALL areas must intersect.



It does not matter if you have the perfect PLM system with perfectly defined processes if the information you need to manage is bad.

Nor will it help to have good quality data with perfectly defined processes and an organization ready to adopt them if the PLM platform is unable to scale to your needs.

And it will not help to have good quality data tied to perfectly defined processes in a state-of-the-art PLM system if nobody is using it…

So going back to the headline: PLM platforms, the difficult organizational rollout.
I’ve seen far too many PLM implementations underperform due to unsuccessful rollout in the organization.
I find it strange that, although the projects are often run iteratively, developing or customizing smaller chunks of functionality in each iteration to ensure success, one expects the end users to devour the full elephant of the project in more or less one big bite…

In my view the rollout of such a large and business-critical platform should also be iterative, with time for the end users to come to terms with what they have learned after each iteration before the next iteration starts.
I would compare it to building a house.
You would never start erecting the walls before the concrete slab is sufficiently cured.
The same is true for an organization. If more functionality and new processes are put on top before the previously learned functionality and processes have had time to settle, you get resistance, and the foundation becomes weak.

Another important factor: do not just train the end users in a classroom environment and then expect them to perform well in their new system… Because they won’t.
They’re still afraid of doing something wrong, and they will struggle to remember what they learned in the classroom.
Then they will try to find solutions in the manuals, growing more and more frustrated by the minute.

If this frustration is allowed to continue for too long, you can be sure that the end result is that they feel the system is too difficult to use and basically sucks. It might sound childish, but holding hands works! Have some super users or trainers available in the everyday work situation to help and guide the users for the first few weeks.
That will mitigate the fear factor of doing something wrong, and steadily build confidence and ability.

Bjorn Fidjeland

Challenges when going from entrepreneur to industrialized manufacturer

1/3/2016


In my neck of the woods there are a lot of very talented engineers, and a lot of entrepreneurial spirit. My region (the south-western part of Norway) is heavily exposed to the oil & gas industry and the delivery of products to plants, oil platforms etc. This means that it is very project-focused and ETO (Engineer To Order) intensive.

The entrepreneurial spirit I mentioned has led to a whole host of startups with good ideas of how to solve some problem with a new product in better and more cost effective ways.

One story I keep hearing from such product companies, not only in Norway but also in other countries and project-intensive industries, goes something like this:
“So we won our first contract, and the customer is really impressed with our product and our technology. It became a bit more expensive to deliver the project than we thought, but we managed, and we were sure we would have better returns on the next project.

As time progresses we expect our cost in the projects to drop significantly.”

But what happens in a lot of such product companies?
There is nothing strange in expecting such a development. One would instinctively think that the project execution time could be shortened as the organization gains experience and has successfully delivered such a product before. The organization knows which suppliers can deliver and which cannot. Engineers and employees in installation and commissioning are becoming more and more experienced, etc.
Yet it becomes very, very hard to drive down the cost of delivering projects, even if the product delivered from project to project is very similar. The transition from entrepreneur to industrialized manufacturer becomes hard for a lot of these companies.
Why is that?

Personally I think there are several factors:
  • It was too hard to say no to those small, insignificant changes the client required in the next project… which for engineering or manufacturing turned out to be not so insignificant.
  • Product development is constantly being performed in the projects. Engineers will always search for the perfect and most elegant solution. That does not mean it is the best or most cost-effective way to manufacture the product.
  • Clients’ or operators’ documentation requirements in terms of LCI (LifeCycle Information) deliveries. If the product company is unable to define a process to deal with shifting requirements from operator to operator, this becomes a manual nightmare that constantly diverts resources. Such a process should be an integrated part of the project execution process, not, as it mostly is today, a separate process.
  • It is in my view paramount that at least smaller parts of the product are standardized and modularized in such a way that the engineering information can be re-used from project to project (you can read more about my views here: “Engineering Master Data - Why is it different?” and “Can PLM help industrializing Oil & Gas projects?”).
  • Last but not least, there is a screaming need to manage project-specific engineering data (tag structures, P&ID’s, D&ID’s, electrical) together with, but NOT in a one-to-one relation with, more generic product development data.

I’ve seen the last three bullets addressed with PLM platforms at various companies; however, the technology itself is just one factor. The organizational processes, and how they are enforced in the platform, are of far bigger importance.

The first two bullets are a lot harder, as they require the organization to shift its mindset from entrepreneur to industrialized manufacturer. This includes going from quick and nimble to more standardized processes, and continuous process improvement. If you consider PLM a mindset rather than just a technology, you will harvest benefits here too, but it is hard work.

Bjorn Fidjeland





Customization – Upgradeability

10/24/2015

In one of my previous posts, “Customization – Do you fit in the box?”, I touched upon an important aspect of deciding to customize a software platform (the same principles apply whether it is a PLM platform or, for instance, a CRM platform).



How will my customization impact the upgradeability of the platform?

The reason this is so important becomes obvious when you want to upgrade the platform from an old release to a newer one. Most companies skip one or two releases in each release cycle from the vendor before upgrading, and that makes it all the more important to perform an upgrade analysis beforehand.

If no customization has been made, it becomes an analysis of:
  • What does our current data model look like, and what will it look like after the upgrade?
  • What does our data look like, and how will the modifications from the vendor impact our data set?


If customizations have been made, the analysis becomes a bit more complex:
  • What did the original data model of the software platform look like?
  • What does our current, customized data model look like?
  • What will the data model look like after the upgrade?
  • Are there any conflicts between our current customized data model and the data model after the upgrade?
  • Should some of our customizations be removed, since the new release covers some of them?
  • What does our data look like, and how will our customizations and the modifications from the vendor impact our new data set?
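The data-model part of this analysis can be sketched as simple set comparisons between three snapshots: the original OOTB model, your customized model, and the new OOTB model. A minimal illustration in Python (the object type and attribute names are invented, not from any real PLM platform):

```python
# Attribute sets per object type for three data-model snapshots.
ootb_old   = {"Part": {"name", "revision", "weight"}}
customized = {"Part": {"name", "revision", "weight", "export_code"}}
ootb_new   = {"Part": {"name", "revision", "weight", "export_code", "eco_class"}}

def upgrade_analysis(old, custom, new):
    """Classify each customized attribute against the new OOTB release."""
    report = {}
    for obj_type, attrs in custom.items():
        added_by_us  = attrs - old.get(obj_type, set())
        report[obj_type] = {
            # Customization the new release now covers -- candidate for removal
            "customization now covered OOTB": sorted(added_by_us & new.get(obj_type, set())),
            # Customization still unique to us -- must survive the upgrade
            "customization to keep": sorted(added_by_us - new.get(obj_type, set())),
            # Vendor additions we did not have at all -- check for data impact
            "new vendor attributes": sorted(new.get(obj_type, set()) - attrs),
        }
    return report

print(upgrade_analysis(ootb_old, customized, ootb_new))
```

A real analysis also covers business logic, GUI and the data itself, but even this schema-level diff answers the bullet questions above in a repeatable way.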

In my view there are a few rules that should be followed to avoid too many problems with upgrades.


  • Try as much as possible to avoid changing the OOTB (Out Of The Box) data model itself; it is usually a lot safer to add to the OOTB data model and GUI.
  • Avoid making changes in the OOTB business logic itself. If you have to make changes, override the OOTB logic and create your own separately, BUT make sure to document such overrides so that you can switch back to OOTB. Remember that if you change the business logic directly, chances are those modifications will be overwritten by an upgrade.
  • If you build your own completely customized data model with GUI and business logic, you are generally safe from an upgrade point of view, but you then run the risk of not being able to benefit from the software vendor’s new releases of the platform in the future, if they decide to incorporate the same kind of functionality in the OOTB platform. In such cases you will be faced with a migration project, not an upgrade…
  • One of the things that always causes hassle when upgrading is changes to the user interface (GUI). I would offer the same advice here: do not change the original user interface! Make a copy instead, implement the changes and override the original. If you change the original, your modifications will probably be overwritten during an upgrade. In addition, when performing an upgrade analysis you’ll have to perform a three-way comparison between the old OOTB, your customizations and the new OOTB to reveal the consequences of the changes.
  • If the software platform comes with a framework that rapidly enables you to build a user interface, data model and associated business logic, this is often preferable, BUT make sure to always analyze what new functionality is provided in the new release. Can you remove some of your older customizations? If you have overridden or hidden the old OOTB user interface, you will not be able to see the new juicy stuff that came with the upgrade.
  • Never upgrade the production environment without having tested extensively in a sandbox first (a copy of the production environment). Not even the best upgrade analysis in the world will find all issues or problems when performing an upgrade.

Conclusion:
When dealing with software platforms you should always perform an upgrade analysis to determine how the upgrade will impact your installation. In my view this should be done even if you have gone strictly OOTB. Such an analysis can help you weed out the worst of the problems, and should serve as a decision point for the upgrade project. Test your upgrade procedures extensively in a sandbox environment before upgrading the production environment.

Some points to ponder
Bjorn Fidjeland

The image used in this post is by Dirk Ercken and purchased at dreamstime.com


ERP integrations - valuable knowledge lost in translation?

9/8/2015

When working with engineering, whether it is product engineering, plant engineering or construction, sooner or later the topic of ERP (Enterprise Resource Planning) integration comes up. It is of course vital that the engineering knowledge (how the product or project is designed) is transferred to manufacturing and supply chain (how we will manufacture it).


Traditionally this transfer of knowledge has been an exercise in “hurling it over the wall”. The engineering information is submitted to another department, and another system. Manufacturing in turn scratches their heads and wonders how on earth engineering intended the product or project to be manufactured. Manufacturing then hurls a lot of information back over the wall: redlined drawings or models and a lot of requests for clarification.

Ironically, this process was smoother in the past, when engineering and manufacturing were often co-located. Manufacturing engineers could just bring the drawings to engineering and explain why it was impossible to manufacture the product the way it was designed. As a result, a collaboration process started at the human level between engineering and manufacturing, which led to either a revised product design or maybe a new way of manufacturing the product.

Nowadays, in these global times, manufacturing is often far away from engineering, and in addition there might be huge cultural differences between the locations where engineering and manufacturing take place. This adds a whole new dimension of complexity.

The engineering tools of today focus a lot on a virtual model, often backed by object structures that facilitate multi-discipline collaboration within engineering, but what about collaboration between engineering and manufacturing?

A real life example sheds some light on the topic:

During a large PLM implementation (Product Lifecycle Management) we analyzed the current practice of transferring information from the PLM system to the ERP system. This was a global company with both engineering and manufacturing all over the world. The current system had a quite impressive multi-discipline Engineering Bill of Material (EBOM, the design intent data structure) that was multiple levels deep. I asked how this was transferred to manufacturing and the ERP system, and the answer was: “As a flat list”.

I bit my lip and asked the next question: “Doesn’t that mean that a lot of information that would be valuable for manufacturing gets lost in translation between the two systems and departments?”

Answer: “Very much so, and especially now that we have become a truly global company.  Even worse, we struggle with cultural differences between the two departments which lead to very limited collaboration between the two”
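What gets lost is easy to demonstrate in code. A toy sketch of a multi-level EBOM being flattened the way the company described (the part numbers and structure are invented):

```python
# Hypothetical multi-level EBOM: (part number, qty, children)
ebom = ("PUMP-100", 1, [
    ("MOTOR-20", 1, [
        ("STATOR-5", 1, []),
        ("BOLT-M8", 8, []),
    ]),
    ("HOUSING-30", 1, [
        ("BOLT-M8", 4, []),
    ]),
])

def flatten(node, multiplier=1, totals=None):
    """Collapse the EBOM into a flat part/quantity list -- parent/child intent is lost."""
    if totals is None:
        totals = {}
    part, qty, children = node
    totals[part] = totals.get(part, 0) + qty * multiplier
    for child in children:
        flatten(child, qty * multiplier, totals)
    return totals

flat = flatten(ebom)
print(flat)  # BOLT-M8 collapses to a single total of 12 -- where each bolt goes is gone
```

The flat list still sums quantities correctly, but the design intent (which bolts belong to the motor and which to the housing) no longer exists on the receiving side, which is exactly what manufacturing then has to reconstruct by asking engineering.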

There are, however, some examples of companies that have taken radical steps to mitigate these problems, but they are, I’m afraid, in the minority.



So what are your thoughts? Am I too negative and pessimistic, or is this a very real business problem?

Some points to ponder
Bjorn Fidjeland


The image used in this post is by Adempercem and purchased at dreamstime.com 

PLM and disconnected corporate processes

7/19/2015

How many times have you seen flashy corporate “blue books” with impressive process maps of how an organization does its business and performs its work?

I’ve seen quite a few. It’s not that I have anything against them, it’s just… Is this really how the organization works? Are the corporate processes updated to actually reflect how work is performed? Are the processes really enforced in the organization? Are the processes adjusted based on feedback from the project organizations? And last but not least, are projects measured against the processes?
In my experience this very rarely happens, if at all.

I have, however, seen one company take drastic measures to bridge the gap between the corporate processes and how the organization actually worked. This company decided to visualize their corporate processes in a PLM platform for use across all the countries they were involved in. Due to regulatory differences between countries, they managed variants of the processes in each country as well.
So how is this different from any other process map, you might ask. Well, they not only visualized the processes, they also instantiated them, so when a particular project was to be executed, the project manager would select the appropriate process and get an instantiated project with a template WBS (Work Breakdown Structure), together with all the document deliverables expected for such a project, related to tasks and milestones.

This way they forced the organization to follow the defined processes… As you might expect there was an outrage, because the instantiated processes were not at all how the organization worked. The project organization had through experience figured out where the corporate process did not work in the real world, and had found ways to overcome the problems. This was however the intention, because now there was a very clear feedback loop so that the processes could be adjusted to reflect how the organization actually worked, and since the processes were instantiated in real projects, they could now also be analyzed, measured and improved across the entire organization through dashboards in the PLM platform. This practice became a competitive advantage, and it also allowed processes to be verified and tested in one country before being rolled out in others.
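Conceptually, what the company did is template instantiation: copying a corporate process definition into a live project where tasks and deliverables become concrete, trackable objects. A minimal sketch (the process and deliverable names are invented, not from the actual company):

```python
import copy

# Corporate process template: milestones with expected document deliverables
template = {
    "name": "Substation Delivery Process",
    "milestones": [
        {"task": "Detail Engineering", "deliverables": ["Single Line Diagram", "Cable List"]},
        {"task": "FAT",                "deliverables": ["Test Report"]},
    ],
}

def instantiate(process_template, project_name):
    """Create a live project WBS from the process template; status is now measurable."""
    project = copy.deepcopy(process_template)  # the template itself stays untouched
    project["project"] = project_name
    for milestone in project["milestones"]:
        milestone["status"] = "open"
        milestone["deliverables"] = [
            {"document": d, "delivered": False} for d in milestone["deliverables"]
        ]
    return project

project = instantiate(template, "Project Alpha")
```

Because every project is a copy of the template, dashboards can measure all projects against the same process definition, and feedback from projects flows back as changes to the single template.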

I’ve also seen examples of what typically happens when there is only a loose coupling between the corporate processes and how projects are executed and measured. In one company they had very impressive process maps with clearly defined input, output, description of responsibility and activities to be performed within each process step. However, the processes were not really instantiated and measured in the projects.

The obvious question then became: “How is it possible to measure and improve the processes if they are not instantiated and measured in actual projects?”

Well, it worked as long as the company was fairly small and key persons managed to keep an overview of most projects. Mostly the projects were executed based on experience and a functioning culture at the main site. The challenge became apparent when trying to replicate this at other sites, where the culture was different. Those other sites tried to execute their projects solely by the process maps…

And that led to some very real problems.

In such cases it is of paramount importance to harvest all experience from such projects, analyze, validate and update the process maps accordingly.

So where am I going with this?

In my view companies should work hard on closing the gap between their corporate processes and how the organization actually performs its work. Creating feedback loops from the project organization is one way of doing it. Another is to actually instantiate the processes for use in projects, thereby making sure that the processes are followed. If the latter approach is selected, it becomes very important to have an organization in place to collect feedback, analyze it and adjust the processes as the company evolves and develops over time.


Some points to ponder
Bjorn Fidjeland

From digital archive to intelligent data

6/7/2015

A lot of companies these days are working hard to turn their big digital archives into more intelligent data. These initiatives usually come from some kind of digitalization strategy formed to support a vision.

We see it every day: data is power. Data can be analyzed and used in different contexts to support end customers, to sell new services or to support internal processes in the company.

However, for this to happen it is not enough to simply store and manage data in digital format. The data must be “connected”: stored in object or information structures that represent the data used in different contexts. Coming from a PLM background, some of the aspects are quite easy to identify. From a product perspective you’ve got a requirements breakdown structure, maybe a model configuration or variant structure, an engineering bill of material that represents the design intent, and supporting CAD structures from various design tools. All of the structures mentioned are managed today as digital information, but very few companies have structured the information and put it all in the context of the other information structures to achieve full traceability and change consequence control.

Note that I’ve so far only touched the product design aspect. When considering the manufacturing intent (the manufacturing bill of material), the manufactured product, the sold product and the installed product, the complexity grows, but so do the benefits of managing it all as connected data structures stored in the context of each other. This data can be used to sell services to the end customers.

An example could be a pump manufacturer who has full traceability of all pumps sold to different facilities. The pump manufacturer could offer services for maintenance of the pumps, and if the pumps contain sensors, the manufacturer could also analyze operational data to schedule preventive maintenance. This data could then serve as valuable input to the design processes of new and even better pumps. As a consequence of all the data structures being connected, the pump manufacturer knows the location of every pump sold, and can offer the new and improved model not only to all customers, but to all locations for each customer.
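That kind of offer is only possible because the data structures are connected and can be traversed. A toy sketch of such an install-base query (all identifiers are invented):

```python
# Connected structures: installed items linked to product model, customer and location
install_base = [
    {"serial": "P-001", "model": "PX200", "customer": "ShipCo",  "location": "MV Aurora"},
    {"serial": "P-002", "model": "PX200", "customer": "ShipCo",  "location": "MV Borealis"},
    {"serial": "P-003", "model": "PX100", "customer": "PortOps", "location": "Quay 4"},
]

def installed_locations(base, model):
    """All (customer, location) pairs where a given pump model is installed."""
    return sorted({(i["customer"], i["location"]) for i in base if i["model"] == model})

print(installed_locations(install_base, "PX200"))
```

With connected data this is one traversal; with disconnected archives the same answer requires manual search and interpretation across systems.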

All of a sudden we are touching one of the biggest buzzwords of the day, “the Internet of Things”, because what would happen if a large portion of the pumps were installed on ships, and they contained sensors? The pump manufacturer could set up maintenance offices in large ports. Knowing exactly which pumps will arrive in which ports, at what times, and what maintenance needs they have would allow the manufacturer or service provider to order the right spare parts just in time and to reduce the maintenance time. This would minimize the risk of fines from the ship owners because the ship had to stay in port longer than scheduled, or, even worse, having the service personnel perform the service at sea and thereby leaving the service office in the port severely undermanned.

This is only one example of the power of “connected data”, or digitalization. Quite a few companies have business models similar to our pump manufacturer’s, but very few are in a position to offer such services enabled by connected data. Instead there is a lot of manual work, interpretation and searching for data in different digital archives. This in turn leads to errors, misunderstandings and lost business opportunities.

 Some points to ponder

Bjorn Fidjeland


All images used in this post are purchased at dreamstime.com


The journey of PLM vs the journey of PDM

3/28/2015

Inspired by the blog post “Is PLM a Journey? Follow (or Join!) the Blogfight!” in the PLM Dojo group on LinkedIn, I started thinking about the topic of “PLM as a journey”. In my previous company I wrote the post “PLM – Tool or Mindset”, and a PLM implementation is in my view a journey, pretty much as Jos Voskuil describes it. If, and there is an if, you think about the full scope of PLM (Product Lifecycle Management), then I think it is crucial to have a vision, a strategy and a clear commitment from business in order to be able to execute. This is because the initiative involves several different departments, and in bigger organizations also multiple sites on different continents. Such a project becomes a journey, because in order to eat that particular elephant it is very important to do it one bite at a time, and in between each bite, business and the organization need to digest and mature.
This is where organizational rollout and communication come in as very important factors. IT must in this case work very closely with business to deliver functionality after each bite has been swallowed… A lot of eating here, but I think the analogy is good.


I’ve been fortunate enough to be a part of a few such PLM projects, but in the beginning, I must say, it was the proverbial catfight between business and IT.
But as healthy group processes were promoted, and everybody tried to see it from the other party’s perspective, they became ONE team with ONE goal.

This could happen because external personnel were present to mediate and translate the language of business to IT, and vice versa.

After a few months it was impossible to tell who was business and who was IT!
I would describe this process as an important part of the PLM journey. Then, as business gets more and more of its processes implemented in the PLM system and the solution matures, more and more bites can be introduced and devoured.

So what about PDM (Product Data Management)?
Well, PDM is where PLM originally came from, and it primarily addresses the needs of product engineering and design. It is mostly a one-department effort, although such projects can also span multiple sites on different continents.

However, if we face the facts, most PLM projects today are STILL about implementing PDM functionality in a full-blown PLM platform.

Why is that? 
Well, in my view it is because:
a. It started as an engineering or IT effort without an appropriate business vision and strategy.
b. The PLM project, with business involved and a strategy developed, got constipated because it bit off more than it could chew.

If the PDM project started as an IT- or engineering-department-only effort, there seems to be a glass ceiling preventing the project from getting acceptance from business to grow the scope into a full PLM implementation.
I cannot really explain why, so feel free to comment!
My hunch, though, is that it has to do with a certain “not invented here” syndrome…
But I seriously doubt that business would say that out loud…

Some points to ponder
Bjorn Fidjeland
