plmPartner

Linking vision with strategy and implementation, but prepare for disruption

6/24/2016

Over the years I've seen the importance of defining a vision for where one would like to end up.



Such a vision should serve as a guiding star. This is true for full corporate visions as well as for smaller areas of the enterprise.

It could be a vision for our product lifecycle management and the internet of things, for how we see ourselves benefiting from big data, or for harnessing virtual design and construction.
The strategy is our plan for how we intend to reach the promised land of our vision. It should be detailed enough to make clear what steps we need to take, but beware:

Don't make it too detailed; the first casualty of any battle is the battle plan.
Prepare, but allow for adjustments as maturity and experience grow.

An important part of the strategy work is to acid-test the vision itself. Does it make sense? Will it benefit us as a company, and if so, how?

During the strategy work, all stakeholders should be consulted. And yes, both internal and external stakeholders. The greatest vision in the world, with a good strategy and top-notch internal implementation, won't help much if you have external dependencies that cannot deliver what you need under your new way of working…
Business processes will simply come grinding to a halt.
The implementation rarely takes the path the strategy intended, and there are usually many reasons for this.
My best advice regarding implementation is to take it one step at a time and evaluate each step. Try to deliver some real business value in each step, and align the result with vision and strategy. Such an approach gives you the possibility to respond to changing business needs as well as to external threats.

But as we come closer to the promised land of our vision, we can sense it, almost touch it… Then disruption happens.

Such a disruption might present itself in the form of a merger or acquisition, where vision, strategy and implementation must all be re-evaluated.
The “new” company might have entered another industry, where strategy and processes need to be different and old technical solutions won't fit.

Today everybody seems to be talking about disruptive technology and/or disruptive companies. Most companies acknowledge that there is a real risk it will happen to them, but since most disruptive companies haven't even been formed yet, it is difficult to identify where the attack will come from and in what form.

This leads back to vision, strategy and implementation. Vision and strategy can be changed quickly, but unless the implementation is flexible enough to respond to changing needs, the company will be unable to react quickly enough to meet the disruption.

Bjorn Fidjeland

The header image used in this post is by Skypixel and purchased at dreamstime.com

Customization – Upgradeability

10/24/2015

In one of my previous posts, “Customization – Do you fit in the box?”, I touched upon an important aspect of the decision to customize a software platform (the same principles apply whether it is a PLM platform or, for instance, a CRM platform).



How will my customization impact the upgradeability of the platform?

Why this is important becomes obvious when you want to upgrade the platform from an old release to a newer one. Most companies skip one or two releases in each release cycle from the vendor before upgrading, and that makes it all the more important to perform an upgrade analysis beforehand.

If no customization is made, it becomes an analysis of:
  • What does our current data model look like, and what will it look like after the upgrade?
  • What does our data look like, and how will the modifications from the vendor impact our data set?


If customizations have been made, the analysis becomes a bit more complex:
  • What did the original data model of the software platform look like?
  • What does our current, customized data model look like?
  • What will the data model look like after the upgrade?
  • Are there any conflicts between our current customized data model and the data model after the upgrade?
  • Should some of our customizations be removed because the new release covers them?
  • What does our data look like, and how will our customizations and the modifications from the vendor impact our new data set?
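The model-comparison part of that analysis can be sketched in code. A minimal, hypothetical sketch in Python, assuming each data model is exported as a simple mapping of type names to attribute sets (real platform schemas are, of course, far richer):

```python
# Hypothetical sketch: compare three data-model snapshots to flag upgrade conflicts.
# The dict-of-sets representation and all names are illustrative assumptions.

def upgrade_conflicts(ootb_old, customized, ootb_new):
    """Return customized attributes that overlap with the new OOTB release
    (candidates for removal, per the analysis questions above)."""
    conflicts = {}
    for type_name, custom_attrs in customized.items():
        # Attributes we added on top of the old OOTB model
        added_by_us = custom_attrs - ootb_old.get(type_name, set())
        # Attributes the vendor ships in the new release
        new_attrs = ootb_new.get(type_name, set())
        superseded = added_by_us & new_attrs
        if superseded:
            conflicts[type_name] = superseded
    return conflicts

ootb_old   = {"Part": {"name", "revision"}}
customized = {"Part": {"name", "revision", "weight", "supplier_code"}}
ootb_new   = {"Part": {"name", "revision", "weight"}}  # vendor now ships "weight"

print(upgrade_conflicts(ootb_old, customized, ootb_new))
# {'Part': {'weight'}} -> our "weight" customization overlaps the new release
```

A real analysis would also diff relationship types, business logic and GUI definitions, but the set arithmetic is the same idea.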

In my view there are a few rules that should be followed to avoid too many problems with upgrades.


  • Try as much as possible to avoid changing the OOTB (Out Of The Box) data model itself; it is usually a lot safer to add to the OOTB data model and GUI.
  • Avoid making changes in the OOTB business logic itself. If you have to make changes, override the OOTB logic and create your own separately, BUT make sure to document such overrides so that you can switch back to OOTB. Remember that if you change the business logic directly, chances are those modifications will be overwritten by an upgrade.
  • If you build your own, completely customized data model with GUI and business logic, you are generally safe from an upgrade point of view, but you then run the risk of not being able to benefit from the software vendor's new releases if they decide to incorporate the same kind of functionality in the OOTB platform. In such cases you will be faced with a migration project, not an upgrade.
  • One of the things that always causes hassle when upgrading is changes to the user interface (GUI). I would offer the same advice here: do not change the original user interface! Make a copy instead, implement the changes and override the original. If you change the original, your modifications will probably be overwritten during an upgrade. In addition, when performing an upgrade analysis you'll have to perform a three-way comparison between the old OOTB, your customizations and the new OOTB to reveal the consequences of the changes.
  • If the software platform comes with a framework that lets you rapidly build a user interface, data model and associated business logic, this is often preferable, BUT make sure to always analyze what new functionality is provided in the new release. Can you remove some of your older customizations? If you have overridden or hidden the old OOTB user interface, you will not see the new functionality that came with the upgrade.
  • Never upgrade the production environment without having tested extensively in a sandbox first (a copy of the production environment). Not even the best upgrade analysis in the world will find all issues or problems when performing an upgrade.
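The second rule, overriding OOTB business logic rather than editing it, can be illustrated with a small sketch. The class names and the release-check rule are purely illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical sketch: override OOTB logic in your own module instead of editing it.

class OotbReleaseCheck:
    """Vendor-supplied logic -- left untouched so an upgrade can safely replace it."""
    def can_release(self, part):
        return part.get("state") == "Approved"

class CompanyReleaseCheck(OotbReleaseCheck):
    """Our override, kept separately and registered in place of the OOTB class.
    Documenting this override lets us switch back to pure OOTB at any time."""
    def can_release(self, part):
        # Extend, rather than replace, the vendor rule
        return super().can_release(part) and part.get("weight_verified", False)

checker = CompanyReleaseCheck()
print(checker.can_release({"state": "Approved", "weight_verified": True}))   # True
print(checker.can_release({"state": "Approved"}))                            # False
```

Because the vendor's class is untouched, an upgrade overwrites only the OOTB file; the documented override either keeps working or shows up clearly in the upgrade analysis.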

Conclusion:
When dealing with software platforms you should always perform an upgrade analysis to determine how the upgrade will impact your installation. In my view this should be done even if you have gone strictly OOTB. Such an analysis can help you weed out the worst of the problems, and should serve as a decision point for the upgrade project. Test your upgrade and procedures extensively in a sandbox environment before upgrading the production environment.

Some points to ponder
Bjorn Fidjeland

The image used in this post is by Dirk Ercken and purchased at dreamstime.com


Integration strategies

3/7/2015

One of the things that still strikes me, after having been part of big Product and Plant Lifecycle Management projects over the last decade, is how little focus there is on integration strategies. By integration strategy I mean decisions on how information should flow between authoring tools, the PLM platform, procurement and the supply chain; in other words, between different departments within the company as well as between external companies in the value chain.

In my view you may have the perfect platform for managing engineering information across engineering disciplines, but it still isn’t worth much if the information flow to and from project execution, procurement and supply chain is severely hampered.

Essentially, there are three main strategies for integration:
  • Point-to-point integrations: Each system integrates through an adaptor to whatever system needs information from it. Traditionally this has led to so-called spaghetti issues: lots of integrations that are hard to change, since it is very difficult to foresee how a change in one system will affect the processes in other systems.



  • Data warehouse (Enterprise Service Bus): Solves the point-to-point mapping issues by converting all data flows to a common, neutral format and storing them in a data warehouse. When a system publishes information, it publishes it in its own structure to its own adaptor, and the adaptor converts it to the structure of the data warehouse. Each system acts as if it is the only one in the world.



  • Dictionary approach: If a common dictionary (or Rosetta stone, if you will) is built on an industry standard, or even a proprietary company dictionary, then changes in one system only need to be mapped to the dictionary, not to attributes in other systems. Changes in one system will not affect any of the other systems' integrations, since everyone maps to the dictionary. This is the approach promoted by standards like ISO 15926 to solve interoperability issues.
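A minimal sketch of the dictionary approach, with invented system names and attribute mappings: each system translates only to and from the common dictionary, and never learns another system's attribute names.

```python
# Hypothetical sketch of the dictionary approach. System names ("CAD", "ERP")
# and attribute mappings are illustrative assumptions, not a real standard.

# Per-system mapping: local attribute name -> common dictionary term
MAPPINGS = {
    "CAD": {"part_no": "PartNumber", "mass_kg": "DryMass"},
    "ERP": {"item_id": "PartNumber", "weight": "DryMass"},
}

def to_dictionary(system, record):
    """Translate a system-local record into common dictionary terms."""
    mapping = MAPPINGS[system]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def from_dictionary(system, record):
    """Translate common dictionary terms into a system's local attributes."""
    inverse = {v: k for k, v in MAPPINGS[system].items()}
    return {inverse[k]: v for k, v in record.items() if k in inverse}

# CAD publishes; ERP consumes -- neither knows the other's attribute names.
cad_record = {"part_no": "P-1001", "mass_kg": 4.2}
neutral = to_dictionary("CAD", cad_record)
print(from_dictionary("ERP", neutral))   # {'item_id': 'P-1001', 'weight': 4.2}
```

If the CAD system renames an attribute, only its own entry in the dictionary mapping changes; the ERP side is untouched.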

I've often heard the following: “Of course we're not doing point to point anymore. We've got an Enterprise Service Bus that takes care of it…” But then, what goes on behind the scenes?
The Enterprise Service Bus has a nice graphical user interface for creating integrations, where you simply drag and drop attributes from one system's adaptor and map them to attributes from another system's adaptor…

Consequence: the point-to-point issues are re-created inside the Enterprise Service Bus, even if the exchange format is completely neutral.
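Back-of-the-envelope arithmetic shows why this matters: with point-to-point mappings (whether hand-coded or drag-and-dropped in an ESB), the number of directional mappings grows quadratically with the number of systems, while the dictionary approach grows linearly.

```python
# Illustrative mapping counts for n integrated systems (rough arithmetic only).

def point_to_point(n):
    # one directional mapping for every ordered pair of systems
    return n * (n - 1)

def dictionary_based(n):
    # each system maps once into the common dictionary and once out of it
    return 2 * n

for n in (3, 5, 10):
    print(f"{n} systems: point-to-point={point_to_point(n)}, dictionary={dictionary_based(n)}")
```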

A clear integration strategy can also yield considerable business benefits beyond solving internal integration issues. What would happen if a dictionary approach were selected, and the dictionary was an industry standard?
Well, then information could be supplied to other companies, such as operators, customers or suppliers, in that industry-standard format, without having to develop special integrations for interoperability with other companies in the value chain.


Some points to ponder
Bjorn Fidjeland
