The Productivity J-curve: the most predictable surprise in PLM


When an organization decides to implement a PLM practice, the common assumption is that everything will be as smooth as it sounded during the sales pitch. The consultant, the vendor, the presentations, everyone keeps talking about efficiency, about saving time, about better collaboration. However, nobody puts a disclaimer on the slides in the pitch deck:

“By the way, for the first few months, your team will be frustrated, your manager will ask you many questions, and you will wonder if you made a terrible mistake.”

This experience is very common. It even has a name: the Productivity J-curve. It was formally named and studied by economists Erik Brynjolfsson, Daniel Rock, and Chad Syverson in their 2021 paper published in the American Economic Journal.

First, what is this J-curve thing?

If we take a piece of paper and draw the letter J, this is what productivity looks like during a PLM implementation. At the beginning, things go down. Then, slowly, they come back to where they were. Then, if everything goes well, things go higher than before. The whole shape looks like the letter J.
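As a rough illustration, the shape can be sketched as a toy model: a one-time disruption that fades over the months after go-live, plus new benefits that ramp in slowly. Every number and parameter below is invented purely for illustration, not calibrated to any real project.

```python
import math

def productivity(month, baseline=100.0, dip=30.0, gain=25.0,
                 dip_decay=6.0, ramp_mid=12.0, ramp_steep=3.0):
    """Toy J-curve: go-live is month 0, so productivity starts
    below the old baseline, recovers as the disruption fades,
    then overshoots as new benefits ramp in (logistic curve).
    All parameters are hypothetical."""
    disruption = dip * math.exp(-month / dip_decay)
    benefit = gain / (1 + math.exp(-(month - ramp_mid) / ramp_steep))
    return baseline - disruption + benefit

# Sample the curve every four months for two years
curve = [round(productivity(m), 1) for m in range(0, 25, 4)]
print(curve)
```

With these made-up parameters, the sampled curve starts well below the baseline of 100 and ends well above it, which is exactly the J shape described above.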

This is not specific to PLM implementations. We observe a similar pattern in many big changes: when a country opens its economy, when a company changes its ERP, when an athlete changes their technique. There is always a period where things look worse before they look better. The question is: how deep is the dip, and how long does it last?

For PLM specifically, this varies case by case. In some organizations, the dip is very small, maybe three or four months of struggle, then a quick recovery. There are also companies where the dip lasted almost two years and nearly killed the project completely. The difference is not the software. The difference is how the transition is managed.

Why does productivity actually fall before it rises in PLM implementations?

When we hear the term “J-curve,” we think it is just theory. But there are real and very concrete reasons why things slow down.

The money goes out before the benefits come in

This is the most obvious part, but management always forgets to prepare for it psychologically. When a company signs the contract for PLM, the organization starts to pay. When the consultant comes to configure, the company pays. When the engineers spend time in training instead of doing their normal work, the company is also paying, just in a different way. All of this cost arrives in months one through six, or even later. The savings? They come much later still. So, on any financial report we look at during this period, the numbers are bad. This is not because PLM does not work. It is because the company is building something, and building costs money before the building is finished.
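The timing mismatch is easy to see with a back-of-the-envelope cash-flow sketch. The figures below are entirely hypothetical, chosen only to show how costs land early while savings arrive late.

```python
# All numbers are hypothetical, invented purely to illustrate the
# timing mismatch: costs land early, savings ramp up later.
license_cost = 120_000          # one-time license fee, month 0
consulting_per_month = 15_000   # configuration help, months 0-5
training_cost = 30_000          # engineer training, month 2
monthly_saving = 18_000         # realized savings, month 7 onward

cumulative = 0
breakeven = None
for month in range(37):         # simulate three years
    if month == 0:
        cumulative -= license_cost
    if month <= 5:
        cumulative -= consulting_per_month
    if month == 2:
        cumulative -= training_cost
    if month >= 7:
        cumulative += monthly_saving
    if breakeven is None and cumulative >= 0:
        breakeven = month

print("Cash-flow breakeven around month", breakeven)
```

With these invented figures, the organization spends 240,000 before any saving arrives, and the cumulative cash flow only turns positive around month 20, long after every monthly report has looked bad.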

Two systems running at the same time is a nightmare

Nobody talks about this enough. When an organization goes live with PLM, it cannot immediately abandon the old way of working. Some data is still in old Excel files. Some engineers are not trained yet. Some processes are still in transition. So for many months, the company ends up running two systems in parallel, the old one and the new one. This doubles the work. The users are doing everything twice. Of course, productivity goes down. The organization is asking its users to carry two bags instead of one, while promising that soon they can drop the old bag.

Data cleanup is the hidden monster

Let’s call it out directly: most companies have terrible product data. BOMs that nobody has updated for three years. Part numbers that exist in three different spreadsheets with three different names. Drawings that reference documents nobody can find anymore. When we put this data into PLM, the system shows us, very clearly and very honestly, how messy the information really is. And now someone must clean it. This is not exciting work. It is slow, careful, boring work. It takes months. And during those months, the engineers who should be designing products are instead fixing old data. This is a real cost that almost nobody includes in the original business case.
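As a small, hypothetical example of what this cleanup work looks like in practice, the sketch below flags part numbers that appear with conflicting descriptions across several spreadsheets. The file names and part data are invented; real migrations run exactly this kind of reconciliation, usually at much larger scale.

```python
from collections import defaultdict

# Hypothetical data: the same part recorded in three spreadsheets
# with inconsistent descriptions, a common find during PLM migration.
spreadsheets = {
    "engineering.xlsx": {"PN-1001": "M6 hex bolt, steel", "PN-1002": "Bracket, left"},
    "purchasing.xlsx":  {"PN-1001": "Bolt M6",            "PN-1003": "Gasket"},
    "service.xlsx":     {"PN-1001": "Hex bolt M6 zinc",   "PN-1002": "Bracket, left"},
}

# Collect every distinct description seen for each part number
names = defaultdict(set)
for source, parts in spreadsheets.items():
    for pn, desc in parts.items():
        names[pn].add(desc)

# A part with more than one description needs a human decision
conflicts = {pn: sorted(d) for pn, d in names.items() if len(d) > 1}
for pn, descriptions in sorted(conflicts.items()):
    print(pn, "->", descriptions)
```

The detection is the easy, automatable part; deciding which description is correct, and fixing the drawings and orders that reference the wrong one, is the slow human work described above.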

People resist, even when they know it is better

This is the most surprising part. The common assumption is that once people see the new system is better, they will adopt it quickly. Human beings do not work this way. Even when we know something is better, we resist it, because the old way is familiar, and familiar feels safe. The senior engineer who has been doing his job for fifteen years knows where everything is in his own filing system; he knows his shortcuts; he knows who to call. Now, suddenly, all this knowledge is worth less, because the new system works differently. This is threatening for people, even if they do not say it openly. And this resistance slows everything down.

How deep can the dip go?

The severity of the J-curve dip depends on a few specific things. Let’s be practical here, not just general.

The first factor is how bad the organization's data was before it started. If the product information was clean, structured, and consistent, the PLM implementation goes much more smoothly. If the data was stored in twenty different places, in formats that nobody agreed on, the cleanup phase alone can add six to nine months of pain. This destroys project timelines completely.

The second factor is how much executive support the project has. Not support on paper, but real, active support. When a senior manager stands up in a meeting and says, “We are doing this and I am behind it,” people behave differently than when PLM is seen as “the IT project.” The dip is much shallower when leaders are visibly committed.

The third factor is whether the company did proper change management or just a proper technical implementation. Many companies spend ninety percent of their energy on configuring the software correctly and ten percent on helping the humans change. It should be closer to fifty-fifty. The software is not the hard part. The humans are the hard part.

What to do in each phase: practical advice

Before the dip: set the expectations honestly

The biggest gift any organization can give its team and its management is honesty before the project starts. Show them the J-curve. Tell them directly: for the first six to twelve months, things will feel harder than today. This is normal. This is expected. This does not mean the project is failing. When people know the dip is coming, they do not panic when it arrives. When it comes as a surprise, the immediate reaction is to question everything.

A data audit before implementation is also not optional. Finding problems early is far less painful than finding them in month ten when the entire migration depends on that data being correct. Most organizations resist this step because it feels like extra work before the real work. It is actually the real work.

During the dip: protect the team and measure the right things

When the organization is at the bottom of the J, and it will be obvious when that moment arrives, because everyone is tired and the questions from management increase, the most important response is to not panic publicly. Leaders who show panic cause panic. Leaders who stay calm and acknowledge that this period was expected give the team permission to keep going.

Traditional productivity metrics will look terrible during the transition. This does not mean the project is failing. It means the organization is running more than one system in parallel, cleaning up years of accumulated data problems, and retraining how people think about their work. Measuring those same old metrics and presenting them to leadership without context creates unnecessary alarm.

Better metrics for this phase are adoption-based: how many users are actively logging in, how many processes have been moved to the new system, how many data records have been cleaned and validated. These numbers show real progress even when the output metrics look flat or negative.
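As a sketch, these adoption metrics can be computed from a simple monthly snapshot. The field names and numbers below are invented for illustration; any real PLM system will expose this data in its own way.

```python
# Hypothetical monthly snapshot; field names and counts are invented.
snapshot = {
    "licensed_users": 200,
    "active_users": 130,        # logged in during the last 30 days
    "processes_total": 40,
    "processes_migrated": 22,   # processes now run in the new system
    "records_total": 50_000,
    "records_validated": 31_500,  # cleaned and confirmed correct
}

def pct(part, whole):
    """Percentage, rounded to one decimal place."""
    return round(100 * part / whole, 1)

adoption = {
    "active_user_rate": pct(snapshot["active_users"], snapshot["licensed_users"]),
    "process_migration": pct(snapshot["processes_migrated"], snapshot["processes_total"]),
    "data_validated": pct(snapshot["records_validated"], snapshot["records_total"]),
}
print(adoption)
```

Tracked month over month, these three percentages rise steadily through the dip even while output metrics stay flat, which is exactly the progress story leadership needs to see.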

At the turn: consolidate and do not allow drift back

One thing that surprises many organizations is that the rise in the J-curve is not automatic. The turn can happen and then slowly reverse if the gains are not actively consolidated. This happens when the old systems are not properly decommissioned, when teams continue to use workarounds “just for this one case,” or when new employees join and nobody trains them properly in the new way of working.

Every workaround that survives past the transition is a small crack. Left unaddressed, these cracks eventually become the reason why the full benefits of PLM never fully arrive: not because the system failed, but because the old habits were never fully retired.

After the rise: document what changed and show it

When the benefits start to arrive (faster engineering changes, less rework, better visibility across the product lifecycle), they need to be documented and shared. This is not just about justifying the original investment. It is about building the organizational memory that this was worth it, so that when the next big change comes, the team has evidence that difficult transitions can lead somewhere better.

Teams that see the numbers improve and understand why they improved become the strongest advocates for the system. They are also the people who will help the next wave of users get through their own version of the dip faster.

The point most people miss

Superficial changes do not produce J-curves, they produce flat lines. The dip exists because the organization is dismantling how things used to work and building something genuinely different in its place. That process is uncomfortable in the middle.

The J-curve is not a sign that something went wrong. It is a sign that a real change is happening.
