
Industrial Objectives and Industrial Performance

Concepts and Fuzzy Handling

Lamia Berrah

Vincent Clivillé

Laurent Foulloy


Foreword

The era of “make then sell”, harking back to a world where supply was lower than demand, is now long gone for most products, and therefore for most companies. Competition has made the permanent quest for improvement an absolute imperative for survival. With this in mind, performance measurement is at once the first step (diagnosis), the last step (results analysis) and the leitmotif of improvement projects.

It is therefore not surprising that the number of scientific publications about performance evaluation increased spectacularly at the end of the 1990s, with more than 3600 articles published between 1994 and 1996, and a book published every two weeks just in the USA1.

On the basis of consultants’ practice on the one hand, and inspiring references on the other, companies have set up scorecards often including dozens of KPIs (Key Performance Indicators), with the aim of comparing and measuring themselves against the competition. The affirmation of “Business Intelligence”, supported by ever more wide-reaching information systems, could therefore have sounded the death knell for the performance measurement adventure, turning it into a standard routine activity; in short, a done deal. However, the problem appears to be far from solved: in fact, more than 50,000 scientific articles are estimated to be published on the subject in 2017 alone...

Indeed, just by looking at the reality of industry, we can clearly see the problems that remain: in many companies, precise and up-to-date scorecards (a requirement of visual management…) continue to be based on badly formalized objectives, addressed by performance measurements that carry many undesired effects; misunderstandings build up between supply chain partners, brought on by different interpretations of common performance measurements, which often prove greater sources of dissent than building blocks for collaboration.

In this context, it can be tempting to have more confidence in external reference sources than in one’s own beliefs. If, in this book, you are looking for a list of performance measurements randomly grouped together à la Jacques Prévert that you can stick onto your production systems, it would be best to put the book back on the real or virtual shelf from which you picked it up. If, on the contrary, you are prepared to embark on a journey through the world of performance which will make you think first about the finality of performance measurement, then about its constituent elements, without denying its subjective nature, then this book is for you. If you are curious about the reasons why a concept exists, about looking beyond its label, then the authors will give you the keys to a process of deep thinking which will lead you through all the different definition steps of the performance measurements that you require.

In this book, there will be no insular vocabulary, which would erase any doubts you may have but would effectively cut you off from your partners... While a large number of authors observe that performance evaluation, an exercise which by nature ought to be multidisciplinary, has been monopolized by various schools of thought that communicate little with each other, the authors here seek instead to build bridges rather than barriers. Additional clarification of the concepts used will therefore be provided, using a “pointillist” approach which leads to an analysis of the interactions between the different aspects of production.

As you turn the pages, you will be taken back to user requirements, you will think about the finalities of the industrial system, about the links to be established between its performance measurement system and its control system, and also its improvement issues. You will clarify the links between objectives, goals and finalities, criteria, variables, values, etc. You will see that subjective performance evaluation is possible, and finally you will think about measurement of objective achievement...

Enjoy the journey!

Bernard GRABOT
Professor
Ecole Nationale d’Ingénieurs de Tarbes

1
The Industrial System

1.1. Introduction

Once upon a time, there was a system and an actor. The system functioned and evolved within its environment. The actor, responsible for this functioning and this evolution, spent their time observing the system, both as a whole and through its different parts. They attributed objectives to it, planned actions whose implementation they then managed, expressed the level of performance achieved, and started their observation cycle over again.

So, the tale of objectives begins with a relationship between an actor, who observes a system, and the system in question. The actor observes the system. From this observation, a representative model of the system’s structure and operation is born, built by the actor. Intentions then occur to the actor, for all or part of the system. We therefore have the system, the actor, the state of the system as observed by the actor, and then the actor’s intention, with the actor acting as the decision-maker for the system or part of it. In particular, the actor defines the goals and objectives to be achieved by the system (or part of it).

Thus, the notion of objective emerges from the relationship between the actor and the system. This relationship, both objective and subjective, real and tangible, is based on a large number of aspects that are probably interacting with each other. This is why we will borrow systems theory’s principles and language to comprehend this relationship. Flexible and all-encompassing, systems theory will then allow us to identify links between the various aspects of a system, in particular the entities, the finality, the structure and the behavior... and, consequently, the goals and objectives, all this in a given context and for a given observer.

So, let us begin by recalling some elementary principles of systems theory. Placing ourselves in an industrial context, we will then describe, using systemic language, what we intuitively call the “industrial system”. By industrial system, we mean all the operations and all the equipment used in industrial activities1. The last two parts of this description will be dedicated to objective-related information and then to objectives themselves. A representation of the process by which the objective emerges, as proposed by the systems theory model, will then round off this exploratory chapter.

But before we get to the heart of the matter, let us take ourselves back to January 2009 and pause to look at the story of Mr. C.C., executive of the RB company and newly appointed associate manager for the “Hydraulic Cylinder Production” line.

1.2. The RB company’s “Hydraulic Cylinder Production” line

1.2.1. The Overall Equipment Effectiveness – OEE

A classic productivity indicator, the Overall Equipment Effectiveness – OEE was defined in the 1980s in Japan as being associated, at an elementary level, with the productivity of a “piece of equipment” within the productive system (machine, production cell, line) [MUC 08]. The Overall Equipment Effectiveness – OEE is computed for predetermined amounts of time, generally a day, a week or a month, and applies either to a “Piece of Equipment” or to “All Equipment” in the system.

The Overall Equipment Effectiveness – OEE is computed as the ratio between the Useful time and the Planned production time associated with the “Piece of Equipment” or “All Equipment” under consideration. The Planned production time is obtained from the Open time of the productive system, from which all the planned stops within the observation period have been removed. The Useful time is computed from the Planned production time by removing, this time, all the unplanned losses (unplanned stops, performance losses and quality losses), as shown in Figure 1.1, extracted from the standard NF E60-182 [AFN 02]. Now standardized to a great extent, the computation of the Overall Equipment Effectiveness – OEE is therefore based on a generic model which identifies all the related types of planned stops and unplanned losses, for the part of the system under observation.
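To make this computation concrete, here is a minimal sketch in Python. The function, the variable names and the numerical values are our own, chosen purely for illustration; they are not taken from the NF E60-182 standard.

def oee(open_time, planned_stops, unplanned_stops,
        performance_losses, quality_losses):
    # Planned production time = Open time minus the planned stops
    planned_production_time = open_time - planned_stops
    # Useful time = Planned production time minus the unplanned losses
    useful_time = (planned_production_time - unplanned_stops
                   - performance_losses - quality_losses)
    # OEE = Useful time / Planned production time
    return useful_time / planned_production_time

# Hypothetical weekly figures, in hours: 80 h open, 8 h planned stops,
# 6 h unplanned stops, 4 h performance losses, 2 h quality losses
print(round(oee(80, 8, 6, 4, 2), 3))  # 60 / 72 = 0.833, i.e. an OEE of about 83%

With such figures, improving the OEE means acting on the unplanned losses, since the planned stops are, by construction, already excluded from the Planned production time.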


Figure 1.1. Details of the time periods used to compute the Overall Equipment Effectiveness – OEE (inspired by [AFN 02])

1.2.2. The Non-compliance rate

An intuitive indicator, the Non-compliance rate relates to the compliance of “Manufactured products”. This rate is computed overall, as the ratio between the Quantity of products affected by a compliance problem (i.e. some kind of non-compliance) and the Produced quantity [WEB 12].
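As a simple illustration of this ratio, the following Python sketch (with hypothetical names and figures of our own) computes the Non-compliance rate for a batch:

def non_compliance_rate(non_compliant_quantity, produced_quantity):
    # Ratio of the quantity of products affected by a compliance problem
    # to the produced quantity
    return non_compliant_quantity / produced_quantity

# Hypothetical figures: 12 non-compliant cylinders out of 600 produced
print(non_compliance_rate(12, 600))  # 0.02, i.e. a Non-compliance rate of 2%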

1.2.3. The Throughput time

The Throughput time can be defined as follows: “the amount of time required for a product to pass through a manufacturing process, thereby being converted from raw materials into finished goods” [BRA 14]. Computation of the Throughput time is based on observation of both the value-added time, corresponding to line activities, and the non-value-added time, encompassing waiting, transport and product storage. More specifically, in companies using discontinuous production processes, value-added operations on products generally represent a very low proportion of the time spent by the products on the production lines. Most of the time, the product waits: “in fact, for the whole batch to be finished, for transport to another machine, for a compliance control check... This ratio between value-added time and waiting time can be of the order of 1/10,000. In companies with ‘just-in-time production’ this ratio is of the order of 1/100 and, in the best case scenario, of the order of 1/10”* [MAR 13].
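The following Python sketch illustrates this decomposition; the names and figures are our own, chosen only to reproduce the orders of magnitude quoted above.

def throughput_time(value_added_time, non_value_added_time):
    # Total time spent by the product in the manufacturing process
    return value_added_time + non_value_added_time

value_added = 0.5   # hours of value-added operations on the product
waiting = 50.0      # hours of waiting, transport and storage

print(throughput_time(value_added, waiting))  # 50.5 hours spent in the process
print(value_added / waiting)                  # 0.01, a ratio of the order of 1/100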