PROCESS CONTROL A PRACTICAL APPROACH PDF


Fortunately he quickly discovered that the practical application of process control bore little resemblance to the theory he had covered at university. Process Control: A Practical Approach, Second Edition addresses the needs of the process industry and closes the gap between theory and practice. Myke King is a consultant operating independently of any supplier.




We observed that the steady-state change in temperature was different from that of the valve. Numerically Kp may be positive or negative. In our example the temperature rises as the valve is opened. If we were to increase the heater feed rate and keep the fuel rate constant then the temperature would fall; Kp, with respect to changes in feed rate, would therefore be negative. Nor is there any constraint on the absolute value of Kp.

Very large and very small values are commonplace. In unusual circumstances Kp may be zero; there will be a transient disturbance to the PV but it will return to its starting point. The other differences can be seen in Figure 2. The temperature begins moving some time after the valve is opened. This delay is known as the process deadtime; until we develop a better definition, it is the time difference between the change in MV and the first perceptible change in PV.

It is usually given the symbol θ. Deadtime is caused by transport delays.

In this case the prime cause of the delay is the time it takes for the heated fluid to move from the firebox to the temperature instrument. The DCS will generate a small delay, on average equal to half the controller scan interval ts. While this is usually insignificant compared to any delay in the process it is a factor in the design of controllers operating on processes with very fast dynamics — such as compressors.

The field instrumentation can also add to the deadtime; for example on-stream analysers may have sample delays or may be discontinuous.

Clearly the value of θ must be positive but otherwise there is no constraint on its value. Many processes will exhibit virtually no delay; there are some where the delay can be measured in hours or even days. Finally, the shape of the temperature trend is very different from that of the valve position.

The heater coil will comprise a large mass of steel. Burning more fuel will cause the temperature in the firebox to rise quickly and hence raise the temperature of the external surface of the steel. But it will take longer for this to have an impact on the internal surface of the steel in contact with the fluid. Similarly the coil will contain a large quantity of fluid and it will take time for the bulk temperature to increase. The field instrumentation can add to the lag. For example the temperature measurement is likely to come from a thermocouple located in a steel thermowell.

The thermowell may have thick walls which cause a lag in the detection of an increase in temperature. Lag is quite different from deadtime. Lag does not delay the start of the change in PV.

Without deadtime the PV will begin changing immediately but, because of lag, takes time to reach a new steady state. We normally use the symbol τ to represent lag. To help distinguish between deadtime and lag, consider liquid flowing at a constant rate F into a vessel of volume V.

The process is at steady state. By mass balance, the change in the quantity of any component in the vessel is the difference between what has entered and what has left. Assuming the liquid is perfectly mixed, a change in inlet composition appears at the outlet as a first order response with lag V/F. However, if absolutely no mixing took place in the vessel, the change in composition would pass through as a step change, delayed by the residence time of the vessel, i.e. V/F. In practice neither perfect mixing nor no mixing is likely and the process will exhibit a combination of deadtime and lag.
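
In the limiting cases this can be stated compactly (C_in and C denote the inlet and vessel compositions; the symbols are introduced here only for illustration):

    \text{Perfect mixing:}\quad V\frac{dC}{dt} = F\,(C_{in} - C) \;\Rightarrow\; \text{first order lag with } \tau = \frac{V}{F}

    \text{No mixing (plug flow):}\quad C_{out}(t) = C_{in}\!\left(t - \frac{V}{F}\right) \;\Rightarrow\; \text{pure deadtime with } \theta = \frac{V}{F}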

When trying to characterise the shape of the PV trend we also have to consider the order n of the process.

While processes in theory can have very high orders, in practice we can usually assume that they are first order. However there are occasions where this assumption can cause problems, so it is important to understand how to recognise this situation. Conceptually order can be thought of as the number of sources of lag. In our example the overall lag will be dictated by the lag of the valve positioner, the mass of combustion products in the firebox, the mass of the heater casing and its coil, the mass of the fluid in the coil and the steel in the thermowell.

As an example of a higher order process, consider a system comprising two identical vessels, both open to the atmosphere and both draining through identical valves, with the upper vessel draining into the lower. Both valves are simultaneously opened fully. The flow through each valve is determined by the head of liquid in the vessel so, as this falls, the flow through the valve reduces and the level falls more slowly. Trend A in Figure 2. is the level in the upper vessel.

It shows the characteristic of a first order response in that the rate of change of PV is greatest at the start of the change. Trend B shows the level in the lower vessel — a second order process. Since this vessel is receiving liquid from the first then, immediately after the valves are opened, the inlet and outlet flows are equal.

The level therefore does not change immediately. This apparent deadtime is a characteristic of higher order systems and is additive to any real deadtime caused by transport delays. Thus, by introducing additional deadtime, we can approximate a high order process as first order. This approximation is shown as the dashed line close to trend B. The accuracy of the approximation is dependent on the combination of process lags. While trend B was drawn with both vessels identical, trend C arises if we increase the lag for the top vessel, for example by making it larger.
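
A short simulation sketch, not taken from the book, of two equal first order lags in series illustrates this apparent deadtime. The lag values, the unit step and the first-order-plus-deadtime approximation are illustrative assumptions only.

    import numpy as np

    # Two equal first order lags in series, integrated by Euler's method.
    # A unit step is applied to the input at t = 0; pv2 shows the slow
    # start (apparent deadtime) characteristic of a second order response.
    tau1, tau2 = 5.0, 5.0            # assumed lags, in minutes
    ts = 0.1                         # integration step, in minutes
    t = np.arange(0.0, 60.0, ts)

    mv = np.ones_like(t)             # unit step in the input
    pv1 = np.zeros_like(t)           # output of the first lag (first order)
    pv2 = np.zeros_like(t)           # output of the second lag (second order)

    for k in range(1, t.size):
        pv1[k] = pv1[k-1] + ts * (mv[k-1] - pv1[k-1]) / tau1
        pv2[k] = pv2[k-1] + ts * (pv1[k-1] - pv2[k-1]) / tau2

    # Illustrative first-order-plus-deadtime approximation of pv2
    theta, tau = 2.5, 7.5
    approx = np.where(t < theta, 0.0, 1.0 - np.exp(-(t - theta) / tau))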

We know that the system is still second order but visually the trend could be first order. Our approximation will therefore be very accurate. However, if we reduce the lag of the top vessel below that of the bottom one then we obtain trend D. This is inverse response; the PV initially moves in a direction opposite to the steady-state change. Fitting a first order model to this response would be extremely inaccurate.

Examples of processes prone to this type of response include steam drum levels, described in Chapter 4, and some schemes for controlling pressure and level in distillation columns, described in a later chapter. Figures 2. show the effect of varying each of the dynamic constants in turn; each response is to the same change in MV. Changing Kp has no effect on the behaviour of the process over time: the time taken to reach steady state is unaffected; only the actual steady state changes.

Changing θ, τ or n has no effect on the actual steady state; only the time taken to reach it is affected. An understanding of the relative dynamics of different parts of the process is important when deciding where certain control techniques can be applied. One such situation is the use of cascade control, where one controller (the primary or master) adjusts the SP of another (the secondary or slave). The technique is applied where the process dynamics are such that the secondary controller can detect and compensate for a disturbance much faster than the primary.

Consider the two schemes shown in Figure 2. If there is a disturbance to the pressure of the fuel header, for example because of an increase in consumption on another process, the flow controller will respond quickly and maintain the flow close to SP. As a result the disturbance to the temperature will be negligible. Without the flow controller, correction will be left to the temperature controller.

But, because of the process dynamics, the temperature will not change as quickly as the flow, nor can the controller correct the disturbance as quickly once it is detected. As a result the temperature will deviate from SP for some significant time.

Cascade control also removes any control valve issues from the primary controller. If the valve characteristic is nonlinear, the positioner poorly calibrated or subject to minor mechanical problems, all will be dealt with by the secondary controller. This helps considerably when tuning the primary controller. Cascade control should not normally be employed if the secondary cannot act more quickly than the primary. Imagine there is a problem with the flow meter in that it does not detect the change in flow for some time.

If, during this period, the temperature controller has dealt with the upset then the flow controller will make an unnecessary correction when its measurement does change. This can make the scheme unstable.


Tuning controllers in cascade should always be completed from the bottom up, for two reasons. Firstly, the secondary controller will on occasions be in use without the primary.

There may, for example, be a problem with the primary or its measurement may be out of range during start-up or shutdown of the process.

We want the secondary to perform as effectively as possible and so it should be optimally tuned as a standalone controller. The second reason is that the MV of the primary controller is the SP of the secondary. When performing step tests to tune the primary we will make changes to this SP.

The secondary controller is now effectively part of the process and its tuning will affect the dynamic relationship between the primary PV and MV. If, after tuning the primary, we were to change the tuning in the secondary then the tuning in the primary would no longer be optimum. Cascade control, however, is not the only case where the sequence of controller tuning is important. In general, before performing a plant test, the engineer should identify any controllers that will take corrective action during the test itself.

Any such controller should be tuned first. In the case of cascade control, clearly the secondary controller takes corrective action when its SP is changed. But consider the example shown in Figure 2. The heater has a simple flue gas oxygen control which adjusts a damper to maintain the required excess air. When the downward step is made to the fuel flow SP the oxygen controller, if in automatic mode, will take corrective action to reduce the air rate and return the oxygen content to SP.

However, if this controller is in manual mode then no corrective action is taken, the oxygen level will rise and the heater efficiency will fall.

As a result the heater outlet temperature will fall by more than it did in the first test. Imagine now that the oxygen control is retuned to act more slowly. The dynamic behaviour of the temperature with respect to fuel changes will be quite different. So we have the situation where an apparently unrelated controller takes corrective action during the step test. It is important therefore that this controller is properly tuned before conducting the test.

In the case of testing to support the design of a multivariable controller (MVC), the MVs are likely to be mainly basic controllers and it is clear that these controllers should be well-tuned before starting the step tests. However, imagine that one of the MVs is the feed flow controller. When its SP is stepped, a large number of regulatory controllers are likely to take corrective action during the test.

Many of these will not be MVs but nevertheless need to be tuned well before testing begins. The techniques available fall into one of two approaches — open loop and closed loop testing. Open loop tests are performed with either no controller in place or, if existing, with the controller in manual mode.

A disturbance is injected into the process by directly changing the MV. Closed loop tests may be used if a controller exists and already provides some level of stable control.

Under these circumstances the MV is changed indirectly by making a change to the SP of the controller. Such plant testing should be well organised. While it is clear that the process operator must agree to the test there needs to be discussion about the size and duration of the steps. The operator of course would prefer that no disturbance be made!

The operator also needs to appreciate that other changes to the process should not be made during the test. While it is possible to determine the dynamics of simultaneous changes to several variables, the analysis is complex and more prone to error.

It seems too obvious to state that the process instrumentation should be fully operational. Many data historians include a compression algorithm to reduce the storage requirement. When the stored data are later used to recover the original signal, some distortion will occur.

While this is not noticeable in most applications, such as process performance monitoring and accounting, it can affect the apparent process dynamics. Any compression should therefore be disabled prior to the plant tests. It is advisable to collect more than just the PV and MV.

If the testing is to be done closed loop then the SP should also be recorded. Any other process parameter which can cause changes in the PV should also be collected. This is primarily to ensure that they have not changed during the testing, or to help diagnose a poor model fit. While such disturbances usually invalidate the test, it may be possible to account for them and so still identify an accurate model.

Ideally, testing should be planned for when there are no other scheduled disturbances. It can be a good idea to avoid shift changeovers — partly to avoid having to persuade another crew to accept the process disturbances but also to avoid the changes to process conditions that operators often make when returning from lengthy absences.

If ambient conditions can affect the process then it is helpful to avoid testing when these are changing rapidly, for example at dawn or dusk and during rainstorms. Testing should also be scheduled to avoid any foreseen changes in feed composition or operating mode.

Laboratory samples are often collected during plant tests. These are usually to support the development of inferential properties as described in Chapter 9. Occasionally a series of samples are collected to obtain dynamic behaviour, for example if an onstream analyser is temporarily out of service or its installation delayed. The additional laboratory testing generated may be substantial compared to the normal workload. If the laboratory is not expecting this, then analysis may be delayed for several days with the risk that the samples may degrade.

The most accurate way of determining the dynamic constants is by a computer-based curve fitting technique which uses the values of the MV and PV collected frequently throughout the test. If we assume that the process can be modelled as first order plus deadtime, this involves fitting a discrete form of that model, relating each value of the PV to its previous value and to the value of the MV one deadtime earlier, to the collected data. When θ is not an exact multiple of the data collection interval ts, the MV is interpolated between the two values either side of the required time.

An iterative approach is then followed to find the best value for θ. The same technique extends to higher order models: for the second order model of Equation 2., Kp, θ, τ1, τ2, τ3 and the bias can be fitted directly, or its discrete coefficients a1, a2, b1 and b2 can be identified by linear regression for each trial value of θ.

The fitted coefficients must correspond to realisable time constants; if not, then the process cannot be described by the second order model. The value for τ3 is obtained by substituting the results for Kp, τ1 and τ2 back into the model equations. The data collection interval should suit the process dynamics: assuming we need around 30 points to achieve a reasonably accurate fit, and that we make both an increase and a decrease in the MV, collecting data at a one-minute interval would be adequate for a process which has time constants of around two or three minutes.
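
A minimal sketch of such a fit for the first order plus deadtime case, assuming SciPy is available; the data, noise level and starting guesses are purely illustrative and the variable names are not the book's.

    import numpy as np
    from scipy.optimize import curve_fit

    def fopdt_step(t, Kp, theta, tau, bias, dMV=1.0):
        # Response of a first-order-plus-deadtime process, initially at
        # steady state, to a step of size dMV in the MV made at t = 0.
        rise = 1.0 - np.exp(-(t - theta) / max(tau, 1e-6))
        return bias + np.where(t < theta, 0.0, Kp * dMV * rise)

    # In practice t and pv would be the values collected during the test;
    # here they are simulated so that the sketch is self-contained.
    t = np.arange(0.0, 60.0, 1.0)                   # one-minute interval
    pv = fopdt_step(t, 2.0, 5.0, 8.0, 50.0)
    pv = pv + np.random.normal(0.0, 0.05, t.size)   # measurement noise

    p0 = [1.0, 1.0, 5.0, pv[0]]                     # initial guesses
    (Kp, theta, tau, bias), _ = curve_fit(fopdt_step, t, pv, p0=p0)
    print(f"Kp={Kp:.2f}  deadtime={theta:.1f} min  lag={tau:.1f} min")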

This model identification technique can be applied to both open and closed loop tests. Multiple disturbances can be made in order to check the repeatability of the results and to check linearity.

However it is important to avoid correlated steps. A series of steps of uniform size and duration, such as that shown in Figure 2., contains little independent information and can give misleading results. Performing a series of steps of varying size and duration, as in Figure 2., avoids this. While not necessary for every step made, model identification will be more reliable if the test is started with the process as steady as possible and allowed to reach steady state after at least some of the steps. Model identification software packages will generally report some measure of confidence in the model identified.

A low value may have several causes. Noise in either the MV or PV, if of a similar order of magnitude to the changes made, can disguise the model. Problems with the control valve are another cause; some of these are shown in Figure 2. Stiction, caused by excessive friction, requires that the signal change needed to start the valve moving is greater than that needed to keep it moving. Thus a small change in the signal may have no effect on the PV, whereas a subsequent change will affect it as expected.

If suspected, these faults can usually be diagnosed by making a series of steps in one direction followed by a series in the opposite direction. If the change in PV at each step is not in constant proportion to the change in MV, the valve should be overhauled.

The relationship between PV and MV may be inherently nonlinear. Some model identification packages can analyse this. If not, then plotting the steady-state values of PV against MV will permit linearity to be checked and possibly a linearising function developed.

While computer-based packages are readily available, there may be circumstances where they cannot be applied, for example if no facility exists to collect process data in numerical form. The manual methods described below can then be used, but they can only identify first order plus deadtime models and the MV must be changed as a single step, starting and ending at steady state. This is not always possible.

Any existing controller will need to be switched to manual mode. This may be undesirable on an inherently unstable process. There are many processes which rarely reach true steady state and so it would be optimistic to start and finish the test under these conditions.

The size of the step must be large enough to have a noticeable effect on the process. If the PV is subject to noise, small disturbances will be difficult to analyse accurately. The change in PV needs to be at least five times larger than the noise amplitude.

This may cause an unacceptable process disturbance. Dynamics, as we shall see later in Chapter 6, are not only required for changes in the MV but also for disturbance variables (DVs). It may be that these cannot be changed as steps. If a single step is practical it will still be necessary to conduct multiple tests, analysing each separately, to confirm repeatability and to check for linearity.

The most widely published method is based on the principle that, for a first order process with zero deadtime, the PV completes 63.2% of its steady-state change within one process lag. While, in theory, the process will never truly reach steady state, within five time constants it will be very close — having completed 99.3% of the change. In general, however, we have to accommodate deadtime in our calculation of dynamics.

Ziegler and Nichols (Reference 1) proposed the method using the tangent of steepest slope, shown in Figure 2. A tangent is drawn through the steepest part of the PV trend; where it crosses the value of the PV at the start of the test gives the process deadtime θ. There are two methods for determining the process lag τ. The first, while not mentioned by Ziegler and Nichols, is to take the time from the end of the deadtime to the point at which 63.2% of the steady-state change has been completed. Ziegler and Nichols, as we shall see later when looking at their controller tuning method, instead characterised the process by determining the slope of the tangent, R.

This is equivalent to defining τ as the distance labelled τ in Figure 2. For a truly first order process with deadtime this will give the same result; for higher order systems this approach is inaccurate. Kp is determined from Equation 2. The resulting first order approximation is included in Figure 2. The method forces it to pass through three points on the response, including the intersection of the tangent with the starting PV. The method is practical but may be prone to error.

Correctly placing the line of steepest slope may be difficult — particularly if there is measurement noise. Drawing it too steeply will result in an overestimate of θ and an underestimate of τ. An alternative approach is to identify two points on the response curve. A first order response is then forced through these two points and the steady-state values of the PV. Another approach is to use more points from the curve and apply a least squares technique to the estimation of θ and τ.
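
A sketch of the two-point approach, assuming the commonly published choice of the times at which 28.3% and 63.2% of the steady-state change have been completed (other pairs of points can be used); the function and its arguments are illustrative only.

    import numpy as np

    def two_point_fopdt(t, pv, pv_start, pv_end):
        # Estimate deadtime and lag from a (monotonic) step response using
        # the 28.3% and 63.2% completion times.
        frac = (pv - pv_start) / (pv_end - pv_start)
        t283 = np.interp(0.283, frac, t)
        t632 = np.interp(0.632, frac, t)
        tau = 1.5 * (t632 - t283)
        theta = t632 - tau
        return theta, tau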

Rearranging Equation 2. puts it into a form suitable for such a regression. With any model identification technique care should be taken with units. As described earlier in this chapter, Kp should be dimensionless if the value is to be used in tuning a DCS-based controller; for a computer-based MVC, Kp would usually be required in engineering units. It is common for the integral time Ti and the derivative time Td to be in minutes, in which case the process dynamics should be in minutes; but this is not universally the case.

Figure 2. shows the response of a process comprising a series of n equal lags. It shows that, for large values of n, the response becomes closer to a step change. This confirms that a series of lags can be approximated by deadtime. But it also means that deadtime can be approximated by a large number of small lags. We will cover, in Chapters 6, 7 and 8, control schemes that require a deadtime algorithm; if this is not available in the DCS then this approximation would be useful. Returning to the fired heater example: following the disturbance to the fuel valve, the temperature will reach a new steady state without any manual intervention.

Not all processes behave this way. For example, if we were trying to obtain the dynamics for a future level controller we would make a step change to the manipulated flow. The level would not reach a new steady state unless some intervention were made. This non-self-regulating process can also be described as an integrating process. While level is the most common example there are many others.

For example, many pressure controllers show a similar behaviour. Pressure is a measure of the inventory of gas in a system, much like a level is a measure of liquid inventory. An imbalance between the gas flow into and out of the system will cause the pressure to ramp without reaching a new steady state. However, not all pressures show pure integrating behaviour.

For example, if the flow into or out of the system is manipulated purely by valve position (i.e. with no flow controller), the flow through the valve will itself change as the pressure changes and the process becomes self-regulating. Even with flow controllers in place, if flow is measured by an uncompensated orifice-type meter, the error created in the flow measurement by the change in pressure will also cause the process to be self-regulating.

Some temperatures can show integrating behaviour. If increasing heater outlet temperature also causes heater inlet temperature to rise, through some recycle or heat integration, then the increase in energy input will cause the outlet temperature to ramp up.

The response of a typical integrating process is shown as Figure 2. Since it does not reach steady state we cannot immediately apply the same method of determining the process gain from the steady-state change in PV. Nor can we use any technique which relies on a percentage approach to steady state. By including a bias (because it is not true that the PV is zero when the MV is zero) we can modify Equation 2. to describe the ramp.

For an integrating process the gain relates the rate of change of PV to the change in MV, so it carries units of inverse time; any consistent choice of units may be used. We can omit the lag term when characterising the process dynamics of an integrating process.
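
Consistent with this description, the integrating response can be written, in generic notation, as

    \frac{d(PV)}{dt} = K_p \left( MV(t - \theta) - \text{bias} \right)

so that a step change ΔMV made at time zero, starting from a balanced condition, gives PV(t) = PV_0 + K_p \, \Delta MV \, (t - \theta) for t ≥ θ; Kp is then obtained from the slope of the ramp divided by the size of the step.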


Although the process is just as likely to include a lag, this manifests itself as deadtime. In this case a lag of 3 minutes has caused the apparent deadtime to increase by about the same amount. After the initial response the PV trend is still a linear ramp. We can thus characterise the response using only Kp and θ. There are processes which show a combination of these two types of behaviour.

For example steam header pressure generally shows integrating behaviour if boiler firing is changed. If there is a flow imbalance between steam production and steam demand the header pressure will not reach a new steady state without intervention. However, as header pressure rises, more energy is required to generate a given mass of steam and the imbalance reduces.

While the effect is not enough for the process to be self-regulating, the response will include some self-regulating behaviour. Another example arises when, instead of the planned temperature controller being mounted on a tray in the distillation column, it has been installed on the reboiler outlet.

As the reboiler duty is increased, by increasing the flow of the heating fluid, the outlet temperature will increase. This will in turn cause the reboiler inlet temperature to increase — further increasing the outlet temperature which will then show integrating behaviour.

However the higher outlet temperature will result in increased vaporisation in the base of the column, removing some of the sensible heat as heat of vaporisation and so providing a degree of self-regulation. The term open-loop unstable is also used to describe process behaviour.

Some would apply it to any integrating process. But others would reserve it to describe inherently unstable processes such as exothermic reactors. The additional conversion caused by the temperature increase generates additional heat which increases conversion further.

It differs from most non-self-regulating processes in that the rate of change of PV increases over time. It is often described as a runaway response. Of course, the outlet temperature will eventually reach a new steady state when all the reactants are consumed; however this may be well above the maximum permitted. The term open-loop unstable can also be applied to controllers that have saturated. This means that the controller output has reached either its minimum or maximum but has not eliminated the deviation between PV and SP.

It can also be applied to a controller using a discontinuous on-stream analyser that fails. Such analysers continue to transmit the last measurement until a new one is obtained. If, as a result of analyser failure, no new measurement is transmitted then the controller no longer has feedback. Dynamics are rarely constant and it is important to assess how much they might vary before finalising controller design.

Dynamics vary due to a number of reasons. The process may be inherently nonlinear so that, as process conditions vary, a controller tuned for one set of conditions may not work well under others.

This is illustrated by Figure 2. A step test performed over one part of the operating range (between points A and B) would give one value of process gain; a test over another part of the range would give quite a different value, and a single average gain would describe neither well. This would require a modified approach to controller design, such as the inclusion of some linearising function, so it is important that we conduct plant tests over the whole range of conditions under which the controller will be expected to operate. A common oversight is not taking account of the fact that process dynamics vary with feed rate.

Consider our example of a fired heater. If it is in a nonvaporising service we can write the heat balance Ffeed × Cp × (Tout − Tin) = F × NHV × Z, where Ffeed is the feed flow, Cp its specific heat and Tin and Tout the inlet and outlet temperatures; on the fuel side F is the flow of fuel, NHV the net heating value (calorific value) and Z the heater efficiency. Rearranging for Tout shows that the process gain with respect to fuel is not a constant.

In fact it is inversely proportional to feed rate. A little thought would have predicted this. Making the same increase in fuel at a higher feed rate would result in a smaller temperature increase because there is more feed to heat. So, for example, doubling the feed rate halves the gradient of the line.
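
Rearranging the heat balance makes this explicit (a sketch using the symbols defined above):

    T_{out} = T_{in} + \frac{NHV \cdot Z}{F_{feed} \cdot C_p}\,F
    \qquad\Rightarrow\qquad
    K_p = \frac{\partial T_{out}}{\partial F} = \frac{NHV \cdot Z}{F_{feed} \cdot C_p}

so the gain between fuel and outlet temperature is inversely proportional to the feed rate.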

Some might describe the behaviour as nonlinear, using the term for any process in which the process gain is variable. Strictly this is a linear process; changing feed rate clearly affects the process gain but behaviour remains linear at a given feed rate.

This effect is not unique to fired heaters; almost all process gains on a plant will vary with feed rate. A controller tuned at some reference feed rate will therefore work reasonably well only over a limited range of feed rates either side of that value. The turndown ratio of a process is defined as the maximum feed rate divided by the minimum. If this ratio is large, a controller tuned at one feed rate cannot be expected to perform well across the whole range; fortunately most processes have a turndown ratio small enough for this not to be a problem.

The technique used, if this is not the case, is covered in Chapter 6. Feed flow rate may also affect the process deadtime. If the prime cause of deadtime is transport delay then an increase in feed will cause the residence time to fall, and with it the deadtime.

At worst, deadtime may be inversely proportional to feed rate, in which case the acceptable turndown is more restrictive still. In fact controllers are more sensitive to increases in deadtime than to decreases. Techniques for accommodating excessive variation in deadtime are covered in Chapter 7. Feed rate generally has little effect on process lag, although Equation 2. suggests that the lag of a mixed vessel, V/F, is inversely proportional to flow.

However, this only applies when there is perfect mixing. In general, only in relatively small sections of most processes does this occur. For example, the lag caused by a vessel will change depending on the level of liquid in the vessel — as shown by Equation 2. Changes in vessel inlet temperature or composition will be more slowly detected at the vessel outlet if the level is high.

Whether this is significant will depend on a number of factors. There are likely to be other sources of lag which, when added to that caused by the vessel, reduce the impact of inventory changes. Similarly although the indicated level in the vessel may appear to change a great deal, it is unlikely that the level gauge operates over the full height of the vessel.

However a check should be made if averaging level control, as described in Chapter 4, is used — since this can permit large sustained changes in inventory. The addition of filtering, to deal with measurement noise, can also affect the process dynamics.

Suppose a PV subject to measurement noise has the noise removed by the addition of a filter, as described in Chapter 5. The filter adds lag and, because it increases the order of the system, also increases the apparent deadtime. Adding a filter after a controller has been tuned is therefore inadvisable. Either the plant test should be repeated to identify the new dynamics or, if the model identification package permits it, the original test data may be used with the filter simulated in the package.

It is very common for filters to be implemented unnecessarily; they are often added simply to smooth the appearance of the trended measurement. But the main concern should be the impact the noise has on the final control element, for example the control valve.

This is a function not only of the amplitude of measurement noise but also the gains through which it passes. These may be less than one and so attenuate the noise. Not all filtering is implemented in the DCS. Most transmitters include filters. Provided the filter constant is not changed then model identification will include the effect of the transmitter filter in the overall dynamics. However, if the filter in the transmitter is changed by a well-intentioned instrument technician unaware of its implications, this can cause degradation in controller performance.

If the dynamics can vary from those obtained by plant testing, it is better that the controller becomes more sluggish rather than more oscillatory. It is therefore safer to base controller tuning on higher values of Kp and θ and on a lower value of τ. So that they can be recognised, the transforms for the common types of process are summarised below. As shown in the example in this chapter, inverse response in a self-regulating second order process is caused by two competing effects — the faster of which takes the process first in a direction opposite to the steady-state change.
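
The commonly published Laplace-domain forms for these process types, written with the symbols used in this chapter, are:

    \text{self-regulating, first order:}\quad \frac{PV(s)}{MV(s)} = \frac{K_p\,e^{-\theta s}}{1 + \tau s}

    \text{self-regulating, second order:}\quad \frac{K_p\,e^{-\theta s}}{(1 + \tau_1 s)(1 + \tau_2 s)}

    \text{second order with lead (inverse response if } \tau_3 < 0\text{):}\quad \frac{K_p\,(1 + \tau_3 s)\,e^{-\theta s}}{(1 + \tau_1 s)(1 + \tau_2 s)}

    \text{integrating:}\quad \frac{K_p\,e^{-\theta s}}{s}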

If τ3 is greater than 0 then the process will show PV overshoot and τ3 is said to add lead to the process.

References
1. Ziegler, J.G. and Nichols, N.B. (1942) Optimum settings for automatic controllers. Transactions of the ASME, 64, 759-768.

The PID algorithm remains the foundation of almost all basic control applications, even though many DCS vendors have attempted to introduce other, more effective algorithms. The basic form of the algorithm is generally well covered by academic institutions. Its introduction here follows a similar approach but extends it to draw attention to some of the more practical issues.

Importantly it also addresses the many modifications on offer in most DCS, many of which are undervalued by an industry unaware of their advantages. This chapter also covers controller tuning in detail. Several commonly known published methods are included, but mainly to draw attention to their limitations.

An alternative, well-proven technique is offered for the engineer to use. In Chapter 2 we defined the PV (the process variable that we wish to control) and the MV (the manipulated variable). We will also use M to represent the controller output, which will normally be the same as the MV. To these definitions we have added the SP, i.e. the setpoint or target for the PV, and the error E, the difference between the two. Misinterpreting these definitions will result in the controller taking corrective action in the direction opposite to that intended, worsening the error and driving the control valve fully closed or fully open.

The term C is necessary since it is unlikely that zero error coincides with zero controller output. In some control systems the value of C may be adjusted by the process operator, in which case it is known as manual reset. Its purpose will be explained later in this section. We have seen that the process gain Kp may be positive or negative, but the controller gain Kc is always entered as an absolute value.
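
Written out, the proportional-only position algorithm being described here (integral and derivative action are added later in the chapter) is

    M = K_c\,E + C

where E is the error and C the manual reset.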

The control algorithm includes therefore an additional engineer-defined parameter known as action. If set to direct, the controller output will increase as the PV increases; if set to reverse, output decreases as PV increases. If we consider our fired heater example we would want the controller to reduce the fuel rate if the temperature increases and so we would need to set the action to reverse. In other words if the process gain is positive then the controller should be reverse acting; if the process gain is negative then it should be direct acting.

This definition is consistent with that adopted by the ISA (Reference 1) but is not used by all DCS vendors and is not standardised in textbooks. Some base the action on increasing E, rather than PV. If they also define the error as SP − PV, then our heater temperature controller would need to be configured as direct acting. Confusion can arise if the controller is manipulating a control valve. Valves are chosen to either fail open or fail closed on loss of signal — depending on which is less hazardous.

Some texts take this into account when specifying the action of the controller. However most DCS differentiate between the output from the controller, which is displayed to the operator, and what is sent to the valve. To the operator and the controller all outputs represent the fraction or percentage that the valve is open. Any reversal required is performed after this. Under these circumstances, valve action need not be taken into account when specifying controller action.

The controller as specified in Equation 3. is known as the full position form; it generates the absolute value of the controller output. A more useful form is the incremental or velocity form, which generates the change in controller output ΔM. We can convert the controller to this form by considering two consecutive scans. One advantage is that the controller will have bumpless initialisation: when any controller is switched from manual to automatic mode it should cause no disturbance to the process.

With the full position version it would be necessary first to calculate C to ensure that M is equal to the current value of the MV. Since the velocity form generates increments it will always start from the current MV and therefore requires no special logic. Some systems require the proportional band (PB) rather than gain; conversion between the two is straightforward, since the PB (in percent) is simply 100 divided by Kc. If we assume the PV is constant then, when the SP is stepped, the proportional part of the velocity algorithm changes the output by Kc times the change in SP.
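
In symbols, restating the relationships just described,

    PB = \frac{100}{K_c} \qquad\text{and}\qquad \Delta M_n = K_c\,(E_n - E_{n-1})

for the proportional part of the velocity algorithm, so a step of ΔSP in the setpoint, with the PV momentarily unchanged, produces a one-off change of K_c ΔSP in the output.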

This is a one-off change because ΔSP will be zero for future scans until another change is made to the SP. The response is shown in Figure 3. In this case Kc has been set to 2. Of course, increasing the fuel will cause the temperature to rise and reduce the error — so the controller output will only remain at this new value until the process deadtime has expired. The full trend is shown in Figure 3. The PV will never reach SP except at initial conditions; this sustained deviation is known as offset. Increasing Kc reduces the offset but makes the response increasingly oscillatory, as Figure 3. shows. We will show that these oscillations, on any process, become unstable before offset can be reduced to zero.

A familiar example is the float valve found in lavatory cisterns and header tanks, which provides basic control of level. It operates by a float, in effect measuring the deviation from the target level, and a valve which is opened in proportion to the position of the float.

It is a proportional-only controller. So why does it not exhibit offset? This is because it is not a continuous process: the inlet flow can only be nonzero if the error is nonzero. We can represent this mathematically. Suppose a leak develops through which water is lost at a flow f. Before the leak develops the error is zero. When the process again reaches steady state the controller will have changed the inlet flow by f and the error will be E.

Putting these values into the proportional-only algorithm gives E equal to f divided by Kc, which is nonzero. This confirms what we know, i.e. that a proportional-only controller leaves an offset on a continuous process. One might ask why not move the SP to account for the offset? One could, of course, and indeed we have the facility to do the equivalent by adjusting the manual reset term C, as described in Equation 3.
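
In terms of magnitudes, the offset follows directly from the proportional algorithm:

    \Delta M = K_c\,\Delta E \quad\Rightarrow\quad f = K_c\,(E - 0) \quad\Rightarrow\quad E = \frac{f}{K_c}

so the error at the new steady state is nonzero whenever f is nonzero, and can only be reduced by increasing Kc.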

However, the value of C required depends on the size of the disturbance. Thus we would have to make this correction for virtually every disturbance and automation will have achieved little. This is not to say, however, that proportional-only control should never be used. There are situations where offset is acceptable, such as in some level controllers, as described in Chapter 4. However, in most situations we need the PV always to reach the SP. This is the purpose of integral action. Sometimes called reset action, it continues to change the controller output for as long as an error exists.

It does this by making the rate of change of output proportional to the error, with the integral time Ti setting how quickly. In the response shown in Figure 3., Kc has been reduced to a value of 1. The response shows that, for a constant error, the rate of change of output is constant.

The change made by integral action will eventually match that of the initial proportional action; the time this takes is the integral time Ti. In this example Ti is about 5 minutes.
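
For a constant error E this behaviour can be written, in the notation used so far, as

    \frac{dM}{dt} = \frac{K_c}{T_i}\,E

so after a time Ti the integral action has changed the output by K_c E, exactly matching the initial proportional kick.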


In many DCS Ti will have the units of minutes, but some systems use hours or seconds. Others define the tuning constant in repeats per minute, i.e. the reciprocal of the integral time. The advantage of this is that, should more integral action be required, the engineer would increase the tuning constant. In the form of the algorithm we are using, higher values of Ti give less integral action.

We therefore have to be careful in the use of zero as a tuning constant. Fortunately most systems recognise this as a special case and disable integral action, rather than attempt to make an infinite change.

Even a very small amount of integral action will eliminate offset. Attempting to remove it too quickly will, as with any control action, cause oscillatory behaviour. However this can be compensated for by reducing Kc. Optimum controller performance is a trade-off between proportional and integral action. For most situations a PI controller is adequate. Indeed many engineers will elect not to include derivative action to simplify tuning the controller by trial-and-error. A two-dimensional search for optimum parameters is considerably easier than a three-dimensional one.

A twodimensional search for optimum parameters is considerably easier than a three-dimensional one. However in most situations the performance of even an optimally tuned PI controller can be substantially improved. It anticipates by taking action if it detects a rapid change in error.

The error may be very small, even zero, but, if changing quickly, will surely be large in the future. Derivative action attempts to prevent this by changing the output in proportion to the rate of change of error. To illustrate its behaviour the next example uses a proportional plus derivative controller; this probably has no practical application, but including integral action would make the trends very difficult to interpret.
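
Combining the three actions gives the familiar noninteractive (ideal) position form, consistent with the way the algorithm is described in this chapter:

    M = K_c \left( E + \frac{1}{T_i} \int E \, dt + T_d \frac{dE}{dt} \right) + C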

This time we have not made a step change to the SP, instead it has been ramped. The initial step change in the output is not then the result of proportional action but the derivative action responding to the change, from zero, in the rate of change of error. The subsequent ramping of the output is due to the proportional action responding to the ramping error.

The proportional action will eventually change the output by the same amount as the initial derivative action. The time taken for this is Td which, like Ti, can be expressed in units such as minutes or repeats per minute, depending on the DCS. As Figure 3. also shows, derivative action immediately takes the action that proportional action alone would take Td minutes to complete.

In effect it has anticipated the need for corrective action, even though the error was zero at the time. The anticipatory nature of derivative action is beneficial if the process deadtime is large; it compensates for the delay between the change in PV and the cause of the disturbance. Thus most controllers, when responding to a change in SP, do not obviously benefit from the addition of derivative action. It is often said that derivative action should only be used in temperature controllers.

It is true that temperatures, such as those on the outlet of fired heater and on distillation column trays, will often exhibit significantly more deadtime than measurements such as flow, level and pressure. However this is not universally the case, as illustrated in Figure 3.

Manipulating the bypass of the stream on which we wish to install a temperature controller, in this case around the tube side of the exchanger, will provide an almost immediate response. Indeed, if accurate control of temperature is a priority, this would be preferred to the alternative configuration of bypassing the shell side.

While there are temperatures with very short deadtimes there will be other measurements that, under certain circumstances, show long deadtimes. In Chapter 4 we include a level control configuration that is likely to benefit from derivative action.

Consider how the derivative action responds to a step change in SP. At the scan when the SP changes, the rate of change of error is momentarily very large and the derivative action produces a spike in the output; at the next scan the derivative action makes a change of the same magnitude but opposite in direction. Bearing in mind that Td will be of the order of minutes and ts of the order of seconds, the magnitude of ΔM is likely to be large, possibly even full scale, and is likely to cause a noticeable process upset. Derivative action is not intended to respond to SP changes. The solution is to base the derivative action on the rate of change of PV rather than of error; its response to process disturbances is then unaffected.

In many DCS this modification is standard. Others retain both this and the derivative-on-error versions as options. It is also common for this algorithm to include some form of filtering to reduce the impact of the spike but, even with this in place, there is no reason why the engineer should ever use the derivative-on-error version if the derivative-on-PV version is available. While this modified algorithm deals with the problem of a spike resulting from a SP change, it will still produce spikes if there are steps in the PV.

The most common example is some types of on-stream analysers, such as chromatographs. The sample-and-hold technique these employ will exhibit a staircase trend as the PV changes.


Each step in the staircase will generate a spike. This is a particular issue because analysers tend to be a significant contributor to deadtime and thus the composition controller would benefit from the use of derivative action. This problem is addressed in Chapter 7. However, the problem can also arise from the use of digital field transmitters. Even if the analog-to-digital conversion is done to a high resolution, the PV will still move in small discrete steps, each of which generates a spike in the derivative action. Care should therefore be taken in the selection of such transmitters if they are to be installed in situations where derivative action would be beneficial.

As might be expected, the response becomes more oscillatory as Td is increased. Perhaps more surprising is that reducing Td also causes an oscillatory response.

This is because the addition of derivative action permits more integral action to be used, so the oscillation observed by removing the derivative action is caused by excessive integral action. This interdependence means that we cannot simply add derivative action to a well-tuned PI controller. It will only be of benefit if all three tuning constants are optimised. Similarly, if we wished to remove derivative action from a controller, we should re-optimise the proportional and integral tuning.

If measurement noise is present then we need to be cautious with the application of derivative action. While the amplitude of the noise may be very small, it will cause a high rate of change of the PV. Derivative action will therefore amplify this. This is perhaps another reason why there may be a reluctance to use it. However modern DCS provide a range of filtering techniques which permit advantage still to be taken of derivative action. These techniques are covered in Chapter 5.

The open loop response was produced by making a step change to the MV of the same magnitude as that ultimately made by the controllers. The closed loop responses were overlaid so that the change in SP occurs at the same point in time as the start of the open loop test.

Given that it is impossible for any controller to reach SP before the deadtime has elapsed, this is a substantial improvement. Different versions of the PID algorithm are in common use, for two reasons. Firstly, there are a variety of approaches taken by different DCS vendors in converting the equations written in analog form into their discrete version. Secondly, as described later in this chapter, there are interactive and noninteractive forms. Addressing the first of these issues, the simplest conversion treats the integral as a sum of rectangles; an alternative method is to apply the trapezium rule, where the integral is treated as a series of trapeziums.

The algorithm will perform in exactly the same way provided that the tuning is adjusted to take account of the change; the required adjustment is found by equating coefficients between the two forms. This confirms that all three are good approximations to analog control. It also shows that, provided the tuning constants are large compared to the scan interval, the values required vary little between the algorithms.

Tuning constants are generally of the order of minutes, while the scan interval is of the order of seconds, and so it will generally be the case that we do not need to know the precise form of the algorithm. This is somewhat fortunate with many DCS; the vendor will describe the algorithm in its analog form but not always divulge how it has been converted to its discrete form.

However, if the process dynamics are very fast, resulting in tuning constants measured in a few seconds, then knowing the precise form of the algorithm becomes important.
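
As an illustration of these ideas (not the exact algorithm of any particular DCS), a sketch of one common discrete implementation is shown below: the velocity form of the noninteractive algorithm with derivative action based on PV. The class and parameter names, the treatment of controller action and the bumpless first scan are assumptions made for the sketch.

    class VelocityPID:
        """Velocity (incremental) PID with derivative on PV rather than error."""

        def __init__(self, Kc, Ti, Td, ts, reverse_acting=True):
            # Kc dimensionless; Ti, Td and the scan interval ts in minutes.
            self.Kc, self.Ti, self.Td, self.ts = Kc, Ti, Td, ts
            # With E = SP - PV the unmodified increment gives reverse action
            # (output falls as PV rises); direct action flips the sign.
            self.sign = 1.0 if reverse_acting else -1.0
            self.prev_e = None       # E at the previous scan
            self.prev_pv = None      # PV at the previous scan
            self.prev_dpv = 0.0      # change in PV over the previous scan

        def update(self, sp, pv, m):
            """Return the new output, given SP, PV and the current output m."""
            e = sp - pv
            if self.prev_e is None:
                # First scan after switching to automatic: bumpless, no change.
                self.prev_e, self.prev_pv = e, pv
                return m
            dpv = pv - self.prev_pv
            dm = self.Kc * (
                (e - self.prev_e)                              # proportional
                + (self.ts / self.Ti) * e                      # integral
                - (self.Td / self.ts) * (dpv - self.prev_dpv)  # derivative on PV
            )
            self.prev_e, self.prev_pv, self.prev_dpv = e, pv, dpv
            return m + self.sign * dm

Called once per scan, for example m = pid.update(sp, pv, m) with pid = VelocityPID(Kc=2.0, Ti=5.0, Td=0.5, ts=2.0/60.0); a real implementation would also clamp the output and handle Ti = 0 as a special case.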

For reasons which will become apparent, the controller with which we have been working is known as the noninteractive form. It can also be described as the parallel form, because its proportional, integral and derivative actions appear as parallel paths in a block diagram; to convert such a diagram to an equation, functions in parallel are additive, those in series are multiplicative. The alternative is the interactive or series form, in which the derivative action is placed in series with the proportional and integral actions. This is more representative of the algorithm used by early pneumatic instruments and is retained, usually as an option, in some DCS.

Again, by equating coefficients we can develop formulae for modifying the tuning as we change from one algorithm to the other. So that one algorithm can be adopted as the standard approach for all situations, there are several arguments for choosing the noninteractive algorithm. Most DCS use this algorithm, often describing it as ideal; others give the option to use either. Finally, most published tuning methods are based on the noninteractive version. To switch from the interactive to the noninteractive version requires the tuning to be changed as follows.
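
The commonly published conversion (a sketch; primes denote the noninteractive settings) is

    K_c' = K_c \left( 1 + \frac{T_d}{T_i} \right), \qquad
    T_i' = T_i + T_d, \qquad
    T_d' = \frac{T_i\,T_d}{T_i + T_d}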

The derivative action is usually implemented with a filter, defined by a parameter α. This introduces a lag into the controller of time constant αTd that is intended to reduce the amplification of measurement noise by the derivative action. Setting α to zero removes this filter; setting it to 1 will completely disable the derivative action. In some systems the value of α is configurable by the engineer.

In many it is fixed at a small value. The reciprocal of α is known as the derivative gain limit. Its inclusion is of dubious value. If no noise is present and derivative action is required then we have to modify the controller tuning to take account of the filter's presence, and few published tuning methods do so.

Secondly, the filtering is identical to that provided by the standard DCS filter (see Chapter 5). The DCS filter is generally adjustable by the engineer, whereas the filter in the control algorithm may not be; and even if α is adjustable, its upper limit means that the filter time constant cannot be increased beyond Td. If noise is an issue then an engineer-configurable filter is preferred.

This strengthens the argument not to use the interactive version of the controller. Even then, α may still not be adjustable by the engineer and is not usually taken account of in published tuning methods.

A further modification is to base the proportional action, like the derivative action, on PV rather than error. This would appear to undermine the main purpose of proportional action by eliminating the proportional kick it produces whenever the SP is changed. Indeed, only the integral action will now respond to the SP change, producing a much gentler ramping function. This can be seen in Figure 3. The absence of the initial proportional kick can be seen on the trend of the MV and results in the PV taking much longer to reach its new SP.

Many believe therefore that this algorithm should be applied on processes where the MV should be adjusted slowly. However, if this performance were required, it could be achieved by tuning the more conventional proportional-on-error algorithm.
