By Angus Ramsay
Humans have unquestionably succeeded as engineers, designing and making tools and then improving them as knowledge of materials and manufacturing processes has advanced; today we can create new materials such as graphene and new ways of manufacturing items, e.g., three-dimensional printing. Equally, humans are perfectly capable of making mistakes. Sometimes these are trivial and do not affect others, but at other times they can be serious. When serious, mistakes can lead to injury or even loss of life and/or significant financial loss for an organisation. In such cases, an expert witness may be employed to assist in uncovering the reason for the mistake so that blame may be fairly apportioned and costs recovered.
At the root of most engineering design is the necessity for the artefact, a structure or mechanical system, to possess sufficient stiffness for it to be serviceable and sufficient strength for it to withstand the ultimate load it is likely to see. In modern limit state design, these conditions are known, respectively, as the serviceability limit state (SLS) and the ultimate limit state (ULS). The engineering discipline that deals with such questions is known as Strength of Materials. This discipline has a long and interesting history – see Timoshenko’s ‘History of Strength of Materials’ – and, through application of the Scientific Method (see Figure 1), it leads to the development of the Theory of Elasticity.
Strength of Materials and the Theory of Elasticity
Anyone who has delved into a strength of materials text will realise that obtaining the theoretical solution to a particular problem in the theory of elasticity requires the solution of three sets of equations: equilibrium (balance between applied loads and internal stresses), compatibility (strains that lead to continuous displacements) and the constitutive or material relations (between stresses and strains, e.g., Hooke’s Law) – see Figure 2, which illustrates how these equations are derived for a bar of area A and length L under an axial load P. Strength of materials solutions satisfy these three sets of equations exactly and, therefore, may be considered as known theoretical solutions.
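The bar example of Figure 2 can be sketched numerically. The property and load values below are assumptions for illustration, not figures from the article; the three lines of working correspond, in turn, to the three sets of equations.

```python
# Hedged sketch: the three equation sets of Figure 2 for a uniform bar
# of cross-sectional area A and length L under an axial load P.
E = 200e9      # Young's modulus for steel, Pa (assumed value)
A = 1e-4       # cross-sectional area, m^2 (assumed)
L = 2.0        # length, m (assumed)
P = 10e3       # axial load, N (assumed)

sigma = P / A              # equilibrium: internal stress balances the applied load
epsilon = sigma / E        # constitutive (Hooke's law): strain from stress
delta = epsilon * L        # compatibility: uniform strain integrates to the end displacement

print(sigma)   # 100000000.0 Pa
print(delta)   # 0.001 m
```

Because all three sets of equations are satisfied exactly, this is a known theoretical solution in the sense used above.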
Whilst the theory of elasticity is quite general, the number of known (strength of materials) solutions is limited to problems with simple geometry, supports, loading and materials. For example, whilst solutions are known for the case of cylinders or spheres under internal pressure, when these are put together, e.g., the case of a cylindrical pressure vessel with hemispherical ends – see Figure 3, the geometry is such that there is no known theoretical solution. The blue line represents the undeformed geometry at zero pressure and the black line the deformed geometry when the vessel is subject to internal pressure.
The radial displacement due to internal pressure for a hemisphere is less than that for a cylinder under the same pressure. This means that the radial displacement at the interface between the two components will be something in between that for the cylinder and that for the hemisphere. This will disturb the solution local to the transition but away from this point the solution in terms of displacements and stresses will revert to that of the basic component.
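This behaviour follows from the standard thin-walled pressure-vessel formulas found in strength of materials texts. A minimal sketch, with assumed geometry and steel properties:

```python
# Hedged sketch using standard thin-walled vessel results (not the article's figures):
# cylinder: w = p*r^2/(E*t) * (1 - nu/2); sphere: w = p*r^2/(2*E*t) * (1 - nu).
p, r, t = 1e6, 0.5, 0.01      # pressure (Pa), radius (m), thickness (m) - assumed
E, nu = 200e9, 0.3            # steel properties - assumed

w_cyl = p * r**2 / (E * t) * (1 - nu / 2)       # cylinder radial displacement
w_sph = p * r**2 / (2 * E * t) * (1 - nu)       # hemisphere/sphere radial displacement

print(w_sph < w_cyl)   # True: the hemisphere expands less, as described above
```

The mismatch between the two displacements is what drives the local bending disturbance at the transition.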
In the days before the theoretical solution to this problem could be approximated using computational approaches such as the finite element (FE) method, the engineer would probably have had to rely on empirical, measured data, obtained by conducting experiments on real pressure vessels, to predict the stresses in the transition region.
With the development of the FE method and digital computers in the second half of the last century, it became possible to solve for the displacements/stresses in problems such as that of Figure 3. There is nothing mysterious about the FE method; it is simply a numerical method that approximates the governing equations (see Figure 2) over the domain of interest (represented by a mesh of finite elements). This discretisation process leads to a set of simultaneous linear equations for the unknown quantities (generally nodal displacements), which can be solved rapidly to provide displacement and then stress fields.
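As a minimal illustration of this discretisation process, the sketch below assembles and solves the simultaneous equations for a bar modelled with two linear elements. All values are assumed, and a one-dimensional bar is used purely because its theoretical solution is known.

```python
import numpy as np

# Minimal FE sketch: a fixed-free bar under end load P, modelled with two
# linear elements. Assembling element stiffnesses gives the simultaneous
# linear equations K u = f described in the text.
E, A, L, P = 200e9, 1e-4, 2.0, 10e3   # assumed values
n_el = 2
le = L / n_el
k = E * A / le * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):                  # assembly over the mesh
    K[e:e + 2, e:e + 2] += k

f = np.zeros(n_el + 1); f[-1] = P      # point load at the free end
Kr, fr = K[1:, 1:], f[1:]              # apply the fixed support at node 0
u = np.linalg.solve(Kr, fr)            # solve for nodal displacements

print(u[-1])   # matches the exact tip displacement P*L/(E*A)
```

For this simple load case the linear elements happen to reproduce the theoretical solution exactly; for the plate problems discussed below they do not, which is the crux of the article.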
With the FE method, the practising engineer now has a software tool that, in educated hands, enables them quickly to determine displacements and stresses in structures/components that don’t fit into the framework of problems having known theoretical solutions.
Its use in industry was, for a long time, left in the hands of specialist design analysts who worked alongside design engineers, providing them with analytical support; the author began his career as a design analyst. There was a reason for this: a great deal of specialised training and knowledge is required to ensure that the results from a finite element system are sound. Times have changed. Today, the finite element method is considered a mature technology, and it is envisaged that its sophisticated software and graphical user interfaces are safe for engineers with little or no theoretical background to use and can generate sound engineering solutions; the so-called Democratisation of Simulation. This is how the software is sold by vendors. But it is of great concern to those engineers who have long worked in the field of numerical simulation and realise, through experience, the potential danger of this approach.
In the early days of the commercialisation of the finite element method it was realised by some that: “… both coding and modelling errors were commonplace and only time separated the [simulation] community from computer-aided catastrophe [CAC]”.
This quotation comes from Professor John Robinson, one of the founders of the National Agency for Finite Element Methods and Standards (NAFEMS). NAFEMS is now an international organisation but retains the original acronym.
Just such an incident of CAC did occur in the early 1990s when the Sleipner A platform sank in a Norwegian fjord. No one was injured, but the estimated cost of the incident was some $700m! The subsequent inquiry found that the FE modelling local to the failure had been inadequate, underpredicting the shear forces by some 45%. This, together with inadequate reinforcement detailing in the failure region, was identified as the cause of the failure. It is extremely revealing that, had the engineer or his managers checked the finite element result using a simple hand calculation, the error would have been spotted!
Henry Petroski has written extensively, and very readably, on the subject of failure in engineering design and points out the important role of failure in successful design. Case studies of engineering failure provide an invaluable resource for practising engineers. In ‘Design Paradigms’, for example, Petroski points out that major failures, at least for bridges, have been observed to be spaced at approximately thirty-year intervals. The reason for this is postulated to be a ‘communication gap’ between one generation of engineers and the next: the raison d’être for designing structural members or components in the way they were designed being lost.
Adopting a similar mode of enquiry, but applying it to the field of numerical simulation, the author of this present article has, over recent years, developed an interest in exposing some of the possibilities for engineering, and in particular finite element, malpractice. The findings from these studies have been published widely; in particular, an initiative called the NAFEMS Benchmark Challenge has led to two volumes of studies.
The articles referred to were written for practising engineers, but the lessons learnt from them are of wider significance, and in this current article the author has attempted to distil the essential findings of his studies and cast them in a manner which, it is hoped, will be suitable for the expert witness. Whilst this article concentrates on the author’s particular field of experience, i.e., mechanical/structural engineering and the safe design of structures and components, the explanation, findings and conclusions will find equal resonance in other fields of engineering endeavour, e.g., the application of computational fluid dynamics (CFD) to the prediction of fluid flow around structures or inside turbomachines; the principles are the same and the same message applies.
Finite Element Malpractice
The FE method is an approximate method in that for a given mesh, the solution will contain some error. However, it is also a convergent method which means that as the mesh is refined it will normally, provided the problem has been properly modelled, converge to the theoretical solution. Whilst there are a number of different FE formulations, the conforming finite element (CFE) formulation is the one which is used almost ubiquitously in commercial FE systems. By definition, as the method is approximate for a given mesh, one or more of the three conditions identified in Figure 2 will need to be approximated. In the CFE formulation it is the equilibrium conditions that are approximated with the constitutive and compatibility conditions being satisfied exactly.
The following example demonstrates how the approximation of equilibrium, implicit in the CFE formulation, can lead to unsafe structural designs.
A Simple Structural Design Problem
This problem comprises a rectangular plate loaded uniformly over the entire area and simply supported on two opposite sides as shown in Figure 4. The design engineer wishes to ensure that the plate remains elastic under the design load and a single variable, the plate thickness, is available for optimising. In order to do this the maximum bending moment needs to be determined. It is a simple problem in that the internal actions can be found from simple consideration of static equilibrium and any engineer should know the equation for the bending moment along the centre line of an equivalent beam where it is a maximum. The bending moments as they act on the centre line of the plate are shown in the figure.
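For a unit-width strip of such a one-way spanning plate, the familiar simply supported beam result M = qL²/8 gives the maximum moment along the centre line; the load and span below are assumed values, not the article's.

```python
# Hedged sketch: the statically determinate maximum moment. A plate simply
# supported on two opposite edges spans one way, so a unit-width strip acts
# as a simply supported beam with M_max = q * L**2 / 8 per unit width.
q = 10e3     # uniform pressure, N/m^2 (assumed)
L = 2.0      # span between the simple supports, m (assumed)

M_max = q * L**2 / 8     # maximum bending moment per unit width, Nm/m
print(M_max)             # 5000.0
```

This one-line hand calculation is exactly the kind of check that, as the Sleipner case shows, can expose a grossly wrong FE result.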
If the engineer is unaware that the problem is statically determinate and that an expression is available for the maximum bending moment, he/she might try their hand at an FE model to obtain the solution. As the geometry of the plate is rectangular, it could be meshed with a single element, but our engineer is aware that some form of mesh refinement might give a better solution and so chooses a 4x4 mesh of lower-order (four-noded) plate elements, as shown in the figure.
In processing the finite element results, the engineer integrates the stresses across the centre line and calculates the average bending moment. The value thus obtained is only 0.875 times the theoretical value.
The lack of scrutiny exhibited by the engineer leads to him accepting an average bending moment 12.5% below the correct value, which means the plate will begin to yield at a load some 12.5% lower than he predicts; equivalently, he believes the plate can take about 14% more load than it actually can. If he is using a formal code of practice, e.g., a British Standard or a European code, this might well provide highly conservative allowable stress values to cater for, amongst other things, the fact that the strength of the plate material might vary from that specified by the manufacturer. What this conservatism does not and cannot account for is that the engineer has, through finite element malpractice, failed to obtain an accurate value for the bending moment!
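The arithmetic behind this non-conservatism can be sketched directly:

```python
# Sketch of the arithmetic: a coarse-mesh moment of 0.875x the true value
# is non-conservative in two equivalent senses.
ratio = 0.875                  # FE moment / theoretical moment (from the 4x4 mesh)

underprediction = 1 - ratio    # moment underpredicted by 12.5%
overload = 1 / ratio - 1       # apparent spare capacity of ~14.3% that does not exist

print(underprediction)         # 0.125
print(round(overload, 3))      # 0.143
```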
To assist the engineer in avoiding such FE malpractice as noted above, and also to provide a logical framework for the expert witness to uncover such bogus results, a relatively new field of scrutiny has been developed called Simulation Governance, a term coined by Barna Szabo – see Figure 5. It involves three aspects, namely Verification, Validation and Uncertainty Quantification. Validation requires the mathematical model accurately to predict real behaviour, i.e., it is nothing more than an application of the scientific method. Verification, on the other hand, admits that the mathematical model, even if it is the correct model, cannot generally be solved exactly through numerical simulation, and offers guidance on how the errors inherent in the approximation might be recognised and controlled. Uncertainty quantification acknowledges that some data in an engineering analysis, e.g., the material properties, might not be known exactly and that the accuracy of this data might ultimately influence the results of an analysis and the engineering decisions taken from them.
When simulating structures or mechanical systems, the basic mathematical model is that of the theory of elasticity and, through the work of pioneers presented in , there is plenty of evidence that, accepting uncertainties in the engineering data, this model matches closely with observations. As such it is verification that is of primary concern to many practising engineers, i.e., how close does the simulation match the results that would be obtained if the mathematical model were solved exactly? Verification can be considered in two parts, namely software verification and solution verification.
Software Verification – Simulation for Known Theoretical Solutions
The practising engineer needs to guard against the possibility that the FE system he/she is using contains a bug. Commercial FE systems contain millions of lines of code, with multiple ways in which different parts of the code may be accessed, and it is highly unlikely that any such code is entirely free of bugs or errors. It is also the case that numerical schemes within FE systems, whilst correctly coded, might not be appropriate for the problem being studied; e.g., the numerical schemes used to integrate quantities over elements may be approximate or exact. It is thus incumbent on the practising engineer to ensure that the software being used is actually capable of modelling the sort of problem being studied. The way this is done is to test the software on a problem which has a known theoretical solution, such as one given in a strength of materials text. In the absence of issues with the simulation software, the software verification exercise also provides valuable insight into how the solution converges with mesh refinement and, thereby, particularly if the verification problem is chosen to be similar to that being studied, a useful indication of the level of mesh refinement required for the real problem being considered. For the simple plate problem considered earlier, the ratio of the finite element moment to the theoretical moment is shown in Figure 6 for uniform meshes of increasing refinement.
It is observed from Figure 6 that the finite element result is converging to the theoretical value in a monotonic and asymptotic manner. This is the expected behaviour if there are no bugs in the software and if the correct mathematical model is being used.
From this software verification example, useful guidance is obtained. Firstly, the solution does appear to converge to the theoretical result. Secondly, the moment can be recovered to within 1% accuracy with a 16x16 mesh. It is also noted that convergence is from below the theoretical value. In contrast to a situation where the FE solution converges from above, this means that the engineer really does need to conduct mesh refinement if the result is not to be non-conservative, i.e., unsafe.
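A simple programmatic check of this convergence behaviour might look as follows. Apart from the 4x4 ratio of 0.875 quoted earlier, the mesh ratios are illustrative assumptions rather than the data of Figure 6.

```python
# Hedged sketch: checking for monotonic convergence from below with mesh
# refinement. Only the 4x4 value (0.875) comes from the article; the finer
# mesh ratios are assumed for illustration.
ratios = {4: 0.875, 8: 0.966, 16: 0.991, 32: 0.998}   # mesh size -> M_FE / M_exact

values = [ratios[m] for m in sorted(ratios)]
monotonic_from_below = all(a < b for a, b in zip(values, values[1:])) and values[-1] < 1.0

print(monotonic_from_below)    # True: converging monotonically from below
print(ratios[16] >= 0.99)      # True: within ~1% at 16x16
```

A sequence of refined meshes, rather than a single mesh, is what lets the engineer see whether convergence has effectively been reached.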
Solution Verification – Simulation for Unknown Theoretical Solutions
Having conducted the prerequisite software verification, the engineer is in a good position to consider the actual problem; the necessary faith that the software can model such a problem having been developed. He/she should also have a reasonable idea of the level of mesh refinement required to produce a solution of acceptable engineering accuracy.
A very similar problem to that used for software verification is the case of a skewed plate, and this will be used for the solution verification problem. For the skewed plate, it is assumed that there is no known theoretical solution for the bending moment across the centre of the plate. However, through software verification we know that the software being used is capable of recovering the theoretical solution for the non-skewed plate, and this gives us confidence that the same will be true for the skewed plate.
The Status Quo in FE Analysis
Finite elements for continua are generally offered in the form of triangles and quadrilaterals for two-dimensional problems, or tetrahedra and hexahedra for three-dimensional problems. Given an engineering problem, the first step in any analysis is to create a mesh. This might require a number of elements, but the idea of a basic mesh is a useful concept, this being the mesh that captures the geometry of the problem with the least number of elements – possibly even a single element, as for the plate problem presented above.
Commercially available CFE software tends to adopt what might be termed very low-fidelity elements in that they cannot model much more than a linearly varying stress field. Most engineering problems involve significantly higher degree stress fields so that the basic mesh for a problem will generally produce a rather poor approximation to the theoretically exact stress field. As discussed previously, the approximation for CFEs is in the equilibrium conditions so that the basic mesh will produce stresses that are not in equilibrium with the applied load. This is of concern since if the engineer cannot rely on the stresses being in equilibrium he/she cannot therefore ensure that sufficient material (plate thickness in the plate example) is available to resist the stresses, i.e., a sound design cannot be established.
There are other potential issues with the use of low-fidelity CFE elements. For example, because the lowest-degree elements are found to perform rather poorly under certain loading conditions, ‘numerical wheezes’ have been adopted to improve their performance. Whilst this is not the place to discuss these issues in any detail, it is worth noting that, as a result, the engineer using a typical FE system is generally faced with a choice of element type, often from a large range, for one particular structural form, e.g., a plate-type structure as considered earlier in the design problem. Each element type will generally produce a different result for a given mesh and may even converge to a different solution with mesh refinement – some of these converged solutions being spurious or incorrect. Whilst seasoned FE practitioners understand these issues and are generally able to make informed decisions as to the type of element to use, it is unreasonable to expect an inexperienced engineer to do likewise.
In the author’s view, the low-fidelity nature of commercial FE systems is a major hindrance to the idealised aim of the democratisation of simulation. Many of these issues disappear when high-fidelity elements are adopted and, particularly, when different element formulations are used. For example, the equilibrium finite element (EFE) formulation provides solutions that, as the name suggests, satisfy the equations of equilibrium exactly. Clearly, such solutions are still approximate, and the approximation manifests itself in discontinuous displacements at the vertices of element edges. However, with these elements even a single element (a basic mesh) could have been used to solve the design problem presented earlier exactly. The EFE formulation also removes many of the issues occurring with low-fidelity CFE systems; for example, an EFE system need only offer the engineer a single plate element capable of working effectively in all situations. As the reader will have detected, the author is passionate about the virtues of the EFE formulation for practising engineers, and some of these virtues, which allow the engineer to concentrate on engineering rather than on the numerical vagaries of the FE system, have been discussed in a NAFEMS Benchmark Magazine article by the author and colleagues.
Issues with Published Advice and Data
The practising engineer, in his/her quest for the truth, is susceptible to misinformation, particularly when it is published by authoritative sources. An engineer’s time is often extremely limited, and if a seemingly sound source of helpful information is available then it is likely to be used. However, as this section illustrates, such sources are not always reliable, with poor advice being offered by organisations that should know better and published engineering design data being incorrect and sometimes even unconservative.
Uncertainty in ASME Thermal Expansion Data
In a recent project, the author had to make use of published thermal strains listed to only one significant digit. The published data came from the Boiler & Pressure Vessel Code of Practice published by the American Society of Mechanical Engineers (ASME). The thermal strain for a temperature rise from 20°C to 50°C was listed as 0.3mm/m. This means that the actual value could lie anywhere between 0.25 and 0.35mm/m, which led to an uncertainty in the calculations of ±16.67%.
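The rounding-interval arithmetic is easily sketched:

```python
# Sketch of the uncertainty implied by a value listed to one significant digit.
listed = 0.3                 # thermal strain, mm/m, as published
half_interval = 0.05         # a listed 0.3 covers the interval [0.25, 0.35)

uncertainty = half_interval / listed
print(round(100 * uncertainty, 2))   # 16.67 (per cent)
```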
FE in Codes of Practice – FIB
The International Federation for Structural Concrete (fib) publish authoritative documentation on best design practice for concrete structures in their Model Code for Concrete Structures 2010.
In this document, they permit the use of FE analysis as an approach to the design of concrete structures, and a description of the Finite Element Method is provided together with some basic guidance and words of caution, including the statement that:
‘The internal stresses [from a FE model] are lower, compared with an exact solution.’
Anyone with a background in FE theory will recognise this statement as nonsense. Theory shows that for the CFE formulation, the strain energy of the model will generally be less than the theoretical value when the model is force (rather than displacement) driven. Whilst firm statements can be made on the bounds of integral quantities such as the strain energy, it is not possible to extend this statement to pointwise values of stress.
The simple plate problem of Figure 4 provides an example: it was seen that, for the four-noded element, the bending moment (a function of the stress) did indeed converge from below the theoretical value. However, as shown in Figure 8, if an eight-noded plate element is used then the same quantity converges from above the true solution.
Whilst the advice offered by fib is clearly incorrect, it does have the virtue that, if the engineer believes it to be true, they might take mesh refinement more seriously in order not to use an overly non-conservative stress value in assessing the safety of a design.
Timoshenko’s ‘Theory of Plates & Shells’
Whilst one might hope that engineering textbooks are free of errors, this is not always the case, and indeed errors may propagate through new editions and reprints and, as in the case presented here where the text is effectively the primary monograph on the subject, even into texts by other authors.
Timoshenko’s ‘Theory of Plates & Shells’ is a renowned treatise providing practising engineers with theoretical solutions for plate and shell members. These solutions are essentially strength of materials solutions but differ from those presented in standard texts in that they are not closed-form, i.e., they are based on an infinite series of transcendental, typically trigonometric, functions. Thus, in addition to providing the equations for displacements and stresses, tables of non-dimensional displacements and stresses/moments are provided for a range of common plate and shell configurations. The plate configuration given in Figure 9 is identical to that studied earlier in this article.
In the plate studied earlier in this article we were concerned with the (total) bending moment across the centre of the plate. Timoshenko’s solution to this problem is expressed as the distribution of moments per unit length with the units Nm/m as opposed to Nm for a moment. A plot of the theoretical moment distributions is given in Figure 10.
The maximum moment occurs at the centre of the free edges and this would govern the design of a steel plate. For a reinforced concrete slab, the moment in the transverse direction is also important, as the slab must be able to resist these moments through the addition of transverse reinforcement bars. This moment is a maximum at the centre of the plate, and the value of beta for it quoted in Figure 9 (0.0102) is not correct. The exact value is close to 0.0120, which is about 18% greater than the value quoted – it looks like a typographical error whereby the last two digits have been transposed. Thus, using the values from Timoshenko could, in this case, lead to the designer not placing sufficient reinforcement and thereby obtaining an unsafe design.
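Timoshenko tabulates such moments in the form M = βqa². The sketch below, with assumed load and plate dimension, shows the shortfall implied by the transposed coefficient.

```python
# Hedged sketch of the effect of the transposed coefficient in M = beta*q*a^2.
# The load q and dimension a are assumed values; only the two beta values
# come from the discussion above.
q, a = 10e3, 2.0               # uniform load (N/m^2) and plate dimension (m) - assumed

M_quoted = 0.0102 * q * a**2   # moment using the (incorrect) printed coefficient
M_exact = 0.0120 * q * a**2    # moment using the corrected coefficient

shortfall = M_exact / M_quoted - 1
print(round(100 * shortfall, 1))   # 17.6 (per cent under-design of the transverse moment)
```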
It is interesting to note that this error has propagated into more recent texts on plates. For example, the same error can be found in Szilard’s ‘Theory and Application of Plate Analysis’, published in 2004, i.e., some 15 years after the last reprint of Timoshenko’s text.
This error was detected by the author when comparing results from an FE model he had generated with those of Timoshenko. This demonstrates a useful point, namely, that when used correctly an FE model may be used to check published data. The author is collaborating with a colleague in checking other results in Timoshenko’s text and, by this process, a number of other errors have been detected.
NAFEMS Benchmark Challenge Number 2
In this challenge (NBC02) the author, after finding anomalies in published data, set the problem of determining the collapse load of a uniformly loaded rectangular plate simply supported on all edges. For the particular configuration considered, the uniform load or pressure to cause collapse from two published sources was:
The Steel Construction Institute’s (SCI) Steel Designers’ Manual (SDM) – 103kPa
Roark’s Formulas for Stress & Strain – 178kPa
Clearly these results are rather different and, with no knowledge of which figure is correct, the author analysed the plate in newly developed finite element software designed for this purpose. The result was:
Ramsay Maunder Associates (RMA), Equilibrium Finite Elements (EFE) – 231kPa
NBC02 requested that readers consider the reason for the difference between published results and to conduct a FE analysis using conventional commercial FE software to determine the true value. Whilst not all readers obtained the same value as EFE, two readers reported exactly the same value to three significant figures. As the value from EFE was not available to the readers, this exercise served as a blind experiment that provided verification for EFE.
Upon further research, the author discovered that the value reported in the SDM was derived from an archaic and incorrect linear elastic approximation: the 103kPa reported was not the collapse load but an approximation of the load to cause first yield in the plate. The results presented in Roark (178kPa) were based on numerical simulation from some forty years ago that was insufficiently refined to give reliable results. Whilst both published results were conservative, in that they underpredicted the collapse load, in an economic climate where material waste needs to be minimised the use of the SDM to design plates might lead to a significantly thicker plate member than is actually required.
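One reason a first-yield load understates a collapse load can be sketched with elementary section properties: for a rectangular section the plastic moment exceeds the first-yield moment by the shape factor of 1.5, before any yield-line redistribution across the plate is counted. The values below are assumptions for illustration.

```python
# Hedged sketch: first-yield versus fully plastic moment for a unit-width
# rectangular strip. All values assumed; this illustrates why a first-yield
# load (as in the SDM) must understate the true collapse load.
b, t, fy = 1.0, 0.01, 275e6     # strip width (m), thickness (m), yield stress (Pa)

M_yield = fy * b * t**2 / 6     # moment at first yield (elastic section modulus)
M_plastic = fy * b * t**2 / 4   # fully plastic moment (plastic section modulus)

print(M_plastic / M_yield)      # 1.5, the shape factor of a rectangular section
```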
Practical Conclusions for the Expert Witness
Accepting the degree of uncertainty in the various parameters used to define a model, all problems tackled by engineers have a theoretical solution. However, only a few of the problems have known theoretical solutions. Where there is no known theoretical solution the engineer adopts numerical simulation tools, such as the FE method, to obtain an approximation to the theoretical solution from which he/she can assess the stiffness and strength of a design and check whether this is sufficient to satisfy the appropriate SLS and ULS conditions laid down in the relevant code of practice. Whilst computer software is available to undertake design directly, e.g., to provide values of a plate thickness for a given set of supports and loads, the majority of design is conducted in what is termed a design-by-analysis or iterative approach, e.g., the designer tries a particular plate thickness and then modifies this according to whether or not the SLS and ULS conditions are satisfied.
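The design-by-analysis loop described above can be caricatured in a few lines. The load, span, allowable stress and the elastic bending stress check 6M/t² for a unit-width strip are all illustrative assumptions.

```python
# Hedged sketch of the iterative design-by-analysis loop: try a plate
# thickness, check a ULS-style stress criterion, thicken until it passes.
q, L, f_allow = 10e3, 2.0, 165e6     # load (N/m^2), span (m), allowable stress (Pa) - assumed
M = q * L**2 / 8                     # maximum moment per unit width, Nm/m

t = 0.005                            # initial trial thickness, m
while 6 * M / t**2 > f_allow:        # elastic bending stress 6M/t^2 for a unit-width strip
    t += 0.001                       # thicken the plate and re-check
print(round(t, 3))                   # 0.014
```

In practice each pass of the loop would be an FE analysis rather than a hand formula, which is precisely why the reliability of the FE moment matters.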
The conventional finite element software available commercially is typically based on a CFE formulation of low-fidelity. The CFE formulation fudges equilibrium so that a sound design can only be assured if the mesh has been refined sufficiently. The low-fidelity nature of the elements used means that the level of mesh refinement required might be quite considerable and, further, to overcome numerical issues with low-fidelity elements, software vendors offer a veritable plethora of different element types for the same structural form, e.g., plate elements.
The democratisation of simulation means, in practice, that software vendors are supplying ever more sophisticated engineering software for an audience of increasingly inexperienced and uneducated engineers and they are doing this without paying due attention to the good practice of simulation governance that more experienced practitioners naturally adopt in their work.
Computer-aided catastrophes have occurred, and the current trend is only likely to increase the risk of finite element malpractice leading to more such events. Whilst these events may or may not be as financially catastrophic as the Sleipner incident, they may cause injury or death, and they may lead to such a significant loss of corporate reputation that a company ultimately fails. Death and injury may lead to an investigation by a body such as the Health & Safety Executive, and such an inquiry might well involve the employment of technical experts and expert witnesses. Equally, a company that has, for example, outsourced the design of a critical component or structure might well wish to sue the design house for damages and loss of business and/or reputation if the design fails spectacularly in service. Such scenarios are very common and, again, it will be the technical expert who is called upon to assist in an inquiry or legal proceedings.
The technical expert/witness called upon to present their opinion about the facts in such a case needs to have a complete understanding of the scientific method as applied to numerical simulation, i.e., simulation governance. This understanding will not be gained without considerable practice in the field of simulation, e.g., finite element analysis, and also a significant academic understanding of the mathematical methods used in such simulation tools. Thus, those firms wishing to employ a sound technical expert will need to scrutinise the CV of the potential expert to establish that this essential mix of practical and academic credentials is met.
The technical expert scrutinising decisions based on the outcome of numerical simulation needs to be cautious, particularly if no evidence of simulation governance has been provided; it is increasingly the case that engineering reports offer no evidence of the software and solution verification that is essential if their conclusions are to be considered valid. This article has presented some of the potential pitfalls that may lead to inaccurate or simply erroneous results being presented. There are others not discussed here, and the best advice available to the technical expert or expert witness is to treat all results presented as suspect until they can be proven otherwise, i.e., to adopt a familiar interpretation of the Napoleonic code of jurisprudence: ‘guilty until proven innocent’!
The regularly updated knowledge base at the author’s company website provides comprehensive information on many of the topics discussed in this article including original versions of the NAFEMS Benchmark Challenges and can be reached at the link below. www.ramsay-maunder.co.uk/knowledge-base/
The author is grateful for the comments and suggestions of Edward Maunder, Independent Technical Director at Ramsay Maunder Associates who provided a technical review of this article and to Max Ramsay for proof reading the article.
 Stephen P. Timoshenko, ‘History of Strength of Materials’, Dover (1983).
 Peter Bartholomew, ‘NAFEMS: the early days’, NAFEMS Benchmark Magazine, January 2016.
 Bernt Jakobsen, ‘The Sleipner Accident and its Causes’, Engineering Failure Analysis, Vol. 1, No. 3, pp 193-199, 1994.
 Henry Petroski, ‘To Engineer is Human; the Role of Failure in Successful Design’, St Martin’s Press, 1985.
 Henry Petroski, ‘Design Paradigms: Case Histories of Error and Judgement in Engineering’, Cambridge University Press, 1994.
 Angus Ramsay, ‘The NAFEMS Benchmark Challenge: Volumes 1 and 2’, NAFEMS, 2017.
 Barna Szabo, ‘A Case for Simulation Governance’, Desktop Engineering, February 2015.
 Angus Ramsay, Edward Maunder & Jose Moitinho de Almeida, ‘What is Equilibrium Finite Element Analysis?’, NAFEMS Benchmark Magazine, January 2017.
 The fib Model Code for Concrete Structures 2010, Ernst & Sohn, 2013.
 S.P. Timoshenko & S. Woinowsky-Krieger, ‘Theory of Plates and Shells’, 2nd Edition, McGraw-Hill International Series, 28th Printing 1989.
 R. Szilard, ‘Theory and Application of Plate Analysis’, Wiley, 2004.
 Angus Ramsay & Edward Maunder, ‘An Error in Timoshenko’s Theory of Plates & Shells’, The Structural Engineer, June 2016.
 Steel Designers’ Manual, The Steel Construction Institute, 7th Edition, January 2016.
 Roark’s Formulas for Stress & Strain, 6th Edition, McGraw Hill, 1989.
Angus Ramsay is a chartered engineer and fellow of the Institution of Mechanical Engineers. He is the managing/engineering director at Ramsay Maunder Associates and is a Technical Expert at HKA (formerly Cadogans). Angus is a founding member of, and assessor for, the NAFEMS Professional Simulation Engineering (PSE) certification scheme and is a member of this organisation’s Education & Training Working Group. He is also a member of the Structural Technology & Materials Group, a special interest group of the IMechE and has held honorary positions at Nottingham Trent University and the University of Exeter.