
Why simulate, measure, correlate or automate?

Why Calculate

First and foremost, the purpose of any analysis is to reduce risk when a decision must be made.

Risk reduction can be achieved as early as the concept stage or as late as the detailed design stage, using analytic or numerical tools.

Finite Element Analysis is a common tool used in structural, thermal-acoustic, vibro-acoustic and similar analysis. Other tools include Computational Fluid Dynamics and analytic models.

The following framework can be adapted to whatever analytical tool you choose to use.

The crux of the matter is that, in order to truly reduce risk, you must catch potential problems before they happen in real life.

The quote from G.E.P. Box, "essentially, all models are wrong, but some are useful" (Link), is worth taking to heart when evaluating models.

A chief reason behind this statement is that any model is based on a set of assumptions.

The path to trustworthy simulation lies in finding assumptions that match

  • the scope that needs to be modelled for the model to assist in the decision making,
  • the accuracy to which the model must deliver results to be of use,
  • the conditions required for the model to capture real life output,
    e.g. when the model represents goods that are produced or assembled, the way production is carried out must either be captured by the model or not influence its output.

A very insightful and thoughtful note on this matter by S.J. Rienstra is found here: Link

Theory on Theory Building written by C. Christensen is found here: Link

If you have not tested the applicability of these assumptions beforehand, you simply cannot know whether to trust your simulation. There is also a limit as to what we can simulate; some situations will remain out of reach. Finally, if you cannot produce what you simulate, this will in practice reduce the usefulness of a simulation – even if it is 100% correct.

To exemplify some of the above:

  • Systems containing gaps, e.g. a gearbox, produce different results depending on how things come together in the system. This implies that you may end up with two different results when running two measurements after each other or between two identical gearboxes. Now, which of these cases should you model, and is either case truly representative of a real-life situation?
  • A fundamental assumption in Finite Element Analysis is that, as you divide your geometry into smaller elements, the solution converges toward the exact solution. If you model a straight plate, this assumption holds, but only when the plate you produce is straight (see the convergence sketch after this list).
  • Similarly, if you apply a beam or pipe element to a circular pipe, you implicitly assume that your pipe is perfectly straight and round. Unfortunately, most pipe sections are neither perfectly straight nor perfectly round.
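To make the convergence point concrete, here is a minimal sketch of a mesh-refinement check. Everything in it is invented for illustration: `run_model` is a hypothetical stand-in for a real FE solve that returns one scalar result (say, a tip deflection).

```python
# Minimal mesh-convergence sketch. run_model() is a hypothetical stand-in
# for an FE solve; here it mimics a result that approaches the exact value
# as the mesh is refined, which is exactly the assumption being tested.
def run_model(n_elements: int) -> float:
    exact = 1.000
    return exact * (1.0 - 1.0 / n_elements)  # placeholder, not a real solve

def converged_result(tol: float = 0.01) -> float:
    """Double the element count until the result changes by less than tol."""
    n = 10
    previous = run_model(n)
    while True:
        n *= 2
        current = run_model(n)
        if abs(current - previous) <= tol * abs(current):
            return current  # relative change small enough: call it converged
        previous = current

print(f"Mesh-converged value: {converged_result():.4f}")
```

Note that such a study only verifies numerical convergence; it says nothing about whether the produced plate is actually straight.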

Therefore, it logically follows that any model must be based on assumptions that are:

  • Correct
  • Incorrect
  • Unknown

Correct assumptions are those we rely on and draw upon in our analysis.

Incorrect assumptions can sometimes be compensated for if we know how they affect the end result.

Unknown assumptions are those that may kill the analysis or project.

Note that an unknown assumption that turns out to be 100% correct does not reduce risk when decisions have to be made.

The above discussion on assumptions explains why simulation is not guaranteed to work when applied as a troubleshooting tool in a new situation.

Sometimes simulation is the only tool available and you must rely on the analysis procedure anyway, but keep in mind that this increases your risk exposure.

Analysis that truly reduces risk can be used for:

  • Concept selection.
  • Checking and approving designs.
  • Optimising a design.
  • Identifying relations between inputs (e.g. dimensions) and outputs (e.g. mechanical stress or vibrations) for deeper understanding.

 

Why Measure

Measure for Knowledge

"There are two possible outcomes: if the result confirms the hypothesis, then you've made a measurement. If the result is contrary to the hypothesis, then you've made a discovery."

Enrico Fermi

An accurate measurement can give you the opportunity to identify things you did not previously know about a problem.

In other words, a measurement can provide both expected factual data and unknown values, helping you identify the unexpected elements that could be part of your problem.

More on Measurement

 

Why Correlate

Correlation is a form of Quality Assurance (QA), involving corroborating or rejecting your model assumptions.

"He who loves practice without theory is like the sailor who boards ship without a rudder and compass and never knows where he may cast."

Leonardo da Vinci

A correlation can be made in the following order:

  • Simulation
  • PreTest Analysis
  • Test
  • Correlation
  • Updating

A correlation can be used for:

  • Test - Test
  • Simulation - Simulation
  • Test - Simulation

Test - Test correlation can be made, e.g. to study variability in production, to backtrack effects from unexpected changes or simply to verify test repeatability.

Simulation - Simulation correlation can be made, e.g. to check that key model features are still captured when making a model faster for use in an optimisation loop, or to study whether different modelling assumptions affect the results.

Test - Simulation correlation is typically made to verify modelling assumptions, but the reverse is also possible, e.g. when developing test methods and a well-defined reference case is chosen - the simulation may then hold more true than does the test.

Note that making tests for the purpose of verifying simulation requires special care. Failing to recognise this may become costly, as inaccurate test results may generate poor modelling practice.

An overview paper by D.J. Ewins on model correlation can be found here: Link

Remember that the way you understand a situation is your mental model and is therefore based on a set of assumptions.

For example:

  • When not measuring all data synchronously, i.e. in one go, you assume the system to be linear and its operation to be in a steady state.
  • When recording data, if you reduce data to:
    • RMS values, you lose all spectral information and, hence, assume that there is no value in this data (see the sketch after this list).
    • Spectral data, you lose all time data and, hence, assume that operation is harmonic.
  • If you only measure data once, you assume that you have perfect repeatability of your test work and that its operation is time invariant, i.e. in a steady state and perfectly linear.
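As a small illustration of the RMS point in the list above, the NumPy sketch below (all signal values invented) builds two signals with identical RMS but entirely different spectral content; the RMS reduction cannot tell them apart, while the spectrum can.

```python
import numpy as np

fs = 1000.0                     # sample rate [Hz], invented for the example
t = np.arange(0.0, 1.0, 1.0 / fs)
a = np.sin(2 * np.pi * 50 * t)   # 50 Hz tone
b = np.sin(2 * np.pi * 200 * t)  # 200 Hz tone

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print(rms(a), rms(b))            # both ~0.707: RMS cannot separate them

# The amplitude spectra, by contrast, identify each signal immediately.
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
print(freqs[np.argmax(np.abs(np.fft.rfft(a)))])  # 50.0
print(freqs[np.argmax(np.abs(np.fft.rfft(b)))])  # 200.0
```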

Correlating one simulation with another makes sense, for example, to:

  • Compare a numerical solution to an analytic solution.
  • Compare different ways of modelling a phenomenon, e.g. the use of constraint equations versus physical modelling of a mechanism.
  • Squeeze the model size down as much as possible for optimisation runs, learning how coarse a model you can make without sacrificing the details you need.
  • See whether different material models, or the uncertainty span of the material data, have an effect on results, e.g. when biomaterials are a concern.

When correlating, it is usually good practice to start simply. A correlation can be made for many things, including:

  • Weight
  • Physical dimensions
  • Static deflection
  • Natural frequency and mode shape

There is very little chance that a mode shape will be correct if the object weight and dimensions are not accurately captured.

Note that several Test/Simulation natural frequencies can be matched to good accuracy, without any guarantee of improving deflection/mode shapes. Such correlation is 'fool's gold', as it involves only a subset of the model information.

Correct modelling QA requires correlation of deflection/mode shapes, which in turn requires a matching of test and simulation geometry.
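The text does not prescribe a particular metric, but one widely used choice for mode shape correlation is the Modal Assurance Criterion (MAC). A minimal sketch follows, assuming test and simulation shapes are sampled at the same matched points (the geometry-matching requirement above).

```python
import numpy as np

def mac(phi_test: np.ndarray, phi_sim: np.ndarray) -> np.ndarray:
    """Modal Assurance Criterion between two mode shape sets.

    Both arrays have shape (n_points, n_modes), sampled at the same
    physical points. Returns an (n_modes, n_modes) matrix where values
    near 1 indicate well-correlated shape pairs.
    """
    numerator = np.abs(phi_test.conj().T @ phi_sim) ** 2
    denominator = np.outer(
        np.sum(np.abs(phi_test) ** 2, axis=0),
        np.sum(np.abs(phi_sim) ** 2, axis=0),
    )
    return numerator / denominator

# Identical shapes give 1.0 on the diagonal (shape values here are invented):
phi = np.array([[1.0, 1.0], [2.0, -1.0], [3.0, 0.5]])
print(np.round(mac(phi, phi), 3))
```

A high MAC diagonal together with matched frequencies is a much stronger statement than matched frequencies alone, which is precisely the 'fool's gold' point above.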

PreTest analysis is a technique in which physical sensor positions on an object are identified using simulated data and simulation model geometry.

Our experience, somewhat surprisingly, is that PreTest analysis pays off, even when later work shows that the simulation model used was completely wrong.

In short, a PreTest analysis made before a measurement is worth the bother as it:

  • Improves Test model geometry when a CAE model geometry is re-used
  • Synchronises Test-CAE units and coordinate systems when the test follows standards set in upstream work.
  • Improves the observability of your test objective. For example, it makes modes easier to distinguish from each other, something that has a large payoff when correcting the simulation model.
  • Helps you identify whether something unexpected is going on if upstream results are not in the same ballpark as those measured.
  • Is more effective than simple guesswork on where to measure. In fact, our experience is that the number of test positions can usually be cut down by ~25%, and it reduces the work required for identifying excitation positions.

A test for correlation must be made with care, as it will serve as a reference for model correlation. Things the test engineer must be concerned with include:

  • Documenting the actual sensor position on the test object (which must be done with a higher level of accuracy than when troubleshooting test geometry).
  • Using test sanity checks such as reciprocity and repeatability in the measurement process.
  • Carefully suspending the test object so that its boundary conditions can be mimicked in the simulation model. Free-free conditions should be approximated when rigid body modes fall below 1/10 of the first natural frequency that involves deformation (a quick check of this rule is sketched below).
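The 1/10 rule in the last bullet is straightforward to encode as a pre-test sanity check; the sketch below uses invented example frequencies.

```python
def suspension_is_soft_enough(rigid_body_freqs, first_flexible_freq,
                              ratio=0.1):
    """Free-free rule of thumb: every rigid body mode should fall below
    `ratio` (default 1/10) of the first deformation mode frequency."""
    return max(rigid_body_freqs) < ratio * first_flexible_freq

# Rigid body modes at 2-6 Hz against a first flexible mode at 120 Hz:
print(suspension_is_soft_enough([2.1, 3.4, 5.8], 120.0))  # True
```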

Updating should only be made once the assumptions upon which the model relies have been verified.

This is because finding a correct model for your problem is an inverse problem, i.e. a type of problem where you know the answer is 4 and you must find the question.

Therefore, you can always get the right answer for the wrong reason. Consequently, an inverse approach does not reduce risk in a new situation.
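A one-line mechanical example of getting the right answer for the wrong reason: an undamped single degree-of-freedom system has natural frequency f = sqrt(k/m)/(2*pi), so scaling stiffness and mass by the same factor leaves the frequency untouched. The numbers below are invented to show two different 'models' matching the same measured value.

```python
import math

def natural_frequency(k: float, m: float) -> float:
    """Undamped SDOF natural frequency f = sqrt(k/m) / (2*pi) [Hz]."""
    return math.sqrt(k / m) / (2.0 * math.pi)

print(natural_frequency(k=1.0e6, m=10.0))  # ~50.33 Hz
print(natural_frequency(k=2.0e6, m=20.0))  # ~50.33 Hz: same answer from
# two different models - the matched frequency alone cannot tell them apart.
```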

Keep in mind that what is implied by the phrase 'a correct model' may vary from one situation to the next. For example, during the early design phase it is only necessary to get some of the rough system properties right, while in a later design phase you would want more precision.

 

Why Automate

Faster, Lighter, Higher, Better, ...

 

"I couldn't tell you in any detail how my computer works. I use it with a layer of automation."

Conrad Wolfram

 

 

Looking at how CAE work is done and comparing it to production shows that the current CAE work style is that of the artisan rather than that of the modern factory. The way one artisan works may differ from that of another artisan. Differences in software may also affect results.

It is evident that the above affects simulation and test reliability. It most likely affects efficiency as well.

One way to reduce such undue influence is to automate the application of design procedures, much in the way textiles and other production began to be automated in the 16th century.

An often-faced problem in automation is the definition of a workflow process, i.e. the ability to see the procedure as a path and perhaps to find shortcuts along it. There may also be the 'Spinning Jenny' discussion, i.e. a fear that jobs may be lost.

A common objection is that it may be difficult to automate 100%. This may be true, but it in no way reduces the benefit of automating part of the workload, nor does it prevent the workflow from being modified to allow 100% automation.

The latter insight is important. To effectively utilise new technology, one cannot let old technology dictate the terms. In a sense, letting old technology set requirements for new technology would be like insisting on leaving space for the horse in the tractor cab when ploughing.

Making full use of new technology implies finding new ways of thinking and identifying new work styles.

The major reasons for wanting to automate are:

  • Learning. In automating, you make a systematic description of your design process. By doing so, you define rules and inputs. This work will in itself help you improve the workflow, as it forces you to think and reflect on what you do.
  • Safety. A documented process can be reviewed and QAed in many ways that are impossible for manual work.
    • Corporate learning. Any process can be refined and improved.
    • Knowledge dissemination.
      • Most products require the expertise of more than a single person.
      • Defining and automating your workflow allows collaboration at new levels.
  • Systematic. Building one analysis function at a time allows the re-use of each function.
  • Efficiency. Efficiency is more of a result than an actual driving force.

As discussed later, you do need efficiency to tap into design space exploration and optimisation.

However, the author's opinion is that the fundamental reason for efficiency lies in shifting time from carrying out the work to analysing and reflecting on what you are doing, instead of drowning in the work itself.

In the 1990s, 85% of man-hours were spent on model creation, 10% on model execution and 5% on reporting. With automation, it should be possible to shift the bulk of the man-hours so that 85% goes to reporting.

Looking at the above, it is obvious that the same thoughts have already been applied at the tool level (FE solver, Pre/Post, CAD etc) – why not also extend the same methodology and background reasoning to the design workflow?

Please note that this text mostly discusses CAE, but the advice applies equally well to testing.

Automation always carries a layer of abstraction and can be expressed as Y = H*X, where X is the input, Y is the output and H is what happens in your workflow. H may involve one or many execution steps.

To clarify and define, automation is a process where a set of input variables that define a product are used to create one or more simulation models that are executed in parallel, serial or a combination of both, to produce outputs that define product performance.
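A minimal sketch of this definition, with all step functions and variable names invented for illustration: the workflow H is a composition of execution steps, each mapping a dictionary of product-defining inputs to new outputs.

```python
from typing import Callable, Dict, List

Step = Callable[[Dict[str, float]], Dict[str, float]]

def workflow(steps: List[Step]) -> Step:
    """Compose execution steps into a single mapping X -> Y (the 'H')."""
    def h(x: Dict[str, float]) -> Dict[str, float]:
        data = dict(x)          # X: the product-defining inputs
        for step in steps:      # serial execution; parallel is also possible
            data.update(step(data))
        return data             # Y: inputs plus derived performance outputs
    return h

# Hypothetical steps for a plate: compute its mass, then check a criterion.
def mass_step(d):
    return {"mass": d["length"] * d["width"] * d["thickness"] * d["density"]}

def check_step(d):
    return {"mass_margin": d["mass_limit"] - d["mass"]}

h = workflow([mass_step, check_step])
y = h({"length": 2.0, "width": 1.0, "thickness": 0.01,
       "density": 7850.0, "mass_limit": 200.0})
print(y["mass"], y["mass_margin"])  # 157.0 43.0
```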

A useful analogy for workflow automation is that of printing a document. The inputs in this case would be a document containing a mixture of text and pictures, plus the particular printer settings (page type, single or double page, colour or BW, etc). Printing the document involves converting data from the document format to a printer-friendly format (PostScript, PCL, etc), interfacing with a printer driver, establishing contact with the printer, possibly across a network, and then starting and monitoring the print job. Without a doubt, at one time this was a rather advanced endeavour.

The author was told a story of a colour print job that was made in the early 1980s using a modem connection to a printer situated 400 km away. This job required one month to create a successful colour picture on paper (to no avail, as it was discovered the end customer was colourblind).

The point here is that printing nowadays is possible for most people because we have automated the printing process and, in doing so, improved the general situation.

It is also possible to automate portions of engineering work. The simplest type of design work is to check and approve a design, in which you simply verify that the design passes a set of criteria and make no effort to improve it. This mode of working is sufficient for cases where components are assembled or specific design guides must be followed.

Advantages to automating the workflow for the check and approve design process are:

  • The execution of the workflow can be QAed to ensure that results are correct and follow company procedure.
  • Execution will be faster.
  • Anyone can execute a fully automated workflow.
    • Company experts would only have to automate the workflow rather than execute it. This implies that expert resources will be free to further develop the CAE tool chest or use a mature workflow to save costs.
    • Combining workflows from various disciplines allows knowledge dissemination, as one expert can enable another or both experts can enable a non-expert.

The reduced lead time for the actual CAE process using automation is staggering. In the late 1990s, the author saw examples where FE models of products defined by ~100 inputs could be generated in ~60 seconds rather than the 14 days it would take to generate them manually using identical CAE tools.

With 100 inputs, the risk of making mistakes when manually entering them in the CAE tool is obvious. Automation is not only faster, but also removes that risk.

Having automated the workflow, the next step is to evaluate various ways of executing it in search of an improved design.

Methods to exploit the automated workflow are:

  • Table Execution (a minimal sketch of this method follows below)
  • Optimisation
  • Design of Experiments
  • Multi Disciplinary Analysis
  • Random Experiments
More on these Methods is found here
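As a hedged illustration of Table Execution, the sketch below enumerates every combination of a few discrete design variables and runs an invented stand-in for the automated workflow on each row, sorting the table so the best designs come first, much as in the story told later in this section.

```python
from itertools import product

# Discrete design variables (invented standard sizes):
pipe_diameters = [0.10, 0.15, 0.20, 0.25]      # [m]
plate_thicknesses = [0.005, 0.008, 0.010]      # [m]
stiffener_positions = [0.2, 0.4, 0.6, 0.8]     # relative tank height

def run_workflow(d, t, p):
    """Stand-in for the automated CAE workflow; returns the smallest
    separation margin over all disturbance frequencies [Hz]. The formula
    is a placeholder, not physics."""
    return 100.0 * d + 2000.0 * t + 5.0 * p

table = [
    {"d": d, "t": t, "p": p, "margin": run_workflow(d, t, p)}
    for d, t, p in product(pipe_diameters, plate_thicknesses,
                           stiffener_positions)
]
table.sort(key=lambda row: row["margin"], reverse=True)  # best designs first
print(len(table), "cases; best:", table[0])
```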

The author’s experience from running different types of optimisation on a consulting basis over a ten year period is that:

  • The end result achieved with time constraints and a fixed budget improves with the number of design iterations you manage to complete during the project time.
  • Workflow automation does not cost much once you are in the habit of thinking in terms of input variables, workflow and design outputs.
  • Workflow automation does improve reliability – it even pays off at execution counts as low as three simulation runs.
  • Methodical design space exploration can make a huge difference to the end result.
  • Multiple methods should be available. DoE/RSM is very useful, but it does not always work. Good optimisers capable of finding a global optimum should also be available.

Last, a note: the author, Claes Fredö, analysed a particular design involving a rotating machine over many years at annual or bi-annual intervals.

The design objective was to create as much Separation Margin (as wide a difference) as possible between two known machine disturbance frequencies and the natural frequencies of a connecting structure that involved some piping plus a tank.

Addressing the problem manually, it was possible to find ways to avoid the first disturbance frequency but not the second, or vice versa. The problem was analysed several times for different machines and always had the same outcome. It was highly annoying to never be able to achieve both Separation Margins.

Finally, the end customer stated that we really had to find a way to handle four disturbance frequencies this time, as the design should cater to both 50 Hz and 60 Hz installations.

The author decided to use the multi-disciplinary tool chest. As the model creation part and the problems were known, an automated workflow could be set up rather quickly. Taguchi screening revealed the dominant variables and the design was quickly pushed as far as possible with respect to pipe and tank dimensions and the position of a tank's internal structural element. This exercise revealed that a countermeasure had to be introduced, i.e. a ring stiffener had to be added to the tank.

The model was updated, new design inputs were added and a new Taguchi design screening was made to pinpoint dominant variables. Many of the design variables in this case were discrete variables, as pipe and plate dimensions were standard. A Table Execution case covering all design possibilities was set up. The table involved about 2400 cases. Simulation per case was quick, as the analysis was simple and the model small. The Table Execution was therefore made overnight. Avoiding four excitation frequencies was no problem whatsoever and very satisfactory designs could be identified.

A consultant - admittedly, very pleased with himself - sorted the table outputs, mailed the end customer an Excel sheet with the 100 best designs, and told him, "Simply choose what you prefer from the top down and have a nice day."

Start to finish, the case involved a reasonable effort, as it was handled within two days' time.

The moral of the story: the end customer rummaged through the Excel sheet and, in the end, was forced to admit to himself that the variation he could see across the design variables in the (incomplete) table meant that it was impossible for him to manufacture the item to the tolerance required to avoid the four excitation frequencies.

In the end, it does not matter that you can design 'it', if 'it' cannot be reliably produced.

 
