Newsletter EnginSoft - Year 7 n°3 - Autumn 2010
EnginSoft Flash
EnginSoft Flash
Creativity is one of the best ingredients for innovation, says Mr Nazario Bellato of Magneti Marelli Powertrain Italy in his recent interview with EnginSoft, which we are proud to present to our readers in this latest edition of the Newsletter. Mr Bellato explains the principles of the complex technological processes that lead to successful products: each competence contributes to a unique final result, and strong competitive advantage is based on many reliable technical results!

Ing. Stefano Odorizzi - EnginSoft CEO and President
At EnginSoft, each engineer and professional contributes his or her knowledge to our Network of engineering expertise, which has expanded from Italy to France, Spain, Sweden, Germany, the UK, Greece and the USA since 2006.
On 8th December 2010, Stanford University will host a unique
Workshop on Optimization as part of its affiliation with
Cascade Technologies and EnginSoft. We ask our readers to
refer to the Event Calendar on page 66 for more information.
Initiatives like these are the outcomes of years of knowledge
exchange, personal efforts, trust and close collaboration, or what we like to call: creative international networking!
Fall 2010 sees the EnginSoft teams working intensively on the preparations for the EnginSoft International Conference, 21st-22nd October 2010, Fiera Montichiari/Brescia, Italy. The event is a culmination of knowledge of engineering, CAE, simulation and Virtual Prototyping, brought together from Italy and around the world.
An “excerpt” of this knowledge is presented to our readers on
the following pages. Articles this time include among others:
• Aerodynamic and acoustic optimization of radial fans by the Technical Faculty of the Friedrich-Alexander-University Erlangen-Nuremberg;
• Combustion noise prediction in a small diesel engine by
Instituto Motori CNR and Università di Napoli;
• Illumination analysis and design optimization of an
automotive speed meter by DENSO Corporation Japan;
• Simple optimization of gradated biomaterial scaffolds by Aalto University Foundation, Finland;
• Reliability based structural optimization of an aircraft
wing by Istanbul Technical University;
• Model of a multimass hyperelastic system and its
parametric identification by Tula State University, Russia;
• An introduction of Feat Group, a world leader within the
field of steel forging.
Moreover, we wish to inform our readership about the latest
software upgrades and applications with:
• an outlook on ANSYS 13;
• an example of how Ansoft Maxwell 2D/3D, RMxprt and ANSYS are used to predict the functioning of an electric motor;
• a benchmark by Ansaldo Energia with ANSYS EKM
(Engineering Knowledge Manager);
• the ESAComp 4.1 Release Notes and an example for the
analysis of composite materials;
• an introduction of the software Coldform 2010;
• FTI’s Forming Suite for cost optimization and forming
simulation;
• a presentation of Kraken, a reservoir simulation
postprocessor developed by Engineering Simulation and
Scientific Software (ESSS) Brazil;
• a simple parallel implementation of a FEM solver in Scilab;
• a summary and emphasis on the importance of
Simulation-Quality Material Data by DatapointLabs;
• an article about CADdoctor for product data quality in PLM.
Our corporate news features, among others, EnginSoft's new membership of E2BA, the Energy Efficient Buildings Association, and our certification with EMAS, the European Eco-Management and Audit Scheme that evaluates the environmental performance of enterprises. After two technical articles, the Japan Column tells us about the culture of wood and Wood MONODUKURI in the land of the rising sun.
The Editorial Team and EnginSoft look forward to welcoming
our readers to this year’s Conference and the beautiful Lake
Garda region - Please meet us to discuss opportunities and
let us share our knowledge to foster innovation!
Stefano Odorizzi
Editor in chief
Sommario - Contents
CASE STUDIES
6  Combustion Noise Prediction in a Small Diesel Engine Finalized to the Optimization of the Fuel Injection Strategy
13  Aerodynamic and Acoustic Optimization of Radial Fans
16  Electromagnetic Interference: an Advanced FEM Calculation Approach
19  Reliability Based Structural Optimization of an Aircraft Wing
22  Simple Optimization of Gradated Biomaterial Scaffolds made of Calcium Phosphates
SOFTWARE NEWS
24  ANSYS 13: Preview
26  ANSYS products for the design and simulation of electric motors: the RMxprt-Maxwell-ANSYS Mechanical vertical workflow
29  The New ANSYS Frontier Product: ANSYS EKM (Engineering Knowledge Manager)
32  ESAComp Version 4.1 - A fundamental tool for the design of composite structures
33  ESAComp 4.1: New Features
35  Key features of the Coldform® software
36  Cost Estimation and Formability Assessment
38  Pushing Reservoir Data Handling to New Frontiers
IN DEPTH STUDIES
41  A Simple Parallel Implementation of a FEM Solver in Scilab
46  The Need for "Simulation-Quality" Material Data
48  Model of a Multimass Hyperelastic System and its Parametric Identification
INTERVIEWS
50  An Interview with Mr Nazario Bellato, Simulation Manager of Magneti Marelli Powertrain
The EnginSoft Newsletter editions contain references to the following
products which are trademarks or registered trademarks of their respective owners:
MAGMASOFT is a trademark of MAGMA GmbH. (www.magmasoft.com)
ANSYS, ANSYS Workbench, AUTODYN, CFX, FLUENT and any and all
ANSYS, Inc. brand, product, service and feature names, logos and slogans are
registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries in the
United States or other countries. [ICEM CFD is a trademark used by ANSYS,
Inc. under license]. (www.ANSYS.com)
Forge and Coldform are trademarks of Transvalor S.A.
(www.transvalor.com)
modeFRONTIER is a trademark of ESTECO EnginSoft Tecnologie per
l’Ottimizzazione srl. (www.esteco.com)
LS-DYNA® is a trademark of Livermore Software Technology Corporation.
(www.lstc.com)
Flowmaster is a registered trademark of The Flowmaster Group BV in the
USA and Korea. (www.flowmaster.com)
SCULPTOR is a trademark of Optimal Solutions Software, LLC
(www.optimalsolutions.us)
ESAComp is a trademark of Componeering Inc.
(www.componeering.com)
AdvantEdge is a trademark of Third Wave Systems. (www.thirdwavesys.com)
Grapheur is a product of Reactive Search SrL, a partner of EnginSoft.
For more information, please contact the Editorial Team.
JAPAN CAE COLUMN
54  Illumination Analysis and Design Optimization of an Automotive Speed Meter
57  The Culture of Wood
61  Elysium's CADdoctor Enriches Product Data Quality in PLM
RESEARCH AND TECHNOLOGY TRANSFER
62  EnginSoft Joined the E2BA
TESTIMONIAL
63  FEAT Group: We forge all you need
EVENTS
65  EnginSoft took part in the European FORGE Users' Meeting, 7-8 June 2010, Sophia Antipolis, France
66  EnginSoft contributes to the LION5 Conference
66  EnginSoft Event Calendar

Newsletter EnginSoft
Year 7 n°3 - Autumn 2010
To receive a free copy of the next EnginSoft Newsletters, please contact our Marketing office at: [email protected]
All pictures are protected by copyright. Any reproduction
of these pictures in any media and by any means is
forbidden unless written authorization by EnginSoft has
been obtained beforehand.
©Copyright EnginSoft Newsletter.
Advertisement
For advertising opportunities, please contact our
Marketing office at: [email protected]
EnginSoft S.p.A.
24124 BERGAMO Via Galimberti, 8/D
Tel. +39 035 368711 • Fax +39 0461 979215
50127 FIRENZE Via Panciatichi, 40
Tel. +39 055 4376113 • Fax +39 0461 979216
35129 PADOVA Via Giambellino, 7
Tel. +39 049 7705311 • Fax +39 0461 979217
72023 MESAGNE (BRINDISI) Via A. Murri, 2 - Z.I.
Tel. +39 0831 730194 • Fax +39 0461 979224
38123 TRENTO fraz. Mattarello - via della Stazione, 27
Tel. +39 0461 915391 • Fax +39 0461 979201
www.enginsoft.it - www.enginsoft.com
e-mail: [email protected]
COMPANY INTERESTS
CONSORZIO TCN
38123 TRENTO Via della Stazione, 27 - fraz. Mattarello
Tel. +39 0461 915391 • Fax +39 0461 979201
www.consorziotcn.it
EnginSoft GmbH - Germany
EnginSoft UK - United Kingdom
EnginSoft France - France
EnginSoft Nordic - Sweden
Aperio Tecnologia en Ingenieria - Spain
www.enginsoft.com
ASSOCIATION INTERESTS
NAFEMS International
www.nafems.it
www.nafems.org
TechNet Alliance
www.technet-alliance.com
RESPONSIBLE DIRECTOR
Stefano Odorizzi - [email protected]
PRINTING
Grafiche Dal Piaz - Trento
The EnginSoft NEWSLETTER is a quarterly
magazine published by EnginSoft SpA
Authorization of the Court of Trento n° 1353 RS dated 2/4/2008
ESTECO EnginSoft Tecnologie per l’Ottimizzazione
34016 TRIESTE Area Science Park • Padriciano 99
Tel. +39 040 3755548 • Fax +39 040 3755549
www.esteco.com
Combustion Noise Prediction in a Small Diesel Engine Finalized to the Optimization of the Fuel Injection Strategy
Paper published at the SAE Noise and Vibration Conference and Exhibition, May 2009, St. Charles, IL, USA, SAE Paper 2009-01-2077.
The worldwide demand for engine optimization, in terms of power output, produced pollutants and fuel consumption, is continuously increasing [1]. Growing attention is also being devoted by automobile manufacturers to the NVH characteristics of the whole vehicle and, for this reason, the control of the noise produced by the combustion process in a diesel engine is considered a very important topic [2,3]. Combustion noise reduction is therefore nowadays considered as an additional factor in engine development, alongside performance, fuel consumption and emissions.
The engine under investigation in the present work is a naturally
aspirated, light-duty diesel engine, equipped with a mechanical
Fuel Injection System (FIS) and utilized in non-road
applications. The engine is currently under development; moreover, a new prototype, equipped with a common-rail (CR) FIS, is going to be installed in small city cars. It is well known that the use of
a CR-FIS gives the possibility to respond to the noise emission
legislation and market demand through modulation of the
injection parameters. The above improvements are however more
difficult to obtain on small displacement engines, because of
the complexity and cost of the FIS itself [4,5]. In addition, a
long development phase is usually required at the test bench in
order to define the optimal injection strategies in different
engine operating conditions.
Based on the above considerations, the main scope of the
present work is the development of an optimization procedure
that is able to theoretically determine the best injection
strategies compatible with high performance and low noise
levels, while reducing the development phase and the time-to-market of the engine.
To fulfill the above goal, a multi-objective optimization tool was
employed [6-8]. This tool was able to automatically vary the
control parameters, and to compare the related performances.
The optimization tool however required the development of
proper simulation models of the engine. It is currently possible
to simulate the physical and chemical processes occurring in the
operation of internal combustion engines by using appropriate
numerical codes (1D or 3D). 3D simulations can predict, for
example, spray behavior, mixture formation, combustion process
and toxic emissions [9]. However, they require high
computational times, even when high-speed computers are
employed. 1D models, on the contrary, are able to gain
information on the overall engine behavior [10] and, due to the
reduced computational efforts, are better suited to be employed
within an optimization procedure [11].
Concerning the prediction of noise emission, either detailed or
simplified models can be utilized. Detailed approaches are
usually based on the employment of FEM-BEM codes, which
include the in-cylinder pressure cycle as an excitation on the
engine structure [12,13]. A number of alternative and simplified
procedures are also available in the current literature, based on
a proper processing of the computed pressure cycle [14-16].
In this paper, a 1D model was chosen to predict the engine
performance and a recent methodology [14], based on the
decomposition of the 1D computed pressure signal, was utilized
to estimate the combustion radiated noise. The whole activity
was developed in four main steps:
1. an experimental campaign was initially carried out to gain
information on performance and noise levels on the engine
and to acquire the data required to validate the 1D and the
combustion noise models;
2. the 1D simulation of the tested engine was carried out with the GT-Power® code [17], in order to estimate the in-cylinder pressure cycles and the overall performance;
3. the methodology reported in [14] was included within a
Matlab® routine to estimate the noise level. Some
coefficients included in the above correlation were properly
tuned to be in agreement with the experimental data;
4. an optimization process [18] was finally carried out with the
modeFRONTIER® code to identify an optimal injection
strategy of the prototype engine equipped with the CR-FIS.
The objectives established were the maximization of the
engine performance and the reduction of the noise emission,
at a constant load and rotational speed.
In the following, a description of each of the above steps is
presented and some conclusions are finally drawn concerning, in
particular, the trade-off between the two objectives, requiring
the selection of a compromise solution. The latter was identified
through the employment of a “Multi-Criteria Decision Making”
(MCDM) tool, provided by the modeFRONTIER® code.
Experimental Analysis
In this study, a naturally aspirated, four stroke, two valve, single
cylinder diesel engine (505 cm3 displacement) was
experimentally investigated. The engine test bed included an
electrical dynamometer, the data acquisition and control units, as well as emission and acoustic measurement equipment.
Fig. 1 - Engine and test-bench laboratory

PERFORMANCE TESTS - A programmable electronic control unit (PECU), based on a dSpace processor, was used to manage the engine operating conditions. The in-cylinder pressure was
detected by a piezoelectric pressure transducer, connected to
the AVL IndiModul 621. The air flow rate was also estimated
through the measurement of the fuel flow rate and Air/Fuel
Ratio. The latter was derived from the analysis of the exhaust
gas composition. Engine tests were carried out at full load, in a
range of engine speed going from 1400 to 3000 rpm.
ACOUSTIC TESTS - In order to measure the radiated noise with
reasonable accuracy, the acoustic characteristics of the
environment must be known. On an acoustic basis, the ideal
environment is a space with no reflecting surfaces and no
background noise. In practical terms the ‘best’ environment, for
engine applications, is an open-air site with one hard reflecting
surface (the ground) and no other obstructions for at least 50m
from the noise source and microphone positions. Moreover, the
background noise level should be at least 10 dB (preferably 20
dB) below the measured one.
Engine noise measurements can also be made in 'non-anechoic' test cells with acoustic absorption. The latter set-up was followed for the present investigation, where certain important parameters (reverberation time of the room and background noise) were taken into account, as prescribed by the applied ISO Standards [19]. To this aim, the engine was acoustically isolated from the electrical dynamometer through properly designed acoustic absorption panels (see the white shields shown in figure 1).

Fig. 2 - In-cylinder pressure and accelerometer signal
The radiated noise of the engine surfaces was measured by a
free-field microphone located 1 m away from the engine block,
to avoid near field effects, and in a position away from the
intake and exhaust systems. The purpose was to minimize the
effects of flow noise sources, so that the major contribution
considered is the one coming from the engine block. The
calibration of the microphone was performed before each test by
means of a pistonphone. Simultaneously, an accelerometer was
positioned on the engine head in order to correlate the
vibration, microphone and pressure signals. The acquisition of the noise and vibration signals was performed at all investigated engine speeds, at a sampling frequency of 48 kHz. In this way, a useful bandwidth wider than 20 kHz, free from aliasing effects, was available. In order to synchronize the measurements, a pulse signal supplied by an optical encoder was used as a reference.

Fig. 3 - 1D scheme of the tested engine in GT-Power
Figure 2 reports an example of the acquired accelerometer
signal, phased with the in-cylinder pressure, during both
combustion and motored conditions, at 3000 rpm. In both
cases, a strong vibration peak is well evident at the top dead
center, due to the piston slap phenomenon. Some reduced spikes
can also be identified, in motored conditions, when the in-cylinder pressure approximately crosses the crankcase pressure
(around the atmospheric level). This is probably an indication of
the occurrence of some piston movement around its pin,
captured as a small vibration signal. The same does not happen
when the piston is loaded by combustion pressure. The pressure
profiles also show the presence of a strong disturbance at the
inlet valve closure (IVC) event. The same spike is indeed absent
on the accelerometer signal.
Additional experimental data will be presented in the following
sections in comparison with the numerical results.
Fig. 4 - Comparison on the pressure cycle at 1400 rpm
Fig. 5 - Comparison on the pressure cycle at 2200 rpm
Fig. 6 - Comparison on the pressure cycle at 3000 rpm
One-Dimensional Simulation
The well-known GT-Power one-dimensional simulation code was
employed to predict the performance of the investigated engine,
schematized as shown in figure 3. The 1D code solves the mass,
momentum and energy equations in the ducts constituting the
intake and exhaust system, while the gas inside the cylinder is
treated as a zero-dimensional system. Concerning the modeling
of the combustion process, a classical Wiebe equation was
utilized to compute the heat release rate in the base engine.
Proper values of the combustion process duration during both
premixed and diffusive phases were specified.
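A prescribed burn profile of this kind can be pictured with a few lines of code. The sketch below evaluates a single Wiebe law with commonly used efficiency and shape constants (a = 6.9, m = 2); it is only an illustration of the functional form, not the GT-Power implementation nor the dual premixed/diffusive set-up actually used in the paper.

```python
import numpy as np

def wiebe_burn_fraction(theta, theta_soc, delta_theta, a=6.9, m=2.0):
    """Cumulative mass fraction burned according to a single Wiebe law.

    theta       : crank angle array [deg]
    theta_soc   : start of combustion [deg]
    delta_theta : combustion duration [deg]
    a, m        : efficiency and shape constants (typical values assumed here)
    """
    x = np.clip((theta - theta_soc) / delta_theta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

# illustrative evaluation: burn fraction and normalized heat release rate
theta = np.linspace(-30.0, 90.0, 1000)                    # deg after TDC
xb = wiebe_burn_fraction(theta, theta_soc=-5.0, delta_theta=60.0)
heat_release_rate = np.gradient(xb, theta)                # [1/deg]
print(f"50% of the fuel is burned at {theta[np.searchsorted(xb, 0.5)]:.1f} deg after TDC")
```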
Figures 4-6 show the comparison of the computed and
experimental pressure cycles at three different engine speeds.
The model was able to correctly reproduce the pressure evolution along the compression, combustion and expansion phases.
Figures 7 and 8 instead report the comparison of the inlet air
and brake power through the whole range of investigated engine
speeds. Once again a good agreement between experimental and
numerical results was reached, especially concerning the air flow
rate (fig. 7). The good matching obtained with the experimental
data allowed the extension of the 1D analysis to the prototype
engine, equipped with the CR-FIS. In this case, however, the employment of a Wiebe equation was no longer adequate: in order to reproduce the effects of the modulation of the injection parameters, a direct modeling of the spray behavior, fuel-air mixing and combustion process was required. For this reason,
inside the optimization process, the GT-Power built-in DIJet
(Direct-Injection-Jet) model was utilized, as explained in the
next sections.
Combustion Noise Estimation
It is well known that the combustion noise generation
mechanism is a complex phenomenon, including non-stationary
and non-linear effects. In-cylinder pressure gradients during
combustion process are considered the main excitation forces
[20] on the cylinder liner and engine head. Additional
contributions come, however, through the excitations exerted on
the crankshaft by the inertia forces occurring as a consequence
of the rotation and alternative motion of various engine
components. While the first contribution depends on various
operating parameters (engine speed, load, injection phasing and
strategy, etc.), the second term is usually related solely to
engine speed. Of course, along the sound propagation pattern
from its generation inside the cylinder up to the noise
acquisition location (usually at 1 meter from the engine block),
the engine structure itself exerts a strong influence [21], in
terms of natural vibration frequencies and vibration modes. The
structure behavior is often synthesized in terms of a structural
attenuation curve [16]. However, a more recent methodology
was proposed [14] for the prediction of the overall combustion
noise, which includes in the correlation a strict dependency on
the engine operating conditions and injection strategy.
The above described approach was used in this work. The main idea behind this technique is to decompose the total in-cylinder pressure signal (ptot) into three main contributions: the compression-expansion (pmot), combustion (pcomb) and resonance (pres) pressures:
ptot = pmot + pcomb + pres    (1)

The first contribution (also referred to as the pseudo-motored signal) is only related to the volume variation and was used as a reference signal. It was determined by a direct in-cylinder pressure acquisition during a fuel switch-off operation. The sum of the combustion and resonance pressures is also referred to as the excess pressure (pexcess) and was determined as the difference between the total and the pseudo-motored pressures:

pexcess = pcomb + pres = ptot - pmot    (2)

This contribution is of course related both to the fuel burning and to the high-frequency resonant pressure fluctuations induced by the pressure gradients during the combustion process [15]. To separate these two terms, a high-pass filtering of the total pressure FFT was carried out, as shown in figure 9. Above a proper cut-off frequency (about 4.5 kHz), in fact, the pressure amplitudes (expressed in dB) tend to increase, indicating the occurrence of a resonance phenomenon [14]. An IFFT procedure was then applied to the high-pass filtered signal, allowing the three contributions in eq. (1) to be reconstructed. They are compared in figure 10. Despite the presence of the previously discussed high-frequency amplitudes, the resonant pressure was significantly lower than the other contributions. Nevertheless, it may still exert a non-negligible effect on the overall noise.

Fig. 9 - Total pressure spectrum (in dB), at 1400 rpm
Fig. 10 - Decomposition of the total pressure into motored, combustion and resonance contributions, at 1400 rpm

The three decomposed pressures were utilized to compute two characteristic indices, I1 and I2, defined in [14] by eqs. (3) and (4). The I1 index is a function of the maximum pressure gradients of the combustion contribution occurring after the pilot, (dpmax1/dt)comb, and after the main injection, (dpmax2/dt)comb, non-dimensionalized by the maximum pressure gradient of the pseudo-motored pressure, (dpmax/dt)mot; in the case of a single-shot injection, a single term is of course present in the numerator of eq. (3). The I2 index takes into account the acoustic energies (∫p² dt) associated with the resonance and motored pressure signals. An additional index, In, is finally defined in [14] (eq. (5)) to account for the mechanical noise contribution, which is related, as stated, to the engine speed only: n is the engine speed and nidle the idle rotational speed (fixed at 1000 rpm). Based on the above definitions, the Overall Noise (ON) is finally computed through the correlation (6) given in [14], the Ci coefficients being proper tuning constants that depend on the engine architecture and size.

Following the relations (3-6), a Matlab routine was developed to properly process the in-cylinder pressure cycle and to compute the various noise indices and the overall noise. This routine was applied to both the experimental and the GT-Power computed pressure cycles. In the experimental analysis, the motored signal was directly acquired by means of a sudden fuel switch-off maneuver; similarly, to compute the motored pressure, the fuel injection was completely disabled in the numerical analyses.

Fig. 7 - Computed and experimental air flow rate
Fig. 8 - Computed and experimental mechanical power
Fig. 11 - Comparisons on the overall noise

Figure 11 compares the computed ON levels with the ones experimentally measured at the test bench. Some adjustment of the Ci constants was required with respect to the values proposed in [14]. The agreement obtained through the employment of the experimental pressure cycles is satisfactory at each engine speed; a maximum absolute error of about 1.3 dB was found at a medium engine speed. As a consequence of the inaccuracies included in the engine simulation, the agreement at high speed slightly worsens when the predicted pressures are considered. Moreover, the employed zero-dimensional model is unable to take into account the high-frequency contributions of the computed pressure that are strictly related to the resonance phenomenon. Nevertheless, the satisfactory agreement shown justifies the employment of the recalled methodology within the optimization procedure described in the next section.
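As an illustration of the decomposition and filtering steps described above, the sketch below separates a pressure trace into the contributions of eqs. (1)-(2) with an FFT high-pass filter at the 4.5 kHz cut-off mentioned in the text. The array names, the 48 kHz sampling rate and the cut-off value are assumptions for the example; it does not reproduce the Matlab routine of the paper.

```python
import numpy as np

def decompose_pressure(p_tot, p_mot, fs=48000.0, f_cut=4500.0):
    """Split the total in-cylinder pressure into combustion and resonance parts.

    p_tot : total pressure trace from a firing cycle            [Pa]
    p_mot : pseudo-motored pressure trace (fuel switch-off)     [Pa]
    fs    : sampling frequency, assumed 48 kHz as in the tests  [Hz]
    f_cut : high-pass cut-off isolating the resonant band       [Hz]
    """
    p_excess = p_tot - p_mot                       # eq. (2): combustion + resonance
    spectrum = np.fft.rfft(p_tot)
    freqs = np.fft.rfftfreq(len(p_tot), d=1.0 / fs)
    spectrum[freqs < f_cut] = 0.0                  # keep only the high-frequency content
    p_res = np.fft.irfft(spectrum, n=len(p_tot))   # resonance contribution
    p_comb = p_excess - p_res                      # combustion contribution, eq. (1)
    return p_comb, p_res
```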
Optimization Procedure
The previously described 1D and combustion noise models constituted the basis for the optimization of the injection strategy of the prototype CR engine. However, as already stated, a more advanced combustion model was required in this case. The latter (the DIJet model) follows the multi-zone Hiroyasu approach [22-25] and is able to describe the fuel injection, break-up, air-entrainment, evaporation and combustion processes. Details on the employed model can be found in [18,26]. The injector characteristics (in terms of hole number and diameter), the injection strategy profile and the timing represent the input data. The values of a number of tuning constants, acting as correction terms in the numerous correlations included in the model, were also assigned. Due to the absence of experimental data on the prototype engine, the tuning constants were identified so as to match the experimental pressure cycles measured on a similar engine, equipped with the same FIS, as reported in [26].

A pilot-plus-main strategy was specified for the CR-FIS at 3000 rpm, at full load conditions. Figure 12 shows how the fuel injection strategy was schematized, based on the definition of three degrees of freedom, namely the start of the pilot injection (SOIP), the dwell time between the pilot and the main injection (DWELL) and the duration of the main injection (MDUR). In order to maintain the same injected fuel mass (22 mg), a constant overall duration (PDUR + MDUR = 18.8°) was specified. Constant values of the needle-lift ramp-up (1.9°) and ramp-down (1.6°), depending on the needle dynamics, were also assigned. This allowed the crank angle position of all the main points of the injection strategy to be computed as a function of the three parameters SOIP, DWELL and MDUR, through the relations (7).

Fig. 12 - Parametric injection strategy
Fig. 13 - Logic scheme of the optimization process within modeFRONTIER®

The logical development of the optimization problem within the modeFRONTIER® environment is explained in figure 13. A number of Transfer Variables objects - together with the three Input Variables SOIP, DWELL and MDUR - were defined based on the relations (7), and were written inside the GT-Power input file (LD500.dat). For each set of the above parameters, a proper script procedure runs the GT-Power code and extracts the in-cylinder pressure (pressure3000rpm), which is required by the Matlab routine computing the overall combustion noise. A multi-objective optimization was thus defined to simultaneously search for the maximum Indicated Mean Effective Pressure (IMEP) and the minimum of the overall noise. To solve this problem, the MOGA-II algorithm was utilized, which belongs to the category of genetic algorithms [27] and employs a range adaptation technique to handle the time-consuming evaluations efficiently.
Figure 14 displays a scatter chart of the optimization procedure
highlighting the Pareto Frontier occurring when the IMEP is
plotted against the ON (a solution is said to be Pareto optimal
if there is no other solution which is better in all objectives). A
trade-off between the two objectives has clearly occurred. The
position of the Base Engine, also shown in the same figure, was
far away from the Pareto Frontier, thus indicating the possibility
to attain a better level of both objectives.
In order to select a single solution among the ones located on the Pareto frontier, the "Multi Criteria Decision Making" (MCDM) tool provided in modeFRONTIER® was employed. It allows
the definition of preferences expressed by the user through
direct specification of attributes of importance (weights) among
the various objectives. Depending on the above relations, the
MCDM tool was able to classify all the Pareto Frontier solutions
with a decreasing rank value.
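The selection logic can be pictured with the short sketch below: the non-dominated (Pareto) designs are extracted first, and they are then ranked by a weighted sum of the normalized objectives, in the spirit of the weights of importance described above. This is only an illustration and does not reproduce the MCDM algorithm implemented in modeFRONTIER®.

```python
import numpy as np

def pareto_mask(imep, noise):
    """True where a design is not dominated (maximize IMEP, minimize noise)."""
    keep = np.ones(len(imep), dtype=bool)
    for i in range(len(imep)):
        better_or_equal = (imep >= imep[i]) & (noise <= noise[i])
        strictly_better = (imep > imep[i]) | (noise < noise[i])
        keep[i] = not np.any(better_or_equal & strictly_better)
    return keep

def rank_pareto_designs(imep, noise, w_imep=2.0, w_noise=1.0):
    """Rank the Pareto designs with a weighted sum of normalized objectives."""
    idx = np.where(pareto_mask(imep, noise))[0]
    f1 = (imep[idx] - imep[idx].min()) / max(np.ptp(imep[idx]), 1e-12)
    f2 = (noise[idx].max() - noise[idx]) / max(np.ptp(noise[idx]), 1e-12)
    score = w_imep * f1 + w_noise * f2
    return idx[np.argsort(score)[::-1]]            # best-ranked design first
```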
Fig. 14 - Scatter chart of the optimization process highlighting the Pareto Front

Two different specifications were attempted. In the first case, the IMEP was considered the most important objective and was assigned a weight twice as high as that of the ON. Under this hypothesis, the point gaining the highest rank was the "MCDM Solution #1", whose position is highlighted in figure 14. In this way, both an ON reduction and a small IMEP increase were obtained with respect to the Base Engine. As an alternative, the same weight was specified for both IMEP and ON. In this second case, solution #2 was selected and a small IMEP reduction was accepted in order to obtain a more relevant ON drop. The position of the two solutions puts into evidence that the MCDM procedure effectively realized a compromise between the conflicting needs, quantified by the previously described attributes of importance. In addition, this procedure defined a standardized method for the selection of the "global" optimum.

Figure 15 and table 1 compare the optimal injection strategies selected by the MCDM tool and synthesize the related outputs. A high IMEP required an advanced start of both the pilot and the main injections, with a reduced dwell time; as expected, a lower ON is instead found with a delayed SOI and a higher dwell time. Figure 16 finally compares the base engine pressure cycle with the ones obtained in the CR engine in correspondence with the previously shown injection strategies. The retarded combustion process occurring in both the optimized cases determined, at the same time, a lower pressure peak and a reduced pressure gradient (lower noise). In addition, a higher pressure level was found during the expansion stroke, which was mainly responsible for the small IMEP increase in the MCDM #1 solution.

Fig. 15 - Optimal injection strategies
Table 1 - Parameters of the injection strategies and related performance and overall noise at 3000 rpm
Fig. 16 - Comparisons of the pressure cycles obtained through the optimization process

Conclusion
The paper described a methodology for the identification of optimal injection strategies of a CR light-duty diesel engine. The above objective was reached through the development of proper models for the prediction of the engine performance and the combustion noise. Both models were validated against experimental data collected on a base, mechanically injected engine. Then, a multi-objective optimization process was carried out with the aim of characterizing the trade-off between IMEP and ON on the
prototype CR engine. A standardized procedure was also defined in order to select a unique solution, on the basis of the user preferences and of the weights of importance assigned to the single objectives.
The optimization procedure was able to capture the expected
effects of the injection strategy parameters on the overall
performance and radiated noise. It represents a very useful tool
to reduce the huge experimental activity usually required to
develop the control logic of the FIS. The methodology can be
easily extended to multiple operating conditions and can include
additional constraints related, for example, to noxious species
emission predicted through 3D-CFD analyses.
Acknowledgements
The authors would like to express their thanks to Dr. Gerardo
Valentino for supporting the experimental activity carried out in
the present study.
References
[1]
M. Stotz, J. Schommers, F. Duvinage, A. Petrs, S. Ellwanger, K.
Koynagi, H. Gildein, “Potential of Common-Rail Injection System
for Passenger Car Di Diesel Engines”, SAE Paper 2000-01-0944,
2000
[2] Zavala P.A.G., Pinto M.G, Pavanello R., Vaqueiro J.,
“Comprehensive Combustion Noise Optimization”, Sae Paper
2001-01-1509, 2001
[3] F. Mallamo, M. Badami, F. Millo, "Effect of Compression Ratio and Injection Pressure on Emissions and Fuel Consumption of a Small Displacement Common Rail Diesel Engine", SAE Paper 2005-01-0379, 2005
[4] L. Allocca, S. Alfuso, A. Montanaro, G. Valentino, M. Lolli,
“Innovative lift direct command to inner hydraulic circuit injector
comparison for diesel engines”, ICEF2006-1518: 2006 Fall
Conference of the ASME Internal Combustion Engine Division,
November 5-8, 2006, Sacramento, USA
[5] S. Alfuso, L. Allocca, G. Caputo, F.E. Corcione, A. Montanaro, G.
Valentino, M. Lolli: Spray Analysis of an Innovative Direct
Command Solenoid Injector for Common Rail Light Duty Diesel
Engines, ICLASS-2006, Aug.27-Sept.1, 2006, Kyoto, Japan
[6] Papalambros, P.V., and Wilde, D.J., "Principles of Optimal Design: Modeling and Computation", Cambridge University Press, Cambridge, 2000
[7] Assanis D.N., Polishak M., "Valve event optimization in a spark-ignition engine", Journal of Engineering for Gas Turbines and Power, Vol. 112, No. 3, 1990
[8] Stephenson P.W., “Multi-Objective Optimization of a Charge Air
Cooler using modeFRONTIER® and Computational Fluid Dynamics”,
SAE Paper 2008-01-0886, 2008
[9] C. Beatrice, P. Belardini, C. Bertoli, M. G. Lisbona, G. M. Rossi
Sebastiano: Combustion process management in common-rail DI
diesel engines by multiple injection, SAE Paper 2001-24-0007
[10] D. Siano, F. E. Corcione, F. Bozza, A. Gimelli, S. Manelli
“Characterization of the Noise Emitted by a Single Cylinder Diesel
Engine: Experimental Activities and 1D Simulation”, SAE Noise
and Vibration 2005, Traverse City – 05NVC-133
[11] Siano D., Bozza F., Costa M., “Optimal Design of a Two-Stroke
Diesel Engine for Aeronautical Applications Concerning both
Thermofluidynamic and Acoustic Issues”, IMECE2008-68713,
ASME IMECE Congress, November 2008, Boston.
[12] Bozza F., Costa M., Siano D., “Design Issues Concerning
Thermofluidynamic and Acoustic Aspects in a Diesel Engine
Suitable for Aeronautical Applications”, to be published in the
International Journal of Vehicle Design, 2009.
[13] Zienkiewicz, O. C., and Taylor, R. L., 2000, “The Finite Element
Method”, Butterworth-Heinemann, ISBN 0750650494
[14] Torregrosa A.J., Broatch A., Martín J., Monelletta L., "Combustion noise level assessment in direct injection Diesel engines by means of in-cylinder pressure components", Meas. Sci. Technol., 18, 2131-2142, doi:10.1088/0957-0233/18/7/045, 2007
[15] F Payri, A Broatch, B Tormos and V Marant, “New methodology for
in-cylinder pressure analysis in direct injection diesel engines—
application to combustion noise”, Meas. Sci. Technol. 16 (2005)
540–547 doi:10.1088/0957-0233/16/2/029
[16] Corcione F. E., Siano D., Vaglieco B. M., Corcione G. E., Lavorgna
M., Viscardi M., Iadevaia M. and Lecce L., “Analysis and Control of
Noise Emissions of a Small Single Cylinder D.I. Diesel Engine”, SAE
Paper 2003-01-1459, 2003
[17] GT-Power User’s Manual, 2005.
[18] Bozza F., Siano D., Valentino G., “Integrated Numerical and
Experimental Methodologies for Performance Optimization and
Noise Reduction of a Light Duty Diesel Engine”, EnginSoft CAE
Users’ Meeting 2007, Stezzano (BG), October 2007.
[19] ISO 9614/1, ”Acoustics-Determination of sound power levels of
noise sources using sound intensity-Part 1:Measurement at
discrete points”, 1993.
[20] Osawa H, Nakada T, “Pseudo cylinder pressure excitation for
analysing the noise characteristics of the engine structure”, JSAE
Rev. 20 67–72, 1999
[21] Corcione F.E., Siano D., Iadevaia M., Viscardi M., Corcione G.E.,
“Correlation between the acoustic intensity measurements with
and without an electronically fuel injection system for a small
single cylinder diesel engine”, Euronoise 2003, Naples, 334
[22] Hiroyasu H., Arai M., Tabata M. “Empirical Equations for the
Sauter Mean Diameter of a Diesel Spray”, SAE Paper 890464.
[23] Hiroyasu H., Arai M., “Fuel Spray Penetration and Spray Angle of
Diesel Engines”, Trans. Of JSAE, Vol. 21, pp. 5-11, 1980
[24] Hiroyasu H., Kadota T., Arai M., "Development and Use of a Spray Combustion Modeling to Predict Diesel Engine Efficiency and Pollutant Emissions", Bulletin of JSME, Vol. 26, No. 214, pp. 569-575, 1983
[25] Young D., Assanis D.N., “Multi-Zone DI Diesel Spray Combustion
Model for Cycle Simulation Studies of Engine Performance and
Emissions”, SAE paper 2001-01-1246, 2001
[26] Siano D., Valentino G., Corcione F., Bozza F., Arnone L., Manelli S., "Experimental and Numerical Analyses of Performance and Noise Emission of a Common Rail Light Duty D.I. Diesel Engine", SAE Paper 2007-24-0017, ICE 2007 Congress, Capri, September 2007.
[27] Sasaki, D., 2005, “ARMOGA, An efficient Multi-Objective Genetic
Algorithm”, Technical Report, January 2005
Daniela Siano (Researcher)
Istituto Motori CNR – Napoli - [email protected]
Fabio Bozza (Full Professor), DIME, Università di Napoli "Federico II" - [email protected]
Aerodynamic and Acoustic Optimization
of Radial Fans
In this work the multiple objective optimization of radial
fans with respect to aerodynamic efficiency and noise
generation has been performed. This has been achieved by
coupling together an LSTM In-house Excel-VBA Impeller
Design Tool (EVIDenT), the CAD program ProEngineer, the
grid generator ANSYS ICEM and the CFD solver ANSYS CFX, as
well as the LSTM In-house Acoustic Code SPySI (Sound
Prediction by Surface Integration) within the optimization
software modeFRONTIER®. From a technical point of view, the
coupling of the different tools was one of the main
challenges solved with modeFRONTIER®. The input variables for the optimization were the shape parameter, i.e. the wrap angle of the impeller, and its number of blades. All simulations have been performed in a 2D scenario in order to capture the primary fundamental aspects relevant to the impeller design. As a result of the optimization, the efficiency of the radial fans has been improved and the noise level has been reduced substantially. A set of non-dominated solutions
(Pareto solutions) have been obtained which can be used
according to the specific user requirements. The results show
that the integration of acoustics and transient flow
simulations within a multiple objective fully automated
optimization process is feasible. Having established the fully integrated and automated process, an extension to 3D computations can also be readily performed.
Fig. 1 - In-house Excel-VBA Impeller Design Tool (EVIDenT)
Introduction
The aim of this work is to analyze the possibility of
optimization with modeFRONTIER® [1] by
integration of In-house and commercial tools in
order to automate turbomachinery design with
respect to efficiency and noise generation.
The starting step in the optimization process is
the design of impellers with the In-house Excel-VBA Impeller Design Tool (EVIDenT), which delivers high performance starting blade shapes for the fully integrated optimization process. The main optimization parameters in this work were the wrap angle, Figure 1, and the number of blades. Many other parameters can be included, e.g. the blade inlet and outlet angles or the shroud shape, but the scope of this work was to establish the optimization work flow. Even so, very good results were already achieved with those two parameters. These geometries are then exported into the CAD program ProEngineer, where the impeller, e.g. Figure 1, and the corresponding flow domains are generated. The geometries are then fully automatically exported to the grid generator ANSYS ICEM, where another script also automatically generates the grids, Figure 2. The grid is exported to CFX, the flow domain is automatically set up, Figure 4, and the solver runs to compute the CFD solution.

The results of the CFD simulation before and after the optimization are shown in the stream line plots of Figure 3. One can clearly see that in the optimized design the flow velocities in the impeller were reduced, keeping, however, the same pressure and flow rate, as well as reducing substantially the sound pressure level. In Figure 4, three prototypes are shown.

Fig. 2 - Wrap angle
Fig. 3 - Radial impeller
Fig. 4 - Grid and fluid domain solver setup in ANSYS CFX Pre
Fig. 5 - CFD analysis of non optimized (left) and optimized impeller (right)
Fig. 6 - Sound pressure level of non optimized (left) and optimized (right) impeller
Fig. 7 - Prototypes

The modeFRONTIER® optimization environment
As described above, the work flow was carried out by integrating and automating with scripts a set of commercial tools (ProEngineer [2], ANSYS ICEM [3] and ANSYS CFX [4]) and In-house tools (the Python based acoustic tool SPySI [5] and the In-house Microsoft Excel-VBA Impeller Design Tool EVIDenT [6]). But how can all these commercial and In-house tools be integrated in order to perform a multi-objective optimization? The answer was to use modeFRONTIER®. The multi-objective optimization environment modeFRONTIER® offers all the features needed to integrate the automation processes of the different programs and to perform powerful optimizations.
Multi-objective optimization
In this case, as there were 2 objectives, a multiobjective algorithm had to be chosen. Therefore
the MOGA II algorithm was selected with 5
generations and combined with a DOE Sobol of 8
designs.
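As a rough illustration of such a starting population, a Sobol sequence over the two input variables can be generated as sketched below; the variable bounds are placeholders, not the values of the study, and the actual DOE was of course generated inside modeFRONTIER®.

```python
import numpy as np
from scipy.stats import qmc

# Two design variables: wrap angle [deg] and number of blades.
# The bounds below are illustrative placeholders only.
sampler = qmc.Sobol(d=2, scramble=False)
doe = qmc.scale(sampler.random(n=8), l_bounds=[60.0, 5.0], u_bounds=[160.0, 12.0])
wrap_angle = doe[:, 0]
n_blades = np.round(doe[:, 1]).astype(int)         # blade count must be an integer
for w, z in zip(wrap_angle, n_blades):
    print(f"wrap angle = {w:6.1f} deg, blades = {z}")
```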
The work flow in modeFRONTIER® is shown in Figure 8. Here
the different tools and scripts used are integrated [6] using
the modeFRONTIER® workflow connectors. The input variable
nodes are used for the optimization inputs (e.g. number of
blades and shape parameter). These are then connected to
the first node (1), the In-house Excel-VBA Impeller Design
Tool (EVIDenT) through the scheduler (8), shown in Figure 8.
This design tool EVIDenT generates the information about
the number of blades and the data for the shape of the blades
and writes them out as text files. These files are then passed
to the python node (2), which passes the variables in the
script to the CAD program ProEngineer (3). ProEngineer then
creates the flow domain for the blades as a parasolid file,
which is then transferred to the ICEM node (4). To this node
(4) also the ICEM script and the parasolid files for the other
parts of the geometry are transferred. Here in node (4) then
the mesh is created and transferred as a CFX5 file to the CFX
node (5). In this node (5) some additional CFX5 files and
scripts for pre and post processing arrive also from the
transfer and support file nodes. Node (5) runs then
simulation, calculates the efficiency and writes out the result
as a text file. From the CFX node (5) CSV (Comma separated
Value) files are transferred to the next tool in node (5),
which consist of the acoustic In-house tool SPySI. It runs the
SPySI tool and writes out the results, i.e. the sound pressure
level, as text file. These files are then transferred to the
output nodes (6), which are then finally transferred to design
objective nodes (7). These nodes make sure that efficiency is maximized and the noise level is minimized. Based on this information, the scheduler node (8) analyzes and generates a new design.

Fig. 8 - Work flow of the optimization process
Fig. 9 - The Pareto Front
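For reference, the noise objective returned by the acoustic tool is a sound pressure level in dB. A minimal sketch of how such a level is obtained from a pressure signal is given below (standard reference pressure of 20 µPa in air); the synthetic signal is only an assumption for the example, and SPySI's surface-integration step is not reproduced.

```python
import numpy as np

P_REF = 20e-6   # Pa, standard reference pressure in air

def sound_pressure_level(p, p_ref=P_REF):
    """Overall sound pressure level [dB] of a zero-mean acoustic pressure signal [Pa]."""
    p_rms = np.sqrt(np.mean(np.square(p - np.mean(p))))
    return 20.0 * np.log10(p_rms / p_ref)

# usage with a synthetic 1 Pa amplitude tone (about 91 dB)
t = np.linspace(0.0, 0.1, 4800, endpoint=False)
print(f"{sound_pressure_level(np.sin(2.0 * np.pi * 1000.0 * t)):.1f} dB")
```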
Results obtained with modeFRONTIER®
In this work a total of 96 possible designs were run, out of
which 35 designs were evaluated. From those the final three
optimized designs were selected and compared with the
three original starting designs.
A set of non-dominated solutions, as shown in Figure 9, have
been found which showed substantial improvements in the
efficiency and reduction in the noise level. In the case of a
multi-objective optimization, there is no single best design
but rather a set of non-dominated designs. The best design
with respect to efficiency shows an increase of 35%, while the best design with respect to noise level shows a reduction of 3 dB compared to the original design, which corresponds to a reduction of 50% in the sound power level.
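The quoted 50% figure follows directly from the definition of the decibel scale for sound power:

```latex
\Delta L_W = 10 \log_{10}\frac{P_2}{P_1} = -3\,\mathrm{dB}
\quad\Longrightarrow\quad
\frac{P_2}{P_1} = 10^{-3/10} \approx 0.50
```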
Conclusions
This work has shown, Figure 8, how it is possible to integrate
and automate different codes, i.e. the In-house Excel
EVIDenT code, ProEngineer, ANSYS ICEM, ANSYS CFX and the
In-house Acoustic tool SPySI in modeFRONTIER® and finally
how to establish and carry out a multi-objective optimization in this environment.
The efficiency of radial fans has been improved as well as the
noise level reduced noticeably. A set of non-dominated
solutions (Pareto solutions) have been obtained which can
be used according to the user needs.
References
[1] modeFRONTIER®: http://www.esteco.com/products.jsp
[2] Pro/Engineer Wildfire:
http://ptc.com/products/proengineer/
[3] ANSYS ICEM CFD:
http://www.ansys.com/products/icemcfd.asp
[4] ANSYS CFX: http://www.ansys.com/products/fluid-dynamics/cfx/
[5] Scheit, C., Karic, B., Delgado, A., Epple, P. and Becker, S. (2009) Experimental and Computational Study of Radial Impellers With Respect to Efficiency and Noise Production. Conference on Modeling Fluid Flow (CMFF'09), The 14th International Conference on Fluid Flow Technologies, Budapest, Hungary, September 9-12.
[6] Masood, Rao M. A. (2010) Principle Study of Optimization of Radial Fans with respect to Aerodynamics and Aeroacoustics, Master Thesis, LSTM, University of Erlangen-Nürnberg.
Institute of Fluid Mechanics - Technical Faculty
Friedrich-Alexander - University Erlangen-Nürnberg
MSc. Engr. Rao Muhammad Atif Masood
M.Sc Christoph Scheit
Dr.-Ing. Philipp Epple
Prof. Dr. A. Delgado - Professor and Head
The Institute of Fluid Mechanics (Lehrstuhl für Strömungsmechanik - LSTM) of the Friedrich-Alexander-Universität Erlangen-Nürnberg has 8 departments working on a large variety of research topics: aerodynamics, turbulence, aeroacoustics, chemically reacting flows, fluid flow process automation, bio and medical technology, numerical flow simulation, process fluid dynamics and turbomachinery, instationary fluid mechanics, Engineering of Advanced Materials and thermo-fluid-dynamics of biotechnological processes. There are about 70 researchers working at the LSTM.
The LSTM has many years of experience in the design, numerical computation and aerodynamic and acoustic optimization of turbomachines of all kinds - axial, diagonal and radial. The aerodynamic design and acoustic computation are done with In-house codes as well as with commercial tools.
www.lstm.uni-erlangen.de
Electromagnetic Interference: an
Advanced FEM Calculation Approach
Accurate analysis of the electromagnetic interference of
complex systems is a fundamental requirement when two
or more structures share the same working environment.
The analytical and the empirical approaches are not suitable to this aim; in particular, in the former case the advantages of a rigorous method are offset by the limited availability of known solutions, restricted to a few simple cases. In the latter case, post factum mitigation actions are completely based on operator experience, which is generally not sufficient to solve these problems due to the complexity of modern systems. The main drawback of the wide variety of empirical/numerical methodologies consists in the need to formulate simplified hypotheses on the electromagnetic material properties. Moreover, extensive experience is needed in order to define the applicability field of the involved mathematical relationships. In recent years, the improvements in computational capability and in the memory availability of computing resources allow the efficient solution of problems characterized by several unknowns.

This work presents a calculation approach to the study of the electromagnetic interference generated by high voltage lines on metallic pipelines buried in the soil. In particular, a procedure for calculating the inductive and the conductive coupling has been developed. The calculation of the equivalent generators of induced electromotive force has been performed by means of a finite element model. This approach produces a generalization and an extension of the methods available in the literature. In order to obtain fast and accurate results, the equivalent circuit has been analyzed by means of parametric software; ANSYS APDL and ANSYS Maxwell have been used to solve the finite element problem.

Formulation
By referring to Figure 1, let us consider the pipeline (yellow line) and the three high voltage lines. In order to completely define the electromagnetic interference phenomena, both the inductive and the conductive coupling have been considered.

Fig. 1 - Geometry of the high voltage line - pipeline system

The inductive coupling represents the main cause of electromagnetic interference between the pipeline and the high voltage line. This effect is due to the currents induced on the pipeline by a time-varying magnetic field, generated by a sinusoidal current flowing in the high voltage line. For an accurate evaluation of the inductive coupling, a typical approach consists of estimating the currents induced on the pipeline/high-voltage line system both in normal working conditions and in the single phase breakdown case.

Fig. 2 - Two-dimensional model of the pipeline-transmission line system: complete computational domain (a), particular of the pipeline (b) and symmetric computational domain (c)
Fig. 3 - 2D computational domain mesh (a) and particular of the pipeline (b)
Fig. 4 - Current density on the pipeline section (a) and the flux lines (b)
Fig. 5 - Elementary cell of the equivalent circuit
The conductive coupling occurs in the single phase breakdown case in proximity to the installations of the inducting system provided with good grounding, such as electrical substations and transmission line towers. Under these conditions, the breakdown current flows into the soil, increasing the electric potential in the local domain surrounding the pipeline. The evaluation of the generated voltage allows safety conditions for the operators to be established, according to the limits imposed by the applicable standards.
Electromagnetic analysis has been performed with the
following steps:
1. Generation of the 2D FEM model to get the equivalent
generators of electromotive induced force.
2. Building and solution of the transmission line model
using equivalent circuits.
3. Generation of the 3D FEM model to analyze the pipeline/high-voltage line system for the conductive disturbance.
2D FEM model for analysis of inductive coupling
In order to evaluate the inductive coupling between the
pipeline and the high voltage line, the two-dimensional
model of a transverse section of the geometry, represented
in Figure 1, has been taken into account. In particular, it
is composed of (Figure 2-a):
• The soil;
• The air;
• The inducting line;
• The pipeline.
By observing the symmetry of the
problem, the computational domain has
been reduced to half of the real one
(Figure 2-c). The transmission line has
been modeled by means of a current
flowing orthogonally to the model plane,
while the pipeline has been modeled by
considering a metallic circular ring.
The computational domain has been
properly discretized by means of a
triangular element mesh (Figure 3-a)
with quadratic shape function. Moreover,
in order to correctly evaluate the skin
effect on the pipeline, a finer mesh has
been obtained on the external pipeline
surface (Figure 3-b).
The flux lines of the magnetic field have been imposed to be parallel to the boundary of the computational domain.
Figure 4 shows some outputs of the
electromagnetic analysis: the current
density on the pipeline steel (a), and the flux lines on the
computational domain (b). The plot, shown in Figure 4-a,
highlights the skin effect on the pipeline and justifies the
mesh model done on the external pipeline surface. The
plot Figure 4-b depicts how the current induced on the
pipeline affects the magnetic field.
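The need for a refined surface mesh can be motivated by a quick estimate of the skin depth at power frequency. The resistivity and relative permeability used below are generic assumptions for a pipeline steel, chosen for illustration only.

```python
import numpy as np

MU_0 = 4.0e-7 * np.pi       # vacuum permeability [H/m]

def skin_depth(rho, mu_r, f):
    """Skin depth [m]: delta = sqrt(2*rho / (2*pi*f*mu0*mu_r))."""
    return np.sqrt(2.0 * rho / (2.0 * np.pi * f * MU_0 * mu_r))

# illustrative values for a carbon-steel pipeline at 50 Hz (assumed, not from the study)
delta = skin_depth(rho=2.0e-7, mu_r=300.0, f=50.0)
print(f"skin depth of the order of {delta * 1e3:.1f} mm")
```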
Mathematical model
In order to evaluate the current induced on the pipeline,
the characteristic impedance of the system has been
evaluated in ANSYS APDL (ANSYS Parametric Design
Language).
This procedure allows the user to obtain an equivalent
circuit model of the pipeline as a series of elementary
equally spaced cells. Each cell (Figure 5) is constituted by:
• An electromotive force that takes into account the
effects of the inducting system on the induced one;
• A longitudinal impedance that represents the pipeline
impedance;
• A transversal impedance that accounts for the impedance between the pipeline and the transmission line.

Fig. 6 - 3D model of the pipeline-transmission line system for the conductive problem
Fig. 7 - Mesh of the computational domain (a) and of the dielectric cover (b)
Fig. 8 - Induced current (a) and zero voltage condition (b)
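A minimal numerical sketch of such a cell-by-cell model is given below: every cell contributes the induced EMF as a series source together with the longitudinal impedance, every node has a transversal (shunt) impedance towards remote earth, and the resulting linear system is solved for the node voltages. All values and the uniform-cell assumption are illustrative only, not the data of the study.

```python
import numpy as np

def pipeline_node_voltages(emf, z_long, z_trans):
    """Node voltages of a ladder of identical cells (series EMF + impedance, shunt to earth).

    emf     : complex array of per-cell induced EMFs [V] (illustrative values)
    z_long  : longitudinal (series) impedance of one cell [ohm]
    z_trans : transversal (shunt) impedance of one node to remote earth [ohm]
    """
    n_nodes = len(emf) + 1
    Y = np.zeros((n_nodes, n_nodes), dtype=complex)
    I = np.zeros(n_nodes, dtype=complex)
    y = 1.0 / z_long
    for k, e in enumerate(emf):            # series branch between nodes k and k+1
        Y[k, k] += y
        Y[k + 1, k + 1] += y
        Y[k, k + 1] -= y
        Y[k + 1, k] -= y
        I[k] -= e * y                      # EMF source moved to the right-hand side
        I[k + 1] += e * y
    Y += np.eye(n_nodes) / z_trans         # shunt path to earth at every node
    return np.linalg.solve(Y, I)

# illustrative numbers only: 10 cells, 50 V induced per cell
v = pipeline_node_voltages(np.full(10, 50.0 + 0.0j), z_long=0.1 + 0.3j, z_trans=200.0)
print(np.abs(v).round(2))
```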
3D FEM model for the analysis of the conductive
coupling
The analysis of the conductive coupling has been carried
out by using a three-dimensional model (Figure 6) where
a part of 30 meters of pipeline has been modeled buried
into the soil, by performing a parameterization of both
geometric dimensions and the material properties
according to the inductive case.
The model is constituted by the soil, the metallic pipeline
and an insulating layer placed around the pipeline itself.
The three-dimensional domain has been discretized by
means of a tetrahedral element mesh with quadratic shape
function (Figure 7-a). A finer mesh (Figure 7-b) has been
obtained on the dielectric cover.
In order to define the boundary conditions, a zero
potential has been imposed on the lower part of the
calculation domain (Figure 8-b), while the excitation has
been modeled as a current orthogonally directed with
respect to the surface soil (Figure 8-a).
Results
Inductive and conductive interferences have been analyzed on a pipeline placed in proximity to three high-voltage lines. The results relative to the inductive coupling obtained in normal working conditions are shown in this article. By referring to the model shown in Figure 2-c, the inductive coupling has been evaluated; the induced voltage, calculated as a function of the position along the pipeline and due to the effect of the three lines, is shown in Figure 9. The voltage induced by each transmission line is shown together with their quadratic mean value.
Conclusions
A FE-based procedure for the analysis of the electromagnetic interference generated by a high-voltage line and a pipeline has been presented in this article.
In order to completely describe the
interference system, both the
inductive and the conductive
coupling have been taken into
account. The two-dimensional and
the three-dimensional FEM models
have been generated and analyzed,
with proper boundary conditions, by using ANSYS Maxwell.
The equivalent generators of electromotive force have
been obtained from the FEM 2D model and given as input
to the transmission line equivalent circuit. The 3D FEM
model supplies the electric potential acting on the
pipeline in case of single phase fault.
The aim of the study is the generalization of semi-analytic approaches to electromagnetic interference problems: FEM models allow the user to obtain results even outside the validity domain of the Carson-Clem formulas.
For more information
Alice Pellegrini - EnginSoft
[email protected]
Dott. Giovanni Falcitelli - EnginSoft S.p.A.
PhD Ing. Alice Pellegrini - EnginSoft S.p.A.
Ing. Emiliano D'Alessandro - EnginSoft S.p.A.
Fig. 9 - Induced voltage along the pipeline
Reliability Based Structural
Optimization of an Aircraft Wing
Today, in the aircraft industry, there is great competition to release new aircraft designs which are faster, more efficient, more economical, more reliable and even quieter than the former ones - both in military and civil applications. The
challenging multi-disciplinary task of aircraft design can be
realized by incorporation of numerical optimization
techniques in the industrial design process. However, there
are always uncertainties related to design parameters, modelling, manufacturing process, operating conditions and human factors when designing a new aircraft. Conventional deterministic design and optimization processes may yield unreliable designs since the uncertainties are not accounted for in the design process.
Fig. 1 - Computational Model of the Wing Structure
In contrast, reliability-based design optimization (RBDO) is a
methodology which can include probabilistic design criteria
involving aleatory uncertainties in the optimization process.
In this work, we propose an implementation of an RBDO
algorithm into a structural optimization framework composed
in the modeFRONTIER® optimization tool. Reliability analysis
and optimization are two essential components of RBDO: (1)
Reliability Analysis focuses on analyzing the probabilistic
constraints to ensure the reliability levels are satisfied; (2)
Optimization is seeking for the optimal performance subject
to the probabilistic constraints.
A simple aircraft wing which has a NACA0012 airfoil profile
is modeled parametrically in Catia V5-R16. The wing's three
dimensional geometric model consists of 90 skin panels, 10
ribs and 4 spars while some of the skin panels are stiffened
by stringers along the wing span. The wing has a
rectangular planform with 6m semi-span and 1.6m chord
length. The finite element model of the wing is prepared for
Abaqus 6.7.1 and is composed of linear shell and beam
elements. The model is shown in Figure 1, and consists of
17,070 linear quadrilateral elements of shell type, 1264
linear line elements of beam type, for a total element
number of 18,334 and 16,024 nodes, thus 96,144 degrees
of freedom. In all members of the structure, aluminium is employed with Young's modulus E = 70000 MPa, Poisson's ratio ν = 0.33, density ρ = 2700 kg/m³ and yield strength σyield = 400 MPa. As a cantilevered boundary condition, all of the degrees of freedom at the root of the wing are set to zero.
The aerodynamic load that will be applied to
the wing is supplied from a computational fluid
dynamics (CFD) analysis performed for the
initial design. An Euler inviscid flow analysis
by using Fluent commercial software was
performed for Mach = 0.3 at sea level.
A structural optimization problem with two
random variables which are Young's Modulus E
and yield strength 𝜎yield of the material will be
solved. Thus, the allowable stress, σallowable = σyield / 1.5, is calculated with a safety factor of 1.5 that accounts for epistemic uncertainties. The constraints concerning stress (g1) and displacement (g2) become probabilistic constraints due to their dependence on the random variable vector X = [E σallowable]. E and σallowable are modeled with normal distributions, N(70000, 350) MPa and N(270, 20) MPa respectively.
The main reliability code in the optimization process contains two reliability subroutines: one corresponds to the reliability index (βs) for the stress constraint and the other to the reliability index (βd) for the displacement constraint. The optimization variables are chosen as the thicknesses of the skin panels, ribs, spars and stringers, and the locations of the first four ribs and two spars (Nikbay et al. [1]). Thus, the optimization problem can be formulated as:
In terms of reliability index, the probabilistic constraints of
the above optimization problem can be expressed as;
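Both equations appear only as images in the printed article. A plausible reconstruction from the surrounding description — minimum mass over the design variables d, probabilistic stress and displacement constraints on the random vector X, rewritten as reliability-index constraints — is the following (the notation is an assumption, not necessarily the authors' exact expression):

```latex
\begin{aligned}
\min_{\mathbf{d}} \quad & \mathrm{mass}(\mathbf{d}) \\
\text{s.t.} \quad & \Pr\!\left[g_1(\mathbf{d},\mathbf{X}) \le 0\right] \le P_f^{\,\mathrm{target}}, \qquad
                    \Pr\!\left[g_2(\mathbf{d},\mathbf{X}) \le 0\right] \le P_f^{\,\mathrm{target}}, \\
                  & \mathbf{X} = [\,E \;\; \sigma_{\mathrm{allowable}}\,],
\end{aligned}
\qquad\Longleftrightarrow\qquad
\beta_s(\mathbf{d}) \ge \beta_s^{\mathrm{target}}, \quad
\beta_d(\mathbf{d}) \ge \beta_d^{\mathrm{target}} .
```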
Here, the target reliability indexes for the stress and displacement constraints are chosen to be 3.0, corresponding to a reliability of 0.99865 or a probability of failure of 0.00135. The actual reliability index values for the current design are calculated at each optimization iteration and passed to the outer optimization loop as constraints.
Figures 2 and 3 show the workflow of the problem and its nodes, constructed using the modeFRONTIER® optimization tool. The nodes shown in Figure 2 can be explained briefly as follows. The Logic Nodes are used to define workflow actions such as logic start, logic end and logic failed; the optimization algorithm and the way the starting points for the process are distributed in the design space are indicated by this node. The Goal Nodes are used to define the user's strategy for the optimization problem. The Variable Nodes are used to define the data for the optimization problem. The File Nodes identify text files that are sent to an application node, or from which values are extracted and assigned to output variables. The Dos Batch Node and the Calculator Node store and configure scripts in Windows® DOS syntax and in JavaScript syntax, respectively. In this study, Dos Batch Nodes are used to call ABAQUS and MATLAB to run the processes whenever they are needed. Finally, the CATIA V5 Node is used to wrap a CATIA V5 document, transferring data from and to it and executing macro files on it. A CATIA document is either a CATPart, a CATProduct or a CATAnalysis.
Fig. 2 - modeFRONTIER® Workflow Nodes
Fig. 4 - Paretos of Aircraft Wing Structural Optimization With RBDO
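To make the reliability-analysis step concrete, the following minimal Python sketch estimates the reliability index for the stress constraint by Monte Carlo sampling of the two normal random variables defined above. The limit-state surrogate max_stress and the sample size are hypothetical stand-ins; in the actual study the stress comes from the Abaqus model and the indexes are computed by the RIA subroutines.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 200_000                                   # hypothetical sample size

# Random variables as given in the article (normal distributions, MPa)
E = rng.normal(70_000.0, 350.0, n)            # Young's modulus
sigma_allow = rng.normal(270.0, 20.0, n)      # allowable stress

def max_stress(E):
    # Hypothetical surrogate for the maximum stress returned by the FE model;
    # in the real workflow this value comes from the Abaqus analysis.
    return 240.0 * (70_000.0 / E)

g = sigma_allow - max_stress(E)               # limit state: g <= 0 means failure
p_f = max(np.mean(g <= 0.0), 1.0 / n)         # clip to avoid inv_cdf(0)
beta_s = -NormalDist().inv_cdf(p_f)           # reliability index for the stress constraint

print(f"P_f ~ {p_f:.5f}, beta_s ~ {beta_s:.2f} (target beta = 3.0)")
```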
The optimization process is run with 52 designs of experiments (DOE) generated with a Sobol sequence, and a maximum of 300 iterations per sub-iteration is defined for the NLPQLP algorithm. Finally, a total of 140 designs is generated for the optimization problem. The solution of the problem took about 21 hours on a workstation with an Intel(R) Core(TM)2 Quad CPU [email protected] GHz processor, with 2 GB of RAM, on the Microsoft Windows XP operating system.
Fig. 3 - Workflow of the Reliability Based Structural Optimization Problem: Generic Wing
Finally, 40 designs were found to be feasible, satisfying the constraint conditions. Furthermore, there are 16 error designs. As a result, 4 designs are found in the Pareto front set for this optimization problem. These Pareto designs are shown in Figure 4. The design corresponding to Pareto 2 in Figure 4 is chosen as the optimum design due to its minimum mass value, while still satisfying the reliability index target constraint.
Conclusion
In this work, a reliability-based design optimization methodology is proposed by implementation of a home-made RBDO code, based on the Reliability Index Approach (RIA), into a structural optimization framework composed of high-fidelity commercial software. The presented work shows good results when compared to the deterministic optimization results of the same problem studied formerly (Nikbay et al. [1]).
About The Istanbul Technical University (ITU)
ITU was established in 1773, during the time of the
Ottoman Sultan Mustafa III. With its original name
"Muhendishane-i Bahr-i Humayun", The Royal School of
Naval Engineering, its responsibility was to educate chart
masters and ship builders. In 1795, the "Muhendishane-i
Berr-i Humayun", The Royal School of Military Engineering,
was established to educate the technical staff in the army.
In 1847, education in the field of architecture was also
introduced.
Established in 1883, the School of Civil Engineering
assumed the name "Engineering Academy", with the aim of
teaching essentials skills needed in planning and
implementing the country's new infrastructure projects.
Gaining university status in 1928, the Engineering Academy
continued to provide education in the fields of engineering
and architecture until it was incorporated into ITU in 1944.
Finally, in 1946, ITU became an autonomous university
which included the Faculties of Architecture, Civil
Engineering, Mechanical Engineering, and Electrical and
Electronic Engineering.
Of ITU's five campuses, the main campus is located at
Ayazağa, a recently developed business area. The Rector's
office and administrative units are situated on this campus.
The Faculties of Civil Engineering, Electrical and Electronic Engineering, Chemical and Metallurgical Engineering, Mining, Science and Letters, Aeronautics, and Naval Architecture and Ocean Engineering are all on this campus, which extends over an area of 247 hectares. Of the five institutes ITU has, four are located on this campus, including the Institute of Earth Sciences and the Institute of Information Technology.
The Faculty of Aeronautics and Astronautics was established
on March 3rd, 1983 as 11th Faculty of Istanbul Technical
University. The Faculty consists of three departments:
Aeronautical Engineering, Astronautical Engineering, and
Meteorological Engineering. Aeronautical Engineering
Department was first established as a branch of the
Mechanical Engineering Faculty in 1941, and then in 1944
became a department of the Mechanical Engineering
Faculty. Meteorological Engineering Department was first
established in 1953 within the Faculty of Electricity. In
1971, the department was transferred to the Faculty of
Basic Sciences, in 1982 to the Mining Faculty, and in 1983
to the Faculty of Aeronautics and Astronautics.
Astronautical Engineering Department was first established
in 1983 together with the faculty and started accepting
undergraduate students in 1986 [2].
The authors of this article are studying at the Faculty of
Aeronautics and Astronautics, also in the frame of the
National Scholarship Program for Graduate Students of The
Scientific and Technological Research Council of Turkey
(TUBITAK-BAYG) - under the special direction of Assistant
Professor Melike NIKBAY.
As they told EnginSoft: “We are now going on our graduate
program in Aeronautical Engineering, this will see us
working as researchers in the TUBITAK Project which is
about “Analysis and Reliability Based Design Optimization
of Fluid-Structure Interaction Problems Subject to
Instability Phenomena”. We have been using the
modeFRONTIER® optimization tool for our optimization
problems for some time. Until now, we have presented our
work at 4 International Conferences: American Institute of
Aeronautics and Astronautics (AIAA), International
Conference on Machine Design and Production (UMTIK),
World Congress on Computational Mechanics and Asian Pacific Congress on Computational Mechanics (WCCM/APCOM), and 2 papers at the Third International Conference on Multidisciplinary Design Optimization and Applications by ASMDO, the Association for Simulation and Multidisciplinary Design Optimization”.
References
[1] Nikbay, M., Ulucenk, C., Yanangonul, A. and Aysan, A., "Reliability-Based Multi-objective Optimization of an Aircraft Wing Structure with Abstract Optimization Variables", in Proc. 5th Ankara International Aerospace Conference, METU, Ankara, Turkey, 2009.
[2] ITU Official Website, http://www.itu.edu.tr/en.
Melike NIKBAY - Istanbul Technical University, Turkey
Assistant Professor, Astronautical Engineering Department,
Faculty of Aeronautics and Astronautics, Istanbul Technical
University, Maslak, Istanbul, Turkey, 34469; AIAA Member
([email protected])
Necati FAKKUSOĞLU - Istanbul Technical University, Turkey
Research Assistant, Faculty of Aeronautics and Astronautics,
Istanbul Technical University, Maslak, Istanbul, Turkey,
34469 ([email protected])
Muhammet N. KURU - Istanbul Technical University, Turkey
Graduate Student, Informatics Institute, Computational
Science and Engineering, Istanbul Technical University,
Maslak, Istanbul, Turkey, 34469
([email protected])
Simple Optimization of Gradated
Biomaterial Scaffolds made of Calcium
Phosphates
One of the mainstreams of modern biomaterials is the
application of calcium hydroxyapatite (HAP) [1,2]. Various
combinations of HAP as scaffolds and coatings or as a
component of bioceramics and composites are being tested.
To enhance osteoconductivity and osteointegration, additions
of other calcium phosphates (CP) like beta-tricalcium
phosphate Ca3(PO4)2 (ß-TCP), glass-ceramics etc. are being
exploited. Different processing methods of these compounds
result in a variety of their mechanical, chemical and
biological properties, which is also affected by density
(porosity) and possible cross-interactions. There are also
differences between in vitro and in vivo conditions, and not
all of them are precisely known.
Earlier, homogeneous compositions of HAP + (25…75%) ß-TCP have been considered as the compromise solution for
optimal osteointegration, since ß-TCP is known to dissolve
faster in both simulated body fluid (SBF) and in vivo
conditions [2], although this also depends on porosity and
crystal size. Protein adsorption on HAP delays bone
formation, so a compromise solution should be sought to
balance all these factors to ensure proper osteointegration.
It has been suggested that a functionally gradated material
(FGM) with a smoothly changing concentration and porosity
profile would provide a better solution for the implants and
scaffolds [2]. This kind of controlled porous material can be
manufactured by a powder metallurgy technique with mixing
and sintering. The latter however leads to generation of
thermal stresses due to the difference in thermal
expansion and sintering rates. Thus in the case of a
FGM disk, this will lead to bending and twisting, up
to possible cracking. Uneven stresses and residual porosity will not guarantee proper bioresorbability of
the material - degradation of CP occurs preferentially
on grain boundaries when the soluble phases
disappear and the grains of less-soluble CP phases
are released into a body environment. Such particle
release is a cause of concern due to osteolysis (bone
loss).
To define the optimal FGM profile, including
thermal and sintering residual stresses over the
whole processing range, kinetics of co-sintering of
HAP and ß-TCP should be analysed and the
experimental data fed into the sintering model. The
developed generic model of sintering [3,4] is based
on visco-elasto-plastic behaviour of the material, when its
properties depend on porosity and grain size, coupled with
thermal expansion. For example, for pure HAP it was
experimentally found that the sintering kinetics could be
approximated (±10% error) with Avraami-Erofeev’s equation:
where α is the degree of sintering as measured by dilatometry.
Explicit solution of this differential equation for shrinkage is:
for any programmable heating rate. When sintering a mixture,
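Both equations are printed as images in the original article. For reference, the standard Avrami-Erofeev (Johnson-Mehl-Avrami) form, which the text presumably refers to — the exponent n and the rate constant k(T) being fitting parameters identified from the dilatometry data; this is an assumption, not the authors' exact expression — reads:

```latex
\frac{d\alpha}{dt} \;=\; n\,k(T)\,(1-\alpha)\,\big[-\ln(1-\alpha)\big]^{\frac{n-1}{n}},
\qquad
\alpha(t) \;=\; 1-\exp\!\left[-\left(\int_{0}^{t}k\big(T(\tau)\big)\,d\tau\right)^{\!n}\right],
```

the second expression being the explicit solution for an arbitrary (programmable) heating schedule T(t).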
the shrinkage is more complex and a numerical fitting is
usually required to incorporate it into models [3,4]. Fig. 1
shows the measured and interpolated shrinkage of HAP + ß-TCP mixtures. It can be seen that with <20% of HAP in
the mixture, shrinkage remains low until high temperatures,
and for mixtures with >60% HAP, it does not deviate too
much from the pure HAP.
Using experimental data and equations for thermal expansion
and contraction, the MathCAD model for thermal stresses of
FGM with arbitrary thickness and gradation profile has been
set up. Properties of FGM were calculated using a
micromechanical model [4], and the simplified stress analysis was performed using linear plate theory. The gradation profile was assumed to follow a power function, where the volume fraction of HAP depends on x (the thickness coordinate), on HFGM (the thickness of the FGM layer, 1…5 mm) and on P, the gradation parameter (0.01…100).
Fig. 1 - Interpolated global shrinkage of the HAP + ß-TCP composites from 700°C.
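The power function itself is printed as an image in the original article; a common form consistent with this description (an assumption, not necessarily the authors' exact expression) is:

```latex
V_{\mathrm{HAP}}(x) \;=\; \left(\frac{x}{H_{\mathrm{FGM}}}\right)^{P},
\qquad 0 \le x \le H_{\mathrm{FGM}}, \quad 0.01 \le P \le 100,
```

with the ß-TCP volume fraction given by the complement 1 − V_HAP(x).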
Because stress evolution is non-linear for processing
temperature and composition, it is necessary to establish
single integral variables, which represent some measure of
these stresses and relate them to other compositions with
different thickness and gradation. Global integrated
parameters chosen for this analysis are stress derivatives
differences (S1), combined averaged stress difference (S2)
and combined curvature of the FGM plate (S3):
The choice of these parameters instead of traditional ones
was dictated by the need of a single parameter, which is
capable of integration of information about the whole FGM
plate during the whole temperature range from beginning of
sintering (T0) to sintering temperature (Tsin), without explicit
analysis of stresses in every point at every time and
temperature step. It is known from FGM barrier coatings
optimization that stress differences and their derivative
differences are also important for material performance
besides the absolute stress magnitude. The criteria S1 and S2
do not distinguish between tensile and compressive stresses
but consider only their absolute values and thus might be
overcautious in indication of “optimal” gradation.
Nevertheless, they are believed to represent the general
trends and to define the area, which would be further
analysed using numerical methods in more detail. The
objective of finding this optimal gradation is in minimising
of all three criteria S1-S3.
The model was set up with modeFRONTIER® 4.1.2, using a
single MathCAD node for model calculation. Stored results
were exported to Grapheur 1.0 visualisation software, where
these stress differential changes (S1), averaged stress
difference (S2) and averaged curvature (S3) were analysed as
functions of the input variables gradation parameter P (and
log(P)) and thickness HFGM.
It is expected that a thicker layer will have much less curvature. An FGM with a large value of P seems to have the least stress difference for all thicknesses analysed. When all three S-criteria are plotted together (Fig. 2), the effect of the gradation parameter (colour) is clearly more important than that of the thickness. Whereas mean values of P (green bubbles), corresponding to a near-linear gradation, provide the minimum of S1, this does not automatically guarantee a minimum of the S2 values nor of the curvature (S3).
Fig. 2 - Mutual dependence of all criteria. The best solutions are located closest to the coordinates origin.
Fig. 2 also shows that thicker FGM plates (larger bubbles) with a larger gradation parameter (red colour), i.e. a thin graded layer (~20% of the total thickness of the ß-TCP layer), will lead to the lowest curvature, stresses and their
derivatives during the whole sintering process. In the case of
the optimised profile, this HAP-rich layer with lower porosity
and dissolution rate might provide an interesting effect on
osteointegration as well. This is expected to ensure better
stability of the scaffold in the body after implantation due to
less destructive acting of internal stresses. In the future,
more complicated geometries and different sintering regimes
might be also simulated to find out the optimal set of
processing parameters.
References
[1] Oonishi H., Biomater., 12 (1991) 3, 171-178.
[2] Pompe W., Worch H., Epple M., Friess W. et al. Mater. Sci.
Eng. A362 (2003) 40-60.
[3] Gasik M., Zhang B. Comp. Mater. Sci., 18 (2000) 93-101.
[4] Gasik M. Comp. Mater. Sci., 13 (1998) 42-55.
For more information:
[email protected]
Michael Gasik
Aalto University Foundation, Finland
www.aalto.fi
ANSYS 13: Preview
The release of ANSYS 13 is expected in the closing months of 2010. EnginSoft has in fact been taking part in the testing of this release for several months, gaining a full appreciation of its quality and of the remarkable new features introduced by the new version.
This article highlights the most important improvements that characterize ANSYS 13, with emphasis on the enhancements to the meshing environment, to geometric modelling and to the various types of simulation.
Some important new features concern beam and shell modelling: it will be possible to colour the edges of shells according to the number of elements they are in contact with, and the mesh can be displayed in transparency, no longer only as the "wireframe" of the geometry as in the previous release (Figure 1). Moreover, a variable thickness can be defined for shells as a function of a coordinate (see Figure 2), while pretension can now be assigned to beam elements as well.
One of the innovative aspects is the possibility of selecting geometries by coordinates, or by linking faces to the selected edges, and so on (Figure 3).
Contacts, besides being renamable according to the geometric nomenclature of the bodies, can be grouped so as to make them more accessible and easier to identify in complex assemblies made up of many parts in contact with each other (Figure 4).
Figure 1 – Mesh in wireframe visualization
Figure 2 – Variable thickness for shells
Figure 3 – Selection of different geometries in the WB environment
As far as new analysis types are concerned, creep analysis has been implemented within ANSYS WB, with the corresponding introduction of creep materials into the "Engineering Data". Within the Engineering Data it will also be possible to enter the properties of hyperelastic materials as a function of temperature.
It will also be possible to set up a "Gasket" analysis, assigning a Gasket-type stiffness behaviour to the parts of interest, in addition to the simply rigid or flexible behaviour available in release 12.
Also in the analysis settings detail window, the management of stabilization controls has been introduced to help the convergence of a solution.
Regarding the control of the solution run, ANSYS has introduced the possibility of restarting the analysis from the last converged instant; in this way it is no longer necessary to repeat the whole calculation from scratch, as the analysis restarts from the last solution reached.
ANSYS WB13 will offer the possibility of performing harmonic and modal analyses in the presence of prestress, also as a function of different load steps of previously solved static analyses, making it possible to observe how the natural vibration modes change while the load is applied.
Figure 4 – New organization of contacts
Finally, other important new features are the ease of importing external results, with the possibility of transferring loads from 2D analyses to 3D analyses; from CFD it will also be possible to transfer volumetric thermal loads, and no longer only surface loads (Figure 5).
Still pursuing the creation of a single "multiphysics" platform from which to access all the different types of analysis, ANSYS continues the process of bringing all the available analyses into it; in version 13 it will be possible to use HFSS and Maxwell directly in the Workbench interface, allowing them to be subsequently linked to thermal and structural analyses (Figure 6).
Figure 5 – Load transfer from 2D to 3D
In the field of explicit dynamics, the Eulerian environment has also been implemented within WB, with the possibility of Eulerian-Lagrangian interaction between the different parts of the system under examination.
Finally, a further step forward has been made in kinematic analysis with rigid bodies: it will be possible to introduce contacts between rigid bodies as well, while still keeping an explicit solver and without resorting to the traditional implicit ANSYS solver; as a consequence, kinematic analyses can be solved in a few minutes.
Daniele Calsolaro - EnginSoft
For more information:
Emiliano D’Alessandro - EnginSoft
[email protected]
ANSYS products at the service of electric motor design and simulation: the RMxprt-Maxwell-ANSYS Mechanical vertical solution
The Ansoft products
For about two years, the ANSYS product portfolio has been enriched with the Ansoft software. The Ansoft products are high-performance programs for design and automation in the electronic and electromechanical fields (EDA - Electronic Design Automation software). With more than 25 years of development behind them, the Ansoft tools represent the state of the art in electromagnetic simulation.
Among the most widely used Ansoft products we mention HFSS, a software package that implements the finite element method for the "full wave" analysis of high-frequency electromagnetic problems; Simplorer, a simulator of complex circuits for the modelling and analysis of mechatronic systems; and Maxwell, a finite element software for the analysis of devices operating at low frequency, which we outline in this note.
One of the main development lines of the Ansoft products is to provide, on one hand, a set of tools for the simulation and verification of single electromagnetic components and, on the other, tools and methodologies that allow the analysis of complex systems, mechatronic and hybrid in general, by implementing co-simulation techniques and the extraction of lumped-parameter models. This approach is summarized in Figure 1, which shows the main Ansoft software packages for the analysis of components and systems.
The co-simulation technique implemented in the Ansoft products makes it possible, in particular, to simulate within the same working environment circuit schematics coupled to finite element models (Figure 2). The core of this type of application is the Simplorer technology, which will be covered in depth in the next issues of the newsletter.
Figure 1: Component design and System design in Ansoft
Figure 2: Ansoft Simplorer: co-simulation of the finite element model, the circuit schematic and the controls
The new Ansoft software solutions complement EMAG, the traditional product for electromagnetic analyses in ANSYS, strengthening its potential in the multiphysics field and introducing a series of vertical solutions and customizations for specific product types. Multiphysics simulation is carried out in the ANSYS Workbench interface, which allows the integration between the Ansoft electromagnetic solvers and the CFD, thermal and structural solvers of ANSYS.
This article presents the solution for the design and simulation of electric motors. The proposed vertical solution uses the Ansoft RMxprt and Maxwell 2D/3D software, together with the ANSYS thermal and structural solvers.
The Maxwell 2D/3D and RMxprt software
Maxwell is an effective and efficient software package for the simulation of electric and magnetic fields. Based on the finite element method, Maxwell makes it possible to analyze the electromagnetic behaviour of structures and components such as electric motors, actuators, transformers, converters and other electrical and electromechanical devices common to the automotive, aerospace and defence sectors and to industry in general. Maxwell also allows different types of static, harmonic and transient analyses to be carried out on complex geometries in two-dimensional and three-dimensional domains.
Figure 3 shows some examples of applications in Maxwell 3D: a transient analysis of a brushless permanent-magnet electric motor (a) and a harmonic analysis of an electric transformer (b).
Figure 3: Some applications in Maxwell 3D: magnetic induction field evaluated by means of a transient analysis on an electric motor (left); ohmic losses evaluated on the primary and secondary windings of an electric transformer (right)
Since its first release, Maxwell has used auto-adaptive meshing techniques which achieve an excellent compromise between the computational cost of the simulation, due to the number of elements needed to discretize a structure, and the accuracy of the proposed solution. Based on energy criteria, these algorithms create and refine the mesh where the model presents critical zones, ensuring the convergence of the solution. In Figure 4 the auto-adaptive meshing method is summarized with a block diagram (a); the mesh produced at the first and at the last convergence step is also shown on a 3D model (b).
Depending on the solver used, Maxwell allows the accurate determination of forces and torques and of capacitance, inductance, resistance and impedance values. The software also makes it easy to evaluate quantities of unquestionable industrial importance such as eddy currents, ohmic losses and iron losses.
Figure 4: How the auto-adaptive meshing algorithm works: block diagram (a); mesh at the first and at the last convergence step (b)
In the field of rotating machine design, Maxwell is complemented by a dedicated tool that shares its interface: RMxprt.
Using the classical theory of electrical machines and the equivalent circuit concept, RMxprt instantaneously computes the behaviour of the machine for the different design alternatives, providing its main characteristics. RMxprt has a simple graphical interface for entering the design parameters related to the rotor and stator geometry, the winding settings and the material characteristics. Figure 5 shows the RMxprt interface; the motor represented is a three-phase induction machine.
Figure 5: Some applications of the RMxprt interface
RMxprt allows a wide range of electric motors to be analyzed exhaustively: synchronous and asynchronous machines, brush-commutated and electronically commutated machines, alternators, etc.
Release 13 of RMxprt, the latest arrival from Ansoft, also features a new slot editor that lets the designer model the geometry of any type of slot. RMxprt takes skin effects and some 3D effects into account, such as the end-winding geometry and the slot skew.
The effects of iron saturation are also taken into account in the calculation. The high potential of the software is completed by an integrated optimizer, Optimetrics, which allows the instantaneous evaluation of different parameter configurations, in order to identify the one that achieves the imposed design objectives while respecting the constraints.
The RMxprt-Maxwell vertical solution for the analysis of electric motors
Once the analytical model has been defined through the interface, RMxprt automatically creates the 2D and 3D finite element models for Maxwell, transferring the geometry, the motion characteristics and the mechanical properties (inertia and resisting torque at the shaft), the material data (BH curve and iron-loss curve), the winding setup and the supply. The model created in this way is ready to be run with the Maxwell transient solvers. Besides the determination of the transients, the finite element solution in Maxwell provides the pointwise field values and a more refined and accurate solution than the analytical one obtained with RMxprt. Figure 6 shows the same model in RMxprt and in Maxwell 2D and 3D.
Figure 6: The RMxprt-Maxwell 2D/3D integration: RMxprt automatically generates the models for Maxwell 2D and Maxwell 3D
The Maxwell-ANSYS coupling for multiphysics analyses
The solution calculated in Maxwell 2D and 3D can be used as a boundary condition for the ANSYS thermal and structural solvers. In a completely automated way, and with no need to implement external scripts, it is possible to perform 2-way thermal analyses between Maxwell and ANSYS thermal: Maxwell makes the power losses available to the ANSYS models and reads back the temperatures output by the thermal analysis in order to recompute the electromagnetic solution. The power losses are exported from Maxwell 2D and 3D models to the ANSYS static and transient solvers. Figure 7 shows an example of this: the iron power losses calculated by Maxwell 2D are mapped onto the 3D geometry in ANSYS. For transient analyses, the losses evaluated at single instants, or as an average computed over a suitable time interval, can be transferred. Both the Maxwell 2D model and the 3D geometry used in Figure 7 were created starting from the RMxprt parametric model.
Figure 7: The iron power losses calculated by Maxwell 2D (left) are mapped onto the ANSYS thermal model (centre) to evaluate the temperatures (right)
As far as structural analyses are concerned, both the volumetric force density and the magnetic surface forces calculated by Maxwell can be imported into the ANSYS structural model. In Figure 8 the volumetric force density calculated by Maxwell 3D is mapped onto the ANSYS structural model, as a boundary condition for a subsequent structural analysis.
Figure 8: The volumetric force density calculated by Maxwell 3D (on the left, the mesh in Maxwell) is mapped onto the ANSYS structural model (right)
As mentioned previously, the multiphysics coupling described here does not require the use of external scripts or macros, since the procedure is implemented entirely by the Maxwell and ANSYS WB interfaces. Moreover, the ANSYS and Maxwell meshes are independent.
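As a conceptual illustration of the two-way exchange automated by the Workbench coupling (the actual workflow requires no scripting; the solver stubs and numerical relations below are hypothetical), the iteration can be pictured as follows:

```python
# Conceptual sketch of the 2-way electromagnetic-thermal exchange that the
# Workbench coupling automates (hypothetical solver stubs, not the ANSYS API).
def em_losses(temperature_field):
    # Stand-in for the Maxwell solve: losses grow as winding resistivity
    # increases with temperature (illustrative relation only).
    return {k: 100.0 * (1.0 + 0.004 * (t - 20.0)) for k, t in temperature_field.items()}

def thermal_solve(losses):
    # Stand-in for the ANSYS thermal solve: temperature scales with local loss.
    return {k: 20.0 + 0.5 * p for k, p in losses.items()}

temps = {"stator": 20.0, "rotor": 20.0}           # initial guess [deg C]
for it in range(20):
    losses = em_losses(temps)                      # Maxwell step: losses at current temperatures
    new_temps = thermal_solve(losses)              # ANSYS thermal step: temperatures from losses
    if max(abs(new_temps[k] - temps[k]) for k in temps) < 0.01:
        break                                      # converged exchange
    temps = new_temps

print(it, temps)
```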
The ANSYS 13 - Maxwell 14 interface
With the upcoming releases of Maxwell 14 and ANSYS 13, the integration between the Ansoft electromagnetic modules and the ANSYS WB simulation environment will be further improved. Maxwell can in fact be launched directly from the ANSYS WB interface, through the icon available in the Analysis System window. In this way Maxwell 2D/3D can be used within the ANSYS Workbench Project Schematic and can share information with other analysis blocks through the usual procedures of the WB interface. Figure 9 shows how the Maxwell application appears in the ANSYS Workbench 13 interface.
Figure 9: Maxwell and other Ansoft products are available in the ANSYS WB interface.
Conclusions
This note has presented the ANSYS solution for the design and verification of electric motors. The proposed procedure is not limited to the electromagnetic domain: thanks to the technologies implemented in ANSYS, it also allows analysis in the thermal and structural fields, in a simple and effective way. The platform in which this technology is realized is ANSYS WB, which in release 13 marks a significant step forward in the complete integration between the main Ansoft technologies and the ANSYS solvers.
For more information:
Emiliano D’Alessandro - EnginSoft
[email protected]
The New ANSYS Frontier Product:
ANSYS EKM (Engineering Knowledge
Manager)
An interesting benchmark experience with Ansaldo Energia
ANSYS Engineering Knowledge Manager (ANSYS
EKM) is a simulation process and data
management (SPDM) software product that
provides solutions for engineers who are
challenged with managing the vast amounts of
data and best practices that are generated in
simulation activities.
ANSYS EKM is a web-based solution with an
easy-to-use and intuitive user interface. The
technology’s capabilities range from simple
archival and management of simulation data to
process automation and capture and
deployment of best practices, version control
and branching, audit traceability and
dependency mapping, advanced search and
retrieval, report generation and simulation
comparison and extensive customization.
Fig. 1 - Web access interface
It is worth noting that ANSYS EKM is a different solution compared to existing PLM/PDM systems. ANSYS EKM is focused on the CAE simulation
aspect of product engineering. This ANSYS product allows
engineers and other users to index, archive, search and
retrieve simulation and supporting files. ANSYS EKM
supports automated meta-data extraction and report
generation for all ANSYS simulation products. It also
supports other commercial CAE solution formats and can
be configured to support internal/legacy codes, if
necessary. ANSYS EKM is presented as a “complementary”
solution to PLM/PDM systems. The ANSYS EKM Datalink
capability allows direct bi-directional interface between
ANSYS EKM and commercial PLM/PDM systems.
Typically, a new analysis project for a turbine blade begins
by searching existing CAE analysis files. The pre-analyzed
CAE database is a good starting point for a new product,
since typical CAE database includes a wealth of
information like simulation intent, materials, solver
settings, results etc that can be leveraged in the analysis
work at hand.
By using a web browser and a password-protected
internet/network connection, ANSYS EKM can be accessed
from any geographical location, even when the user is
travelling.
At Ansaldo Energia, ANSYS EKM was used for managing the
simulation data and defining a simulation process that led
to an efficient and collaborative method for new turbine
blade design and analysis. This method established a
benchmark case study at Ansaldo Energia.
EnginSoft, an ANSYS channel partner in Italy, is
committed to developing benchmark evaluations and case
studies that showcase how CAE can save companies time
and money in their product development activities.
Fig. 2 - EnginSoft Compute Cluster Configuration
Recently, EnginSoft was offered an opportunity to present
the ANSYS EKM solution to Ansaldo Energia for their
Simulation Data and Process Management needs.
The Benchmark at Ansaldo Energia
Ansaldo Energia is a long-time user of software from
ANSYS. The energy company leverages multiple ANSYS
simulation tools to analyze different physical phenomenon
in the process of typical turbine blade analysis.
The indexing and archival capability of ANSYS EKM is very
useful in searching and retrieving existing simulation
files. These files can then be efficiently leveraged for the
current simulation work at hand.
Using ANSYS EKM Studio, a simple simulation workflow
was built for this benchmark case study (Figure 2). The
workflow consisted of multiple nodes or tasks, connected
by transitions, or actions. For example,
geometry /CAD design creation (using
Pro/ENGINEER software), a CFD analysis
(using ANSYS CFX) and a mechanical
analysis (using ANSYS static structural
code) are the three tasks involved in this
workflow. Each task can have a staff
member assigned who is expected to
complete the work – or a machine assigned
(a queue system such as RSF/SGE and compute cluster,
etc.)
This workflow involved four users: a lead analyst, a CAD
expert, a CFD expert and an FEM expert. These staff
members each work in different departments, sometimes
at geographically different locations. They usually
communicate with each other using conventional
communication methods, such as telephones, emails, etc.
Using ANSYS EKM Studio, the lead
analyst creates the workflow and sets
up the project structure and access
permissions for the other assignees.
This helps ensure that appropriate data
access is granted to the users and they
can work in a collaborative way,
without affecting each others’ work.
When the lead analyst starts a
simulation process using this workflow,
the next-in-line assignee automatically
receives an email notification via
ANSYS EKM about the work item or task
that is assigned to him. This work item
includes all necessary information
about the task. The user can then
complete the work and mark it as done
in ANSYS EKM. This generates an email
to the next assignee, and the
automated communication goes on.
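The hand-off mechanism described above can be pictured with a small generic sketch; this is not the ANSYS EKM API, only an illustration of the task-chain idea, and the task names and assignees are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    assignee: str
    done: bool = False

@dataclass
class Workflow:
    tasks: list = field(default_factory=list)

    def start(self):
        self._notify(self.tasks[0])

    def complete(self, name: str):
        for i, t in enumerate(self.tasks):
            if t.name == name:
                t.done = True
                if i + 1 < len(self.tasks):
                    self._notify(self.tasks[i + 1])   # hand off to the next assignee
                return

    def _notify(self, task: Task):
        # Stand-in for the e-mail notification sent by the data-management system.
        print(f"notify {task.assignee}: work item '{task.name}' assigned")

wf = Workflow([Task("CAD geometry", "cad.expert"),
               Task("CFD analysis", "cfd.expert"),
               Task("Structural analysis", "fem.expert")])
wf.start()
wf.complete("CAD geometry")
```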
Fig. 3 - Total deformation and equivalent stress results for the new turbine blade
The simulation process was made very efficient and
productive by this automated communication. The actual
actions of the user were captured in the ANSYS EKM
process status, a tool helpful in reviewing or auditing the
process at a later date, if needed.
Through such coordination, the software can reduce
downtime associated with communication and data
transfer between different users. Furthermore, it was
possible to create an interface between ANSYS EKM and
the company’s server and cluster; the benchmark
development was organized between the different
EnginSoft headquarters staff, depending on their individual capabilities (Figure 2).
By using automated or batch processing nodes in the
workflow, it was possible to execute the simulation in
batch mode on EnginSoft server and cluster. At the end of
the batch execution, simulation result files were
automatically uploaded back to ANSYS EKM. This helped in
using these files in subsequent nodes/actions in the
workflow.
Based on the server cluster configuration and available
simulation software, the ANSYS EKM benchmark workflow
was executed in batch mode to complete the remote
simulation runs.
At the end of the benchmark evaluation, Ansaldo Energia
concluded that:
• Prior analysis files were easily searched and retrieved
using the advanced search capability in ANSYS EKM.
The turn-around time was quick and the search process
was efficient.
• Since it was easy to find pre-existing solutions and
reuse them, the need to perform a repeat analysis was
minimized. This resulted in better utilization of
resources.
• The overall design and analysis process was well
coordinated. The various team members rated the
ability to collaborate in the simulation process as
excellent.
Ansaldo Energia staff expressed an interest in improving
ANSYS EKM so it could execute multi-objective optimization, leveraging the parametric analysis capability of the
software used in this workflow.
Andrea Pancrazzi - EnginSoft
[email protected]
Ing. Daniele Calsolaro - EnginSoft
[email protected]
Comment by Shantanu Bhide (Product Manager, ANSYS EKM), ANSYS, Inc.:
The ANSYS Engineering Knowledge Manager (EKM) is a
comprehensive solution for simulation-based process and
data management challenges. ANSYS EKM provides solutions and benefits to all levels of the enterprise, from the
individual engineer interested in spending less time handling data and more time focusing on true engineering efforts to the entire organization looking for increased productivity in all aspects of its simulation activities. It enables the enterprise to address the many critical issues associated with simulation data including backup and archival, traceability and audit trail, process automation, collaboration and capture of engineering expertise, and IP
protection.
ESAComp Version 4.1 – A fundamental tool for the design of composite structures
ESAComp is a software package developed specifically for the analysis and design of composite structures. It can also effectively support the conceptual and preliminary design phases, taking into account the many requirements that characterize a project involving complex composite structures.
ESAComp provides tools and capabilities that can be exploited both in a stand-alone manner and as a support to and integration of the functions of the most widely used finite element software packages. ESAComp also includes a constantly updated database, based on information coming from complementary sources, including industrial suppliers, which contains all the material properties needed for the solution of engineering problems.
The new version 4.1 of ESAComp extends and improves all the previous capabilities, making it the ideal and necessary tool for the design of composite structures. In particular, the new version introduces tools and processes that make it possible to increase the development speed of a project and to reach better-performing solutions, such as the module for the pre-sizing of structural stiffeners on flat or curved plates (see Figure 1).
Figure 1 – Specific module for the sizing and analysis of composite structural stiffeners
In a design process that aims to solve each specific phase for a composite structure effectively and exhaustively, integrating CAD tools and finite element codes such as ANSYS ACP – ANSYS Composite PrePost, vertical and integrated tools are needed, and ESAComp constitutes one of the main and specific elements of this logical process (see Figure 2). Just as ANSYS ACP makes it possible to verify the manufacturability of geometric composite components through draping analysis and to solve the detailed structural problem, ESAComp is entrusted with the following phases: conceptual design, material selection through specific comparison logics, and preliminary design of the individual lamination plans (see Figure 3). Through the complete integration and the design iterations, data can be exchanged between ANSYS ACP and ESAComp.
Figure 2 – Logical design process for composite structures
Figure 3 – Specific functions for the solution of complex problems and the satisfaction of different requirements
Finally, a summary of the main new features introduced in the new version of ESAComp is given on the following pages.
ESAComp 4.1: New Features
Version 4.1.086 for MS Windows
NEW ANALYSIS CAPABILITIES
• Curved plates extend the old analysis capability for
rectangular plates to singly curved plates defined by
the plate dimensions and the radius of curvature.
Stiffeners can be placed in the axial direction. The
boundary conditions can be independently defined for
each edge of the plate. Linear-static load response and
failure analysis can be performed under pressure and
point loads. Buckling and natural frequency analyses
are introduced in future ESAComp releases. As a special
case of curved plates, flat plates can also be defined.
The new implementation will fully replace the old plate
analysis in the future.
• Hat stiffeners can be defined for curved plates besides
the beam type stiffeners supported by earlier ESAComp
versions. The hat stiffener types include bonded and
integral stiffeners. The capabilities for defining both
stiffener types are extensive. For instance,
besides the hat laminate there are possibilities
to define additional reinforcing layers for the
sides and top part of the hat. In the analyses
hat stiffeners are modeled using shell
elements.
• The Cylindrical shell add-on module allows
analyses of cylinder and tube-like structures.
The cylinder may have a constant diameter or
it may be conical. The laminate lay-up may
vary in the axial direction by assigning
different laminates for ring type cylinder
segments. The boundary conditions at the
ends of the structure are defined using an
innovative and simple-to-use approach. Forces
and moments can be applied at the ends of the
cylinder. In addition, a pressure load or inertial loads
due to linear acceleration or rotation may be applied.
The analysis types include static load response and
failure, as well as buckling and natural frequency
analyses based on linear eigenvalue approach.
• The Stiffened cylindrical shell add-on module is a
further extension of the cylindrical shell module. Beam
type stiffeners can be placed in the axial and
circumferential directions. The locations are specified
independently for each stiffener. The stiffeners may be
on the inside or outside of the cylinder. The analysis
possibilities are identical to the standard cylindrical
shell module.
• The Elmer FE solver by CSC, The Finnish IT Center for
Science (www.csc.fi/elmer) is now included as a
standard module in the ESAComp distribution. It is
used for realizing the curved plate and cylindrical shell
analyses and it also provides a basis for introducing
advanced nonlinear analyses in the future ESAComp
releases.
• A new 3D result viewer is introduced for viewing the
results of curved plate and cylindrical shell analyses.
Versatile capabilities for model rotation and zooming
are included. The selected result item can be viewed as
a contour plot with optional annotations for failure
modes and critical layers. The features include also
deformed plots and animation of eigenmodes. For a
selected element, layer level stresses, strains and
reserve factors can be viewed as layer charts and in
numeric format.
NEW DATA EXCHANGE CAPABILITIES
• FE export to ANSYS Composite PrepPost (ACP). ACP
supports data exchange with ESAComp using the
ESAComp XML format. The new export
capability improves the possibilities by
writing an ACP specific Python script which
can be simply copied and pasted to the ACP
command prompt. Both ply materials and
laminate lay-ups can be exported. Laminate
lay-ups from ESAComp can be interpreted as
Sub Laminates or Stack-ups in ACP.
• FE export to ANSYS Workbench allows
creation of an ANSYS specific XML file that
can be read in by the WB Engineering Data
module. Isotropic and orthotropic ply
materials can be exported.
• FE export to ComPoLyX allows transfer of
ESAComp ply material data in the form of a
ComPoLyX Python script. The typically
incomplete material data from FE models can
be completed with the material description
from ESAComp before performing advanced
failure analyses in ComPoLyX.
• ABAQUS export has been improved for ABAQUS SHELL
elements through the use of ESAComp extension
variables (Edit -> Extension variables...). For each ply
of the active case, an FEA related material ID can be
specified. Similarly, for each laminate an FEA related
section type ID and reference plane data can be set.
Consequently, these ID's are used when export is made.
• Support for unit systems in ESAComp XML. In the
earlier versions all ESAComp XML data exchange was
done in basic SI units. Now, the FE import/export unit
options can be used for selecting the unit system. If
imported XML includes a header indicating the unit
system, this information is used instead of the
selected unit options.
DATA BANK UPDATE
• The ESAComp Data Bank has been updated extensively.
The update covers the following material types: foam
cores, honeycomb cores, other cores, carbon fibers,
glass fibers, typical aramid fibers, polyester resins,
vinylester resins, some epoxy
resins including typical classes,
homogeneous plies, typical FRP,
CSM, Spray up rovings, MMC,
and plywood.
LICENSING AND INSTALLATION
• The RLM license manager by Reprise Software, Inc. has replaced the earlier-used FLEXlm in ESAComp licensing. To the IT administrator and ESAComp end-user, RLM provides a user-friendly web browser interface for configuring the license server and for monitoring license usage. Node-locked licenses are handled with a very simple license-file-based approach. No license server is needed for node-locked licenses. Old license files are not compatible with the new licensing system. ESAComp users that are eligible for the version upgrade will receive new license files.
• Along with the new licensing, version numbers are now based on the release date.
In the license file, the highest supported version
number is shown, for example, as “2010.12”. This
indicates that the license is valid for all versions
released in December 2010 or before that. The release
date based version number is shown on the ESAComp
start-up screen besides the “normal version number”,
e.g. “4.1.086 (2010.06)”. When a customer renews
maintenance, a new license file with the updated
version number (maintenance end date) is provided.
This approach increases transparency of the licensing
and makes it easy to take in use new software upgrades
when available.
• The new installation system allows flexible installation of ESAComp and the RLM license server from the same installation package. The new installation procedure supports multiple users on the same PC. The user-specific ESAComp files are by default placed under each user's home directory ("$USERPROFILE\ESAComp\ …").
• In addition, many smaller enhancements have been made.
For more information:
Marco Perillo - EnginSoft
[email protected]
Distinctive features of the Coldform® software
In June, Transvalor released the 2010 version of Coldform, the software dedicated to the simulation of the cold forming of fasteners and small metal parts.
1. Integrated multi-parameter, multi-objective OPTIMIZER: once a sample sequence has been set up, Coldform is able to identify the best compromise of the parameters (wire dimensions and position) to obtain the best result (absence of folds, complete filling, lowest press load, ...). For more complex configurations, it can be interfaced with our optimization software modeFRONTIER®, which is able to drive several software packages and link them together (e.g. entering the CAD system and modifying parametric details such as lengths and curvatures, then transferring the geometries to Coldform for the calculation).
2. MATERIAL DATABASE with about 900 alloys (steels, stainless steels, aluminium, copper-brass, titanium, nickel, ...), with flow curves obtained from experimental tests.
3. Extremely complete KINEMATICS DATABASE, with all the standard presses (mechanical, hydraulic, double-toggle, link-drive, energy-driven, ...) and the possibility of defining arbitrary motion laws (single and multiple rotations, translations, arbitrary combinations) for each tool. Each station can be linked to the previous ones, taking all the intermediate transfers into account.
4. Multi-user LICENSE: in the standard configuration, one computing machine and an arbitrary number of stations, within the same plant, for the preparation of the calculations and the analysis of the results. Floating licenses or ad-hoc configurations are possible.
5. MULTI-PROCESSOR since its birth in 1994: computation times can be reduced significantly by using all the available cores. The license can be installed on a standard PC and then extended to multi-core / multi-processor PCs (e.g. dual quad-core) or to Windows or Linux clusters with up to 32 cores.
Analysis types that can be set up:
• very fast 2D analyses for axisymmetric configurations (forward, backward or combined extrusion phases, head upsetting);
• 3D analyses for more complex configurations (upsetting of hexagonal heads, creation of the drive recess and of under-head details, ...);
• possibility of transferring the results from 2D to 3D while keeping the "deformation history" of the product;
• possibility of chaining the operations within a single simulation, with automatic transfer of the results between the phases.
Accuracy levels that can be set:
1. analysis limited to the workpiece, with the assumption of "rigid" dies;
2. "uncoupled" analysis of the dies: calculation of the workpiece and evaluation of the elastic deformation of the dies subjected to the load transmitted by the deforming workpiece;
3. "coupled" analysis with joint calculation of workpiece and dies.
For cases 2 and 3, pre-loaded (shrink-fitted) configurations can be set up by defining the interference, also for several consecutive rings (tool stack).
In addition:
• wire drawing analysis;
• flash trimming or punching analysis, with configurations possibly floating on springs or cushions;
• thread rolling analysis;
• analysis of the quenching heat treatment;
• in-service behaviour analysis: screw-bolt (or fasteners in a broad sense) tightening tests and calculation of the stresses and strains on the screw, the bolt and the objects to be fastened.
The main results obtainable from the simulation are:
For the workpiece:
• contact analysis and identification of underfilled areas;
• fold analysis, with evaluation of their origin and evolution in the part;
• analysis of the cracks generated by excessive stretching of the material;
• thermal analysis of the heating due to internal shearing, friction against the dies and pressure peaks;
• analysis of the springback on die opening or on extraction from the dies.
For the dies:
• analysis of the most highly stressed areas at risk of failure;
• computation of the thermo-mechanical wear of the dies;
• analysis of the die deflection under the load transmitted by the deforming part.
For the forming press:
• computation of the press load and of its distribution over each tool (dies, punches, pins, matrices);
• computation of the load centre of the die set, to evaluate offsets with respect to the forming centre;
• calibration of the force absorbed in each forming station, so as to avoid unbalancing the press, possibly also taking the bending stiffness of the machine into account.
Cost Estimation and Formability Assessment
Starting from the part to be produced, the shape and dimensions of the blank are obtained in a few minutes, together with a computation of thicknesses and costs.
The most complete versions allow a formability assessment, highlighting splits or wrinkles, which can be eliminated by acting on the forming parameters (blank holder, locating holes, curved support surfaces, ...).
Identify, in a few minutes, the design changes that allow a 10 to 15% reduction in component cost!
For more information:
Marcello Gabrielli - EnginSoft
[email protected]
For all products:
• the imported geometries (.iges, .vda, .step) can be edited;
• automatic meshing in a few moments;
• complete materials database with flow curves and FLD;
• reports generated automatically in .html and .xls format;
• a common interface throughout the suite;
• very fast analyses: the result is obtained in a few minutes;
• easier analysis of the results.
For CATIA and SolidWorks:
• the dedicated versions for CATIA and SolidWorks are associative and regenerative: every change introduced in the model is instantly carried over into the formability simulation.
Ask now for a free trial of the product!
www.enginsoft.it/link/testfti
Pushing Reservoir Data Handling to New Frontiers

Developed by Engineering Simulation and Scientific Software (ESSS), Kraken is a reservoir simulation post-processor that features a modern and powerful user interface designed for the visualization and handling of multiple scenarios and data sets.
This functional post-processor natively reads and
interprets data from ECLIPSE, IMEX, UTCHEM and several
other simulators. By properly integrating numerical
solutions, times and units, Kraken provides unique means
of analyzing and comparing multiple simulations.
Additionally, it is capable of handling grid and well data,
while supporting structured, unstructured, and reservoir
type grids.
Kraken’s power and functionality is built upon ESSS’
expertise in developing and implementing numerical
solutions for companies such as Petrobras, Shell,
ExxonMobil, Total, Statoil, Chevron and Maersk. A few
notable examples of these customized applications are:
• SourSimRL: a simulator currently in use by most of the major oil and gas companies to avoid the formation of hydrogen sulfide gas (H2S) during the secondary oil recovery process.
• Cyclope: a powerful volumetric mesh converter for
characterization and simulation of reservoirs.
• SCBR 2.0: a complete simulation tool for analysis of
chronic risk to human health, considering the multiple
transportation routes of contaminants: air, ground,
underground aquifers, and surface water.
• PWDa: a Pressure While Drilling (PWD) simulation, manipulation and data analysis tool. It identifies the key phenomena that impact the annular pressure during the drilling of petroleum reservoir wells.
It is also important to mention the collaborative development between ESSS and the Center for Petroleum and Geosystems Engineering (CPGE), in an effort to promote the integration of Kraken and the reservoir simulators GPAS (General Purpose Adaptive Simulator) and UTCHEM (University of Texas Chemical Compositional Simulator).
"We have been working with ESSS for two years on the development of pre- and post-processing software for our reservoir simulators", said Kamy Sepehrnoori, professor of Petroleum Engineering at CPGE. "We feel that ESSS has done an excellent job in working with us and performing the various tasks for this project", he added.
Kraken environment
1 - Visualization
3D Visualization of any type of grid, from traditional
reservoir simulation to complex unstructured grids
Kraken features a complete set of visualization processes
for extracting, cutting and plotting the information
obtained from the grid solution. It allows the user to
select a region of interest within the grid, by using an IJK
block or a set of cells with predefined property values.
Fig. 1 - This figure shows three examples of visualization processes to aid
fluid behavior analysis. From top to bottom: streamlines from the injector
wells, a plane cut between two wells and an iso-surface tracing the water
saturation front.
Overall information such as average pressure, minimum
and maximum fluid saturation plots, water saturation
front along the transient solution, among others, can be
obtained for the entire grid or specific regions of interest.
2 - Plotting
XY Plots are fully interactive and linked to the 3D visualization for easy data transition from one visualization context to another. A particular well can be selected and its production curves evaluated within an XY Plot window. To inspect numerical values and create additional information, formulas may be inserted using any data set from the model.
Fig. 2 - This figure illustrates the link between the 3D object selection and the well's production curves visualized using an XY plot and a spreadsheet.
3 - Workflow automation
Easily record a sequence of steps using a macro tool for reproducing daily tasks. Extend the behavior and the data computations using a high-level Python API. Users can create additional grid properties and curves using their own routines.
Fig. 3 - Kraken interface for recording and playing back recorded macros.

4 - Smart properties management
Kraken assembles a complete description of the simulation data, providing information such as the components, phases and conditions of all properties. Units are handled seamlessly, which allows different metrics to be changed or reconciled at no additional computational cost.
Fig. 4 - Kraken editor for listing the grid solution array information and for the direct selection of the units of a given XY plot axis.

5 - Report generation
Kraken's features allow image and data exchange with other applications. Simple operations such as Copy and Paste can be used to transfer images from Kraken visualization windows to other applications, allowing the user to quickly insert a visual analysis into documents and reports. Data values may also be copied from XY Plot curves and pasted directly into spreadsheet applications such as Microsoft Excel and OpenOffice Calc. Exporting documents in PDF, HTML or RTF format is part of Kraken's framework, and additional features allow the creation of customized reports by appending images from the available views.
Fig. 5 - Kraken panel for creating customized reports directly from the visualization windows.

Simulation solutions for the subsurface oil and gas sector
ESSS has been working closely with major oil and gas companies to develop technologies for subsurface applications for over 15 years. As the leading CAE solution provider in South America, ESSS has developed a wide range of computational solutions to meet the need for reliable software applications in several areas, including:
• Reservoir Modeling and Simulation
• Basin Modeling and Simulation
• Well Data Interpretation
• Microstructural Characterization
Cor Kuijvenhoven, Sr. Production Chemist at Shell International Exploration and Production, says: "ESSS has a complete package of technology and expertise to build applications for the Oil and Gas Industry, from chemical and microbiological modeling to numerical simulation and post-processing."
Likewise, in the field of reservoir simulation, ESSS' expertise allows them to build applications for upscaling and geological uncertainty analyses, automatic history matching, and production scheduling and steam injection optimization.
"The integration environment for engineering technologies developed by ESSS has greatly simplified the simultaneous use of various commercial and in-house tools for petroleum reservoir analysis", states Régis K. Romeu, Reservoir Engineer Consultant at Petrobras.
6 - Rich data comparison
A case comparison can be created to display the differences between two models. Kraken handles the merging of time-steps, units and property identification, regardless of the type of simulator and the unit system used in each model. As a result of the operation, the new simulation model contains all of the features of a standard model.
Fig. 6 - Simulation comparison example: the water saturation profiles from models A and B are displayed as a side-by-side difference of the two grids (A-B), with the production curves and their differences in an XY plot.

7 - Powerful inspection of the available data
Local inspection of grid information is available by interacting with the 3D visualization window. This feature is composed of a visual representation of the selected data and a floating panel with detailed topological and geometric information, solution arrays and non-neighboring connections, which provides a straightforward way of evaluating the grid solution arrays around a selected cell.
Fig. 7 - Grid data inspection composed of a 3D visual representation of the selected cells and a list of detailed information in a floating panel.
XY curve plots can be obtained from the selection and used to trace the transient behavior of a specified grid property. Changes in the selected cell property are updated in the plotted XY curves, thus creating an interactive environment for local transient analysis.

For more information please contact:
[email protected]
Contact in Italy: Livio Furlan - EnginSoft
[email protected]

ESSS and EnginSoft broaden CAE portfolios through partnership
ESSS and EnginSoft are highly innovative and well-established Computer Aided Engineering (CAE) solution leaders in South America and Europe, respectively. In light of their shared technical expertise, objectives and engineering services, the two companies have recently created a corporate partnership to complement and expand their respective CAE solution portfolios. This partnership entails a diverse range of activities, from project collaborations and engineer exchange programs to a joint office in Houston, Texas.
A technical personnel exchange program was carried
out at the end of 2009 as the initial step of the
ESSS/EnginSoft partnership. The primary objective of
this program was to foster a collaborative exchange of
expertise between ESSS and EnginSoft personnel across
the various areas of their CAE portfolios.
ESSS and EnginSoft engineers relocated to the
EnginSoft-Padova and ESSS-Rio de Janeiro sites,
respectively, for a period of three months. Through
project collaboration and interactions with other
engineers and technical managers, a very promising
and warm collaborative climate was developed.
Moreover, the successful outcome of this program
serves as further incentive for future personnel
exchanges.
A natural step forward in the ESSS/EnginSoft
collaboration is the establishment of a joint
ESSS/EnginSoft office in Houston, geared toward
providing CAE consulting services and software sales to
the Houston area, with a specific focus on the Oil & Gas
and Off-Shore industry sectors. A synergistic effort
between ESSS and EnginSoft will lead to a combined
portfolio of expertise which includes Finite Element
Analysis (FEA), Computational Fluid Dynamics (CFD),
Multidisciplinary Optimization (MDO) and software
customization for Geology, Reservoir Engineering and
Microstructural Characterization.
The main strength of the joint operation lies in ESSS' 15 years of expertise as the leading computational simulation solution provider in South America, with
offices in Brazil, Argentina, Chile and Peru, and
partnerships with various universities and research
centers; and EnginSoft’s 25 years of experience as a
leading European Computer Aided Engineering (CAE)
service provider with several offices in Italy, across
Europe, and partnerships with industry and
universities.
A Simple Parallel Implementation of a
FEM Solver in Scilab
Nowadays many simulation packages can take advantage of multi-processor and multi-core computers in order to reduce the solution time of a given task. This not only cuts the long waits that were typical in the past, but also allows the user to tackle larger problems, perform more detailed analyses and examine a greater number of scenarios.
Engineers and scientists who are involved in simulation activities are generally familiar with the term "High Performance Computing" (HPC), coined to indicate the ability to use powerful machines to solve hard computational problems efficiently.
One of the most important keywords related to HPC is certainly parallelism. The total execution time is reduced if the original problem can be divided into a number of subtasks which are then tackled concurrently, that is, in parallel, by a number of cores.
To take full advantage of this strategy, three conditions have to be satisfied. The first is that the problem we want to solve has to exhibit a parallel nature or, in other words, it should be possible to reformulate it as smaller problems which can be solved simultaneously and whose solutions, suitably combined, give the solution of the original large problem. Secondly, the software has to be organized and written to exploit this parallel nature, so the serial version of the code typically has to be modified where necessary to this aim. Finally, we need the right hardware to support this strategy. Of course, if one of these three conditions is not fulfilled, the benefits could be poor or, in the worst case, non-existent. It is worth mentioning that not all problems arising in engineering can be solved effectively with a parallel approach, if their associated numerical solution procedure is intrinsically serial.
One parameter which is usually reported in the technical literature to judge the quality of a parallel implementation of an algorithm or a procedure is the so-called speedup, which is simply defined as the ratio between the execution time on a single-core machine and the same quantity on a multicore machine, S = T1/Tp, where p is the number of cores used in the computation. Ideally, we would like to have a speedup not lower than the number of cores: unfortunately this does not happen, mainly (but not only) because some serial operations have to be performed during the solution. In this context it is interesting to mention Amdahl's law, which bounds the theoretical speedup that can be obtained, given the fraction of serial operations f ∈ [0,1] that has to be globally performed during the run. It can be written as:

S(p) = 1 / (f + (1 − f)/p)
It can easily be understood that the speedup S is strongly (and badly) influenced by f rather than by p. If we imagine an ideal computer with an infinite number of cores (p = ∞) and implement an algorithm with just 5% of operations that have to be performed serially (f = 0.05), we get a speedup of at most 20. This clearly means that it is worth investing in algorithms rather than simply increasing the number of cores.
In the past, criticism has been levelled at this law, saying that it is too pessimistic and unable to correctly estimate the real theoretical speedup: in any case, we think that the most important lesson to learn is that a good algorithm is much more important than a good machine.
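As a quick numerical check of this bound, the expression above can be evaluated in a couple of lines of Scilab; the small function below is only an illustration of the arithmetic, not part of the solver discussed later.

```scilab
// Amdahl bound: S(p) = 1 / (f + (1 - f)/p)
// f = fraction of intrinsically serial operations, p = number of cores
function S = amdahl_bound(f, p)
    S = 1 ./ (f + (1 - f) ./ p);
endfunction

p = [1 2 4 8 16 64 1e6];        // 1e6 plays the role of "infinitely many" cores
disp(amdahl_bound(0.05, p));    // tends to 1/0.05 = 20 as p grows
```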
As said before, many commercial software packages have offered the possibility of running parallel solutions for many years. With a simple internet search it is quite easy to find benchmarks which advertise the high performance and high speedup obtained using various architectures and solving different problems. All these remarkable results are usually the fruit of very hard code-implementation work.
Probably the most widely used communication protocols for implementing parallel programs, through appropriate libraries, are MPI (Message Passing Interface), PVM (Parallel Virtual Machine) and OpenMP (Open Multi-Processing): there are certainly other protocols, and also variants of the aforementioned ones, such as MPICH2 or HP-MPI, which have gained the attention of programmers for some of their features.
As the reader has probably noticed, all the acronyms listed above contain the letter "P". With a bit of irony we could say that it always stands for "problems", in view of the difficulties that a programmer has to tackle when trying to implement a parallel program using such libraries. Actually, the use of these libraries is often a matter for expert programmers only, and they cannot be easily accessed by engineers or scientists who simply want to cut the solution time of their applications.
In this paper we would like to show that a naïve but effective parallel application can be implemented without a great programming effort and without using any of the above-mentioned protocols. We used the Scilab platform (see [1]) because it is free and provides a very easy and fast way to implement applications; on the other hand, the fact that Scilab scripts are essentially interpreted rather than compiled is paid for with code that is not particularly fast in absolute terms. It is however possible to rewrite all the scripts in a compiled language, such as C, to get faster run-time code. The main objective of this work is actually to show that it is possible to implement a parallel application and solve large problems efficiently (i.e. with a good speedup) in a simple way, rather than to propose a super-fast application.
To this aim, we chose the stationary heat transfer equation written for a three-dimensional domain, together with appropriate boundary conditions. A standard Galerkin finite element procedure (see [4]) is then adopted and implemented in Scilab in such a way as to allow parallel execution. This represents a sort of elementary "brick" for us: more complex problems involving partial differential equations can be solved starting from here, adding new features whenever necessary.

The stationary heat transfer equation
As mentioned above, we decided to consider the stationary and linear heat transfer problem for a three-dimensional domain Ω. Usually it is written as

∇·(k ∇T) + f = 0 in Ω    [1]

together with Dirichlet, Neumann and Robin boundary conditions, which can be expressed as

T = T̄ on Γ_D,    −k ∂T/∂n = q̄ on Γ_N,    −k ∂T/∂n = h (T − T_env) on Γ_R    [2]

The conductivity k is considered constant, while f represents an internal heat source. On some portions of the domain boundary we can have imposed temperatures T̄, given fluxes q̄ and also convection with an environment characterized by a temperature T_env and a convection coefficient h.
The discretized version of the Galerkin formulation of the above equations leads to a system of linear equations which can be shortly written as

[K]{T} = {F}    [3]

The matrix of coefficients [K] is symmetric, positive definite and sparse, which means that a great amount of its terms are identically zero. The vectors {T} and {F} collect the unknown nodal temperatures and the nodal equivalent loads.
If large problems have to be solved, it immediately appears that an effective strategy to store the matrix terms is needed. In our case we decided to store in memory the non-zero terms row by row in a single, suitably allocated vector, together with their column positions: in this way the terms are also accessed efficiently. We decided not to take advantage of the symmetry of the matrix (actually, only the upper or lower part could be stored, requiring only half as much storage) in order to simplify the implementation a little. Moreover, this allows us to potentially reuse the same pieces of code, without any change, for the solution of problems which lead to a non-symmetric coefficient matrix.
The matrix coefficients, as well as the known vector, can be computed in a standard way, by integrating known quantities over the finite elements of the mesh. Without any loss of generality, we decided to use only ten-noded tetrahedral elements with quadratic shape functions (see [4] for more details on finite elements).
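The row-by-row storage described above is essentially a compressed sparse row (CSR) layout. A minimal Scilab sketch of such a layout, and of the matrix-vector product built on it, is shown below; the variable names (val, col, rowptr) are our own illustrative choices and are not taken from the actual solver.

```scilab
// Row-by-row storage of a sparse matrix:
//   val    - non-zero values, stored row by row
//   col    - column index of each stored value
//   rowptr - position in val/col where each row starts (length n+1)
function y = csr_matvec(val, col, rowptr, x)
    n = length(rowptr) - 1;
    y = zeros(n, 1);
    for i = 1:n
        for k = rowptr(i):rowptr(i+1)-1
            y(i) = y(i) + val(k) * x(col(k));
        end
    end
endfunction

// Small example: the 3x3 matrix [4 -1 0; -1 4 -1; 0 -1 4]
val    = [4 -1 -1 4 -1 -1 4];
col    = [1  2  1 2  3  2 3];
rowptr = [1 3 6 8];
disp(csr_matvec(val, col, rowptr, [1; 1; 1]));   // expected result: [3; 2; 3]
```

In the parallel solver each process only needs to hold the rows assigned to it, so the three arrays would contain the local portion of the matrix only.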
The solution of the resulting system is performed through the preconditioned conjugate gradient (PCG) method (see [5] for details). In Figure 1 the pseudo-code of a classical PCG scheme is reported: the reader should observe that the solution process firstly requires computing the product between the preconditioner and a given vector (*) and secondly the product between the system matrix and another known vector (**). This means that the coefficient matrix (and also the preconditioner) is not explicitly required, as it is when using direct solvers, and it need not be directly computed and stored. This is a key feature of all iterative solvers and we can certainly take advantage of it when developing a parallel code.
Fig. 1 - The pseudo-code for a classical preconditioned conjugate gradient solver. It can be noted that during the iterative solution it is required to compute two matrix-vector products involving the preconditioner M (*) and the coefficient matrix K (**).
The basic idea is to partition the mesh in such a way that, more or less, the same number of elements is assigned to each core (process) involved in the solution, so as to have a well-balanced job and therefore fully exploit the potential of the machine. In this way each core fills a portion of the matrix and is able to compute some of the terms resulting from the matrix-vector product, when required. It is quite clear that some coefficient matrix rows will be split over two or more processes, since some nodes are shared by elements assigned to different cores. The number of overlapping rows resulting from this strongly depends on the way we partition the mesh. The ideal partition produces the minimum overlap, leading to the smallest number of non-zero terms that each process has to compute and store. In other words, the efficiency of the solution process can depend on how we partition the mesh. To solve this problem, which is itself a hard one, we decided to use the partition functionality of gmsh (see [2]), which allows the user to partition a mesh using the well-known METIS library (see [3]), explicitly written to solve this kind of problem. The resulting mesh partition is certainly close to the best one, and our solver uses it when distributing the elements to the parallel processes. An example of a mesh partition performed with METIS is plotted in Figure 3, where a car model mesh is considered: the elements have been drawn with different colors according to their partition. This kind of partition is obviously suitable when the problem is run on a four-core machine.
As a result, we can imagine that the coefficient matrix is split row-wise, with each portion filled by a different process running concurrently with the others: the matrix-vector products required by the PCG can then also be computed in parallel by different processes. The same approach can obviously be extended to the preconditioner and to the post-processing of the element results.
For the sake of simplicity we decided to use a Jacobi preconditioner: this means that the matrix [M] in Figure 1 is just the main diagonal of the coefficient matrix. This choice allows us to implement a parallel version of the preconditioner trivially, but it certainly produces poor results in terms of convergence rate. The number of iterations required to converge is usually quite high and could be reduced by adopting a more effective strategy. For this reason the solver will hereafter be referred to as JCG rather than PCG.
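To make the algorithm of Figure 1 concrete, the following is a compact, purely serial Scilab sketch of a Jacobi-preconditioned conjugate gradient which only needs a routine returning the product K*x (passed in as a function) and the main diagonal of K; it is an illustration written for this article, not the actual code of the solver described below.

```scilab
// Matrix-free Jacobi-preconditioned conjugate gradient (JCG).
// matvec(x) must return K*x; d holds the main diagonal of K (the Jacobi preconditioner).
function [x, it] = jcg(matvec, d, b, tol, maxit)
    x  = zeros(b);               // starting guess
    r  = b - matvec(x);          // initial residual
    z  = r ./ d;                 // apply the Jacobi preconditioner (*)
    p  = z;
    rz = sum(r .* z);
    for it = 1:maxit
        Kp    = matvec(p);       // matrix-vector product (**), parallelized in the real solver
        alpha = rz / sum(p .* Kp);
        x     = x + alpha * p;
        r     = r - alpha * Kp;
        if norm(r) / norm(b) < tol then
            break;
        end
        z     = r ./ d;
        rznew = sum(r .* z);
        p     = z + (rznew / rz) * p;
        rz    = rznew;
    end
endfunction
```

Conceptually, the only change in the parallel version is that the two starred operations are carried out by the slave processes, each on its own rows, with the master summing the partial results.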
A brief description of the solver structure
In this section we would like to briefly describe the structure of our software and highlight some key points. The Scilab 5.2.2 platform has been used to develop our FEM solver: we only used the tools available in the standard distribution (i.e. avoiding external libraries) to facilitate the portability of the resulting application and, eventually, to allow a fast translation to a compiled language.
A master process governs the run. It firstly reads the mesh partition, organizes the data and then starts a certain number of slave parallel processes, according to the user request. At this point the parallel processes read the mesh file and load the information needed to fill their own portion of the coefficient matrix and known vector.
Once the slave processes have finished their work, the master starts the JCG solver: when a matrix-vector product has to be computed, the master process asks the slave processes to compute their contributions, which are then appropriately summed together by the master.
When the JCG reaches the required tolerance, the post-processing phase (e.g. the computation of fluxes) is performed in parallel by the slave processes. The solution ends with the writing of the results to a text file.
A communication protocol is mandatory to manage the run. We decided to use binary files to broadcast and receive information from the master to the slave processes and vice versa. The slave processes wait for the binary files and then read them: once the task (e.g. the matrix-vector product) has been performed, they write the result to another binary file, which is read back by the master process.
This way of managing communication is very simple but certainly not the best from an efficiency point of view: writing and reading files, even binary ones, can take a non-negligible time. Moreover, the speedup is certainly penalized by this approach. All the models proposed in the following have been solved on a 64-bit Linux machine equipped with 8 cores and 16 GB of shared memory. It has to be said that our solver does not necessarily require such a powerful machine to run: the code was actually written and run on an ordinary 32-bit dual-core Windows notebook.
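The binary-file handshake just described can be pictured with a few lines of Scilab; the file names, the polling loop and the helpers below are purely illustrative assumptions made for this sketch (reusing the csr_matvec routine sketched earlier), not the actual protocol of the solver.

```scilab
// Master side: send the current vector p to slave number k through a binary file.
function send_vector(p, k)
    fd = mopen("task_" + string(k) + ".bin", "wb");
    mput(length(p), "d", fd);            // small header: vector length
    mput(p(:)', "d", fd);                // the data, written as doubles
    mclose(fd);
endfunction

// Slave side: poll for the task file, compute the local matrix-vector product,
// then write the partial result back for the master to read.
function slave_loop(k, val, col, rowptr)
    while %t
        if isfile("task_" + string(k) + ".bin") then
            fd = mopen("task_" + string(k) + ".bin", "rb");
            n  = mget(1, "d", fd);
            p  = mget(n, "d", fd)';
            mclose(fd);
            y  = csr_matvec(val, col, rowptr, p);     // local rows only
            fd = mopen("result_" + string(k) + ".bin", "wb");
            mput(y(:)', "d", fd);
            mclose(fd);
            mdelete("task_" + string(k) + ".bin");
        end
        sleep(10);   // wait 10 ms before polling again
        // (a real implementation would also watch for a "stop" file to end the loop)
    end
endfunction
```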
A first benchmark: the Mach 4 model
A first benchmark is proposed to test our solver: we downloaded from the internet a fun CAD model of the Mach 4 car (see the Japanese anime "Mach Go Go Go"), produced a mesh of it and defined a heat transfer problem including all kinds of boundary conditions. The problem has no physical or engineering meaning: the objective here is simply to have a sufficiently large and non-trivial model to solve on a multicore machine, to compare the results with those obtained with a commercial software package and to measure the speedup factor.
In Table 1 some data pertaining to the mesh are reported. The same mesh has been solved with our solver and with ANSYS Workbench, for comparison purposes.

Table 1: Some data pertaining to the Mach 4 model.
n° of nodes | n° of tetrahedral elements | n° of unknowns | n° of nodal imposed temperatures
511758 | 317767 | 509381 | 2377

In Table 2 the time needed to complete the analysis (Analysis time), to compute the system matrix and vector terms (System fill-in time) and the time needed to solve the system with the JCG are reported, together with their speedups. The termination accuracy has always been set to 10^-6: with this set-up the JCG performs 1202 iterations to converge. It immediately appears that the global speedup is strongly influenced by the JCG solution phase, which does not scale as well as the fill-in phase. This is certainly due to the fact that during the JCG phase the parallel processes have to communicate much more than during the other phases: a guess solution vector actually has to be written at each iteration, and the result of the matrix-vector product has to be written back to the master process by the parallel runs. The adopted communication protocol, which is extremely simple and easy to implement, shows all its limits here. However, we would like to underline that the obtained speedup is more than satisfactory.
In Figure 4 the temperature field computed by ANSYS Workbench (top) and the same quantity obtained with our solver (bottom), working on the same mesh, are plotted.

Table 2: Mach 4 benchmark. The table collects the times needed to solve the model, to perform the system fill-in and to solve the system through the JCG. The speedups are also reported in the right part of the table.
n° of cores | Analysis time [s] | System fill-in time [s] | JCG time [s] | Analysis speedup | System fill-in speedup | JCG speedup
1 | 6960 | 478 | 5959 | 1.00 | 1.00 | 1.00
2 | 4063 | 230 | 3526 | 1.71 | 2.08 | 1.69
3 | 2921 | 153 | 2523 | 2.38 | 3.12 | 2.36
4 | 2411 | 153 | 2079 | 2.89 | 3.91 | 2.87
5 | 2120 | 91 | 1833 | 3.28 | 5.23 | 3.25
6 | 1961 | 79 | 1699 | 3.55 | 6.08 | 3.51
7 | 1922 | 68 | 1677 | 3.62 | 7.03 | 3.55
8 | 2093 | 59 | 1852 | 3.33 | 8.17 | 3.22

Fig. 2 - The speedup values collected in Table 2, plotted against the number of cores.
Fig. 3 - The Mach 4 mesh has been divided into 4 partitions (see colors) using the METIS library available in gmsh. This mesh partition is obviously suitable for a 4-core run.
Fig. 4 - Mach 4 model: the temperature field computed with ANSYS Workbench (top) and the same quantity computed with our solver (bottom). No appreciable differences are present.

A second benchmark: the motorbike engine model
The second benchmark involves the model of a motorbike engine (also in this case the CAD file was downloaded from the internet), and the same steps already performed for the Mach 4 model have been repeated. The model is larger than before (see Table 3) and can be seen in Figure 6, where the grid is plotted. However, it has to be mentioned that conceptually the two benchmarks do not differ; the main concern was, also in this case, to have a model with a non-trivial geometry and boundary conditions.
The final termination accuracy for the JCG has been set to 10^-6, reaching convergence after 1380 iterations.
Table 4 is analogous to Table 2: the times needed to complete the different phases of the job and the analysis time are reported, as obtained for runs performed with an increasing number of parallel processes.
Also in this case, the trend in the reduction of time with the increasing number of cores seems to follow the same law as before (see Figure 5). The run with 8 parallel processes does not perform well because the machine has only 8 cores and we start up 9 processes (1 master and 8 slaves): this certainly wastes performance.
In Figure 7 a comparison between the temperature field computed with ANSYS Workbench (top) and with our solver (bottom) is proposed. Also on this occasion no differences are present.

Table 3: Some data pertaining to the motorbike engine model.
n° of nodes | n° of tetrahedral elements | n° of unknowns | n° of nodal imposed temperatures
2172889 | 1320374 | 2136794 | 36095

Table 4: Motorbike engine benchmark. The table collects the times needed to solve the model (Analysis time), to perform the system fill-in (System fill-in) and to solve the system through the JCG, together with their speedups.
n° of cores | Analysis time [s] | System fill-in time [s] | JCG time [s] | Analysis speedup | System fill-in speedup | JCG speedup
1 | 33242 | 2241.0 | 28698 | 1.00 | 1.00 | 1.00
2 | 20087 | 1116.8 | 17928 | 1.65 | 2.01 | 1.60
3 | 14679 | 744.5 | 12863 | 2.26 | 3.01 | 2.23
4 | 11444 | 545.6 | 9973 | 2.90 | 4.11 | 2.88
5 | 9844 | 440.9 | 8549 | 3.38 | 5.08 | 3.36
6 | 8694 | 369.6 | 7524 | 3.82 | 6.06 | 3.81
7 | 7889 | 319.7 | 6813 | 4.21 | 7.01 | 4.21
8 | 8832 | 275.7 | 7769 | 3.76 | 8.13 | 3.69

Fig. 5 - A comparison between the speedups obtained with the two benchmarks. The ideal speedup (the main diagonal) is highlighted with a black dashed line. In both cases the speedup follows roughly the same linear trend, reaching a value between 3.5 and 4 when 6 cores are used. The performance deteriorates drastically when more than 6 cores are involved, probably because the machine on which the runs were performed has only 8 cores.
Fig. 6 - The motorbike engine mesh used for this second benchmark.
Fig. 7 - The temperature field computed by ANSYS Workbench (top) and by our solver (bottom). Also in this case the two solvers lead to the same results, as can be seen from the plots.

Conclusions
In this work it has been shown how Scilab can be used to write a parallel and portable application with a reasonable programming effort, without involving hard-to-use message passing protocols. The three-dimensional heat transfer equation has been solved through a finite element code which takes advantage of the parallel nature of the adopted algorithm: this can be seen as a sort of "elementary brick" from which more complicated problems can be developed. The code could be rewritten in a compiled language to improve the run-time performance; the message passing technique could also be reorganized to allow faster communication between the concurrent processes, possibly also involving different machines connected through a network.
Stefano Bridi is gratefully acknowledged for his precious help.
References
[1] http://www.scilab.org/ for more information on Scilab.
[2] Gmsh can be freely downloaded from: http://www.geuz.org/gmsh/
[3] http://glaros.dtc.umn.edu/gkhome/views/metis for more details on the METIS library.
[4] O. C. Zienkiewicz, R. L. Taylor (2000), The Finite Element Method, Volume 1: The Basis. Butterworth-Heinemann.
[5] Y. Saad (2003), Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM.
For more information on this document please contact the
author: Massimiliano Margonari - Enginsoft S.p.A.
[email protected]
The Need for “Simulation-Quality”
Material Data
Material testing for simulation is about understanding how to
best describe a material’s behaviour as input for the CAE code.
Such testing requires expertise and experience beyond testing
performed in a typical test laboratory: while the test
instruments may be the same, the knowledge of CAE and
experience with diverse materials is increasingly important.
FEA software such as ANSYS is increasingly being used for non-linear simulations such as those listed below. We discuss how DatapointLabs' uncommon material expertise helps you avoid problems when the data is being generated for:
• Rubber hyperelastic modeling
• Foam / hyperfoam and crushable foam modeling
• Plastics: elastic-plastic modeling, visco-elasticity and
stress-relaxation
• Metals: kinematic and isotropic hardening, cyclic plasticity
• Crash and drop testing: rate dependent stress-strain models
• Metal forming: forming limit diagram (FLD) and spring-back
material modeling
• Process Simulation including injection-molding, blow-molding and thermoforming CAE
More than one method to get the data
Obtaining material data for non-linear FEA is not easy because
the testing can be highly complicated. Hyperelastic material
modeling requires testing in different modes such as uniaxial,
biaxial or shear. For use in FEA, DatapointLabs performs these
tests with a calibrated load cell to measure the stress, and an
extensometer to measure the local strain in the gauge region of
the test specimen.
Some test labs measure strain using instrument displacement instead of extensometry, but this brings error from the test into the FEA. When tests are performed at high speeds for the calibration of crash material models, careful instrument design is needed to avoid noise and oscillation in the stress-strain data, as presented in our paper at the NAFEMS World Congress 2009 [1]. If noise exists, the quality of the simulation is degraded. The error here is not due to a wrong testing methodology, but to the wrong choice of instrumentation.
Understanding the region of interest for your FEA
Rubber materials suffer damage by chain breakage during the first deformation (the Mullins effect), which results in a considerably different stress-strain behavior between the first pull and the subsequent cyclic loadings [2]. DatapointLabs
develops data and model calibration depending on whether the
initial deformation is being simulated as compared to cyclic
loading.
Understanding the impact of the environmental conditions of your product
DatapointLabs maintains extensive facilities to test materials at
elevated or cryogenic temperature, in saline (for in-vivo
biomedical simulation), or other fluids-soaked environments.
Understanding how well the model accommodates the
real-life simulation
Visco-elastic and stress relaxation data acquisition requires
understanding of the complex visco-elastic theory: it can be
applied only for small strain simulation, but FEA of rubber and
plastics is often performed at large strains. DatapointLabs has
deep expertise in applying visco-elasticity to real-life
simulation. In the modeling of foams, DatapointLabs assists
clients with the selection of the material model that is most
suitable for the type of foam: crushable, elastic, visco-elastic or
hyperfoam. [3]. This service is included with the testing
ordered.
Experience with diverse materials
Products of today utilize an astonishing variety of materials
ranging from metals, rubber, plastic, foam to films, fiber,
composites, ceramics and glass. Being able to test each of
these widely differing materials with the same high level of
accuracy demands familiarity with such materials.
DatapointLabs has tested over 18,000 materials over the past
15 years for physical properties such as tensile, compressive,
shear, high strain rate, hyperelastic, visco-elastic, creep, stress
relaxation, fatigue, thermal expansion and conductivity,
viscosity, PVT.
Fig. 1 - LS-DYNA MAT24 Crash Material Model Calibration
Understanding material modeling and CAE
As we see in the above outlined cases, the material data
requirements of the various material models used in CAE are
often complex and unclear. It is not common for test
laboratories to be familiar with CAE. With over a decade-long
focus on CAE, DatapointLabs has the unique credentials
required to meet the exacting demands of new product
development. DatapointLabs works in direct partnership with
over 15 of the world’s most prominent CAE software vendors to
make TestPaks® which are packages that include the material
testing, material model selection, model calibration and
validation processes. The CAE user simply requests a TestPak®,
sends the material sample and then receives, 5 days later, the
material data plus a digital input file ready for the specified
CAE. DatapointLabs online catalog offers over 150 TestPaks®.
Conclusion
It is clear that considerable thought and effort must therefore
be paid to correct material modeling and that this part of CAE
cannot be taken lightly. Certainly, universities and research
institutes possess the scientific understanding to perform
material testing. However, their instruments and test
technicians are not dedicated to this kind of testing. Their
laboratories are usually not ISO 17025 quality certified. The few
cases above just serve to illustrate the nature of the problem
which is quite wide-spread ranging from rate dependency [1] to
process simulation [4]. The data must be clean and free from
instrument artifact. It must be correct and appropriate for the
simulation. Finally, the process of calibrating these material
models is often error prone because, for a variety of reasons, the
models cannot accommodate the observed material behavior.
This lack of fidelity then results in a limitation in the ability of
the model to describe the real life situation in FEA.
Ordering TestPaks® from DatapointLabs reduces these risks!
About the Author
Mr. Hubert Lobo is a recognized leader in the understanding of
non-linear material behavior, and how it impacts virtual product
design. With >20 years of experience in this area, he brings
valuable insights to the product development community in its
efforts to design with modern day materials like plastics,
rubber, foams and composites.
Mr. Lobo has a Masters degree in Engineering from Cornell
University. He has authored numerous articles and the Handbook of Plastics Analysis.
Why clients treat DatapointLabs as the
expert partner for product development!
Our clients have come to realize that material data used for
CAE applications cannot be ordered from a material test laboratory that is not familiar with simulation. The cost for removing this important source of CAE inaccuracy is trivial
compared to the risk of product failure. Time wasted by a highly qualified CAE analyst attempting to get a good simulation result with bad test data can also be much more expensive.
DatapointLabs makes it easy for CAE users to get good material model calibrations for CAE in a timely and cost-effective way.
• Cost savings: only the required tests are performed
• Highly pertinent: properties of the actual material being
simulated
• Save effort: the CAE user does not waste time selecting
and calibrating material models
• Fast Results: data in 5 business days; a 48 hour RUSH
service is available.
• High Quality: DatapointLabs has been ISO 17025
certified since 2000, ensuring that the tests are
performed on calibrated, traceable instruments by
technicians trained to do this job correctly.
• Best and cutting edge Technology Center: Online Order
Placement Service at www.datapointlabs.com
• DHL Sample Pickup Service from countries in Europe, 2
day express delivery to DatapointLabs!
• Digital Test Data download available at www.matereality.com via the Matereality Data Delivery Service. Each client gets a Personal Material Database to
store their material properties on this digital platform.
In 2002, the Society of Plastics
Engineers honored Mr. Lobo, recognizing his pioneering work in
quantification of material behavior for CAE. He is the founder
and President of two successful companies: DatapointLabs, an
expert materials testing company that generates representative
properties for CAE, and Matereality, providing material database
solutions for virtual product development. DatapointLabs and
Matereality are based in Ithaca, New York State, USA, hometown
of the famous Cornell University.
About DatapointLabs and EnginSoft:
DatapointLabs offers expertise for precise “Simulation-Quality”
Material Data to EnginSoft and its customers in Italy as part of
our Resellers Agreement with EnginSoft SpA.
Fig. 2 - Sophisticated instrumentation and expert technical staff are needed
Stefano Odorizzi, General Manager of EnginSoft:
Precise material data and correct material modeling are
important for our customers’ sophisticated simulation work,
design and product development.
We are delighted to collaborate with DatapointLabs and to offer
their expertise to our customers in Italy who can now benefit
from the company’s speedy material testing services and
knowledge.
DatapointLabs is a partner of ANSYS, Inc., Livermore Software
Technology Corp. and the TechNet Alliance.
For more information about the services in Italy, please contact:
Nicola Gramegna - EnginSoft
[email protected]
www.datapointlabs.com - [email protected]
References
[1] "A Robust Methodology to Calibrate Crash Material Models
for Polymers." Hubert Lobo and Brian Croop NAFEMS World
Congress Crete, Greece. 2009.
[2] "Practical Issues in the Development and Implementation of
Hyperelastic Models." Hubert Lobo and Twylene Bethard.
Abaqus User Conference. 2001.
[3] "Selecting Material Models for the Simulation of Foams."
Brian Croop and Hubert Lobo. 7th European LS-DYNA
Conference, Austria. 2009.
[4] "Closing the Gap: Improving Solution Accuracy with Better
Material Models." Hubert Lobo. MUG2000 2000.
To download our technical papers, please visit: www.datapointlabs.com (click on Research)
Hubert Lobo
DatapointLabs, USA
Model of a Multimass Hyperelastic
System and its Parametric
Identification
In the description of systems with non-rigid connections, models with lumped parameters are widely used. In these cases the number of masses usually does not exceed four, and the connections are represented by different rheological models. The most frequently used are the two-element models of Kelvin-Voigt and Maxwell and the three-element models of Bingham and Shvedov. However, they are not applicable to the description of the dynamics of systems that contain links made of hyperelastic materials, which experience relevant reversible deformations.
Thus, the development of a simple and at the same time highly
accurate model of a hyperelastic element is a very relevant task.
In the present research, the hyperelastic element is represented by two Kelvin-Voigt viscoelastic bodies connected in series, with different elasticity moduli and viscosity coefficients. A non-linear damper, which possesses memory, is included in one of the bodies and is parallel to the Hooke and Newton elements. The damper is based on the generalized Bouc-Wen model of dynamic hysteresis, which is described by an ordinary differential equation of the first order.
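For reference, in its commonly used form the first-order differential equation governing the hysteretic variable z of the Bouc-Wen model can be written as follows (the notation is the standard one found in the literature, not necessarily the one adopted by the author):

\[
\dot{z} = A\,\dot{x} \;-\; \beta\,\lvert \dot{x}\rvert\,\lvert z\rvert^{\,n-1} z \;-\; \gamma\,\dot{x}\,\lvert z\rvert^{\,n}
\]

where x is the deformation of the element and A, β, γ and n are the parameters that shape the hysteresis loop.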
It is difficult to adjust the mathematical model of the proposed hyperelastic element. It contains 13 coefficients, most of which specify the shape of the hysteresis loop and therefore cannot be measured directly during experiments. However, the model allows us to take into consideration the elastic after-effect and the Bauschinger effect, which enhances the accuracy of the description of the dynamics of non-rigid systems.
The unknown coefficients of the model are determined by
parametric identification based on prior available information
about their admitted region (should not contradict the physical
meaning) and on the experimental data, obtained from the
studied nonrigid system.
The identification process consists of solving a multi-objective optimization problem, with inequality constraints on the values of the model coefficients.
The optimization objectives are:
• minimize the root-mean-square deviation of responses of the
real system and its model at harmonic input action;
• minimax Wald’s criterion of deviation of responses of real
systems and its model at step input excitation;
• minimize the weighted sum of values of two previous
objective functions at the mixed input action.
The efficiency of the proposed model of the hyperelastic element and of the method of identification of its parameters was estimated using a system consisting of a DC motor, a two-stage parallel-shaft reducer and a rotating mass. For their connection, hollow aluminium shafts with hyperelastic inserts in the form of rubber tubing were used. The model of this system was developed in the MATLAB/Simulink package and contained 27 unknown coefficients, including the reduction ratio. The antitorque moment
was accepted equal to zero; loss due to frictional forces was
neglected. The optimization process was carried out in the
program modeFRONTIER® with the use of the MOGA-II algorithm
(multiobjective genetic algorithm with elitism). The values of
velocities and angles of the DC-motor, and rotating mass were
considered as responses of the system. As a result of the calculations, a set of Pareto-optimal solutions was defined, from which a vector of the desired model parameters was selected. When the resulting model was driven by the mixed input action, the error with respect to the experimental data was 4-7%, while the errors of models with Hooke and Kelvin-Voigt elements were 16-19% and 11-14%
element and the method of identification of its parameters are
highly effective and can be used to describe the dynamics of
nonrigid systems with high accuracy.
Denis Kozlov ([email protected]), postgraduate student,
Department of Electrotechnics and Electrical equipment,
Tula State University, Russia
Ever more demanding applications require ever faster computing.
E4 Computer Engineering excels at integrating solutions for High Performance Computing (HPC). E4's range includes a broad selection of products, from graphics workstations to servers, storage and SANs, up to powerful custom-built "turn-key" cluster systems, each one designed to the client's requirements and tested according to strict procedures, in order to provide scalable solutions which remain reliable as time goes by and guarantee a profitable return on hardware investments.
E4 Computer Engineering SpA - Via Martiri della Libertà, Scandiano (Reggio Emilia), Italia - www.e4company.com - info@e4company.com
An Interview with Mr Nazario Bellato, Simulation Manager of Magneti Marelli Powertrain
Magneti Marelli Powertrain is Magneti Marelli's business unit dedicated to the development and manufacturing of engines and transmission components for cars, motorbikes and light vehicles. Today, Magneti Marelli Powertrain supports four application centers and eleven manufacturing sites on four continents. During our recent FEM Interview Tour in Italy, we had the pleasure of meeting Mr Nazario Bellato at Magneti Marelli Powertrain. Mr Bellato's expertise is of the highest standard: he holds the position of Simulation Manager and is a veteran user of ANSYS (since 1993). A key person in the Calculation Department at Magneti Marelli Powertrain, Mr Bellato's ambition has always been to be innovative in every respect, in particular within the company's European projects (11 European and 5 American patents). Most of his technical expectations from the simulation community are summarized in the following interview. Mr Bellato is a source of knowledge and inspiration in the ANSYS Italian Advisory Group! Mr Bellato has a university degree from the Università Politecnica delle Marche; he started his career working in applied research for Indesit. After an extremely successful cooperation with Giorgio Fuà, a famous Italian economist, Mr Bellato joined Fiat in 1994. This was also the time when he started his work with finite element applications.
A graduate of the Università Politecnica delle Marche, he began his professional career in university-based applied research at European and international level (Indesit Group). His meeting in 1993 with the economist Giorgio Fuà and his school of specialization (ISTAO, Istituto Adriano Olivetti per la Gestione dell'Economia e delle Aziende) brought him into contact with the Fiat Group and, in particular, with Centro Ricerche Fiat, which he joined in 1994. He is currently head of the Calculation and Simulation Department at Magneti Marelli Powertrain. He holds 14 European patents and 5 extensions in the United States. Several of the solutions he has developed, and their variants, are currently in production.
Air Intake Manifold Development
1. What place does innovation have (and what place should it have) in the industrial and entrepreneurial world?
Globalization and the new communication technologies have started and consolidated a rapid transformation of markets and of the related businesses. Product life cycles that sometimes used to last decades can now last only months. As a consequence, innovation has become a dominant factor for a company that wants to survive, or expand under ever-changing international rules, and above all for the companies of the "old continent": to guarantee consistent levels of quality of life for their employees and reasonable profits for their investors. Innovation is a child of the company culture. Innovative ideas by themselves are not enough to generate competitive advantage: they are useful only if they generate innovative solutions. Even the most brilliant ideas, on their own, are "empty boxes": they must be able to generate industrial solutions capable of producing profit.
Simulation is the key to state-of-the-art engineering and
manufacturing…
1. What is or should be the role of innovation in the industrial and entrepreneurial world?
In recent years, globalization and new communication technologies have created an extremely fast-moving market; business transactions and our business culture have evolved in a similar way. In the past, the life cycles of products spanned
years, now they are realized in months and in some cases
even in weeks. Obviously, the innovation of products has
become the main growth factor and is crucial for a company
that aims at growing its product range into increasingly
competitive markets. Today, we can observe continuous
changes in the global market, it’s a very fast and dynamic
world out there. Among each company’s goals should be to
maintain good working conditions for its workers and to
create reasonable profits for its shareholders. Innovation is
an important driver for the culture, health and future of a
company. New ideas will only lead to successes and
competitive advantage when they are transformed into
competitive industrial solutions which grow the company
turnover in the end.
2. Which are the strategies to be innovative and what are
the actions needed to realize innovation?
Innovation is usually achieved at the end of a complex development process which involves and counts on various internal and external competencies. Each competence
contributes to a unique final result. Strong competitive
advantage is based on many reliable technical results.
When we want to develop and create an optimized new
product, we know that it is essential to examine the same
from different points of view: technical performance,
organization, product processes, market strategies…
In this interview, I would like to focus on the technical
aspects. As a first step, a specific strategy has to be defined
in order to concentrate and guide the company’s energy. We
have to know the reasons why we want to innovate a
product and it has to be clear what results we want to
obtain. At this point, the managers look at the company’s
resources, the final results and then determine the right
tools that should lead to those results. The Human Resources manager has an important role, as he/she coordinates the assignment of the technical experts and support teams and, in this way, determines the efficiency, workload and success of a project. The General Manager and
owner carry key responsibilities because they communicate
the mission of a company and important projects, especially
in product development. The right words and actions
motivate and inspire excellence in the team all the way
through to the final results. I can say from my experiences
through the years that to know, expand and believe in the
company’s competencies is key to success and will secure
the company’s future.
The block diagram represents the phases for the development of the progressive bore: before "cutting" the aluminium, we run some FEA analyses in order to find a "possible" solution. This process usually takes 2 or 3 attempts. The final adjustments are made on real parts; this phase, too, usually needs 2 or 3 loops to reach the correct bore shape.
2. What are the strategies for being innovative, and which considerations drive innovation?
Innovation is the result of a complex development process to which different competencies, internal or external to the company, contribute, all aimed at a single objective: obtaining new competitive advantages.
There are many ways to innovate: product, organization, production process, market strategies, etc.; here we will talk about the product.
Innovating on the product means having chosen a priori a specific strategy on which to focus one's energies, which is equivalent to asking why we want to innovate and towards which results we want to converge. Being aware of the scarcity of resources, in the sense that there are never enough resources to move in every direction, it is essential to identify with certainty the objectives to be reached. At the centre, in any case, are the people: particular professional figures and well-coordinated teams are needed: efficient, effective and, above all, capable of managing stress, risk and especially failures. Moreover, to be successful, the clear sponsorship of the management or of the ownership is obviously necessary. If the mix is right, the results sooner or later arrive, and they are often the anchor of survival or success of the company.
52
- Newsletter EnginSoft Year 7 n°3
3. What role do CAE and Virtual Prototyping tools play in this respect?
It is well known that virtual analysis (CAE) is a fundamental tool for time-to-market (TTM) reduction. The predictive FEM approach helps to keep development and industrialization costs in line with specific business and product goals. Drastically reducing the physical prototyping phase, through the right mix with virtual prototyping, makes it possible to define much more accurate DOE plans and simplifies design for six sigma, with considerable improvements in product quality as a result. Being able to shorten the prototyping phases and to release investments in tooling and production lines only at defined milestones is, in some sectors, the key to a profitable final product.
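As a rough illustration of what such a DOE plan can look like in practice, the short Python sketch below enumerates a full-factorial plan; the factor names and levels are purely hypothetical and are not taken from Magneti Marelli's actual process, where such plans are typically generated and managed by dedicated tools.

from itertools import product

# Hypothetical design factors and levels for a virtual-prototyping DOE plan;
# the names and values are invented for illustration only.
factors = {
    "bore_diameter_mm":  [78.0, 80.0, 82.0],
    "wall_thickness_mm": [4.0, 5.0],
    "coolant_flow_lpm":  [20.0, 30.0],
}

# Full-factorial plan: every combination of factor levels (3 x 2 x 2 = 12 runs).
names = list(factors)
plan = [dict(zip(names, levels)) for levels in product(*factors.values())]

for run_id, run in enumerate(plan, start=1):
    print(run_id, run)   # each row is one virtual prototype to be simulated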
4. How did the user’s demand change in the last year?
From the 90th until today, the skills of numerical analysts
have consistently evolved; in fact, in the past, they used to
be advanced computer experts because to create a finite
element model was considered “art” If this “wizard” was also
a Unix expert, he/she would improve the company’s
technical level. Nowadays however, a lot of things have
changed. For example, it is possible to work within
integrated environments where the core activity is not to
have a mesh model but to have a more accurate realization
of a simulation of the real physical phenomena and hence to
have the correct prediction of all mechanical, CFD and
electromagnetic variables to optimize the final product.
5. What benefits have you seen in your professional experience, and how has your approach to design and production changed?
The wide and correct use of new technologies gives us the opportunity to have a global vision of the whole project, and being able to steer all development efforts is an important new capability in our technology box. Coupled analyses now make it possible to take fast decisions on a project and, at the same time, new technologies help us to be more accurate with our decisions in the different processes.
6. How has EnginSoft increased the value, the quality and the capability of your company?
EnginSoft is not just a software provider; we regard EnginSoft as a pro-active technology branch of our Magneti Marelli Powertrain unit. The EnginSoft team has an important role as 'knowledge broker'. The technical standards and levels of EnginSoft are transferred to our company. In fact, EnginSoft has a well-established reputation which is based on their flexibility to adapt their offer, services and expertise to the dynamics and requirements of the market. For example, EnginSoft's concept of project chain management involving different techniques is second to none. When we discussed new frontier applications, their technical experience in running simulations which deliver usable results was of utmost importance. We greatly value EnginSoft's approach of not simply offering a single solution for a single technical problem, but of including
experiences and knowledge to establish a defined company
project workflow.
7. What are the perspectives of computational codes in
view of the future challenges of the market?
These codes will dominate development cycle processes. However, it is important to keep in mind that a great deal of information will have to be managed, and this will require considerable human knowledge; the competitive advantage will come above all from the cultural and technical preparation of the new generations, but that is another story. Finally, I would like to mention that it is important to be sure about the results of any simulation. This is a complex problem which concerns model quality, input parameters, the modelling of material properties and more. It is crucial to manage the problem in a consistent way and to keep the economics in mind as well: for example, upgrading hardware to be able to examine more complex models and to have consistent information and useful methods. In a situation like this, it is only natural to talk to a partner (like EnginSoft) who is able to provide systematic support and competencies in the use of different computational software for the various application areas in engineering.
8. Which projects and objectives do you intend to pursue with the wide use of these tools?
I hope to succeed in realizing my strategy in the company, which will help us answer any new business opportunity quickly by providing robust, flexible and modular solutions capable of making even low production volumes profitable.
9. What are your wishes for the scientific technology world that is searching for the right mix of competition and creativity?
This deserves a simple answer: Competition and Creativity
are great ingredients for Innovation!
It is very interesting to look at the general methodology that Magneti Marelli applies in its primary development project phase. They usually work for FPT, PSA, RSA, GM, Daimler, BMW, VW and Suzuki; while it is not possible to show their specific images, it is remarkable to see a synthetic vision of their project workflow, fully and tightly integrated within the ANSYS Workbench philosophy.
Figure: one-way Fluid Structure Interaction using CFX and ANSYS Mechanical. This methodology enables fluid forces and temperatures from a steady-state CFD analysis to be imported into the Mechanical application; the one-way transfer of temperature information can be used to determine the temperature distribution on a structure in a steady-state analysis.
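To make the idea of a one-way transfer concrete, the sketch below maps nodal temperatures from a CFD surface mesh onto the nearest structural nodes. This is only a minimal nearest-neighbour illustration of the data-transfer concept with invented coordinates; it is not the mapping algorithm actually used by CFX and ANSYS Mechanical.

import numpy as np

# Hypothetical CFD surface nodes with computed temperatures (coordinates in m, T in K).
cfd_nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cfd_temps = np.array([350.0, 420.0, 380.0])

# Hypothetical structural (FEM) nodes that need a thermal load.
fem_nodes = np.array([[0.1, 0.1, 0.0], [0.9, 0.05, 0.0]])

# Nearest-neighbour mapping: each FEM node takes the temperature of the closest
# CFD node (a real one-way FSI mapping interpolates on the mesh instead).
dist = np.linalg.norm(fem_nodes[:, None, :] - cfd_nodes[None, :, :], axis=2)
fem_temps = cfd_temps[np.argmin(dist, axis=1)]

print(fem_temps)   # temperatures applied as a thermal load in the structural model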
For more information:
Roberto Gonella - EnginSoft
[email protected]
Illumination Analysis and Design
Optimization of an Automotive Speed
Meter
DENSO CORPORATION Japan uses modeFRONTIER® as standard tool for optimization and much more
On the display panel in front of the vehicle driver, the instrument cluster conveys a wide variety of information, such as driving speed, motor rotation number and fuel level. These values are shown clearly on the displays of the panel so that the driver can recognize each driving condition of the vehicle by just one look. For an instrument cluster, a high level of visibility and a stress-free display for long drives that match the vehicle's design are required. In particular, high-luminance and consistent illumination are necessary for the meter display to provide good visibility for highest safety. In fact, the illumination quality is the most important design requirement for the automotive speed meter. To shorten the product development cycle and to deliver a high quality product to the market as fast as possible have become the biggest challenges for DENSO, as global competition in the automotive industry has increased over the past years.

DENSO is a world famous automotive parts supplier located in Kariya in the Aichi prefecture region of Japan. The company operates in 33 countries and regions with approximately 120,000 employees. In Europe, including Italy, 33 regional DENSO offices and factories are based. DENSO's main business is to develop and provide advanced automotive technologies, systems and components, such as, for example, powertrain control systems, electric systems, electronic systems, thermal systems and information & safety systems for the world's major automakers. The automotive speed meter described above is one of DENSO's main products. As one of the big players in the global automotive sector, DENSO's focus is on delivering the highest quality speed meters while shortening development cycles. Design and development processes continuously have to be adapted and made more efficient.

Fig. 1 - Vehicle interior (above) and meter illumination (below)

The challenge
For years, DENSO has done illumination analyses using 3D-CAD data to determine the illumination design of automotive speed meters.

Fig. 2 - 3D-CAD data of the meter assembly (above) and the pointer (below)
The purpose of the illumination analysis outlined in this
article is to predict the illumination brightness and
unevenness of the meter by calculating the luminance
distribution on the speed meter dial and pointer, and the
ray tracing from light sources, and moreover, to design the
optimal meter geometry by changing the 3D model in the
CAD system as necessary.

Fig. 3 - Luminance distribution result (left) and virtual display (right) obtained by illumination analysis software. Speed meter dial (above) and pointer (below).
Fig. 4 - modeFRONTIER® Workflow: a) Input setting; b) CAD geometry change and illumination analysis (automated batch process); c) Results output setting

However, this trial and error design process, in which the 3D model is modified every time according to the illumination analysis results, took up considerable
time until the required quality was achieved. Typically,
there were 4 to 8 design parameters for 1 pointer
geometry. Hence it was not easy, even for the experienced
engineers, to choose the right design parameters from a
number of possible combinations of these parameters so
that the 2 different objective functions, average
luminance and luminance ratio, could reach an optimum
level. In practice, about 10 iterations per pointer were necessary and the design process took 7 – 10 days for one product. This type of work was mostly routine and
always relied on the know-how and understanding of the
engineers. Therefore, in order to ensure high quality
product development for the future, it became extremely
important to improve the existing process. To realize time
reduction and optimization of illumination quality, DENSO
has established a design optimization system for
automotive speed meters using modeFRONTIER®.
The solution
The modeFRONTIER® multi-objective optimization tool and its experimental design methods were embedded to fully automate the repetition of the 3D design changes according to the illumination analysis results. The design
process flow to be automated was the following:
1. 3D-CAD data (NX) was translated into IGES and
imported into the illumination analysis software.
2. After the meshing was completed and
each boundary condition defined, the
software calculated the illumination
distribution and the result was exported
as the input file for the design change
inside the 3D-CAD system.
To establish the automatic system for this
flow, the workflow of the automatic
calculation process was created inside
modeFRONTIER®. This way, the batch
program to change the CAD geometry and
to execute the illumination analysis was
determined.
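A minimal sketch of such a batch loop is shown below for illustration only: the executable names, file paths and parameter file format are invented placeholders, whereas in the real system the equivalent steps are driven by modeFRONTIER®'s own workflow nodes.

import subprocess
from pathlib import Path

def evaluate_design(params: dict, workdir: Path) -> Path:
    """Run one design evaluation: update the CAD model, export IGES, run the
    illumination analysis in batch, and return the result file path.
    All tool names and command-line arguments are hypothetical placeholders."""
    workdir.mkdir(parents=True, exist_ok=True)

    # 1. Write the design parameters where the CAD batch script expects them.
    param_file = workdir / "pointer_params.txt"
    param_file.write_text("\n".join(f"{k}={v}" for k, v in params.items()))

    # 2. Update the 3D model and export IGES (hypothetical CAD batch command).
    subprocess.run(["update_cad_model", str(param_file),
                    "--export-iges", str(workdir / "pointer.igs")], check=True)

    # 3. Run the illumination analysis in batch mode (hypothetical solver call).
    subprocess.run(["illumination_solver", str(workdir / "pointer.igs"),
                    "--output", str(workdir / "luminance.csv")], check=True)

    return workdir / "luminance.csv"

# Example: evaluate one candidate pointer geometry.
result = evaluate_design({"A1": 12.0, "A2": 30.0}, Path("runs/design_001"))

In the actual DENSO system, an equivalent loop sits inside the modeFRONTIER® workflow, which also handles the experimental design and the collection of results.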
As modeFRONTIER® provides an easy-to-use Japanese
GUI, the initial settings for the automatic process
could be defined very easily even by engineers who
were unfamiliar with the software at the time. Then,
the actual optimization flow started. The first step was
the illumination analysis using modeFRONTIER®’s
experimental design method. With this method, the
highly accurate approximate function could be
obtained using limited calculation time. Then, the optimum was searched for in order to identify the Pareto front.
The challenge was the multi-objective optimization
showing the trade-off between the 2 objective
functions. So, choosing the most efficient algorithm
from the many different multi-objective optimization
algorithms that modeFRONTIER® provides, to explore
the optimized result effectively, was crucial to establish
the new system in DENSO.
As mentioned before, the number of parameters is
relatively high with 4 to 8 design parameters and 2
different objective functions of average luminance and
luminance ratio. Here FMOGA (Fast Multi-objective Genetic
Algorithm which executes the multi-objective
optimization by updating the response surface
automatically) was selected as it has a wider search range
and the Pareto front can be reached quickly. FMOGA has
the ability to reduce the actual calculation work by using
the response surface and to drastically downsize the total
calculation time. Before using FMOGA, it was evaluated by
changing the approximate rate by 70%, 80% and 90%, in
order to know which rate is the most valid for the response
surface, which can explore the better result, and if the
required calculation time is reasonable. The goal is within
24 hours. For the final result, an 80% approximate rate
was chosen finally as it delivers better performances in a
suitable time frame.
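The sketch below illustrates the general idea behind a response-surface-assisted multi-objective search followed by Pareto filtering. It is not the FMOGA algorithm itself; the quadratic surrogate, the two analytic test objectives and the parameter ranges are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical objectives over two design variables (both to be maximized),
# standing in for average luminance and luminance ratio. Purely illustrative.
def objectives(x):
    f1 = -(x[:, 0] - 0.6) ** 2 - 0.5 * (x[:, 1] - 0.3) ** 2
    f2 = -(x[:, 0] - 0.2) ** 2 - 0.5 * (x[:, 1] - 0.8) ** 2
    return np.column_stack([f1, f2])

def features(x):
    # Quadratic response-surface basis: [1, x1, x2, x1^2, x2^2, x1*x2].
    return np.column_stack([np.ones(len(x)), x, x ** 2, (x[:, 0] * x[:, 1])[:, None]])

# 1. Small DOE: evaluate the "expensive" objectives on a handful of designs.
x_doe = rng.uniform(0.0, 1.0, size=(20, 2))
y_doe = objectives(x_doe)

# 2. Fit one least-squares response surface per objective.
coeffs, *_ = np.linalg.lstsq(features(x_doe), y_doe, rcond=None)

# 3. Explore the surrogate cheaply with many random candidate designs.
x_cand = rng.uniform(0.0, 1.0, size=(2000, 2))
y_pred = features(x_cand) @ coeffs

# 4. Keep the non-dominated candidates: an approximate Pareto front (maximization).
def pareto_mask(y):
    keep = np.ones(len(y), dtype=bool)
    for i, yi in enumerate(y):
        dominates_i = np.all(y >= yi, axis=1) & np.any(y > yi, axis=1)
        keep[i] = not dominates_i.any()
    return keep

front = x_cand[pareto_mask(y_pred)]
print(len(front), "non-dominated candidate designs found on the surrogate")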
Results
A1 to A6 in Fig. 6 are the 3D-CAD design parameters for the pointer geometry and represent the pointer angle, the angle of the side of the reflecting surface, the angle above the reflecting surface, the angle below the reflecting surface and 2 different heights. For each design parameter, the minimum value, the middle value and the maximum value were defined. The calculation points for the luminance were determined as shown in Fig. 7. The optimization was executed in order to find the combination at which both the average luminance and the luminance ratio become as large as possible.

Fig. 5 - FMOGA evaluation by changing the approximate rate of the response surface
Fig. 6 - The pointer geometry (above) and the design parameters (below)
Fig. 7 - Objective functions (average luminance and luminance ratio)

Fig. 8 shows the results of the optimization and the Pareto front. The red point represents the result before the optimization system was introduced. The pink point is the optimum result gained from the Pareto front. Though the average luminance is 158, which is the same as before, the luminance ratio has improved from 0.64 to 0.72.

Not only the luminance results, but also the man-hours could be improved significantly. The process, which used to require the designer's constant attention over 8 days, now, after the introduction of the optimization software modeFRONTIER®, requires only 8 hours of manpower for operation and a total of 2 days including the automatic calculations. Since 8 days of constant attention correspond to roughly 64 working hours, cutting the manual effort to about 8 hours amounts to a workload reduction of roughly 90% with the help of optimization methods.

Fig. 8 - The Pareto front of the average luminance and the luminance ratio

Conclusions
The automatic optimization process managed to:
• Automate the routine between CAD data changes, the illumination analysis and optimization.
• Streamline the whole design process through workload reduction and process automation.
• Improve product quality by switching from a trial and error approach to theoretical optimization.
The general-purpose geometry optimization system can be easily used by the designers and will be introduced not only for the meter design, but also for other optical products of DENSO in the months ahead. Today, modeFRONTIER® is DENSO's standard tool for optimization and it is expected to support each business area of the company.

This article is based on the original case study by Mr. Chiaki Suzumura, DENSO CORPORATION, Japan. The article has been written in collaboration with CD-adapco JAPAN Co., Ltd.
Akiko Kondoh, Consultant for EnginSoft in Japan
Elysium’s CADdoctor Enriches Product
Data Quality in PLM
3D interoperability is based on 3D data translation. When
3D CAD data is converted into another format, people
expect that not only 3D geometry but also other
information such as attributes and annotation is
converted perfectly. However, it is a well known fact that
some kinds of geometry or information often fail to be
converted, or cause errors. To transfer full information
translation, for example, between Japanese and English.
Since the grammar and vocabulary are different and some
terms don’t have exact equivalents in the other language,
it is almost impossible to translate completely. It is not
surprising that translation errors of 3D data often occur as
in the case of language.
Even if you created 3D models that look the same
on several different CAD systems and converted
them into IGES files, they would make differences.
As the English language has various dialects, IGES
files converted from different CADs are described
in different ways, just like words spoken in
different dialects. Dialects also have an effect on
translation accuracy.
Fig. 1 - Objects to be translated
from one CAD to another or to other formats for FEM, CAM,
RP or DMU, you have to pay more attention to the Product
Data Quality (‘PDQ’). Elysium’s ‘CADdoctor’ provides the
right solution to leverage 3D CAD data.
3D Data Conversion Methods
On a superficial level, conversion methods are divided into
‘Direct’ and ‘Mediate’ translation. ‘Direct’ translation
means that user can exchange, read and write an original
CAD data among two or more CAD systems. On the other
hand, ‘Mediate’ translation is a conversion using an
intermediate file format like IGES, STEP or special formats
provided by vendors (*). Indeed, those two methods are
internally equivalent. Here is how an intermediate file
acts and why we need to care about PDQ (**).
Errors Caused by Format
When you convert a 3D CAD model into another format via
an intermediate file, defective geometries are often found.
This is because the representation of a 3D geometry is
different from CAD to CAD. It is like “language”
Fig. 2 - Wrong 3D data remain wrong after translation
The other reason for errors
Nowadays, the auto industry is trying to standardize the
notation in IGES to prevent PDQ errors, and each CAD
vendor has taken countermeasures against such problems
as well. STEP is designed to eliminate ambiguity over 3D
data representation. Yet unfortunately even STEP cannot
solve all the problems accompanied with 3D data
conversion.
If original 3D data contains unneeded information or lacks
sufficient information, wrong information will be kept in
errors throughout the data conversion. As a matter of fact,
it is evident that defective 3D geometry, that is lack of
PDQ, causes problems later on.
Errors caused by Bad PDQ
Converting poor-quality 3D data often results in failure.
While some apparent errors such as a gaping hole on a
surface, face distortion or an untrimmed face are easy to
find, many invisible errors are prone to stay undetected.
Therefore, an innocuous-looking 3D model can cause
problems; for example, you cannot execute operations
anymore because the model is not a solid model, or errors
occur during Boolean operations or offsets. In extreme cases, CAD software freezes in the middle of modelling after repeated reworks because of accumulated PDQ problems. 3D data with poor PDQ leads to trouble not only in CAD but also in downstream FEM analysis, digital mock-up (DMU), rapid prototyping (RP) or CAM.

Fig. 3 - A missing face in a conversion result
Fig. 4 - Example of a hole in a polygon mesh due to bad PDQ

Factors that make the PDQ worse
Usually, CAD modellers and operators of data conversion do not pursue PDQ, yet improper 3D geometry is generated at every stage from design to distribution of 3D data. The most common cause of PDQ problems is that each CAD system has its own standards. Even if an original 3D model does not have PDQ errors in the original CAD, the converted model may have PDQ errors in the target CAD because of differences in accuracy criteria and tolerance. (***)
For example, if you create a model in a low-accuracy CAD system, a slight gap between faces whose width is smaller than the threshold value is considered as just an edge. Once the model is converted to a high-accuracy CAD system, the gap between the faces is considered as a 'Gap', since its width is larger than the threshold. The same applies to the 'Tiny Element': you can create a microscopic element in a high-accuracy CAD system. For a CAD system with a precision level of 0.001 mm, a 0.002 mm element is sufficient to close a gap, but when transferred into another, low-accuracy CAD system, that element is recognized as an unusable 'Tiny element'.

Fig. 6 - Different judgement on 'Gap'
Fig. 7 - Different judgement on 'Tiny Face'
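As a toy illustration of why the same geometry can be judged differently by two systems, the small check below classifies edge lengths against a modelling tolerance. The tolerance values and classification thresholds are invented for illustration and are far simpler than the checks a real PDQ tool such as CADdoctor performs.

# Toy PDQ-style check: the same edge list judged against two different
# modelling tolerances (all values are illustrative only).
edges_mm = [0.0005, 0.002, 0.8, 15.0]     # edge lengths found in a translated model

def classify(edge_length_mm: float, tolerance_mm: float) -> str:
    """Classify an edge against a CAD system's identical tolerance."""
    if edge_length_mm < tolerance_mm:
        return "collapsed (treated as a single point, gap closed)"
    if edge_length_mm < 10.0 * tolerance_mm:     # illustrative 'tiny element' band
        return "tiny element (likely flagged by PDQ checks)"
    return "regular edge"

for tol in (0.01, 0.001):                  # low-accuracy vs high-accuracy system
    print(f"tolerance = {tol} mm")
    for e in edges_mm:
        print(f"  edge {e} mm -> {classify(e, tol)}")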
Other than tolerance matters, the way analytical representations or non-manifold geometry are handled varies from CAD to CAD. Major PDQ guidelines recommend generally acceptable values, though this can be impractical: for example, while popular PDQ guidelines recommend converting an analytical representation to a generic NURBS surface, a cylinder must be represented as an analytical surface for certain motion simulation analysis tools.

Fig. 5 - Typical causes of PDQ deterioration

Product Data Quality Guideline
'PDQ' literally means the quality of 3D data for product development and manufacturing. To ensure quality control, you need criteria and a guideline for judging that quality. One of the most popular guidelines is the Product Data Quality Guideline for the Global Automotive Industry set by the Strategic Automotive product data Standards Industry Group ('SASIG'). Document Version 2.1, released in May 2005 and also published as ISO/PAS 26183:2006, is widely accepted as a standard. SASIG's PDQ guideline contains Geometric Quality Criteria, which consist of 64 check items, and Non-Geometric Quality Criteria, which define file naming conventions, data structure including layers or assemblies, and so on.

Fig. 8 - Typical examples of poor-quality modelling

Two typical examples of poor-quality modelling are the self-intersecting surface and the embedded face. A self-intersecting surface may look normal when you see it in a CAD window, even if the self-intersection is very large, because the trimmed face itself does not have any problems: the base surface has the self-intersection outside the trimmed area. It seems that such surfaces are generated when the model is exported to IGES format. An Embedded Face stands for multiple edges and faces that overlap in whole or in part. This kind of error is often caused by reworks in the design phase: geometry copied for a revision remains as an embedded face.

Fig. 9 - Self-intersecting Surface
Fig. 10 - Embedded Face

Recommended software for PDQ validation and repair
There is a remedy for everything. If you are willing to solve the problems regarding PDQ and to circulate high-quality 3D data, why not take effective measures to evaluate PDQ and to correct the problems? Elysium's CADdoctor is one of the most reliable software packages for PDQ validation and 3D data healing. Although there are various healing tools, few can automatically correct errors according to both popular PDQ guidelines and user-defined standards. Through long-term on-site trials with a number of manufacturers, CADdoctor has proven unparalleled performance in detecting and healing errors. Adopted and highly praised among CAD-using industries, it ensures strict compliance with PDQ guidelines and/or specific company standards. Healing requires an extremely high degree of geometry interoperability, and CADdoctor allows its users to repair damaged geometry with very simple and quick operations.

Fig. 11 - Invalid 3D geometry detected by Elysium's CADdoctor SX
Tips for Practical Use of CADdoctor
Product development is divided into two phases: creation of 3D data and utilization of 3D data.

Creation of 3D CAD data
Ideally, the original 3D CAD data created during the design phase should contain no errors and should comply with common PDQ guidelines. CADdoctor is a desirable option for the automatic detection and correction of PDQ problems.

Utilization of 3D CAD data
As tolerances and other standards differ from application to application, engineers in the FEM or experimental stages have to prepare 3D CAD data in accordance with the intended use. Regarding PDQ, if the data is not healed at all in the design phase, it will cost considerable man-hours to repair the errors with CAD operations. On the other hand, if the data is validated and repaired in the design phase, it will need just a few minutes to deal with PDQ errors. For FEM, CADdoctor also provides powerful optimization functions, including geometry simplification and polygon handling. (****) In addition, ASFALIS, Elysium's flagship solution, will help automate the entire process.

Fig. 12 - Some healing tools worsen PDQ rather than improve it
Fig. 13 - CADdoctor Healing Example [Original (NX) -> Dr (before) -> Dr (After) -> Result (V5)]
Fig. 14 - 3D data distribution with PDQ tool
Fig. 15 - CADdoctor helps to create regular mesh for FEM

For more information, please visit the ELYSIUM website:
http://www.elysium-global.com

Sakae Morita, ELYSIUM Co., Ltd., Japan

(*) For example, Elysium products including CADdoctor SX, CADdoctor EX and ASFALIS provide the Elysium Neutral File as the intermediate file to users.
(**) Please note that the conveyable information differs from one intermediate file format to another. In particular, attribute information and 3D annotations tend to be neglected. Fortunately, most file formats can convey 3D geometry information.
(***) PDQ items about tolerance were originally classified as Geometric Quality Criteria. However, such items are practically subject to the rules of an individual company or are set between the parties concerned, so the items are classified as Non-Geometric Quality Criteria. In this article, the term 'Tolerance' means the identical tolerance: if the distance between two vertices is within the identical tolerance, the CAD system recognizes that the vertices are at the same place.
(****) Regarding CAE, a complex CAD model can cause considerable damage to FEM analysis, such as the generation of irregular mesh, longer calculation times, and analysis errors. The CADdoctor SX FEM Suite is an excellent tool for downsizing CAD files to produce high-quality mesh data for FEM by automatically repairing PDQ defects and removing unnecessary features. A mesh generator is needed separately.
The Culture of Wood
Wood is very close to the people in Japan, where two-thirds of the land is covered by forest, and wood has been applied to people's life naturally since the early times. Geographically, Japan stretches a long way from the northeast to the southwest. Within short distances, the country has incredible altitudes and changing landscapes. This is also why we can find many different types of trees in the country. Today, not only metallic materials but also lightweight and high strength composite materials have been developed. Such high-performance materials are used in different industries, from aerospace, automotive, energy and environmental to bio engineering and entertainment. At a time when we are offered such sophisticated new materials from science and technology, we once again appreciate wood for the natural beauty, warmth and gentleness it brings to our homes and workplaces.

Unlike metallic and composite materials, wood is a living material with individual characteristics. Sometimes, engineers and designers face difficulties when using wood as a material. It erodes under the influence of humidity, and becomes distorted and cracks from drying. Its strength and durability depend on the position and direction of use. But wood also has many benefits as a material. Wood provides heat insulation, moisturizing and good humidity capabilities. Its natural soft look and scent gives us a feeling of well-being. With all this in mind, the Japanese have produced beautiful and functional wooden goods, making up for the disadvantages of the material and revealing its full beauty.

Famous examples can be found in Japan's traditional architecture. In the Nara prefecture (a region in the center of Japan, on the Kii Peninsula), people celebrate this year the 1300th anniversary of Heijo-kyo, the prefecture's capital. Heijo-kyo is home to many traditional wooden architectures, some of which are listed as World Heritage sites by Unesco. Horyu-ji and Todai-ji are well known all over the world as the world's oldest and biggest wooden architectures. Gojyu-no-To (the five-story Pagoda) in Horyu-ji has existed for 1300 years without leaning or breaking despite its wooden structure; its delicate figure has not changed since the day it was finished. Gojyu-no-To was designed and erected based on an old architectural method of piling up flexibly joined wooden parts using a special assembling technique. Of course, this is very different from the approaches architects use for rigid structures made from concrete. Owing to its flex structure and central pillar, the Pagoda can absorb vibrations and has withstood major earthquakes in the last thousand years. Today, the same technique is employed by skyscraper architects, not only in Japan but also in many other countries worldwide. People's skills and knowledge of how to use the individuality of wood, and this architectural method, have evolved further during the last centuries. Japan has developed many new materials and techniques, but people still love wooden architectures and most houses are built from wood.

Image 1: Gojyu-no-To, five-story Pagoda in Horyu-ji

Wood can be seen in many different MONODUKURI. One of the reasons is that the latest processing technologies are able to produce many wooden materials with even characteristics that are heat and humidity resistant. Wood is now applied to a wider range of products. Another reason is that traditional wooden products made by old manufacturing methods are still popular, and their popularity is growing! These products unify in harmony different techniques and craftsmanship with the natural material that people admire, something we cannot expect from products made by machines.

For example: Hashi (chopsticks), Wan (bowl), Ohitu, Suribachi (mortar) and Seiro (steamer) are some of the typical wooden tools that many people use at home. Recently, Muku (unprocessed wood) has become popular for flooring and furniture. Wood is also used after the burning process, as Mokutan (charcoal). It is used for char-grilled cooking like Yakitori (grilled chicken), it turns tap water into mineral water, and thanks to its humidity conditioning, it deodorizes our homes with its air cleaning capabilities. It also protects computers and electric devices from electromagnetic rays.

Image 2: Ohitu (bowl for keeping cooked rice)

We appreciate the living material wood for its natural beauty, and it has become a treasured companion in many areas of our lives. To people in innovative manufacturing industries using CAE, Wood MONODUKURI will bring a fresh understanding and enthusiasm for nature…

Akiko Kondoh
EnginSoft Joined the E2BA
On the 8th of July, 2010, the General Assembly of the
E2BA (Energy Efficient Buildings Association), held in
Brussels, admitted EnginSoft S.p.A. as a new member of
the association. The new membership will allow EnginSoft to extend its research network and to play a proactive and collaborative role across the building industry value chain; furthermore, EnginSoft will reinforce its active role in research activities related to the construction industry.
EnginSoft, BENIMPACT and the sustainable buildings
industry
Recently, the EnginSoft R&D team has focused strongly on the eco-sustainable building design sector, thanks to the development of its applied research project BENIMPACT (Building's ENvironmental IMPACT evaluator & optimizer). The project is co-funded by the autonomous Province of Trento (Northern Italy) by means of the ERDF (European Regional Development Fund).
BENIMPACT mainly aims at the development of methodologies (and of a related prototypical software platform) to support architects and engineers in the design of eco-sustainable buildings (both new buildings and modifications of existing ones). Based on the integration of various analysis tools within the Process Integration and Design Optimization platform modeFRONTIER®, the BENIMPACT suite will make it possible to identify the optimal trade-off between the costs and the environmental performance of buildings.
Please find more information on the BENIMPACT research project at:
http://www.enginsoft.com/research/prgbenimpact.html
The E2B Association
The E2BA (Energy Efficient Buildings Association) was created in 2008 by the founding members of the E2B EI (Energy Efficient Buildings European Initiative) as a non-profit, international, industrial association. The E2BA was created in order to prepare and manage a PPP (Public-Private Partnership) with the European Commission, to seek and demonstrate industry engagement, and to represent and coordinate members' research interests within the PPP.
The E2BA will focus, strengthen and give coherence to an
overall effort in Europe, with the objective of accelerating
innovation in cutting edge European low carbon
technologies. The association will also pool its members’
efforts in order to support the mission of the E2B EI.
The E2B EI is a Europe wide, industry driven, research and
demonstration programme for energy efficient buildings
and districts, with the ambitious vision that all the
European buildings will be designed, built or renovated to
high energy efficiency standards by 2050. Its overall
vision is to deliver, implement and optimize building and
district concepts that have the technical, economic and
societal potential to drastically decrease energy
consumption and reduce CO2 emissions in both new and
existing buildings across the European Union (EU).
The E2B EI aspiration is to manage a € 2bn research and
demonstration programme from 2009 until 2019. To date
the EC has committed € 500m for the period 2010 to 2013
in the framework of the E2B PPP.
The E2B EI will increase the level of research into key
technologies and develop a competitive industry in the
fields of energy-efficient construction processes, products
and services. With the outcome of this research, the
European community will be equipped to address climate
change and improve its energy independence.
The E2B EI will work to achieve the following objectives:
• deliver high quality, cost effective research that
secures confidence from industry, public and private
investors, decision makers and other stakeholders;
• leverage further industrial, national and regional RTD
investment;
• build close cooperation with research being carried out
at national and regional levels;
• enable the market entry of energy efficiency
technologies, allowing commercial market forces to
drive the associated public benefits;
• place Europe at the forefront of energy efficient
buildings and district technologies worldwide;
• focus on achieving long-term sustainability and industrial competitiveness targets for cost, performance and durability, aimed at overcoming critical technology problem areas;
• stimulate innovation and the emergence of new value
chains including SMEs;
• facilitate the interaction between industry, universities
and research centers;
• encourage the participation of the new Member States
and candidate countries;
• perform broadly conceived socio-techno-economic research aimed at assessing and monitoring technological progress;
• target non-technical barriers to leverage markets and
carry out research modes to support the development
of new regulations;
• review existing standards to eliminate artificial barriers
to markets;
• provide reliable information to the general public on
the benefits of new technologies to the environment,
security of supply, energy costs and employment.
The impacts that will arise from pursuing the aforementioned objectives are important and numerous; the most significant are:
• impacts on energy consumption and on renewable
energy installations: lowering the total primary energy
requested by buildings and generating at least 20% of
the total primary energy from renewable resources;
• impacts on the environment: the use of the new innovative and cost-effective technologies will reduce the production of CO2 by nearly 65 million tons per year;
• impacts on the society: generating 90,000 to 150,000
new jobs and improving buildings indoor comfort
(thermal comfort, indoor air quality, acoustics and
visual comfort);
• impacts on the economy: cost savings in the range of 12% for lighting, 55% for heating and 20% for hot water, resulting in an estimated € 126,000 – € 150,000 per year.
Please get more information on the E2BA from:
http://www.e2b-ei.eu
For further information please contact:
Angelo Messina - EnginSoft
[email protected]
Feat Group: We Forge All You Need
The history of Feat begins in the early 1970s, in the heart of Brianza, in the Lake Como area: a land inhabited by people with solid mechanical skills, dedicated to their work and inclined to creativity. The man who guided Feat's development is Mr. Cogo who, after experience in Switzerland and with an American group, took over the reins of what was then a small forge shop with more enthusiasm than resources. Those were years of great opportunity, when Italy was regarded internationally as a competitive, high-quality supplier.

From the very beginning, Feat defined three strategic elements as the basis of its development in the hot forging of steel:
1) Customers must be sought all over the world and must be first-class companies that demand first-class suppliers.
2) It is not enough to deliver a product: a technical development service with specific competencies in the various fields of application must be provided as well.
3) It is a false commonplace that quality can be taken for granted and that price is the only distinguishing factor. For safety applications or components subject to high structural loads, a forged component is chosen for its reliability characteristics. All the elements that contribute to quality must therefore be looked after with the utmost care.

Suffice it to say that at Feat, customers have at their disposal a technical-commercial office with an international background, with which the best production and functional solutions can be developed in collaboration. They also find a metallurgist to correctly define the characteristics of the materials and of the necessary heat treatments, and they can rely on a fully equipped in-house laboratory.

Today, Feat is a strategic supplier for important international OEM brands. Our supplies reach all European countries, the United States and South Africa, and we are also looking with keen interest at new markets: Brazil, Korea and the Middle East. Production has specialized in valves for the chemical and energy industries, hooks for lifting machines, and components for construction and earth-moving machines. The trend we follow is to complete the forging with precision machining, delivering a finished, certified component ready for assembly.

Our production resources have never stopped evolving, and our processes are constantly questioned. Today we have forging lines capable of working components of up to 100 kg; we process all types of steel, with a specialization in super-alloys and stainless steel; we handle complex geometries and change production flexibly. The challenge of the next decade is to robotize the forging operations while preserving this flexibility and keeping tooling costs reasonable. In this way we will be able to guarantee the homogeneity of our quality and production parameters and raise our past craft standards to those of industrial engineering.

Visit the FEAT website at:
www.featgroup.com

The use of FORGE
Forge was introduced at Feat in 2003. We first approached the application by allowing two students from the University of Padua to carry out their thesis work with us. At the end of that experience we were convinced that this type of software could help us save raw material and adopt a more methodical approach to die design, with the consequent codification of the company know-how acquired. To this end we organized, within the die department, a specific function dedicated to the use of Forge and equipped with the most modern hardware. Simulation has become a standard step whenever we want to study a new product family or re-engineer an existing forging, and it has become common practice for the whole technical staff to meet in the training room to analyse and discuss the simulations as a team. Constant use and application to concrete cases are fundamental conditions for exploiting the tool's potential to the full.

For example, the figure shows a valve component intended for the pharmaceutical industry. These are extremely delicate applications, where any surface imperfection could give rise to disastrous non-conformities. In this case Forge allowed us to identify an insidious material fold in advance and to develop a solution by modifying the die.

Why EnginSoft and FORGE at FEAT
Today Feat has reached a good level of experience, which allows us to run useful simulations in full awareness of their reliability and limits. It is satisfying to see that the production managers are now the first to ask for a simulation before issuing a quotation or defining a tool. We believe Forge has helped us obtain benefits in the following areas: scrap reduction, prevention of forging problems, identification of defects such as underfilling, cracks or folds, analysis of fibre flow, robotization of handling, and die life. The most important benefit has been to stimulate teamwork, with a common scientific language for integrating the various competencies into innovative solutions. EnginSoft's support, through telephone assistance and dedicated on-site sessions, is fundamental for staying up to date on improvements to the tool, and it allows us to apply it to new and increasingly complex problems.
Enginsoft ha partecipato allo Users’
Meeting Europeo di FORGE il 7 e 8
giugno 2010 a Sophia Antipolis, Francia
Nei giorni 7 e 8 giugno 2010 presso il
centro congressi AGORA EINSTEIN a
Sophia Antipolis si è tenuto il
tradizionale Users’ Meeting Europeo degli
utilizzatori di Forge, organizzato da
Transvalor,
produttore
dei
due
programmi. In questa occasione sono
state presentate le ultime novità del
software e Transvalor si è confrontata
con i propri clienti, raccogliendo suggerimenti per migliorare
le funzionalità di Forge e ColdForm.
Nutrita è stata la partecipazione italiana (vedi foto
dell’“italian team”), con utilizzatori in campi di applicazione
anche molto diversi, interessati a confrontarsi con Transvalor
e con gli altri utilizzatori presenti per migliorare il modus
operandi e la qualità dei risultati.
Tra le novità di Forge 2009 sono state evidenziate il nuovo
modulo di ottimizzazione automatica, in grado di fornire
risultati molto più affidabili della progettazione
sperimentale, la significativa riduzione dei tempi di
calcolo, ottenuta grazie alla revisione delle funzioni di
contatto, le nuove funzioni di tracciatura delle ripieghe
(vedi www.enginsoft.it/software/forge/ per ulteriori
dettagli)
Per quanto riguarda le modifiche a breve termine, in corso
di implementazione per la prossima versione, sono stati
mostrati nuovi modelli più completi per l’impostazione di
processi particolari (laminazione), nuovi modelli di presse
meccaniche, miglior controllo di volume, nuovi risultati nei
file .vft, ma soprattutto un nuovo strumento per la
generazione in automatico dei report di calcolo in formato
Word, PowerPoint o pagine HTML.
Per quanto riguarda lo sviluppo a medio termine, è in fase
iniziale di sviluppo una interfaccia completamente nuova,
“Forge workbench”, che integrerà pre-, post-processore e
launcher: flessibilità, personalizzazione, usabilità ed
ergonomicità sono gli obiettvi di questo sviluppo, che verrà
testato prima della release ufficiale con un gruppo ristretto
di utenti.
Un altro tema di ricerca è l’integrazione in Forge di un
modulo per il calcolo del riscaldamento ad induzione, che
verrà validato attraverso casi industriali suggeriti dagli utenti
e rilasciato ufficialmente nel 2012.
Per ultimo, recentemente è stato concluso un progetto di
ricerca con il CEMEF basato sul metodo bi-mesh, in grado di
ridurre significativamente i tempi di calcolo, soprattutto
nella simulazione di processi stazionari e non stazionari,
come ad esempio la laminazione circolare.
Per quanto riguarda le presentazioni
degli utenti, in tutti i lavori è stato
sottolineato che Forge consente di
ottenere un notevole ritorno sugli
investimenti. Alcuni esempi: Forge è
stato utilizzato da un utente tedesco
per una simulazione di forgiatura
incrementale e da un secondo utente
per lo studio dei fattori che possono
influenzare il ciclo di vita dello stampo. Un utente spagnolo
ha invece impiegato il software per simulare la formazione di
un disco con la forgiatura orbitale. In un altro caso, Forge
ha calcolato l’evoluzione della dimensione della grana in
una lega a base di nickel per migliorarne la qualità. Altri
utenti hanno mostrato applicazioni del software per la
simulazione del processo di laminazione, la laminazione
trasversale planetaria KRM e l’analisi della deflessione
della pressa meccanica in un processo di forgiatura multistage.
A contorno delle sessioni tecniche, Transvalor ha organizzato
una eccellente cena di gala sul lungomare di Cannes, che è
stata molto apprezzata da tutti i presenti.
Ing. Marcello Gabrielli ([email protected])
Responsabile in Enginsoft della attività di simulazione con il
software Forge
Italian Forge Users' Meeting
Montichiari, 22 October 2010
As every year, EnginSoft offers all users who were unable to attend the Transvalor European Meeting an Italian Forge Users' Meeting, which will take place during the 2010 EnginSoft International Conference in Montichiari (BS) on 21 and 22 October. On the second day Transvalor will be present to summarize what was shown in France, together with some Italian and foreign customers who will show how the software is used in their companies.
For further information, please see the event programme at http://www.caeconference.com/
EnginSoft Event Calendar
ITALY
September 2010- Automotive WEBINARS on Modelling Metal
Cutting and Machining Simulation with AdvantEdge™
EnginSoft is pleased to announce the next Webinars on
Modelling Metal Cutting and Machining Simulation with
AdvantEdge.
To register, please visit: www.enginsoft.com
21-22 October 2010 – EnginSoft International Conference
2010. CAE Technologies for Industry. Fiera Montichiari,
Brescia. Be part of Europe’s major CAE event where today's
limitless applications of Simulation based Engineering and
Sciences will be discussed! “Believe in innovation: simulate
the world” www.caeconference.com
FRANCE
7 October- Journée Simulation Numérique «Organisation et
rentabilité de la fonction calcul». Paris
http://www.af-micado.com/
12-13 October – Congrès Nafems. «Simulation numérique:
moteur de performance». Paris
http://www.nafems.org/events/nafems/2010/francecongres/
18 November – French Flowmaster and modeFRONTIER® Users Group Meeting (Forum Utilisateurs). EnginSoft France invites you to take part in the 2010 edition of its Users Forum, which will be held on 18 November in Paris. The day will be firmly focused on customer testimonials about the modeFRONTIER® and Flowmaster solutions! Venue: Hotel Saint James et Albany, 202, rue de Rivoli, 75001 Paris. www.enginsoft-fr.com
EnginSoft France 2010 open-house days, at our offices in Paris and in other French cities, in collaboration with our partners. Next event: modeFRONTIER® presentation days.
For more information visit: www.enginsoft-fr.com, contact: [email protected]
GERMANY
Please stay tuned to www.enginsoft-de.com,
contact [email protected] for more information.
2-4 November - AIRTEC 2010, Messegelände Frankfurt
Visit EnginSoft GmbH in the exhibition of AIRTEC 2010,
booth E 20 in „Design & Engineering“.
We are delighted to present modeFRONTIER® and „Design and
Numerical Optimization of Winglets of a Piaggio Aircraft” by
Ubaldo Cella, Piaggio Aero Industries; Francesco Franchini,
EnginSoft SpA, in the parallel Conference „Supply on the
wings“.
Please note the presentation in your diary:
3rd November 2010, 15:00hrs, Session „Improved
Simulations/Experiments“ http://www.airtec.aero/
modeFRONTIER® Seminars 2010
EnginSoft GmbH, Frankfurt am Main
• 26 October
• 30 November
24 – 25 November 2010 - NAFEMS European Conference:
Simulation Process and Data Management
Holiday Inn Frankfurt Airport-North
http://www.nafems.org/events/nafems/2010/
EuropeSDPM2010/
Seminars Process Product Integration
EnginSoft GmbH, Frankfurt Office
How to innovate and improve your production processes!
Seminars hosted by EnginSoft Germany and EnginSoft Italy
Please stay tuned to: www.enginsoft-de.com
UK
Please stay tuned to www.enginsoft-uk.com,
contact Bipin Pastel at: [email protected] for more
information.
modeFRONTIER® Workshops at Warwick Digital Lab
• 18 October
• 10 November
• 7 December
Please register for free on www.enginsoft-uk.com
28-29 September -The InfoWorks user meeting. Crowne Plaza
Hotel, Reading, Berkshire. EnginSoft UK are official sponsors
as well as co-presenting with Wessex Water
10-11 November 2010 - WaPuG. Hilton Hotel, Blackpool
EnginSoft UK will be attending.
www.ciwem.org/groups/wapug
25 November 2010 - modeFRONTIER® Workshops with
InfoWorks CS at Warwick Digital Lab.
SWEDEN
Training Courses: November 3-4 - Introduction to
modeFRONTIER®. For more information, please contact Adam
Thorp, [email protected]. For training registration,
please visit http://nordic.enginsoft.com/training/index.html
October 26-27 - NAFEMS Nordic Regional Conference 2010, Göteborg. EnginSoft Nordic will be giving the presentation "Multi-objective Optimization of Dual-Antenna Handhelds for MIMO Communications" together with Ericsson and Efield.
Please visit http://www.nafems.org/events/nafems/2010/NORDIC2010/ for more information, abstract submission and registration.
SPAIN
28 - 29 September - Introductory Course on the use of
modeFRONTIER®. The 2-day course provides a practical
introduction to design optimization using modeFRONTIER®.
The course combines lectures but most of the time is
dedicated to hands-on sessions so that the attendees
complete the course with the basic skills in using many of
the modeFRONTIER® functions. More information can be
found on http://www.aperiotec.es/agenda.php
modeFRONTIER® course programme and other local events. Please contact our partner, APERIO Tecnología: [email protected] and stay tuned to: www.aperiotec.es
4 November - NAFEMS-Iberia Awareness Seminar. Organised by: NAFEMS-Iberia. Department of Aeronautics, Polytechnic University of Madrid (UPM), Madrid. Gino Duffett will represent Aperio Tecnología and present: Experimental and Simulation Evaluation of Material Properties Related to Mechanical Components
PORTUGAL
24 November 2010 - NAFEMS Awareness Seminar on Finite Elements and Numerical Optimization in Engineering. Department of Mechanical Engineering, University of Aveiro, Aveiro. Organised by: Research Group GRIDS (Department of Mechanical Engineering, University of Aveiro) and NAFEMS. Gino Duffett will represent Aperio Tecnología / ESTECO and present: Multi-Disciplinary Optimization and Automatic Design Process using CAE software and modeFRONTIER®.
USA
15-16 November - NAFEMS Virtual Conference: 2020 Vision of
Engineering Analysis and Simulation
Hosted online by NAFEMS North America
http://www.nafems.org/events/nafems/2010/NA2010/
8 December – Workshop on Optimization hosted by Stanford
University, Cascade Technologies and EnginSoft USA. A unique
program on the benefits and use of optimization in today’s
product design and development, conducted by Gianluca
Iaccarino, Assistant Professor, Stanford School of Engineering.
Courses on: Design Optimization with modeFRONTIER®. Ozen
Engineering, Sunnyvale – Silicon Valley, CA. Learn about
Optimization coupled with ANSYS. OZEN can easily help you automate the search for the optimal design. The primary audience for this course includes ANSYS Classic and Workbench users as well as new modeFRONTIER® users who want a complete overview of all software capabilities. www.ozeninc.com
EUROPE, VARIOUS LOCATIONS
modeFRONTIER® Academic Training. Please note: These
Courses are for Academic users only. The Courses provide
Academic Specialists with the fastest route to being fully
proficient and productive in the use of modeFRONTIER® for
their research activities. The courses combine
modeFRONTIER® Fundamentals and Advanced Optimization
Techniques.
For more information, please contact Rita Podzuna,
[email protected]
To meet with EnginSoft at any of the above events, please
contact us at: [email protected]
EnginSoft Contributes to the
LION5 Conference
This meeting, which continues the successful series of LION events, is aimed at exploring the intersections and uncharted territories between machine learning, artificial intelligence, mathematical programming and algorithms for hard optimization problems. The main purpose of the event is to bring together experts from these areas to discuss new ideas and methods, challenges and opportunities in various application areas, general trends and specific developments.
EnginSoft will contribute to the conference because of its
interest in design optimization and machine learning.
Silvia Poles, Optimization Consultant at EnginSoft, will
give a tutorial on “Multiobjective Optimization for
Innovation in Engineering Design” as a survey on
methodologies to approach the design optimization
process, a set of best practices intended for rapid delivery
of high-quality products, with a specific focus on the
numerical algorithms and post-processing used for
selecting optimal design configurations.
Moreover, EnginSoft, together with the University of
Trento, the University of Udine and Microsoft Research, is
organizing a special session on “Software and
Applications” as part of LION5.
All customers’ contributions on solving design
optimization problems using dedicated software (e.g.
modeFRONTIER®, ANSYS, …) are welcome.
The conference takes place at “Sapienza Università di
Roma, Dipartimento di Informatica e Sistemistica Antonio
Ruberti”, on January 17th-21st, 2011.
More information is available at:
http://www.intelligent-optimization.org/LION5/