J Intell Manuf
DOI 10.1007/s10845-017-1366-7
An approach for composing predictive models from disparate
knowledge sources in smart manufacturing environments
Duck Bong Kim1
Received: 2 March 2017 / Accepted: 6 September 2017
© Springer Science+Business Media, LLC 2017
Abstract This paper describes an approach that can compose predictive models from disparate knowledge sources
in smart manufacturing environments. The capability to
compose disparate models of individual manufacturing components with disparate knowledge sources is necessary in
the manufacturing industry, because this capability enables us
to understand, monitor, analyze, optimize, and control the
performance of the system made up of those components.
It is based on the assumption that the component models and knowledge sources used in any particular composition can be represented using the same collection of system ‘viewpoints’. With this assumption, creating this integrated collection is much easier than it would be otherwise. This composition
capability provides the foundation for the ability to predict
the performance of the system from the performances of
its components—called compositionality. Compositionality
is the key to solving decision-making and optimization problems related to that system-level prediction. For those problems, compositionality can be achieved using a three-tiered abstraction architecture. The feasibility of this approach is
demonstrated in an example in which a multi-criteria decision making method is used to determine the optimal process
parameters in an additive manufacturing process.
Keywords Smart manufacturing · Data analytics · Compositionality · Decision making · Additive manufacturing
Duck Bong Kim
Manufacturing and Engineering Technology,
College of Engineering, Tennessee Technological University,
Cookeville, TN, USA
Introduction

In order to survive severe market competition, manufacturing companies must be agile across their business processes, ranging from system design to maintenance. Thus, manufacturing companies are under pressure to become more innovative and to develop new strategies to increase (1) the productivity and efficiency of their manufacturing processes and (2) the quality and reliability of their products. In addition, waste should be minimized and resources effectively allocated while achieving the desired profitability. Recently, to address these issues, a number of
information-based technologies including smart manufacturing (SM), cloud services (CS), and internet of things (IoT)
have been proposed (Gallaher et al. 2016; Anderson 2016;
Davis et al. 2012; Shrouf et al. 2014).
In one way or another, all of these technologies enable
new types of modeling and analysis methods that have
the potential to provide the necessary foundation for these
new strategies (Lee et al. 2013; SMLC 2011). In particular, analytical modeling and analysis methodologies offer the possibility of addressing SM-related tasks and problems to satisfy different stakeholders' desires. Realizing this potential, however, requires researchers to overcome significant hurdles: (1) handling high uncertainty, conflicting objectives, heterogeneous forms of data and information, and multiple interests and perspectives, and (2) accounting for complex and dynamically evolving manufacturing systems. These hurdles have led to sustained research efforts on the use of modeling and analysis methodologies to enhance SM-related tasks and problems.
The multi-criteria decision making (MCDM) method
(Wang et al. 2009) is considered one of the possible solutions for reducing complexity and difficulty when requirements are conflicting. An operational evaluation and
decision-support approach is suitable for addressing multi-criteria problems. However, there are still several issues in using MCDM methods in SM environments. The main one is: how can disparate analytical problems be composed into a unified analytical problem? Composing analytical problems is not a simple task, for several reasons. First, it involves (1) the use of different units of measure and (2) complexity due to evolving manufacturing systems and dynamically changing environments. Second, the rationale (e.g., motivation, reason, background, constraints) of the design and manufacturing must be traced for improvements, but doing so burdens the composition of analytical problems from disparate knowledge sources with irrelevant technical details. Third, the model aggregated from multiple local predictive models, called here the global predictive model, must be verified and validated even though each local predictive model satisfies the required confidence level. The uncertainty of the global predictive model and its propagation must be quantified, since the model involves multiple criteria with subjective normalization and weighting factors.
In addition, there are interoperability issues. Most current approaches for solving MCDM problems are stand-alone, meaning users typically model and analyze manufacturing processes using customized methods for their specific application tools or modeling environments. This causes interoperability problems, information duplication, or even inefficient implementation. Moreover, analyzing process performance requires comprehensive background knowledge about SM and operations research (OR). To resolve these issues, a systematic and comprehensive methodology with contributions from multiple experts should be investigated, developed, and provided. However, it is still difficult for manufacturers and decision-makers, particularly small and medium enterprises (SMEs), to realize the full advantages of the SM paradigm and of modeling and analysis techniques, due to their complexity and a lack of resources. Among these hurdles, we focus on the composition of analytical models and MCDM methods.
We propose an approach to compose predictive models of manufacturing components into an aggregated, system-level predictive model. The novelty of this research lies in first introducing the concept of viewpoints for model composition from disparate analytical, experimental, and informational sources in SM environments. The aggregated model is then used to create the different objective functions and constraints that make up an MCDM problem. Individual
objective-function weights are specified using an interactive
environment for scientific computing, such as IPython notebook. For a case study, we show how this composition process
works and demonstrate its effectiveness in the case of an additive manufacturing (AM) process, which involves finding
optimal process parameters. “Related work” section reviews
the related work and “The proposed approach” section
introduces the three-tier approach. “Domains-of-discourse
tier”, “Problem-domain tier”, and “Analytical-technology
domain” sections include detailed descriptions of those tiers,
i.e., domain of discourse, problem domain, and analytical
technology domain, respectively. Then, the AM case study is described in “A case study” section. “Discussion and conclusion” section discusses the advantages and disadvantages of this approach and concludes the work.
Related work
The related work falls into three categories: life cycle assessment (LCA) frameworks for managing manufacturing-process knowledge, single-criterion optimization methods, and multi-criteria optimization methods. Each
paragraph explains the current state-of-the-art and the main
limitations of these categories with respect to composability.
LCA frameworks (ISO 2006) provide systematic and logical procedures that help assess environmental impacts of
products and manufacturing processes. These impacts can be used to analyze the sustainability-related performance of those products and processes (Jacquemin et al. 2012). To date, the applications of LCA frameworks have been primarily for products, because the frameworks have several limitations when applied to manufacturing processes. For instance, most LCA frameworks do not support all of the dynamic and diverse characteristics of those processes. In addition, they do not fully
support the analytical capabilities needed to model the mathematical details needed for process optimization. To address
these limitations, an approach that can generate and compose
mathematical models is needed.
Because of this, process optimization is typically still done manually by process planners based on their knowledge and experience. As a result, process optimization in general, and sustainability optimization in particular, is often time-consuming and highly inconsistent. In addition,
efforts to collect the historical data needed to improve
the process planner’s sustainability-related knowledge and
decisions have been limited. To address this limitation,
several different types of data-collection and knowledgemanagement systems have been proposed. Their advantages
and disadvantages are discussed in Yusof and Latif (2014).
One major disadvantage is the lack of companion analytical
capabilities that can transform this knowledge into optimal process decisions. Recently, researchers have begun to
address this disadvantage for both single and multi-criteria
optimization problems.
Examples of the single-criterion optimization problem include Kim et al. (2015), Campatelli et al. (2014), and Rajemi et al. (2010). Kim et al. (2015) proposed a model-based approach that provides a systematic procedure to improve the sustainability performance of SM processes. Campatelli et al. (2014) used an experimental approach combined
with the response surface method (RSM) to determine the
optimal process settings for minimizing the power consumption in a milling process. Rajemi et al. (2010) established a
new methodology for optimizing the energy footprint for a
machined part and derived an economic tool-life formula for
minimizing the total energy footprint. These single-criterion decision-making methods are straightforward and easy to develop, but a single objective function is limited in satisfying multiple stakeholders' desires. This means the traditional single-criterion approach is not suitable for handling the complexity and the multiple interests, criteria, and objectives in dynamically evolving SM processes and environments.
Multi-criteria decision-making (MCDM) methods have
been applied in both assessment and optimization problems. Vinodh et al. (2014) developed a decision support
system (DSS) that can assess the sustainability of manufacturing organizations by taking into consideration various
factors needed for maintaining sustainability. Arslan et al.
(2004) also developed a DSS for machine-tool selection
by using a weighted average of several criteria including
productivity, flexibility, space, adaptability, precision, cost,
reliability, safety, environment, maintenance, and service.
Zhao et al. (2012) proposed an LCA-supported, environmentally conscious, process-planning methodology with a
set of ranking/weighting schemes for impact aggregation.
Madic et al. (2016) proposed a weighted aggregated sum
product assessment (WASPAS) method for determination of
manufacturing process parameters (e.g., laser power, cutting speed, and gas pressure) in the case of laser cutting,
based on Taguchi’s L9 method. Sen et al. (2017) developed a hybrid framework consisting of three MCDM methods, namely decision-making trial and evaluation laboratory (DEMATEL), analytical network process (ANP), and restricted multiplier data envelopment analysis (RMDEA). Rudnik and Kacprzak (2017) presented the fuzzy technique for order preference by similarity to ideal solution (FTOPSIS) as a practical solution in the case of discrete flow control in a
manufacturing system. However, these methods do not provide cost-effective solutions for composing the various optimization problems. Moreover, they have interoperability issues: these methods are stand-alone, meaning that specific application tools or modeling environments are necessary or information can be unnecessarily duplicated.
The proposed approach
Conventional MCDM methods manage the complexities
associated with decision support by resolving conflicting interests and preferences. Methods differ in exactly how the “resolving” gets done. However, as we noted earlier, these
methods do not provide cost-effective solutions to two problems: (1) creating and (2) composing the various constraints
associated with the different criteria. In this paper, we provide
an approach that can provide cost-effective solutions to those
two problems. The proposed approach uses an integrated set
of tools to perform, validate, and reuse the composition of
different, analytic-analysis results of various manufacturing
processes. The integration is based on the 3-tiered architecture depicted in Fig. 1. The three tiers are denoted as “domains of discourse”, “problem domain”, and “analytical-technology domain”. Each tier addresses a different level of
abstraction, creates tools consistent with that abstraction, and
contributes those tools to the approach. These tiers are discussed in turn below.
The Domains-of-discourse tier includes representations
of all static and dynamic information related to the production resources and processes. Static information includes
any database schema, conceptual models, or ontologies.
Examples include the quality information framework (QIF)
(Zhao et al. 2012), business-to-manufacturing markup language (B2MML) (B2MML 2003), MTConnect schema
(Vijayaraghavan and Dornfeld 2010), and property ontologies (Denno and Kim 2015). Dynamic information describes
the current state of production processes and equipment. For
example, results of the inspection processes are in the form of
QIF Results; results of sensor data would be in the MTConnect
format. An important point to note about this tier is that actual
information structures are not optimized for the solution of
any particular optimization problem. Rather, the tier simply
accepts the information in whatever forms were generated
by processes that created them.
The Problem-domain tier represents all of the viewpoints relevant to the particular optimization problem being
solved. Each such viewpoint can be captured in a problemformulation metamodel (PFM), which describes the schema
or pattern of the mathematical modeling method underlying
the problem. For example, suppose the particular optimization problem is generating a production schedule for a job
shop. The PFM might describe that job shop as a configuration of its queues and a collection of its resources. The
information needed to create that configuration comes from
the domains-of-discourse tier. In this production-scheduling
example, the PFM would need information from several
different viewpoints including process plans, production
resources, product demand, and inventory status. Figure 1
depicts the viewpoints related to another problem: the additive manufacturing (AM), parameter-optimization problem.
The associated viewpoints for that problem are part requirements, tensile strength, processing time, surface roughness,
and equipment characteristics. The actual optimization problem is formulated as the specification of a set of constraints
and an objective function as described in “Problem-domain
tier” section.
Fig. 1 Conceptual view of three-tier analytical approach and its usage in an example problem
The Analytical-technology-domain tier includes a metamodel of the analytical tools available to solve different
kinds of optimization problems. Figure 1 depicts the use
of an optimization metamodel for a class of such tools. It
might include optimization programming language (OPL)
(Hentenryck 1999), a mathematical programming language
(AMPL) (Fourer et al. 1990), and general algebraic modeling systems (GAMS) (Bussieck and Meeraus 2004). That
metamodel is defined as a model conforming to the meta-object facility (MOF) (OMG 2014). Such models include specifications
of classes and their relationships. Each instance of the model
(1) consists of a collection of instances of the classes and relationships and (2) represents the formulation of each particular
analytical problem in the terms of the specific analytical tool
that will be used to solve it. This collection of instances
and relationships can be serialized and used as inputs to the
analytical tool. For example, a collection of instances representing the example problem could be serialized as OPL
code and executed by an optimization tool that is capable of
processing OPL.
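As a sketch of that serialization step, a collection of model instances can be rendered as solver-ready text. The `Job` class and the OPL-like template below are illustrative assumptions, not the paper's actual metamodel, which would be a richer MOF-conforming structure:

```python
# Sketch: serializing a (hypothetical) metamodel instance into OPL-like data
# declarations. A real serializer would also emit decision variables,
# objective, and constraints for the target modeling environment.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    duration: int   # processing time
    due: int        # due date

def to_opl(jobs):
    """Render a minimal single-machine scheduling instance as OPL-style text."""
    lines = ["int n = %d;" % len(jobs),
             "int duration[1..n] = [%s];" % ", ".join(str(j.duration) for j in jobs),
             "int due[1..n] = [%s];" % ", ".join(str(j.due) for j in jobs)]
    return "\n".join(lines)

print(to_opl([Job("A", 3, 10), Job("B", 5, 12)]))
```

The same instance collection could equally be serialized into AMPL or GAMS syntax by swapping the template.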
Domains-of-discourse tier
Figure 2 shows the first tier, domain of discourse, including five main steps: define decision goals, identify relevant
manufacturing viewpoints, collect data for those viewpoints,
develop predictive models, and check model fidelity. The following subsections explain the details of each step.
Define decision goals
Decision goals need to be understood clearly to avoid incorrect results and wrong decisions. The understanding must
include both the scope of, and the reason for, those decisions.
For example, one decision goal might involve (1) selecting
process parameters for an AM machine, which is the scope,
and (2) minimizing energy consumption and manufacturing
cost, which is the reason.
Identify manufacturing viewpoints
Achieving the defined decision goals requires information
from several different manufacturing viewpoints. That information can be represented digitally in different ways including domain-specific languages, semantic-web technologies,
and ontology techniques, among others. Four principles are
proposed to select and represent these viewpoints.
1. Systematic principle: viewpoints should reflect all of the
essential characteristics of, and cover the performance of,
the entire SM system under consideration.
2. Independency principle: viewpoints should not have
inclusion relationships at the same level of abstraction
and should reflect the performance of alternatives from
different aspects.
3. Measurability principle: performance-related viewpoints
should be quantitatively measurable.
4. Analytical principle: viewpoints related to any analytical analysis should be abstracted as predictive models. In addition, the predictive models should be normalized so that they can be compared or operated on directly when multi-criteria decision-making is involved.

Fig. 2 Example of procedures in the domain of discourse
Develop predictive models
In general, predictive models fall into three categories: empirical models, analytical models, and hybrid models. The
procedure for developing or refining predictive models is
different in each of these categories and can vary depending on the details of the particular model. The following
describes a typical procedure for empirical models. Developing empirical models involves four steps: (1) determine the
experimental design, (2) choose the type of model (choices
include regression, response surface model, neural networks,
and inductive learning), (3) run the experiment and collect the
data, and (4) fit the data to the chosen model. This four-step
process is illustrated in the first column of Fig. 2. The analytical and hybrid models are shown in the second and third columns, respectively. If there is high confidence in the analytical model, it is used directly to develop the predictive model. In contrast, if the confidence level of the analytical model is low, the model can be updated by incorporating a local empirical model into the analytical model (a hybrid model).
Suppose that Y is a vector (Ŷ1, Ŷ2, …, Ŷn) of performance-criteria responses and x1, x2, …, xn are vectors of decision variables. The functional relationships for these criteria are described as

Y1 = Ŷ1 + ε1 = f1(x1,1, x1,2, …, x1,i) + ε1
Y2 = Ŷ2 + ε2 = f2(x2,1, x2,2, …, x2,j) + ε2
⋮
Yn = Ŷn + εn = fn(xn,1, xn,2, …, xn,k) + εn,

where f1, f2, …, fn can be regression functions and ε1, ε2, …, εn are the error terms.
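Fitting one such local model fi can be sketched with ordinary least squares; the data below are synthetic stand-ins for a designed experiment, and the linear form is only one of the model choices listed above:

```python
# Sketch: fitting one local predictive model Y = f(x) + eps by least squares.
# The synthetic "experiment" has two decision variables and Gaussian error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 2))          # 30 runs, two decision variables
eps = rng.normal(0.0, 0.05, size=30)             # error term
Y = 2.0 + 1.5 * X[:, 0] - 0.7 * X[:, 1] + eps    # "true" response + noise

A = np.column_stack([np.ones(len(X)), X])        # design matrix [1, x1, x2]
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)     # [intercept, b1, b2]
Yhat = A @ coef                                  # fitted model predictions
```

For response-surface models, the design matrix would simply gain quadratic and interaction columns.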
Check model fidelity
After the functional forms of the predictive models have been developed, they should be examined by checking the model's fidelity; for regression models, fidelity can be measured by the correlation coefficient R. Model fidelity includes model accuracy, model reliability, and model robustness (Eddy et al. 2014). Model accuracy indicates “how close sample points that are included in the model are to the model itself”. Model reliability denotes “how close any points that are not included in the model are to the model itself”. Model robustness takes into account “the resolution between rank-adjacent alternatives identified by the model and the effect of all variability due to the accuracy and reliability”.
Problem-domain tier
Figure 3 represents the second tier, problem domain, containing five steps: (1) normalize the predictive models, (2)
determine the weighting factors of the criteria, (3) check
reliability, (4) develop a global predictive model, and (5)
check confidence level. The following subsections explain
the details of each step.
Normalize predictive models
Aggregating different performance criteria directly to obtain
the overall objective function or to develop an aggregated
predictive model is difficult because each criterion may have
a different unit of measure. If this happens, a normalization
step is necessary to make each response dimensionless, comparable, and additive.
As described in Wang et al. (2009), there are several different normalization methods. Popular methods include linear
min–max normalization, target-based normalization, balance
of opinions, and distance-to-reference country. Note that since
the method choice can significantly affect the decision, each
method should be carefully considered with respect to the
decision goals.
Normalization transforms response values by dividing
them by a selected reference value. For example, in linear
min–max normalization, that value is the difference between
the min and max values. The resulting normalized predictive
model can be represented formally as

Normalize(Ŷ1) = Ŷ̄1 = f̄1(x1,1, x1,2, …, x1,k)
Normalize(Ŷ2) = Ŷ̄2 = f̄2(x2,1, x2,2, …, x2,k)
⋮
Normalize(Ŷn) = Ŷ̄n = f̄n(xn,1, xn,2, …, xn,k)
Determine weighting factors
After normalization, the weighting factors are determined next, because the weights prioritize the relative importance of the responses (indicators/criteria). As with normalization, weighting factors can significantly affect the results; consequently, they should be selected carefully with respect to the decision goals. Frequently, weights are determined in consultation with manufacturing experts who use one of two approaches: an equal-weighting approach or a rank-order-weighting approach (Wang et al. 2009). If all criteria
are deemed of equal importance, then the equal-weights
approach is used. Otherwise, a rank-order-weighting method,
which considers the relative importance among responses as
its main criterion, is used. Three approaches are in common use today (Wang et al. 2009): (1) subjective pairwise weighting, such as the one used in AHP, (2) objective weighting methods, such as the Entropy method and the TOPSIS method, and (3) hybrid methods, such as multiplication synthesis and additive synthesis.
Fig. 3 Procedures in the problem domain

Check reliability

Since weighting factors can significantly affect the final
result, they are sometimes checked to see whether they are
reliable or not. For example, to check the reliability of the
weighting factors used in AHP, the consistency ratio (CR)
can be used (Saaty 2008). When the CR is too high (typically above 0.1), the weights are considered inconsistent. When this happens, more experts and
decision makers need to update the weighting factors and
decrease the CR. This process repeats until a satisfactory CR
is achieved. Once that happens, the normalized predictive
models and their weighting factors are known.
J Intell Manuf
Develop a global predictive model
After checking the reliability, the individual normalized predictive models and their weighting factors are composed to develop a global predictive model (Kim et al. 2015; Denno and Kim 2015), or the composition can be formulated as a multi-objective decision-making problem. The global model can consist of three types of models: empirical, analytical, and hybrid. There are also several methods (Wang et al. 2009) for composing this global predictive model (e.g., weighted sum, weighted geometric mean, and AHP).
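The weighted-sum variant of this composition can be sketched as follows; the two local models and the weights are illustrative placeholders, not values from the paper:

```python
# Sketch: composing normalized local predictive models into a global
# predictive model by a weighted sum g(x) = sum_i w_i * m_i(x).

def make_global_model(models, weights):
    """Return the weighted-sum composition of normalized local models."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    def global_model(x):
        return sum(w * m(x) for w, m in zip(weights, models))
    return global_model

m1 = lambda x: x[0]          # e.g., a normalized tensile-strength model
m2 = lambda x: 1.0 - x[1]    # e.g., a normalized surface-roughness model
g = make_global_model([m1, m2], [0.6, 0.4])
print(g([0.5, 0.25]))        # 0.6*0.5 + 0.4*0.75 = 0.6
```

A weighted geometric mean would replace the sum with a product of powers, with the rest of the scaffolding unchanged.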
Check confidence level
The composed global predictive model needs to be evaluated to determine whether it satisfies the required confidence level. The
accuracy, reliability, and robustness of the model need to be
considered, as well (Eddy et al. 2014). Even though each
local predictive model already satisfies the required confidence level, there is no guarantee that the global predictive
model will satisfy its own confidence level. In other words, it
is not easy to quantify the uncertainty of the global predictive
model as a simple function of the uncertainties of the individual models. The difficulty lies in the fact that the global
uncertainty is impacted by multiple, interacting criteria with
subjective normalization methods and weighting factors. If
that global uncertainty is too high, the global predictive
model can be updated with a robustness model (see Fig. 3).
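One common way (though not the only one, and not prescribed by the paper) to estimate how local-model uncertainties propagate into the global model is Monte Carlo sampling; the sketch below assumes independent Gaussian error terms and illustrative weights:

```python
# Sketch: Monte Carlo propagation of local-model uncertainty into a
# weighted-sum global model, assuming independent Gaussian errors.
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([0.5, 0.3, 0.2])       # illustrative weighting factors
means = np.array([0.8, 0.6, 0.7])         # local normalized predictions at x
sigmas = np.array([0.05, 0.10, 0.02])     # local predictive uncertainties

# Sample local predictions, then compose each sample into a global value.
samples = rng.normal(means, sigmas, size=(100_000, 3)) @ weights
print(samples.mean(), samples.std())      # global prediction and its spread
```

If the resulting spread exceeds the required confidence level, the global model is updated as described above.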
Analytical-technology domain
Figure 4 shows the third tier, analytical technology domain,
including three steps: (1) translate the global predictive
model into optimization problem, (2) execute optimization
problem using an analytical solver, and (3) check feasibility
and optimality. The following subsections explain the details
of each step.
Translate a global predictive model into an optimization problem
As noted above, the global predictive model is used to
develop the multi-criteria objective function. To complete
the creation of the optimization problem, we need to formulate all of the constraints. Constraints can be generated
from the various viewpoints. For example, the process-plan viewpoint provides the information needed to formulate both the necessary precedence constraints and resource constraints.
The order viewpoint provides the information needed to formulate due date constraints. To provide this information, all
manufacturing viewpoints can be translated into a neutral
representation and stored in the manufacturing viewpoint
database for reusability.
Fig. 4 Procedures in the analytical technology domain
Execute optimization problem
After generating the required optimization formulation,
we generate a predictive metamodel using the approaches
described in Kim et al. (2015) and Denno and Kim (2015).
The metamodel code must then be serialized to generate the
exact input for each specific, analytical-optimization tool.
Each such tool consists of a solver and a modeling environment. A solver (e.g., CPLEX) is a generic term indicating a piece of software that solves a mathematical optimization problem formulated as a linear program, a nonlinear program, or a mixed-integer linear program, among others. Each solver typically can solve many such “programs”
using a number of well-known solution methods. A modeling environment (e.g., AMPL Fourer et al. 1990 and GAMS
Bussieck and Meeraus 2004) provides general and intuitive
ways to express mathematical optimization problems, generates problem instances, offers features for importing data,
invokes solvers, analyzes results, scripts extended algorithmic schemes, and interfaces to other applications.
Check feasibility and optimality
The generated results from the solver need to be checked
for both feasibility and optimality. If the problem has been
formulated correctly, this is generally not a problem. If the check fails, tracing the reason for the infeasibility or incorrectness back to the detailed information in the multiple manufacturing viewpoints is required. The new formulation
will go through the same steps as before.
A case study
To illustrate the use of this approach for composing predictive models from disparate knowledge sources, we used the previous case study described in Kim et al. (2015), which defines and refines process and material parameters in the metal
laser-sintering AM process. The previous case study used six
equation-based models to predict how process performance
and part quality will respond to changes in the smart manufacturing environment. Consequently, the results from the
previous case study provide actionable recommendations to a decision-maker for control purposes. However, the original work involved only a single optimization criterion, which means the previous approach and case study cannot manage multiple criteria at the same time. Here, based on the previous example, we extend the single-objective optimization problem to a multi-criteria one. Specifically, we focus on how
the various predictive models are integrated into a single,
multi-criteria objective function.
Domain of discourse
Define decision goals
The decision goal is to find the optimal process and material parameters for manufacturing a 54 × 54 × 28 mm turbine blade with a volume of 20,618 mm³ using a laser-sintering process. This process was performed on an EOS M270 direct-metal laser-sintering (DMLS) machine using Raymor Ti–6Al–4V powder with an apparent density of 2.55 g/cm³. The five predictive models (Y1−Y5) are utilized from the previous research in Kim et al. (2015), as shown below:

Y1 = (αtime × L) + βtime × Σi (VPi / VAi)
Y2 = x1 / (x2 × x3 × x4) × Vol
Y3 = 841.6 + 0.175x2 − 2.375x3 + 5.5x5 − 16.6x2² − 19.9x3² − 22.45x5² + 0.2x2x3 − 3.85x2x5 + 0.55x3x5
Y4 = 55.0 + 0.5875x1 + 19.975x3 + 0.0375x4 + 4.55x1² + 0.125x3² + 9.85x4² − 0.175x1x3 + 0.6x1x4 + 0.125x3x4
Y5 = 3.0 + 0.05x1 + 1.9625x3 + 0.0125x4 + 0.95x1² + 0.025x3² + 1.525x4² + 0.025x1x3 + 0.075x1x4 + 0.05x3x4

where αtime is 10.82 s, L is the number of layers, and βtime is 0.0125 s. The average occupancy rate in each voxel (VPi/VAi) is assumed to be 0.3. To verify the predictive models (Y3−Y5), the R² error (the coefficient of determination) was computed.
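The three empirical RSM models can be encoded directly as functions; the sketch below treats the x_i as the centered (coded) variables of the Box–Behnken design, which is an assumption, so evaluating at x = 0 returns each model's intercept:

```python
# Sketch: the case study's empirical regression models Y3-Y5 as plain
# Python functions of the (assumed coded) decision variables.

def Y3(x2, x3, x5):
    """Volumetric error (empirical RSM model)."""
    return (841.6 + 0.175*x2 - 2.375*x3 + 5.5*x5
            - 16.6*x2**2 - 19.9*x3**2 - 22.45*x5**2
            + 0.2*x2*x3 - 3.85*x2*x5 + 0.55*x3*x5)

def Y4(x1, x3, x4):
    """Tensile strength (empirical RSM model)."""
    return (55.0 + 0.5875*x1 + 19.975*x3 + 0.0375*x4
            + 4.55*x1**2 + 0.125*x3**2 + 9.85*x4**2
            - 0.175*x1*x3 + 0.6*x1*x4 + 0.125*x3*x4)

def Y5(x1, x3, x4):
    """Surface roughness (empirical RSM model)."""
    return (3.0 + 0.05*x1 + 1.9625*x3 + 0.0125*x4
            + 0.95*x1**2 + 0.025*x3**2 + 1.525*x4**2
            + 0.025*x1*x3 + 0.075*x1*x4 + 0.05*x3*x4)

print(Y3(0, 0, 0), Y4(0, 0, 0), Y5(0, 0, 0))   # the intercepts
```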
Identify relevant manufacturing viewpoints

From the six manufacturing viewpoints (Kim et al. 2015), we selected only five optimization criteria: manufacturing time (Y1), energy consumption (Y2), volumetric error (Y3), tensile strength (Y4), and surface roughness (Y5). Manufacturing cost is considered to violate the independency principle in the “Identify manufacturing viewpoints” section: since manufacturing cost sits at a higher level than Y1−Y5, we did not include it in this case study. In addition, we selected five bounded decision variables: laser power X1 (80–160 W), scanning speed X2 (360–840 mm/s), layer thickness X3 (20–50 µm), hatch distance X4 (100–160 µm), and oxygen X5 (0.13–0.17%).

Develop predictive models

In this step, the predictive models for manufacturing time (Y1) and energy consumption (Y2) are represented using analytical models, Eqs. (7) and (8), respectively. The remaining criteria, volumetric error (Y3), tensile strength (Y4), and surface roughness (Y5), are represented using empirical (regression) models, Eqs. (9) through (11). A Taguchi design is used to determine the key process parameters with an analysis of variance (ANOVA) test. Then, the predictive models are developed using the Box–Behnken experimental design and the response surface method (RSM).

Problem domain

Normalize the predictive models

Since the criteria have different units, a normalization step for the five predictive models is required to make each response dimensionless and comparable. We used the linear min–max normalization method (Wang et al. 2009) and the following rules to achieve that normalization. The first rule: if the target value of a response is “the-larger-the-better”, such as tensile strength, the normalized predictive model Ŷ̄n is estimated by
Ŷn − min Ŷn
Ŷ¯ n =
max Ŷn − max Ŷn
where Ŷ¯ n is 0 ≤ Ŷ¯ n ≤ 1.
If the target value of a response is "the-smaller-the-better," such as manufacturing time, energy consumption, volumetric error, and surface roughness, the normalized predictive model Ŷ̄n is estimated by:

Ŷ̄n = (max Ŷn − Ŷn) / (max Ŷn − min Ŷn)
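The two normalization rules can be sketched together in one function; the numeric bounds used in the example calls are hypothetical, not the values of Table 1.

```python
def normalize(y, y_min, y_max, larger_is_better):
    """Linear min-max normalization of a response to [0, 1]."""
    if larger_is_better:                      # e.g., tensile strength
        return (y - y_min) / (y_max - y_min)
    return (y_max - y) / (y_max - y_min)      # e.g., time, energy, error, roughness

# Hypothetical bounds: tensile strength (MPa), larger is better
print(normalize(830.0, 800.0, 850.0, True))   # → 0.6
# Hypothetical bounds: surface roughness (µm), smaller is better
print(normalize(50.0, 40.0, 70.0, False))
```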
Table 1 Min and max values for the five responses (columns: min Ŷn, max Ŷn, and max Ŷn − min Ŷn; rows: manufacturing time Ŷ1, energy consumption Ŷ2, volumetric error Ŷ3, tensile strength Ŷ4, and surface roughness Ŷ5; numeric entries not recovered)
Table 2 Determined weights according to the manufacturing viewpoints considered, and their validation, for Cases 1, 2, and 3 (numeric entries not recovered)
The minimum and maximum values of each response (Y1–Y5) are calculated by minimizing and maximizing that response without constraints. Table 1 shows the min/max values and their differences for the five responses.
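This min/max computation can be sketched with a solver-free grid search over the bounded decision variables; the `response` function here is a hypothetical stand-in for a fitted RSM model, and only three of the five variables are used to keep the grid small.

```python
import itertools

# Decision-variable bounds from the case study (a three-variable subset):
# laser power X1 (W), scanning speed X2 (mm/s), layer thickness X3 (µm)
bounds = {"X1": (80, 160), "X2": (360, 840), "X3": (20, 50)}

def response(x):
    """Hypothetical stand-in for a fitted RSM model of one response."""
    return 0.001 * x["X1"] * x["X3"] + 1e-5 * (x["X2"] - 600) ** 2

def min_max(model, bounds, steps=21):
    """Min and max of a response over a uniform grid on the bounds."""
    grids = {k: [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
             for k, (lo, hi) in bounds.items()}
    vals = [model(dict(zip(grids, combo)))
            for combo in itertools.product(*grids.values())]
    return min(vals), max(vals)

lo, hi = min_max(response, bounds)
print(lo, hi)  # these play the roles of min Yn and max Yn in Table 1
```

In practice each fitted response would be minimized and maximized with a numerical optimizer rather than a grid, but the role of the resulting bounds in the normalization step is the same.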
Determine weighting factors
Among several methods for weighting-factor determination (e.g., TOPSIS and entropy), the analytic hierarchy process (AHP) is used here as an example, due to its simplicity and efficiency (Saaty 2008). A square, reciprocal, pairwise-comparison matrix A of order 5 is formulated based on the relative importance of the criteria. In this case study, the matrix is given as an example in Eq. (14):
A = [5 × 5 reciprocal pairwise-comparison matrix over Y1–Y5; entries not recovered]   (14)
Random index (RI) is 1.12 when the number of criteria (n) is 5. The CR is then 0.016, which means the estimated weights are consistent (Saaty 2008). Table 2 shows the manufacturing viewpoints considered and the resulting weights and their validation for the three cases. Just as in Case 1, this can be extended to Cases 2 and 3. From the table, it is easy to see that the weights are consistent.
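The AHP weight computation and consistency check can be sketched as follows; the 3 × 3 matrix is a hypothetical example, not the paper's Eq. (14), and the column-average approximation of the priority vector is used in place of a full eigenvector computation.

```python
def ahp_weights(A):
    """AHP priority vector via the column-average approximation,
    plus the consistency ratio CR = CI/RI."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    # Estimate the principal eigenvalue lambda_max from A*w
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                 # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
    return w, ci / ri

# Hypothetical 3x3 reciprocal comparison matrix (NOT the paper's Eq. (14))
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
weights, cr = ahp_weights(A)
print([round(x, 3) for x in weights], round(cr, 4))
```

This example matrix is perfectly consistent, so its CR is essentially zero; a CR below 0.1 is the usual acceptance threshold.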
Develop a global predictive model
After determining the weights, the final, global predictive model can be built using one of two approaches: single-objective decision-making or multi-objective decision-making. In this step, we use a simple weighted-sum method (Saaty 2008) to generate the single-objective function, called a global predictive model. The summation is constructed from the normalized predictive models and the subjective weights assigned to them. The formula is as follows:
Y = f(x1, x2, . . . , xn) = Σj wj Ŷ̄j

where Σj wj = 1. The weight for each criterion is calculated as 0.125, 0.088, 0.363, 0.221, and 0.203, respectively. For a reliability check of the weights, the consistency ratio (CR) is calculated by:

CR = CI/RI
The consistency index (CI) is estimated by (λmax − n)/(n − 1), where the maximum eigenvalue (λmax) is calculated as 5.072.

The final, global, predictive model for Case 1 is shown in Eq. (17). In a similar way, the global predictive models for Cases 2 and 3 can be formulated:

Y = w1 × (max Ŷ1 − Ŷ1)/(max Ŷ1 − min Ŷ1) + w2 × (max Ŷ2 − Ŷ2)/(max Ŷ2 − min Ŷ2)
  + w3 × (max Ŷ3 − Ŷ3)/(max Ŷ3 − min Ŷ3) + w4 × (Ŷ4 − min Ŷ4)/(max Ŷ4 − min Ŷ4)
  + w5 × (max Ŷ5 − Ŷ5)/(max Ŷ5 − min Ŷ5)   (17)
Fig. 5 Example of optimization programming language (OPL) source code
Analytical technology domain
Translate it to the OPL metamodel for an MCDM problem
The Analytical-technology-domain tier includes a metamodel of the analytical tools available to solve different
kinds of optimization problems. Figure 1 depicts the use
of an optimization metamodel for a class of such tools that
might include optimization programming language (OPL)
(Hentenryck 1999), A mathematical programming language
(AMPL) (Fourer et al. 1990), and general algebraic modeling systems (GAMS) (Bussieck and Meeraus 2004). That
metamodel is defined as a meta-object facility (MOF) (OMG
2014) conforming model. Such models include specifications
of classes and their relationships. Each instance of the model
(1) consists of a collection of instances of the classes and relationships and (2) represents the formulation of each particular
analytical problem in the terms of the specific analytical tool
that will be used to solve it. This collection of instances and
relationships can be serialized and used as input to the analytical tool. For example, a collection of instances representing the example problem could be serialized as OPL code and executed by an optimization tool that is capable of processing OPL.
This multi-criteria function is a weighted composition of the generated predictive models of each response. In our approach, these predictive models are treated as information objects that must be organized into a template for optimization. The template directs the mapping of these mathematical predictive models, and similarly constructed constraints, into such objects. The resulting information objects are then used to populate a metamodel that serves as an interface for the particular tool that is used to solve that optimization problem.1 Analytical models, such as Eq. (17), and the bounds for each decision variable become the objective function and the constraints, respectively, in the multi-criteria optimization formulation. They are assigned the corresponding roles as information objects in the metamodel, which was created using the optimization programming language (OPL) (Hentenryck 1999). Figure 5 shows a window snapshot containing portions of the OPL source code for case study 3 in IBM ILOG CPLEX Optimization Studio (version 12.4); it shows the minimum and maximum values, the predictive models, the integrated single-objective function, and the constraints.
1 Note that each such tool will require its own metamodel.
Case study 3 considers three manufacturing viewpoints: manufacturing time (Y1), volumetric error (Y3), and tensile strength (Y4).
Actionable recommendations
We used the methods described in our previous research (Kim et al. 2015; Denno and Kim 2015) to serialize these objects so that they can be used as inputs to the optimization solver in the IPython notebook environment (Pérez and Granger 2007). Figure 6 shows a window snapshot containing the example code and its result for case study 3 in the IPython notebook environment. The output includes the optimal objective value, the optimal manufacturing parameters, and the resulting responses.
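The solve step can also be sketched without OPL/CPLEX; the following solver-free random search is a minimal stand-in in which `objective` and `feasible` are hypothetical placeholders for the composed global model and its response constraints, while the variable bounds are those of the case study.

```python
import random

random.seed(0)
# Bounds for X1..X5 from the case study
BOUNDS = [(80, 160), (360, 840), (20, 50), (100, 160), (0.13, 0.17)]

def objective(x):
    """Hypothetical stand-in for the composed global model Y(x)."""
    return sum((xi - lo) / (hi - lo) for xi, (lo, hi) in zip(x, BOUNDS)) / 5

def feasible(x):
    """Hypothetical placeholder for the response constraints (e.g., a Y1 limit)."""
    return x[0] * x[1] <= 80000

def random_search(n=10000):
    """Maximize objective over BOUNDS subject to feasible(x)."""
    best_x, best_y = None, float("-inf")
    for _ in range(n):
        x = [random.uniform(lo, hi) for lo, hi in BOUNDS]
        y = objective(x)
        if feasible(x) and y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

x_opt, y_opt = random_search()
print(round(y_opt, 3), [round(v, 3) for v in x_opt])
```

A production workflow would hand the serialized objective and constraints to a dedicated solver such as CPLEX, as described above; the sketch only illustrates how the composed objects map onto a solve call.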
Table 3 shows the objective function and constraints,
optimal results, optimal manufacturing parameters, and the
results of the responses. In Case 1, the objective function
is to maximize Y with constraints (Y1 ≤ 8572.7 s, Y2 ≤
1518 kJ, Y3 ≤ 4.68%, Y4 ≥ 800 MPa, and Y5 ≤ 70.67 µm).
The optimal manufacturing parameters (X1: 116.50 W, X2: 503.49 mm/s, X3: 28.32 µm, X4: 131.37 µm, and X5: 0.153%) are determined. In addition, the maximized value of
Y is 0.799. Other responses include Y1 = 12,081.00 s, Y2 =
1282.22 kJ, Y3 = 2.14%, Y4 = 836.51 MPa, and Y5 =
46.12 µm. Just as in Case 1, this can be extended to Cases 2
and 3.
This case study demonstrates that the optimal process parameters can be determined with respect to different stakeholders’ needs. This approach can help provide better-quality products and services and allocate resources efficiently, while achieving the desired profitability and minimizing waste (e.g., air pollutants) in SM environments.
Discussion and conclusion
One of the benefits of our approach is reusability. As shown in the “Analytical technology domain” section, each individual predictive model can be created in a modular way, independent of the other predictive models. To provide an actionable recommendation under certain manufacturing/design conditions,
the individual predictive models can be composed to create
the required multi-criteria objective function and constraints.
In other words, individual predictive models can be reused in
different optimization formulations according to the defined
objective and constraints, as shown in Table 3. This leads to a reduction of time and effort compared to how these activities might be performed in traditional quality processes.

Fig. 6 Example code and its result for case study 3 in the IPython notebook environment

Table 3 Input and outputs of Cases 1, 2, and 3 (column assignment of some Case 2 and Case 3 entries reconstructed from the extracted layout)

                       Case 1              Case 2              Case 3
Objective              max(Y)              max(Y)              max(Y)
Constraints            Y1 ≤ 12,081 s       Y1 ≤ 10,081 s       Y1 ≤ 15,000 s
                       Y2 ≤ 1518 kJ        Y2 ≤ 1218 kJ        Y3 ≤ 3.65%
                       Y3 ≤ 4.68%          Y4 ≥ 810.0 MPa      Y4 ≥ 820.0 MPa
                       Y4 ≥ 800 MPa        Y5 ≤ 65.34 µm
                       Y5 ≤ 70.67 µm
Optimal results        Y = 0.799           Y = 0.793           Y = 0.807
Optimal manufacturing  X1 = 116.50 W       X1 = 100.91 W       X1 = 119.51 W
parameters             X2 = 503.49 mm/s    X2 = 500.96 mm/s    X2 = 502.94 mm/s
                       X3 = 28.32 µm       X3 = 33.98 µm       X3 = 28.74 µm
                       X4 = 131.37 µm      X4 = 130.67 µm      X4 = 130.51 µm
                       X5 = 0.153%         X5 = 0.154%         X5 = 0.153%
Results of responses   Y1 = 12,081.00 s    Y1 = 10,081.00 s    Y1 = 11,907.10 s
                       Y2 = 1282.22 kJ     Y2 = 935.42 kJ      Y3 = 2.19%
                       Y3 = 2.14%          Y4 = 839.32 MPa     Y4 = 836.89 MPa
                       Y4 = 836.51 MPa     Y5 = 54.39 µm
                       Y5 = 46.12 µm
Another benefit is that our approach is compatible with a
number of production-information standards. We believe it
would not be difficult to superimpose the approach on manufacturing systems that apply open standards for machine
monitoring, manufacturing execution, and quality reporting,
such as MTConnect (Vijayaraghavan and Dornfeld 2010),
quality information framework (QIF) (Zhao et al. 2012), and
ISA-95 (Scholten 2007).
The third benefit is that our approach is flexible and yet
enables automation. It can be implemented with assorted
optimization approaches and solver tools. Our use of IPython
notebook (Pérez and Granger 2007) provides great freedom
in problem formulation. We use web services to communicate between the notebook and supporting information and
tools. This means that routine analysis can be automated,
while less common activities can be performed as manual
steps inside the notebook.
Choosing weights that accurately capture the interests
and preferences of the various stakeholders is still more of
an art than a science. The reasons include the uncertainty
of information, the vagueness of human feelings, and the
recognition that terms like “equal”, “moderate”, “essential”,
“very strong”, and “absolute” are qualitative. Consequently,
converting these qualitative terms into quantitative weights
for the criteria can be a time-consuming, iterative process.
Yet, this process is necessary because the results from the
optimization can be affected significantly by the choices of
methods for scaling, normalization, weighting, and aggregation (Bohringer and Jochem 2007).
The composition of the global predictive model from
multiple local predictive models generates uncertainty propagation issues. Although each local predictive model may
satisfy the confidence level required, their composition may
not. This can happen because of the way the uncertainty
in each local model propagates when it is composed with
other local predictive models. Since each local predictive
model has its own uncertainties as shown in Eqs. (1)–(3),
uncertainties in local predictive models are also scaled, normalized, weighted, and aggregated (SNWAed) during the
procedure for the global predictive model generation. Thus,
the SNWAed uncertainty is not the simple summation of
uncertainties of each local predictive model. We did not address these SNWAed uncertainties, since they are outside the scope of this paper.
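One way to explore the SNWAed uncertainty numerically is Monte Carlo propagation; this sketch is illustrative only and is not the method of this paper. The two local models, their weights, and their noise levels are all hypothetical.

```python
import random

random.seed(1)
w = [0.5, 0.5]  # hypothetical weights for two local models

def propagate(n=20000):
    """Monte Carlo propagation of local-model noise into the weighted
    global model (illustrative sketch, not the paper's method)."""
    samples = []
    for _ in range(n):
        y1 = random.gauss(0.7, 0.05)   # hypothetical normalized local model 1
        y2 = random.gauss(0.6, 0.10)   # hypothetical normalized local model 2
        samples.append(w[0] * y1 + w[1] * y2)
    mean = sum(samples) / n
    std = (sum((s - mean) ** 2 for s in samples) / (n - 1)) ** 0.5
    return mean, std

mean, std = propagate()
# For independent inputs, std should be near sqrt(0.25*0.05**2 + 0.25*0.10**2) ≈ 0.0559
print(round(mean, 3), round(std, 3))
```

Note that the propagated standard deviation is not the sum of the local standard deviations, which is exactly the point made above about SNWAed uncertainty.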
The uncertainties in local predictive models can be categorized as aleatory, epistemic, or a mixture of the two
(Roy and Oberkampf 2011). There are four main sources of
uncertainty: model inputs, model assumptions, model form,
and numerical approximations. Among these, epistemic uncertainties can be reduced with a more accurate model, more
detailed knowledge, or better quality data. These uncertainties exist in input/output data and a local predictive model
itself. Thus, the global predictive model needs to be investigated to determine whether it satisfies the required confidence
level or not. For this task, quantifying uncertainty and its
propagation regarding different local predictive models will
be performed in the near future.
This paper proposes a comprehensive and systematic approach
to composing predictive models of manufacturing processes
from disparate analytical, experimental, and informational
sources. This approach is based on a three-tier architecture
consisting of domain of discourse, problem domain, and
analytical technology domain. The main benefit of this architecture is that it facilitates complex decision-making based
on multiple decision criteria. To demonstrate the proposed
approach, we described a case study related to an additive
manufacturing process. The decision was to determine the
optimal manufacturing process parameters using an optimization formulation with (1) an objective function based on five different weighted criteria and (2) interval constraints that bound five different process variables. The approach aims to provide cost-effective solutions for composing various optimization problems without burdening users with irrelevant technical details. In addition, it provides traces (e.g., motivation, constraints, and background) of the decision-making and verifiability of the results. In the near future, we will investigate and quantify the uncertainties in local predictive models and their propagated uncertainties in the global predictive
Acknowledgements This research work has been done with the main
author’s previous colleagues: Peter Denno (National Institute of Standards and Technology, NIST) and Dr. Albert Jones (NIST). The author
regrets that they could not be given authorship of this paper due to the NIST affiliation policy. In addition, the author would like to
thank them for their significant, scientific contributions to this research.
References
Anderson, G. (2016). The economic impact of technology infrastructure
for smart manufacturing (p. 4). Gaithersburg, MD, NIST Economic Analysis Briefs: NIST.
Arslan, M., Catay, B. L., & Budak, E. (2004). A decision support system
for machine tool selection. Journal of Manufacturing Technology
Management, 15(1), 101–109.
B2MML. (2003). Business to manufacturing markup language. The
World Batch Forum.
Bohringer, C., & Jochem, P. E. P. (2007). Measuring the immeasurable–
A survey of sustainability indices. Ecological Economics, 63, 1–8.
Bussieck, M. R., & Meeraus, A. (2004). General algebraic modeling
system (GAMS). Modeling languages in mathematical optimization. Berlin: Springer.
Campatelli, G., Lorenzo, L., & Antonio, S. (2014). Optimization of
process parameters using a response surface method for minimizing power consumption in the milling of carbon steel. Journal of
Cleaner Production, 66, 309–316.
Davis, J., Edgar, T., Porter, J., Bernaden, J., & Sarli, M. S. (2012). Smart
manufacturing, manufacturing intelligence and demand-dynamic
performance. Computers and Chemical Engineering, 47, 145–156.
Denno, P., & Kim, D. B. (2015). Integrating views of properties in
an analytical method for manufacturing. International Journal
of Computer Integrated Manufacturing,. doi:10.1080/0951192X.
Eddy, D., Krishnamurty, S., Grosse, I., Wileden, J., & Lewis, K. (2014).
A robust surrogate modeling approach for material selection in sustainable design of products. In ASME 2014 international design
engineering technical conferences and computers and information in engineering conference. American Society of Mechanical Engineers.
Fourer, R., Gay, D. M., & Kernighan, B. W. (1990). A modeling
language for mathematical programming. Management Science,
36(5), 519–554.
Gallaher, M. P., Oliver, Z. T., Reith, K. T., O’Conner, A. C. & RTI International. (2016). Economic analysis of technology infrastructure
needs for advanced manufacturing: Smart manufacturing. NIST
GCR 16-007.
Hentenryck, P. V. (1999). The OPL optimization programming language. Cambridge: The MIT Press.
ISO 14040. (2006). Environmental management–Life cycle
assessment–Principles and framework. Geneva: International
Organization for Standardization.
Jacquemin, L., Pontalier, P. Y., & Sablayrolles, C. (2012). Life cycle
assessment (LCA) applied to the process industry: A review. International Journal of Life Cycle Assessment, 17, 1028–1041.
Kim, D. B., Denno, P., & Jones, A. T. (2015). A model-based approach
to refine process parameters in smart manufacturing. Concurrent
Engineering: Research and Applications, 23, 1063293–15591038.
Lee, J., Lapira, E., Bagheri, B., & Kao, H. A. (2013). Recent advances
and trends in predictive manufacturing systems in big data environment. Manufacturing Letters, 1(1), 38–41.
Madic, M., Antucheviciene, J., Radovanovic, M., & Petkovic, D.
(2016). Determination of manufacturing process conditions by
using MCDM methods: Application in laser cutting. Engineering
Economics, 27(2), 144–150.
OMG. (2014). Meta-Object Facility (MOF) Core specification, version 2.4.1. Object Management Group. Retrieved 17 February 2014.
Pérez, F., & Granger, B. E. (2007). IPython: A system for interactive
scientific computing. Computing in Science and Engineering, 9(3),
Rajemi, M., Mativenge, P., & Aramcharoen, A. (2010). Sustainable
machining: Selection of optimum turning conditions based on
minimum energy considerations. Journal of Cleaner Production,
18(10), 1059–1065.
Roy, C. J., & Oberkampf, W. L. (2011). A comprehensive framework
for verification, validation, and uncertainty quantification in scientific computing. Computer Methods in Applied Mechanics and
Engineering, 200(25–28), 2131–2144.
Rudnik, K., & Kacprzak, D. (2017). Fuzzy TOPSIS method with
ordered fuzzy numbers for flow control in a manufacturing system. Applied Soft Computing, 52, 1020–1041.
Saaty, T. L. (2008). Decision making with the analytic hierarchy process.
International Journal of Services Sciences, 1(1), 83–98.
Scholten, B. (2007). The road to integration: A guide to applying the
ISA-95 standard in manufacturing. ISA.
Sen, P., Roy, M., & Pal, P. (2017). Evaluation of environmentally
conscious manufacturing programs using a three-hybrid multicriteria decision analysis method. Ecological Indicators, 73, 264–
Shrouf, F., Ordieres, J., & Miragliotta, G. (2014). Smart factories in
industry 4.0: A review of the concept and of energy management
approached in production based on the internet of things paradigm.
In IEEE international conference on industrial engineering and
engineering management (IEEM) (pp. 697–701). IEEE.
SMLC (Smart Manufacturing Leadership Coalition). (2011). Implementing 21st century smart manufacturing. Workshop summary report.
Vijayaraghavan, A., & Dornfeld, D. (2010). Automated energy monitoring of machine tools. CIRP Annals Manufacturing Technology,
59, 21–24.
Vinodh, S., Jayakrishna, K., Kumar, V., & Dutta, R. (2014). Development of decision support system for sustainability evaluation: A
case study. Clean Technologies and Environmental Policy, 15(1),
Wang, J. J., Jing, Y., Zhang, C., & Zhao, J. (2009). Review on
multi-criteria decision analysis aid in sustainable energy decisionmaking. Renewable and Sustainable Energy Reviews, 13(9),
Yusof, Y., & Latif, K. (2014). Survey on computer-aided process
planning. The International Journal of Advanced Manufacturing
Technology, 75(1–4), 77–89.
Zhao, F., Murray, V., Ramani, K., & Sutherland, J. (2012). Toward the
development of process plans with reduced environmental impacts.
Frontiers of Mechanical Engineering, 7(3), 231–246.
Zhao, Y. F., Horst, J. A., Kramer, T. R., Rippey, W., & Brown, R. (2012).
Quality information framework-integrating metrology processes.
Information Control Problems in Manufacturing, 14(1), 1301–