Topic Title: Reliability Engineering
Topic Summary: Any Reliability Engineers in the IET?
Created On: 30 April 2009 06:25 AM
Status: Read Only
 30 June 2011 01:12 PM



VINODPALSINGH

Posts: 15
Joined: 09 April 2011

Hi Dvaidr,

The UK has a formal regulatory framework on quantitative risk criteria, which is very nicely explained in the HSE (UK) document 'Reducing Risks, Protecting People', freely available online. Other nations, including the Netherlands, Switzerland, Singapore, and others, appear to have similar frameworks for addressing the subject.

A summary of the risk criteria in use today by various entities is very nicely captured in the AIChE/CCPS 'Guidelines for Developing Quantitative Safety Risk Criteria'. I believe there is also some material in 'Lees' Loss Prevention in the Process Industries', Vol. 1, which is available to IET members through Knovel.

Returning to reliability engineering: I fully support the development of a TPN community within the IET. I think the IET should have done this a long time ago.

Regards,

-Vinod

-------------------------
Regards,
-VINOD PAL SINGH, Abu Dhabi,UAE.
 05 July 2011 09:47 AM



VINODPALSINGH

Posts: 15
Joined: 09 April 2011

Dear Forum Members,

ASQ has a personnel certification program related to reliability engineering: the CRE (Certified Reliability Engineer). I am not posting this to market the certification, but I like the CRE Body of Knowledge (CRE-BoK), which captures the breadth of reliability engineering. For reference, I am copying the CRE-BoK below.

CERTIFIED RELIABILITY ENGINEER (CRE)
BODY OF KNOWLEDGE
The topics in this Body of Knowledge include additional detail in the form of subtext explanations and the cognitive level at which the questions will be written. This information will provide useful guidance for both the Examination Development Committee and the candidates preparing to take the exam. The subtext is not intended to limit the subject matter or be all-inclusive of what might be covered in an exam. It is intended to clarify the type of content to be included in the exam. The descriptor in parentheses at the end of each entry refers to the highest cognitive level at which the topic will be tested. A more comprehensive description of cognitive levels is provided at the end of this document.

I. RELIABILITY MANAGEMENT (18 Questions)
A. Strategic management
1. Benefits of reliability engineering
Describe how reliability engineering techniques and methods improve programs, processes, products, systems, and services. (Understand)
2. Interrelationship of safety, quality, and reliability
Define and describe the relationships among safety, reliability, and quality. (Understand)
3. Role of the reliability function in the organization
Describe how reliability techniques can be applied in other functional areas of the organization, such as marketing, engineering, customer/product support, safety and product liability, etc. (Apply)
4. Reliability in product and process development
Integrate reliability engineering techniques with other development activities, concurrent engineering, corporate improvement initiatives such as lean and six sigma methodologies, and emerging technologies. (Apply)
5. Failure consequence and liability management
Describe the importance of these concepts in determining reliability acceptance criteria. (Understand)
6. Warranty management
Define and describe warranty terms and conditions, including warranty period, conditions of use, failure criteria, etc., and identify the uses and limitations of warranty data. (Understand)
7. Customer needs assessment
Use various feedback methods (e.g., quality function deployment (QFD), prototyping, beta testing) to determine customer needs in relation to reliability requirements for products and services. (Apply)
8. Supplier reliability
Define and describe supplier reliability assessments that can be monitored in support of the overall reliability program. (Understand)
B. Reliability program management
1. Terminology
Explain basic reliability terms (e.g., MTTF, MTBF, MTTR, availability, failure rate, reliability, maintainability). (Understand)
2. Elements of a reliability program
Explain how planning, testing, tracking, and using customer needs and requirements are used to develop a reliability program, and identify various drivers of reliability requirements, including market expectations and standards, as well as safety, liability, and regulatory concerns. (Understand)
3. Types of risk
Describe the relationship between reliability and various types of risk, including technical, scheduling, safety, financial, etc. (Understand)
4. Product lifecycle engineering
Describe the impact various lifecycle stages (concept/design, introduction, growth, maturity, decline) have on reliability, and the cost issues (product maintenance, life expectation, software defect phase containment, etc.) associated with those stages. (Understand)
5. Design evaluation
Use validation, verification, and other review techniques to assess the reliability of a product's design at various lifecycle stages. (Analyze)
6. Systems engineering and integration
Describe how these processes are used to create requirements and prioritize design and development activities. (Understand)
C. Ethics, safety, and liability
1. Ethical issues
Identify appropriate ethical behaviors for a reliability engineer in various situations. (Evaluate)
2. Roles and responsibilities
Describe the roles and responsibilities of a reliability engineer in relation to product safety and liability. (Understand)
3. System safety
Identify safety-related issues by analyzing customer feedback, design data, field data, and other information. Use risk management tools (e.g., hazard analysis, FMEA, FTA, risk matrix) to identify and prioritize safety concerns, and identify steps that will minimize the misuse of products and processes. (Analyze)
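To make the basic terminology in I.B.1 concrete, here is a minimal Python sketch computing MTBF, MTTR, failure rate, and availability from a failure log. This is my own illustration, not part of the ASQ text, and the operating/repair hours are made-up values.

# Basic reliability terms computed from a hypothetical failure log.
uptimes_h = [1200.0, 950.0, 1480.0, 1100.0]  # hours of operation between failures
repairs_h = [4.0, 6.5, 3.0, 5.5]             # hours spent repairing each failure

mtbf = sum(uptimes_h) / len(uptimes_h)       # mean time between failures
mttr = sum(repairs_h) / len(repairs_h)       # mean time to repair
failure_rate = 1.0 / mtbf                    # failures per operating hour
availability = mtbf / (mtbf + mttr)          # steady-state (inherent) availability

print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.2f} h")
print(f"failure rate = {failure_rate:.2e} /h, availability = {availability:.4f}")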
II. PROBABILITY AND STATISTICS FOR RELIABILITY (27 Questions)
A. Basic concepts
1. Statistical terms
Define and use terms such as population, parameter, statistic, sample, the central limit theorem, etc., and compute their values. (Apply)
2. Basic probability concepts
Use basic probability concepts (e.g., independence, mutually exclusive, conditional probability) and compute expected values. (Apply)
3. Discrete and continuous probability distributions
Compare and contrast various distributions (binomial, Poisson, exponential, Weibull, normal, log-normal, etc.) and their functions (e.g., cumulative distribution functions (CDFs), probability density functions (PDFs), hazard functions), and relate them to the bathtub curve. (Analyze)
4. Poisson process models
Define and describe homogeneous and non-homogeneous Poisson process models (HPP and NHPP). (Understand)
5. Non-parametric statistical methods
Apply non-parametric statistical methods, including median, Kaplan-Meier, Mann-Whitney, etc., in various situations. (Apply)
6. Sample size determination
Use various theories, tables, and formulas to determine appropriate sample sizes for statistical and reliability testing. (Apply)
7. Statistical process control (SPC) and process capability
Define and describe SPC and process capability studies (Cp, Cpk, etc.), their control charts, and how they are all related to reliability. (Understand)
B. Statistical inference
1. Point estimates of parameters
Obtain point estimates of model parameters using probability plots, maximum likelihood methods, etc. Analyze the efficiency and bias of the estimators. (Evaluate)
2. Statistical interval estimates
Compute confidence intervals, tolerance intervals, etc., and draw conclusions from the results. (Evaluate)
3. Hypothesis testing (parametric and non-parametric)
Apply hypothesis testing for parameters such as means, variance, proportions, and distribution parameters. Interpret significance levels and Type I and Type II errors for accepting/rejecting the null hypothesis. (Evaluate)
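Two Section II topics lend themselves to a short worked example: the Weibull hazard function, whose shape parameter maps onto the regions of the bathtub curve, and a chi-square confidence interval for an exponential MTBF. The sketch below is my own illustration with made-up test data, and it assumes scipy is available.

from scipy.stats import chi2

# Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta - 1):
# beta < 1 -> infant mortality, beta = 1 -> useful life, beta > 1 -> wear-out.
def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

for beta in (0.5, 1.0, 3.0):
    print(f"beta={beta}: h(100 h) = {weibull_hazard(100.0, beta, 1000.0):.2e} /h")

# Two-sided 90% confidence interval for an exponential MTBF from a
# failure-truncated test: r observed failures in T total unit-hours.
r, T = 8, 10000.0
alpha = 0.10
mtbf_hat = T / r
lower = 2 * T / chi2.ppf(1 - alpha / 2, 2 * r)
upper = 2 * T / chi2.ppf(alpha / 2, 2 * r)
print(f"MTBF estimate {mtbf_hat:.0f} h, 90% CI ({lower:.0f}, {upper:.0f}) h")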
III. RELIABILITY IN DESIGN AND DEVELOPMENT (26 Questions)
A. Reliability design techniques
1. Environmental and use factors
Identify environmental and use factors (e.g., temperature, humidity, vibration) and stresses (e.g., severity of service, electrostatic discharge (ESD), throughput) to which a product may be subjected. (Apply)
2. Stress-strength analysis
Apply stress-strength analysis method of computing probability of failure, and interpret the results. (Evaluate)
3. FMEA and FMECA
Define and distinguish between failure mode and effects analysis and failure mode, effects, and criticality analysis and apply these techniques in products, processes, and designs. (Analyze)
4. Common mode failure analysis
Describe this type of failure (also known as common cause mode failure) and how it affects design for reliability. (Understand)
5. Fault tree analysis (FTA) and success tree analysis (STA)
Apply these techniques to develop models that can be used to evaluate undesirable (FTA) and desirable (STA) events. (Analyze)
6. Tolerance and worst-case analyses
Describe how tolerance and worst-case analyses (e.g., root of sum of squares, extreme value) can be used to characterize variation that affects reliability. (Understand)
7. Design of experiments
Plan and conduct standard design of experiments (DOE) (e.g., full-factorial, fractional factorial, Latin square design). Implement robust-design approaches (e.g., Taguchi design, parametric design, DOE incorporating noise factors) to improve or optimize design. (Analyze)
8. Fault tolerance
Define and describe fault tolerance and the reliability methods used to maintain system functionality. (Understand)
9. Reliability optimization
Use various approaches, including redundancy, derating, trade studies, etc., to optimize reliability within the constraints of cost, schedule, weight, design requirements, etc. (Apply)
10. Human factors
Describe the relationship between human factors and reliability engineering. (Understand)
11. Design for X (DFX)
Apply DFX techniques such as design for assembly, testability, maintainability, environment (recycling and disposal), etc., to enhance a product's producibility and serviceability. (Apply)
12. Reliability apportionment (allocation) techniques
Use these techniques to specify subsystem and component reliability requirements. (Analyze)
B. Parts and systems management
1. Selection, standardization, and reuse
Apply techniques for materials selection, parts standardization and reduction, parallel modeling, software reuse, including commercial off-the-shelf (COTS) software, etc. (Apply)
2. Derating methods and principles
Use methods such as S-N diagram, stress-life relationship, etc., to determine the relationship between applied stress and rated value, and to improve design. (Analyze)
3. Parts obsolescence management
Explain the implications of parts obsolescence and requirements for parts or system requalification. Develop risk mitigation plans such as lifetime buy, backwards compatibility, etc. (Apply)
4. Establishing specifications
Develop metrics for reliability, maintainability, and serviceability (e.g., MTBF, MTBR, MTBUMA, service interval) for product specifications. (Create)
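As an illustration of the stress-strength analysis topic in III.A.2: if stress and strength are independent normal random variables, the probability of failure has a closed form. The sketch below is my own example with made-up means and standard deviations.

from math import sqrt
from scipy.stats import norm

# P(failure) = P(stress > strength) = Phi((mu_S - mu_R) / sqrt(sd_S**2 + sd_R**2))
mu_S, sd_S = 300.0, 30.0   # applied stress (e.g., MPa)
mu_R, sd_R = 400.0, 40.0   # material strength (e.g., MPa)

z = (mu_S - mu_R) / sqrt(sd_S**2 + sd_R**2)
p_fail = norm.cdf(z)
print(f"P(failure) = {p_fail:.4f}, reliability = {1 - p_fail:.4f}")

With these numbers the margin is two standard deviations of the interference distribution, so the probability of failure is about 2.3%.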
IV. RELIABILITY MODELING AND PREDICTIONS (22 Questions)
A. Reliability modeling
1. Sources and uses of reliability data
Describe sources of reliability data (prototype, development, test, field, warranty, published, etc.), their advantages and limitations, and how the data can be used to measure and enhance product reliability. (Apply)
2. Reliability block diagrams and models
Generate and analyze various types of block diagrams and models, including series, parallel, partial redundancy, time-dependent, etc. (Create)
3. Physics of failure models
Identify various failure mechanisms (e.g., fracture, corrosion, memory corruption) and select appropriate theoretical models (e.g., Arrhenius, S-N curve) to assess their impact. (Apply)
4. Simulation techniques
Describe the advantages and limitations of the Monte Carlo and Markov models. (Apply)
5. Dynamic reliability
Describe dynamic reliability as it relates to failure criteria that change over time or under different conditions. (Understand)
B. Reliability predictions
1. Part count predictions and part stress analysis
Use parts failure rate data to estimate system- and subsystem-level reliability. (Apply)
2. Reliability prediction methods
Use various reliability prediction methods for both repairable and non-repairable components and systems, incorporating test and field reliability data when available. (Apply)
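To illustrate the reliability block diagram topic in IV.A.2, here is a minimal sketch of the series/parallel algebra. This is my own example and the component reliabilities are made-up values.

def series(*rs):
    # Series blocks: all must work, so R = product of component reliabilities.
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    # Active-redundant parallel blocks: the system fails only if all fail.
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# A pump (R = 0.95) in series with a redundant pair of controllers (R = 0.90 each):
r_system = series(0.95, parallel(0.90, 0.90))
print(f"system reliability = {r_system:.4f}")  # 0.95 * (1 - 0.10 * 0.10) = 0.9405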
V. RELIABILITY TESTING (24 Questions)
A. Reliability test planning
1. Reliability test strategies
Create and apply the appropriate test strategies (e.g., truncation, test-to-failure, degradation) for various product development phases. (Create)
2. Test environment
Evaluate the environment in terms of system location and operational conditions to determine the most appropriate reliability test. (Evaluate)
B. Testing during development
Describe the purpose, advantages, and limitations of each of the following types of tests, and use common models to develop test plans, evaluate risks, and interpret test results. (Evaluate)
1. Accelerated life tests (e.g., single-stress, multiple-stress, sequential stress, step-stress)
2. Discovery testing (e.g., HALT, margin tests, sample size of 1)
3. Reliability growth testing (e.g., test, analyze, and fix (TAAF), Duane)
4. Software testing (e.g., white-box, black-box, operational profile, and fault-injection)
C. Product testing
Describe the purpose, advantages, and limitations of each of the following types of tests, and use common models to develop product test plans, evaluate risks, and interpret test results. (Evaluate)
1. Qualification/demonstration testing (e.g., sequential tests, fixed-length tests)
2. Product reliability acceptance testing (PRAT)
3. Ongoing reliability testing (e.g., sequential probability ratio test [SPRT])
4. Stress screening (e.g., ESS, HASS, burn-in tests)
5. Attribute testing (e.g., binomial, hypergeometric)
6. Degradation (wear-to-failure) testing
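The Duane model mentioned in V.B.3 can be illustrated in a few lines: plotted on log-log axes, cumulative MTBF versus cumulative test time is a straight line whose slope is the growth rate. This is my own sketch and the failure times are made-up values.

import numpy as np

fail_times_h = np.array([35.0, 110.0, 250.0, 480.0, 900.0, 1500.0])
n = np.arange(1, len(fail_times_h) + 1)
cum_mtbf = fail_times_h / n  # cumulative MTBF observed at each failure

# Fit log(cumulative MTBF) = alpha * log(T) + c by least squares.
alpha, c = np.polyfit(np.log(fail_times_h), np.log(cum_mtbf), 1)

# Under the Duane model, instantaneous MTBF = cumulative MTBF / (1 - alpha).
mtbf_inst = cum_mtbf[-1] / (1.0 - alpha)
print(f"growth rate alpha = {alpha:.2f}, instantaneous MTBF = {mtbf_inst:.0f} h")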
VI. MAINTAINABILITY AND AVAILABILITY (15 Questions)
A. Management strategies
1. Planning
Develop plans for maintainability and availability that support reliability goals and objectives. (Create)
2. Maintenance strategies
Identify the advantages and limitations of various maintenance strategies (e.g., reliability-centered maintenance (RCM), predictive maintenance, repair or replace decision making), and determine which strategy to use in specific situations. (Apply)
3. Availability tradeoffs
Describe various types of availability (e.g., inherent, operational), and the tradeoffs in reliability and maintainability that might be required to achieve availability goals. (Apply)
B. Maintenance and testing analysis
1. Preventive maintenance (PM) analysis
Define and use PM tasks, optimum PM intervals, and other elements of this analysis, and identify situations in which PM analysis is not appropriate. (Apply)
2. Corrective maintenance analysis
Describe the elements of corrective maintenance analysis (e.g., fault-isolation time, repair/replace time, skill level, crew hours) and apply them in specific situations. (Apply)
3. Non-destructive evaluation
Describe the types and uses of these tools (e.g., fatigue, delamination, vibration signature analysis) to look for potential defects. (Understand)
4. Testability
Use various testability requirements and methods (e.g., built-in tests (BITs), false-alarm rates, diagnostics, error codes, fault tolerance) to achieve reliability goals. (Apply)
5. Spare parts analysis
Describe the relationship between spare parts requirements and reliability, maintainability, and availability requirements. Forecast spare parts requirements using field data, production lead time data, inventory and other prediction tools, etc. (Analyze)
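As an illustration of the spare parts analysis topic in VI.B.5: if failures arrive as a Poisson process, the smallest stock level that meets a target fill rate over the resupply lead time falls straight out of the Poisson CDF. The fleet size, failure rate, and lead time below are made-up values, and scipy is assumed.

from scipy.stats import poisson

failure_rate = 1.0 / 2000.0   # failures per operating hour, per unit
units, lead_time_h = 50, 720  # fleet size and resupply lead time (hours)
target = 0.95                 # required probability of not stocking out

demand = failure_rate * units * lead_time_h  # expected failures during lead time
spares = 0
while poisson.cdf(spares, demand) < target:
    spares += 1
print(f"expected demand = {demand:.1f}, stock {spares} spares for {target:.0%} fill")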
VII. DATA COLLECTION AND USE (18 Questions)
A. Data collection
1. Types of data
Identify and distinguish between various types of data (e.g., attributes vs. variable, discrete vs. continuous, censored vs. complete, univariate vs. multivariate). Select appropriate data types to meet various analysis objectives. (Evaluate)
2. Collection methods
Identify appropriate methods and evaluate the results from surveys, automated tests, automated monitoring and reporting tools, etc., that are used to meet various data analysis objectives. (Evaluate)
3. Data management
Describe key characteristics of a database (e.g., accuracy, completeness, update frequency). Specify the requirements for reliability-driven measurement systems and database plans, including consideration of the data collectors and users, and their functional responsibilities. (Evaluate)
B. Data use
1. Data summary and reporting
Examine collected data for accuracy and usefulness. Analyze, interpret, and summarize data for presentation using techniques such as trend analysis, Weibull, graphic representation, etc., based on data types, sources, and required output. (Create)
2. Preventive and corrective action
Select and use various root cause and failure analysis tools to determine the causes of degradation or failure, and identify appropriate preventive or corrective actions to take in specific situations. (Evaluate)
3. Measures of effectiveness
Use various data analysis tools to evaluate the effectiveness of preventive and corrective actions in improving reliability. (Evaluate)
C. Failure analysis and correction
1. Failure analysis methods
Describe methods such as mechanical, materials, and physical analysis, scanning electron microscopy (SEM), etc., that are used to identify failure mechanisms. (Understand)
2. Failure reporting, analysis, and corrective action system (FRACAS)
Identify the elements necessary for a FRACAS to be effective, and demonstrate the importance of a closed-loop process that includes root cause investigation and follow up. (Apply)
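Finally, for the data summary topic in VII.B.1, here is a minimal sketch of the classic Weibull plot: median-rank regression on complete (uncensored) failure data. The failure times are made-up values, and a real analysis would also have to handle the censored data mentioned in VII.A.1.

import numpy as np

t = np.sort(np.array([105.0, 180.0, 260.0, 350.0, 480.0, 700.0]))
n = len(t)
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)  # Benard's median rank approximation

# The Weibull CDF linearizes as ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta).
x, y = np.log(t), np.log(-np.log(1.0 - F))
beta, c = np.polyfit(x, y, 1)
eta = np.exp(-c / beta)
print(f"Weibull shape beta = {beta:.2f}, scale eta = {eta:.0f} h")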

-------------------------
Regards,
-VINOD PAL SINGH, Abu Dhabi,UAE.
 06 July 2011 06:23 AM



VINODPALSINGH

Posts: 15
Joined: 09 April 2011

It seems like I am talking/writing to myself ;-)

Rgds,

-------------------------
Regards,
-VINOD PAL SINGH, Abu Dhabi,UAE.
 14 May 2012 11:16 PM



pguleria

Posts: 2
Joined: 17 January 2012

Hi there,

I am new to reliability, and I think it is one of the most important engineering fields for any machine.

I was wondering if someone could suggest a 3-5 day course on RCM in London, covering e.g. RCA, FMEA, etc.

Kind regards

Pank
 08 June 2012 12:18 PM



dvaidr

Posts: 519
Joined: 08 June 2003

Sorry for the delayed reply. Have you looked at Wilde Analysis? They do courses throughout the UK.
 05 June 2013 02:15 PM


vijayyargal

Posts: 2
Joined: 04 June 2013

If there is anything on reliability & safety engineering w.r.t. aerospace, kindly let me know.

-------------------------
Aerospace and Aviation w.r.t RFID applications.