Weighted Space Time Turbo Trellis Codes
Researchers: Branka Vucetic, Yonghui Li, Jinhong Yuan and Agus Santoso
Support: ARC Discovery Grant, Norman I Price
Scholarship and Girling Watson Fellowship
Space-time coding, carried out in both the time and space domains, is a practical technique that makes it possible to approach the MIMO system capacity bounds. The simplest example
of space-time coding is the Alamouti scheme, which has
been adopted in the third-generation WCDMA cellular radio standard and in IEEE 802.16
broadband wireless access systems. It is simple to
implement but has no coding gain and its performance is
far from the MIMO system capacity limit. Space-time
trellis codes achieve substantial coding and diversity
gains and are simple to implement for small numbers of
transmit antennas. Layered space-time codes (LST), with
time domain coding only, achieve high coding and
diversity gains but the detection/decoding is quite
challenging for a large number of transmit antennas.
Space-time turbo trellis coded modulation schemes outperform the other known ST codes. All these space-time coding schemes use channel state information (CSI)
at the receiver only. Substantial further improvements are
possible by exploiting CSI both at the transmitter and the
receiver, as demonstrated in our recent results in MIMO
systems with transmit antenna selection. This project will investigate the performance and design of space-time turbo trellis codes with variable power allocation across the transmit antennas when either full or partial CSI is available at the transmitter.
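As a concrete illustration of the Alamouti scheme mentioned above, here is a minimal Python/NumPy sketch that encodes two symbols across two transmit antennas and two time slots, then recovers them by linear combining at a single receive antenna. The QPSK symbols and channel draw are illustrative assumptions and noise is omitted for clarity; a weighted variant of the kind this project studies would simply scale each antenna's transmissions by a power weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to send over two antennas in two time slots.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Alamouti encoding: rows = time slots, columns = transmit antennas.
#   slot 1: antenna 1 sends s1,        antenna 2 sends s2
#   slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
X = np.array([[s1, s2],
              [-np.conj(s2), np.conj(s1)]])

# Flat-fading gains from the two transmit antennas to one receive antenna.
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

# Received samples in the two slots (noise omitted for clarity).
r1 = h[0] * X[0, 0] + h[1] * X[0, 1]
r2 = h[0] * X[1, 0] + h[1] * X[1, 1]

# Linear combining: each estimate comes out scaled by |h1|^2 + |h2|^2,
# which is exactly the transmit diversity gain of order two.
s1_hat = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
s2_hat = np.conj(h[1]) * r1 - h[0] * np.conj(r2)
gain = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
print(np.allclose(s1_hat / gain, s1), np.allclose(s2_hat / gain, s2))
```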
EDA is divided into many (sometimes overlapping) sub-areas. They mostly align with the path of manufacturing from design to mask generation. The following applies to chip/ASIC/FPGA construction but is very similar in character to the areas of printed circuit board design:
Design and architecture: design the chip's schematics, output in Verilog, VHDL, SPICE and other formats.
Floorplanning: The preparation step of creating a basic die-map showing the expected locations for logic gates, power & ground planes, I/O pads, and hard macros. (This is analogous to a city-planner's activity in creating residential, commercial, and industrial zones within a city block.)
Logic synthesis: translation of a chip's abstract, logical RTL-description (often specified via a hardware description language, or "HDL", such as Verilog or VHDL) into a discrete netlist of logic-gate (boolean-logic) primitives.
Behavioral synthesis, high-level synthesis or algorithmic synthesis: This takes the level of abstraction higher and allows automation of the architecture exploration process. It involves translating an abstract behavioral description of a design into synthesizable RTL. The input specification is in languages such as behavioral VHDL, algorithmic SystemC, or C++, and the RTL description in VHDL/Verilog is produced as the result of synthesis.
Intelligent verification
Co-design: The concurrent design, analysis or optimization of two or more electronic systems. Usually the electronic systems belong to differing substrates such as multiple PCBs or Package and Chip co-design.
Intelligent testbench
IP cores: provide pre-programmed design elements.
EDA databases: databases specialized for EDA applications. Needed since historically general purpose DBs did not provide enough performance.
Simulation: simulate a circuit's operation so as to verify correctness and performance.
Transistor simulation – low-level transistor-simulation of a schematic/layout's behavior, accurate at device-level.
Logic simulation – digital-simulation of an RTL or gate-netlist's digital (boolean 0/1) behavior, accurate at boolean-level.
Behavioral Simulation – high-level simulation of a design's architectural operation, accurate at cycle-level or interface-level.
Hardware emulation – Use of special purpose hardware to emulate the logic of a proposed design. Can sometimes be plugged into a system in place of a yet-to-be-built chip; this is called in-circuit emulation.
Clock Domain Crossing Verification (CDC check): Similar to linting, but these checks/tools specialize in detecting and reporting potential issues like data loss, meta-stability due to use of multiple clock domains in the design.
Formal verification, also model checking: Attempts to prove, by mathematical methods, that the system has certain desired properties, and that certain undesired effects (such as deadlock) cannot occur.
Equivalence checking: algorithmic comparison between a chip's RTL-description and synthesized gate-netlist, to ensure functional equivalence at the logical level.
Power analysis and optimization: optimizes the circuit to reduce the power required for operation, without affecting the functionality.
Place and route, PAR: (for digital devices) tool-automated placement of logic-gates and other technology-mapped components of the synthesized gate-netlist, then subsequent routing of the design, which adds wires to connect the components' signal and power terminals.
Static timing analysis: Analysis of the timing of a circuit in an input-independent manner, hence finding a worst case over all possible inputs (a minimal sketch of the idea appears after this list).
Transistor layout: (for analog/mixed-signal devices), sometimes called polygon pushing – a prepared-schematic is converted into a layout-map showing all layers of the device.
Design for Manufacturability: tools to help optimize a design to make it as easy and cheap as possible to manufacture.
Design closure: IC design has many constraints, and fixing one problem often makes another worse. Design closure is the process of converging to a design that satisfies all constraints simultaneously.
Analysis of substrate coupling.
Power network design and analysis
Physical verification, PV: checking if a design is physically manufacturable, and that the resulting chips will not have any function-preventing physical defects, and will meet original specifications.
Design rule checking, DRC – checks a number of rules regarding placement and connectivity required for manufacturing.
Layout versus schematic, LVS – checks if designed chip layout matches schematics from specification.
Layout extraction, RCX – extracts netlists from layout, including parasitic resistors (PRE), and often capacitors (RCX), and sometimes inductors, inherent in the chip layout.
Mask data preparation, MDP: generation of actual lithography photomask used to physically manufacture the chip.
Resolution enhancement techniques, RET – methods of increasing the quality of the final photomask.
Optical proximity correction, OPC – up-front compensation for diffraction and interference effects occurring later when chip is manufactured using this mask.
Mask generation – generation of flat mask image from hierarchical design.
Manufacturing test
Automatic test pattern generation, ATPG – generates pattern-data to systematically exercise as many logic-gates, and other components, as possible.
Built-in self-test, or BIST – installs self-contained test-controllers to automatically test a logic (or memory) structure in the design.
Design For Test, DFT – adds logic-structures to a gate-netlist, to facilitate post-fabrication (die/wafer) defect testing.
Technology CAD, or TCAD, simulates and analyses the underlying process technology. Semiconductor process simulation, the resulting dopant profiles, and electrical properties of devices are derived directly from device physics.
Electromagnetic field solvers, or just field solvers, solve Maxwell's equations directly for cases of interest in IC and PCB design. They are known for being slower but more accurate than the layout extraction above.
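To make the static timing analysis entry above concrete, here is a minimal Python sketch that computes the worst-case arrival time at each gate of a combinational netlist by a longest-path traversal of the DAG. The netlist, gate names and delays are invented for illustration; real STA tools additionally model interconnect parasitics, signal slews, and setup/hold constraints.

```python
from functools import lru_cache

# Gate-level netlist as a DAG: gate -> (gate delay in ns, fan-in gates).
# Names and delays are made up for illustration.
netlist = {
    "in_a": (0.0, []),
    "in_b": (0.0, []),
    "and1": (1.2, ["in_a", "in_b"]),
    "inv1": (0.4, ["in_a"]),
    "or1":  (1.0, ["and1", "inv1"]),
    "out":  (0.1, ["or1"]),
}

@lru_cache(maxsize=None)
def arrival(gate: str) -> float:
    """Worst-case (latest) signal arrival time at a gate's output."""
    delay, fanin = netlist[gate]
    return delay + max((arrival(g) for g in fanin), default=0.0)

print(f"critical-path delay to 'out': {arrival('out'):.1f} ns")
# -> 1.2 + 1.0 + 0.1 = 2.3 ns via in_a/in_b -> and1 -> or1 -> out
```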
Electronic design automation (EDA) is the category of tools for designing and producing electronic systems ranging from printed circuit boards (PCBs) to integrated circuits. This is sometimes referred to as ECAD (electronic computer-aided design) or just CAD. (The articles for Printed circuit boards and wire wrap both contain specialized discussions of the EDA used for those.)
Terminology
The term EDA is also used as an umbrella term for computer-aided engineering, computer-aided design and computer-aided manufacturing of electronics in the discipline of Electronic engineering. This usage probably originates in the IEEE Design Automation Technical Committee.
This article describes EDA specifically for electronics, and concentrates on EDA used for designing integrated circuits. The segment of the industry that must use EDA are chip designers at semiconductor companies. Large chips are too complex to design by hand.
Growth of EDA
EDA for electronics has rapidly increased in importance with the continuous scaling of semiconductor technology. Some users are foundry operators, who operate the semiconductor fabrication facilities, or "fabs", and design-service companies who use EDA software to evaluate an incoming design for manufacturing readiness. EDA tools are also used for programming design functionality into FPGAs.
Outside plant (OSP) engineers are also often called Field Engineers as they often spend a great deal of time in the field taking notes about the civil environment, aerial, above ground, and below ground. OSP Engineers are responsible for taking plant (copper, fiber, etc.) from a wire center to a distribution point or destination point directly. If a distribution point design is used then a cross connect box is placed in a strategic location to feed a determined distribution area.
The cross-connect box, also known as a service area interface, is then installed to allow connections to be made more easily from the wire center to the destination point, and it ties up fewer facilities by not having dedicated facilities from the wire center to every destination point. The plant is then taken directly to its destination point or to another small closure called a pedestal where access can also be gained to the plant if necessary. These access points are preferred as they allow faster repair times for customers and save telephone operating companies large amounts of money.
The plant facilities can be delivered via underground facilities, either direct buried or through conduit or in some cases laid under water, via aerial facilities such as telephone or power poles, or via microwave radio signals for long distances where either of the other two methods is too costly.
As structural engineers, OSP engineers are responsible for the structural design and placement of cellular towers and telephone poles, as well as calculating the pole capabilities of existing telephone or power poles that new plant is being added onto. Structural calculations are required when boring under heavy traffic areas such as highways or when attaching to other structures such as bridges. Shoring also has to be taken into consideration for larger trenches or pits. Conduit structures often include encasements of slurry that need to be designed to support the structure and withstand the environment around it (soil type, high traffic areas, etc.).
As electrical engineers, OSP engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation and loop loss calculations are required to determine cable length and size required to provide the service called for. In addition power requirements have to be calculated and provided for to power any electronic equipment being placed in the field. Ground potential has to be taken into consideration when placing equipment, facilities, and plant in the field to account for lightning strikes, high voltage intercept from improperly grounded or broken power company facilities, and from various sources of electromagnetic interference.
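As a rough illustration of the loop calculations described above, the following Python sketch estimates the DC loop resistance of a copper pair by gauge, and the longest loop that stays within a given office signaling limit. The resistance-per-thousand-feet table and the 1300-ohm limit are representative textbook values, assumed for illustration rather than taken from this article.

```python
# Representative DC loop resistance (ohms per 1000 ft of loop, out and
# back) for common copper gauges at room temperature; illustrative values.
OHMS_PER_KFT = {26: 83.5, 24: 51.9, 22: 32.4, 19: 16.1}

def loop_resistance(length_ft: float, gauge: int) -> float:
    """DC resistance of a copper pair over the full loop length."""
    return length_ft / 1000.0 * OHMS_PER_KFT[gauge]

def max_loop_length(gauge: int, limit_ohms: float = 1300.0) -> float:
    """Longest loop (ft) that stays under a hypothetical office limit."""
    return limit_ohms / OHMS_PER_KFT[gauge] * 1000.0

for g in (26, 24, 22, 19):
    print(f"{g} AWG: {max_loop_length(g):,.0f} ft before hitting 1300 ohms")
```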
As civil engineers, OSP engineers are responsible for drawing up plans, either by hand or using Computer Aided Drafting (CAD) software, for how telecom plant facilities will be placed. Often when working with municipalities trenching or boring permits are required, and drawings must be made for these. These drawings often include about 70% or so of the detailed information required to pave a road or add a turn lane to an existing street. In this civil engineering role, telecom engineers provide the modern communications backbone for all technological communications distributed throughout civilizations today.
Unique to Telecom Engineering is the use of air core cable which requires an extensive network of air handling equipment such as compressors, manifolds, regulators and hundreds of miles of air pipe per system that connects to pressurized splice cases all designed to pressurize this special form of copper cable to keep moisture out and provide a clean signal to the customer.
As political and social ambassadors, OSP engineers are the telephone operating companies' face and voice to the local authorities and other utilities. OSP engineers often meet with municipalities, construction companies and other utility companies to address their concerns and educate them about how the telephone utility works and operates. Additionally, the OSP engineer has to secure real estate on which to place outside facilities, such as an easement for a cross-connect box.
A telecom equipment engineer is an electronics engineer who designs equipment such as routers, switches, multiplexers, and other specialized computer/electronics equipment designed to be used in the telecommunication network infrastructure.
Central-office engineer
A Central-office engineer is responsible for designing and overseeing the implementation of telecommunications equipment in a central office (CO for short), also referred to as a wire center or telephone exchange. A CO engineer is responsible for integrating new technology into the existing network, assigning the equipment's location in the wire center, and providing power, clocking (for digital equipment) and alarm monitoring facilities for the new equipment. The CO engineer is also responsible for providing more power, clocking, and alarm monitoring facilities if there isn't currently enough available to support the new equipment being installed. Finally, the CO engineer is responsible for designing how the massive amounts of cable will be distributed to various equipment and wiring frames throughout the wire center, and for overseeing the installation and turn-up of all new equipment.
As structural engineers, CO engineers are responsible for the structural design and placement of racking and bays for the equipment to be installed in as well as for the plant to be placed on.
As electrical engineers, CO engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation and loop loss calculations are required to determine cable length and size required to provide the service called for. In addition power requirements have to be calculated and provided for to power any electronic equipment being placed in the wire center.
Telecommunications engineering or telecom engineering is a major field within electronic engineering. Telecom engineers work in a wide variety of roles, from basic circuit design to the strategic planning of mass deployments. A telecom engineer is responsible for designing and overseeing the installation of telecommunications equipment and facilities, ranging from complex electronic switching systems to copper telephone facilities and fiber optics. Telecom engineering also overlaps heavily with broadcast engineering.
Telecommunications is a diverse field of engineering encompassing electronics, civil, structural, and electrical engineering, as well as a political and social ambassadorship, a little accounting, and a lot of project management. Ultimately, telecom engineers are responsible for providing the method by which customers can get telephone and high speed data services.
Telecom engineers use a variety of different equipment and transport media available from a multitude of manufacturers to design the telecom network infrastructure. The most common media, often referred to as plant in the telecom industry, used by telecommunications companies today are copper, coaxial cable, fiber, and radio.
Telecom engineers are often expected, as most engineers are, to provide the best solution possible for the lowest cost to the company. This often leads to creative solutions to problems that often would have been designed differently without the budget constraints dictated by modern society. In the earlier days of the telecom industry massive amounts of cable were placed that were never used or have been replaced by modern technology such as fiber optic cable and digital multiplexing techniques.
Telecom engineers are also responsible for keeping records of the company's equipment and facilities and assigning appropriate accounting codes for purposes of taxes and maintenance. Responsible for budgeting, overseeing projects, and keeping records of equipment, facilities and plant, the telecom engineer is therefore not only an engineer but also an accounting assistant or bookkeeper (if not an accountant) and a project manager.
Formation of petroleum occurs from hydrocarbon pyrolysis, in a variety of mostly endothermic reactions at high temperature and/or pressure.[15] Today's oil formed from the preserved remains of prehistoric zooplankton and algae, which had settled to a sea or lake bottom in large quantities under anoxic conditions (the remains of prehistoric terrestrial plants, on the other hand, tended to form coal). Over geological time the organic matter mixed with mud, and was buried under heavy layers of sediment resulting in high levels of heat and pressure (diagenesis). This process caused the organic matter to change, first into a waxy material known as kerogen, which is found in various oil shales around the world, and then with more heat into liquid and gaseous hydrocarbons via a process known as catagenesis.
In its strictest sense, petroleum includes only crude oil, but in common usage it includes both crude oil and natural gas. Both crude oil and natural gas are predominantly a mixture of hydrocarbons. Under surface pressure and temperature conditions, the lighter hydrocarbons methane, ethane, propane and butane occur as gases, while the heavier ones from pentane and up are in the form of liquids or solids. However, in the underground oil reservoir the proportion which is gas or liquid varies depending on the subsurface conditions, and on the phase diagram of the petroleum mixture.[2]
An oil well produces predominantly crude oil, with some natural gas dissolved in it. Because the pressure is lower at the surface than underground, some of the gas will come out of solution and be recovered (or burned) as associated gas or solution gas. A gas well produces predominantly natural gas. However, because the underground temperature and pressure are higher than at the surface, the gas may contain heavier hydrocarbons such as pentane, hexane, and heptane in the gaseous state. Under surface conditions these will condense out of the gas and form natural gas condensate, often shortened to condensate. Condensate resembles gasoline in appearance and is similar in composition to some volatile light crude oils.
The proportion of hydrocarbons in the petroleum mixture is highly variable between different oil fields and ranges from as much as 97% by weight in the lighter oils to as little as 50% in the heavier oils and bitumens.
Petroleum (L. petroleum, from Greek πετρέλαιον, lit. "rock oil") or crude oil is a naturally occurring, flammable liquid consisting of a complex mixture of hydrocarbons of various molecular weights, and other organic compounds, that is found in geologic formations beneath the earth's surface.
The term "petroleum" was first used in the treatise De Natura Fossilium, published in 1546 by the German mineralogist Georg Bauer, also known as Georgius Agricola.
Petroleum engineering has become a technical profession that involves extracting oil in increasingly difficult situations as the "low hanging fruit" of the world's oil fields are found and depleted. Improvements in computer modeling, materials and the application of statistics, probability analysis, and new technologies like horizontal drilling and enhanced oil recovery, have drastically improved the toolbox of the petroleum engineer in recent decades.
Deep-water, arctic and desert conditions are commonly contended with. High Temperature and High Pressure (HTHP) environments have become increasingly commonplace in operations and require the petroleum engineer to be savvy in topics as wide-ranging as thermo-hydraulics, geomechanics, and intelligent systems.
The Society of Petroleum Engineers is the largest professional society for petroleum engineers and publishes much information concerning the industry. Petroleum engineering education is available at 17 universities in the United States and many more throughout the world, primarily in oil-producing states though not only in the top producers, and some oil companies run considerable in-house petroleum engineering training classes.
Petroleum engineers have historically been one of the highest paid engineering disciplines; this is offset by a tendency for mass layoffs when oil prices decline. Petroleum engineering salaries (for the field company man representing the natural resource company) start from about $60,000 annually for recent graduates; for an individual with experience, salaries can range from $150,000 to $200,000 annually. Drillers on the drilling rig (contractor) make approximately $80,000-100,000 USD, and rig managers approximately $110,000-130,000 USD, as of 2010. In a June 4, 2007 article, Forbes.com reported that petroleum engineering was the 24th best-paying job in the United States.
Petroleum engineering is an engineering discipline concerned with the subsurface activities related to the production of hydrocarbons, which can be either crude oil or natural gas. These activities are deemed to fall within the upstream sector of the oil and gas industry, which are the activities of finding and producing hydrocarbons. (Refining and distribution to a market are referred to as the downstream sector.) Exploration, by earth scientists, and petroleum engineering are the oil and gas industry's two main subsurface disciplines, which focus on maximizing economic recovery of hydrocarbons from subsurface reservoirs. Petroleum geology and geophysics focus on provision of a static description of the hydrocarbon reservoir rock, while petroleum engineering focuses on estimation of the recoverable volume of this resource using a detailed understanding of the physical behavior of oil, water and gas within porous rock at very high pressure.
The combined efforts of geologists and petroleum engineers throughout the life of a hydrocarbon accumulation determine the way in which a reservoir is developed and depleted, and usually they have the highest impact on field economics. Petroleum engineering requires a good knowledge of many other related disciplines, such as geophysics, petroleum geology, formation evaluation (well logging), drilling, economics, reservoir simulation, well engineering, artificial lift systems, and oil & gas facilities engineering.
Audio restoration is a generalized term for the process of removing imperfections (such as hiss, crackle, noise, and buzz) from sound recordings. Audio restoration can be performed directly on the recording medium (for example, washing a gramophone record with a cleansing solution), or on a digital representation of the recording using a computer (such as an AIFF or WAV file). Record restoration is a particular form of audio restoration that seeks to repair the sound of damaged records.
Modern audio restoration techniques are usually performed by digitizing an audio source from analog media, such as lacquer recordings, optical sources and magnetic tape. Once in the digital realm, recordings can be restored and cleaned up using dedicated, standalone digital processing units such as declickers, decracklers, dehissers and dialogue noise suppressors, or using digital audio workstations (DAWs). DAWs can perform various automated techniques to remove anomalies using algorithms to accomplish broadband denoising, declicking and decrackling, as well as removing buzzes and hums. Often audio engineers and sound editors use DAWs to manually remove "pops and ticks" from recordings, and the latest spectrographic 'retouching' techniques allow for the suppression or removal of discrete unwanted sounds. DAWs are capable of removing the smallest of anomalies, often without leaving artifacts and other evidence of their removal. Although fully automated solutions exist, audio restoration is sometimes a time consuming process that requires skilled audio engineers with specific experience in music and film recording techniques.
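As a sketch of the broadband denoising idea described above, the following Python/NumPy fragment implements a crude spectral gate: it estimates a per-bin noise floor from a noise-only excerpt and zeroes STFT bins that fall below that floor plus a margin. The function name, frame size and threshold are illustrative assumptions; commercial declickers and dehissers use overlap-add processing, soft gain curves and psychoacoustic smoothing rather than this hard gate.

```python
import numpy as np

def spectral_gate(x: np.ndarray, noise: np.ndarray,
                  threshold_db: float = 6.0, n_fft: int = 1024) -> np.ndarray:
    """Crude broadband denoiser for float audio arrays: zero the FFT bins
    of each frame that fall below the measured noise floor plus a margin."""
    # Estimate the per-bin noise floor from a noise-only excerpt.
    noise_frames = noise[:len(noise) // n_fft * n_fft].reshape(-1, n_fft)
    floor = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)
    gate = floor * 10 ** (threshold_db / 20)

    out = np.zeros_like(x)
    for start in range(0, len(x) - n_fft + 1, n_fft):
        spec = np.fft.rfft(x[start:start + n_fft])
        spec[np.abs(spec) < gate] = 0          # hard gate, bin by bin
        out[start:start + n_fft] = np.fft.irfft(spec, n=n_fft)
    return out
```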
There are four distinct steps to commercial production of a recording: recording, editing, mixing, and mastering. Typically, each is performed by a sound engineer who specializes only in that part of production.
Studio engineer could be either a sound engineer working in a studio together with a producer, or a producing sound engineer working in a studio.
Recording engineer is a person who records sound.
Mixing engineer is a person who creates mixes of multi-track recordings. It is not uncommon for a commercial record to be recorded at one studio and later mixed by different engineers in other studios.
Mastering engineer is typically the person who processes the final stereo tracks (or sometimes just a few tracks or stems) that the mix engineer produces. The mastering engineer makes any final adjustments to the overall sound of the record in the final step before commercial duplication. Mastering engineers use principles of equalization and compression to affect the coloration of the sound.
Game audio designer engineer is a person who deals with sound aspects of game development.
Live sound engineer is a person dealing with live sound reinforcement. This usually includes planning and installation of speakers, cabling and equipment and mixing sound during the show. This may or may not include running the foldback sound.
Foldback or monitor engineer is a person running foldback sound during a live event. The term "foldback" is outdated and refers to the practice of folding back audio signals from the FOH (Front of House) mixing console to the stage in order for musicians to hear themselves while performing. Monitor engineers usually have a separate audio system from the FOH engineer and manipulate audio signals independently from what the audience hears, in order to satisfy the requirements of each performer on stage. In-ear systems, digital and analog mixing consoles, and a variety of speaker enclosures are typically used by monitor engineers. In addition most monitor engineers must be familiar with wireless or RF (radio-frequency) equipment and must interface personally with the artist(s) during each performance.
Systems engineer is a person responsible for the design setup of modern PA systems which are often very complex. A systems engineer is usually also referred to as a "crew chief" on tour and is responsible for the performance and day-to-day job requirements of the audio crew as a whole along with the FOH audio system.
Audio post engineer is a person who edits and mixes audio for film and television.
An audio engineer is someone with experience and training in the production and manipulation of sound through mechanical (analog) or digital means. As a professional title, this person is sometimes designated as a sound engineer or recording engineer instead. A person with one of these titles is commonly listed in the credits of many commercial music recordings (as well as in other productions that include sound, such as movies).
Audio engineers are generally familiar with the design, installation, and/or operation of sound recording, sound reinforcement, or sound broadcasting equipment, including large and small format consoles. In the recording studio environment, the audio engineer records, edits, manipulates, mixes, and/or masters sound by technical means in order to realize an artist's or record producer's creative vision. While usually associated with music production, an audio engineer deals with sound for a wide range of applications, including post-production for video and film, live sound reinforcement, advertising, multimedia, and broadcasting. When referring to video games, an audio engineer may also be a computer programmer.
In larger productions, an audio engineer is responsible for the technical aspects of a sound recording or other audio production, and works together with a record producer or director, although the engineer's role may also be integrated with that of the producer. In smaller productions and studios the sound engineer and producer is often one and the same person.
In typical sound reinforcement applications, audio engineers often assume the role of producer, making artistic decisions along with technical ones.
Audio engineering is a part of audio science dealing with the recording and reproduction of sound through mechanical and electronic means. The field draws on many disciplines, including electrical engineering, acoustics, psychoacoustics, and music. Unlike acoustical engineering, audio engineering does not deal with noise control or acoustical design. An audio engineer is closer to the creative and technical aspects of audio than to formal engineering. An audio engineer must be proficient with different types of recording media, such as analog tape and digital multitrack recorders and workstations, and must have solid computer knowledge. With the advent of the digital age, it is becoming more and more important for the audio engineer to be versed in software and hardware integration, from synchronization to sound generation.
The expressions "audio engineer" and "sound engineer" are ambiguous. Such terms can refer to a person working in sound and music production, as well as to an engineer with a degree who designs professional equipment for these tasks. The latter professional often develops the tools needed for the former's work. Other languages, such as German and Italian, have different words to refer to these activities. For instance, in German, the Tontechniker (audio technician) is the one who operates the audio equipment and the Tonmeister (sound master) is a person who creates recordings or broadcasts of music who is both deeply musically trained (in 'classical' and non-classical genres) and who also has a detailed theoretical and practical knowledge of virtually all aspects of sound, whereas the Toningenieur (audio engineer) is the one who designs, builds and repairs it.
Individuals who design acoustical simulations of rooms, shaping algorithms for digital signal processing and computer music problems, perform institutional research on sound, and other advanced fields of audio engineering are most often graduates of an accredited college or university, or have passed a difficult civil qualification test.
RF Engineers are specialists in their respective field and can take on many different roles, such as design and maintenance. An RF Engineer at a broadcast facility is responsible for maintenance of the station's high-power broadcast transmitters and associated systems. This includes transmitter site emergency power, remote control, main transmission line and antenna adjustments, microwave radio relay STL/TSL links and more. Typically, transmission equipment is past its expected lifetime, and there is little support available from the manufacturer. Often, creative and collaborative solutions are required. The range of technologies used is vast, due to the wide array of frequencies allocated for different radio services and due to the range in age of equipment. In general, older equipment is easier to service.
RF Engineering, also known as Radio Frequency Engineering, is a subset of electrical engineering that deals with devices which are designed to operate in the Radio Frequency spectrum. These devices operate within the range of about 3 kHz up to 300 GHz.
RF Engineering is incorporated into almost everything that transmits or receives a radio wave, including, but not limited to, mobile phones, radios, Wi-Fi and walkie-talkies.
RF Engineering is a highly specialized field. To produce quality results, an in-depth knowledge of Mathematics, Physics and general electronics theory is required. Even with this, the initial design of an RF Circuit usually bears very little resemblance to the final physical circuit produced, as revisions to the design are often required to achieve intended results.
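Much day-to-day RF work starts from the relationship between frequency and wavelength, lambda = c / f, since wavelength sets antenna dimensions and propagation behavior across the 3 kHz to 300 GHz range mentioned above. The short Python sketch below tabulates wavelengths and quarter-wave lengths for a few nominal example frequencies; the service names and frequencies are illustrative, not formal allocations.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength lambda = c / f."""
    return C / freq_hz

# Nominal example frequencies only, chosen to span the RF spectrum.
for name, f in [("AM broadcast", 1e6), ("FM broadcast", 100e6),
                ("Wi-Fi", 2.4e9), ("mm-wave", 60e9)]:
    lam = wavelength_m(f)
    print(f"{name:12s} {f/1e6:10,.1f} MHz  lambda = {lam:10.4f} m  "
          f"lambda/4 = {lam/4:8.4f} m")
```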
An engineering technician, sometimes called an ETechnician, is a person who has a relatively practical understanding of the general theoretical principles of the specific branch of engineering in which they work.
Engineering technicians solve technical problems. Some help engineers and scientists do research and development. They build or set up equipment. They do experiments. They collect data and calculate results. They might also help to make a model of new equipment. Some technicians work in quality control. They check products, do tests, and collect data. In manufacturing, they help to design and develop products. They also find ways to produce things efficiently. They may also be persons who produce technical drawings or engineering drawings.
Broadcast stations often call upon outside engineering services for certain needs. For example, because structural engineering is generally not a direct part of broadcast engineering, tower companies usually design broadcast towers.
Other companies specialize in both broadcast engineering and broadcast law, which are both essential when making an application to a national broadcasting authority for a construction permit or broadcast license. This is especially critical in North America, where stations bear the entire burden of proving that their proposed facilities will not cause interference and are the best use of the radio spectrum. Such companies now have special software that can map projected radio propagation and terrain shielding, as well as lawyers who will defend the applications before the U.S. Federal Communications Commission, the Canadian Radio-television and Telecommunications Commission (CRTC), or the equivalent authorities in some other countries.
The conversion to digital broadcasting means broadcast engineers must now be well-versed in digital television and digital radio, in addition to analogue principles. New equipment from the transmitter to the radio antenna to the receiver may be encountered by engineers new to the field. Furthermore, modern techniques place a greater demand on an engineer's expertise, such as sharing broadcast towers or radio antennas among different stations (diplexing).
Digital audio and digital video have revolutionized broadcast engineering in many respects. Broadcast studios and control rooms are now digital in large part, using non-linear editing and digital signal processing for what used to take a great deal of time or money, if it was possible at all. Mixing consoles for both audio and video are continuing to become more digital in the 2000s, as is the computer storage used to keep digital media libraries. Effects processing and TV graphics can now be realized much more easily and professionally as well.
Other devices used in broadcast engineering are telephone hybrids, broadcast delays, and dead air alarms. See the glossary of broadcast engineering terms for further explanations.
Broadcast engineers are generally required to have knowledge in the following areas, from conventional video broadcast systems to modern Information Technology:
Conventional broadcast
Audio/Video instrumentation measurement
Baseband video – standard / high-definition
Broadcast studio acoustics
Television studios - broadcast video cameras and camera lenses
Production switcher (Video mixer)
Audio mixer
Broadcast IT
Video compression - DV25, MPEG, DVB or ATSC (or ISDB)
Digital server playout technologies – VDCP, Louth and Harris control protocols
Broadcast automation
Disk storage – RAID / NAS / SAN technologies.
Archives – Tape archives or grid storage technologies.
Computer networking
Operating systems – Microsoft Windows / Mac OS / Linux / RTOS
Post production – video capture and non-linear editing systems (NLEs).
RF
RF satellite uplinking – High powered amplifiers (HPA)
RF communications satellite downlinking – Band detection, carrier detection and IRD tuning, etc.
RF transmitter maintenance - IOT UHF transmitters, Solid State VHF transmitters, antennas, transmission line, high power filters, digital modulators.
Health & safety
Occupational safety and health
Fire suppression systems like FM 200.
Basic structural engineering
RF hazard mitigation
Broadcast engineering
Broadcast engineering is the field of electrical engineering, and now to some extent computer engineering and information technology, which deals with radio and television broadcasting. Audio engineering and RF engineering are also essential parts of broadcast engineering, being their own subsets of electrical engineering.
Broadcast engineering involves both the studio end and the transmitter end (the entire airchain), as well as remote broadcasts. Every station has a broadcast engineer, though one may now serve an entire station group in a city, or be a contract engineer who essentially freelances his services to several stations (often in small media markets) as needed.
Modern duties of a broadcast engineer
Modern duties of a broadcast engineer include maintaining broadcast automation systems for the studio and automatic transmission systems for the transmitter plant. There are also important duties regarding radio towers, which must be maintained with proper lighting and painting. Occasionally a station's engineer must deal with complaints of RF interference, particularly after a station has made changes to its transmission facilities.
A nuclear engineer is someone who works with atomic particles. This field is incredibly diverse, encompassing everything from building high powered nuclear weapons to developing new techniques in nuclear medicine with the goal of diagnosing disease. Working conditions for nuclear engineers typically include long hours in a laboratory environment, and most nuclear engineers are good at working on teams to solve complex problems. Pay scales in this field vary widely, depending on what type of work a nuclear engineer does and what sort of training he or she has received. As a general rule, most people enter this field out of genuine interest, rather than a desire to make money.
Nuclear criticality safety is a field of nuclear engineering dedicated to the prevention of an inadvertent, self-sustaining nuclear chain reaction. Additionally, nuclear criticality safety is concerned with mitigating the consequences of a nuclear criticality accident. A nuclear criticality accident occurs from operations that involve fissile material and results in a tremendous and potentially lethal release of radiation. Nuclear criticality safety practitioners attempt to minimize the probability of a nuclear criticality accident by analyzing normal and abnormal fissile material operations and providing controls on the processing of fissile materials. A common practice is to apply a double contingency analysis to the operation in which two or more independent, concurrent changes in process conditions must occur before a nuclear criticality accident can occur. For example, the first change in conditions may be complete or partial flooding and the second change a rearrangement of the fissile material. Controls (requirements) on process parameters (e.g., fissile material mass, equipment) result from this analysis. These controls, either passive (physical), active (mechanical), or administrative (human), are implemented by inherently safe or fault-tolerant plant designs, or, if such designs are not practicable, by administrative controls such as operating procedures, job instructions and other means to minimize the potential for significant process changes that could lead to a nuclear criticality accident.
Installed nuclear capacity initially rose relatively quickly, rising from less than 1 gigawatt (GW) in 1960 to 100 GW in the late 1970s, and 300 GW in the late 1980s. Since the late 1980s worldwide capacity has risen much more slowly, reaching 366 GW in 2005. Between around 1970 and 1990, more than 50 GW of capacity was under construction (peaking at over 150 GW in the late 70s and early 80s) — in 2005, around 25 GW of new capacity was planned. More than two-thirds of all nuclear plants ordered after January 1970 were eventually cancelled. A total of 63 nuclear units were canceled in the USA between 1975 and 1980.
Washington Public Power Supply System Nuclear Power Plants 3 and 5 were never completed. During the 1970s and 1980s, rising economic costs (related to extended construction times, largely due to regulatory changes and pressure-group litigation)[28] and falling fossil fuel prices made nuclear power plants then under construction less attractive. In the 1980s (U.S.) and 1990s (Europe), flat load growth and electricity liberalization also made the addition of large new baseload capacity unattractive.
The 1973 oil crisis had a significant effect on countries such as France and Japan, which had relied more heavily on oil for electric generation (39% and 73% respectively), prompting them to invest in nuclear power. Today, nuclear power supplies about 80% and 30% of the electricity in those countries, respectively.
A general movement against nuclear power arose during the last third of the 20th century, based on the fear of a possible nuclear accident as well as the history of accidents, fears of radiation and the history of the public's exposure to it, nuclear proliferation, and opposition to nuclear waste production and transport and the lack of any final storage plans. Perceived risks to citizens' health and safety, the 1979 accident at Three Mile Island and the 1986 Chernobyl disaster played a part in stopping new plant construction in many countries, although the public policy organization Brookings Institution suggests that new nuclear units have not been ordered in the U.S. because the Institution's research concludes they cost 15–30% more over their lifetime than conventional coal and natural gas fired plants.
Unlike the Three Mile Island accident, the much more serious Chernobyl accident did not increase regulations affecting Western reactors, since the Chernobyl reactors were of the problematic RBMK design used only in the Soviet Union, for example lacking "robust" containment buildings. Many of these reactors are still in use today. However, changes were made in both the reactors themselves (use of low enriched uranium) and in the control system (prevention of disabling safety systems) to reduce the possibility of a duplicate accident.
An international organization to promote safety awareness and the professional development of operators in nuclear facilities was created: WANO, the World Association of Nuclear Operators.
Opposition in Ireland and Poland prevented nuclear programs there, while Austria (1978), Sweden (1980) and Italy (1987) (influenced by Chernobyl) voted in referendums to oppose or phase out nuclear power. In July 2009, the Italian Parliament passed a law that canceled the results of an earlier referendum and allowed the immediate start of the Italian nuclear program.
Nuclear power is power (generally electrical) produced from controlled (i.e., non-explosive) nuclear reactions. Commercial plants in use to date use nuclear fission reactions. Electric utility reactors heat water to produce steam, which is then used to generate electricity. In 2009, 15% of the world's electricity came from nuclear power, despite concerns about safety and radioactive waste management. More than 150 naval vessels using nuclear propulsion have been built.
Nuclear fusion reactions are widely believed to be safer than fission and appear potentially viable, though technically quite difficult. Fusion power has been under intense theoretical and experimental investigation for many years.
Both fission and fusion appear promising for some space propulsion applications in the mid- to distant-future, using low thrust for long durations to achieve high mission velocities. Radioactive decay has been used on a relatively small (few kW) scale, mostly to power space missions and experiments.
Spontaneous changes from one nuclide to another: nuclear decay
There are 80 elements which have at least one stable isotope (defined as isotopes never observed to decay), and in total there are about 256 such stable isotopes. However, there are thousands more well-characterized isotopes which are unstable. These radioisotopes decay over timescales ranging from fractions of a second to weeks, years, or many billions of years.
If a nucleus has too few or too many neutrons it may be unstable and will decay after some period of time. For example, in a process called beta decay a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is turned into a proton, an electron and an antineutrino by the weak nuclear force. The element is transmuted to another element in the process, because while it previously had seven protons (which makes it nitrogen) it now has eight (which makes it oxygen).
In alpha decay the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays, until a stable element is formed.
In gamma decay, a nucleus decays from an excited state into a lower state by emitting a gamma ray. It is then stable. The element is not changed in the process.
Other more exotic decays are possible (see the main article). For example, in internal conversion decay, the energy from an excited nucleus may be used to eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons, but is not beta decay, and (unlike beta decay) does not transmute one element to another.
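All of the decay modes above obey the same exponential law, N(t) = N0 * exp(-lambda * t) with lambda = ln 2 / t_half. Here is a minimal Python sketch using the tabulated half-life of about 7.13 seconds for the nitrogen-16 example from the beta-decay paragraph:

```python
import math

N16_HALF_LIFE_S = 7.13   # nitrogen-16 half-life in seconds (tabulated value)

def remaining_fraction(t: float, half_life: float) -> float:
    """N(t)/N0 = exp(-lambda * t), with decay constant lambda = ln2 / t_half."""
    return math.exp(-math.log(2) / half_life * t)

for t in (0.0, 7.13, 30.0, 60.0):
    frac = remaining_fraction(t, N16_HALF_LIFE_S)
    print(f"after {t:5.2f} s, {frac:.4f} of the original N-16 remains")
```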
Nuclear fusion
When two low mass nuclei come into very close contact with each other it is possible for the strong force to fuse the two together. It takes a great deal of energy to push the nuclei close enough together for the strong or nuclear forces to have an effect, so the process of nuclear fusion can only take place at very high temperatures or high densities. Once the nuclei are close enough together the strong force overcomes their electromagnetic repulsion and squishes them into a new nucleus. A very large amount of energy is released when light nuclei fuse together because the binding energy per nucleon increases with mass number up until nickel-62. Stars like our sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. Research to find an economically viable method of using energy from a controlled fusion reaction is currently being undertaken by various research establishments (see JET and ITER).
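The energy released by fusion follows directly from the mass defect. As a worked example in Python, converting four hydrogen-1 atoms into one helium-4 atom (the net effect of the solar fusion chain described above) releases about 26.7 MeV; the atomic masses below are standard tabulated values.

```python
U_TO_MEV = 931.494   # energy equivalent of one atomic mass unit, MeV
M_H1 = 1.007825      # atomic mass of hydrogen-1, u
M_HE4 = 4.002602     # atomic mass of helium-4, u

# Mass defect of 4 H-1 -> He-4, the net result of the proton-proton chain.
delta_m = 4 * M_H1 - M_HE4
q_mev = delta_m * U_TO_MEV
print(f"mass defect: {delta_m:.6f} u  ->  about {q_mev:.1f} MeV released")
# ~26.7 MeV per helium nucleus formed, which is what powers the Sun.
```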
Nuclear fission
For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones. This splitting of atoms is known as nuclear fission.
The process of alpha decay may be thought of as a special type of spontaneous nuclear fission. This process produces a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.
For certain of the heaviest nuclei which produce neutrons on fission, and which also easily absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in a so-called chain reaction. (Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions.) The fission or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission type nuclear bombs such as the two that the United States used against Hiroshima and Nagasaki at the end of World War II. Heavy nuclei such as uranium and thorium may undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay.
For a neutron-initiated chain-reaction to occur, there must be a critical mass of the element present in a certain space under certain conditions (these conditions slow and conserve neutrons for the reactions). There is one known example of a natural nuclear fission reactor, which was active in two regions of Oklo, Gabon, Africa, over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain-reactions.
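The generation-by-generation bookkeeping of a chain reaction can be sketched in a few lines of Python. With k the average number of neutrons from each fission that go on to cause another fission, a population dies out for k < 1, holds steady for k = 1 (criticality), and grows geometrically for k > 1; the initial population and generation count below are arbitrary illustrative choices.

```python
def neutron_population(k: float, generations: int, n0: float = 1.0):
    """Yield (generation, neutron count) for multiplication factor k:
    subcritical (k < 1) dies out, critical (k = 1) is steady,
    supercritical (k > 1) grows geometrically."""
    n = n0
    for gen in range(generations + 1):
        yield gen, n
        n *= k

for k in (0.9, 1.0, 1.1):
    final = list(neutron_population(k, 50))[-1][1]
    print(f"k = {k}: population after 50 generations = {final:.3f}")
```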
Production of heavy elements
According to big bang theory, as the Universe cooled after the big bang it eventually became possible for particles as we know them to exist. The most common particles created in the big bang which are still easily observable to us today were protons (hydrogen) and electrons (in equal numbers). Some heavier elements were created as the protons collided with each other, but most of the heavy elements we see today were created inside of stars during a series of fusion stages, such as the proton-proton chain, the CNO cycle and the triple-alpha process. Progressively heavier elements are created during the evolution of a star. Since the binding energy per nucleon peaks around iron, energy is only released in fusion processes occurring below this point. Since the creation of heavier nuclei by fusion costs energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron capture process (the so-called s process) or by the rapid, or r process. The s process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r process is thought to occur in supernova explosions because the conditions of high temperature, high neutron flux and ejected matter are present. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers). The process duration is typically in the range of a few seconds.
Nuclear physics
Nuclear physics is the field of physics that studies the building blocks and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power and nuclear weapons, but the research has provided wider applications, including those in medicine (nuclear medicine, magnetic resonance imaging), materials engineering (ion implantation) and archaeology (radiocarbon dating).
The field of particle physics evolved out of nuclear physics and, for this reason, has been included under the same term in earlier times.
Modern nuclear physics
A heavy nucleus can contain hundreds of nucleons which means that with some approximation it can be treated as a classical system, rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy which arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission.
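As a rough illustration of the liquid-drop picture, the sketch below evaluates the semi-empirical mass formula; the coefficients are typical textbook values (in MeV) and vary between published fits, so the output should be read as indicative only.

import math

A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0  # MeV, textbook-style fit

def binding_energy(Z, A):
    """Approximate binding energy (MeV) of a nucleus with Z protons, A nucleons."""
    N = A - Z
    volume    =  A_V * A                            # bulk (volume) term
    surface   = -A_S * A ** (2 / 3)                 # surface-tension term
    coulomb   = -A_C * Z * (Z - 1) / A ** (1 / 3)   # proton-proton repulsion
    asymmetry = -A_A * (N - Z) ** 2 / A             # neutron-proton imbalance
    if Z % 2 == 0 and N % 2 == 0:
        pairing = A_P / math.sqrt(A)                # even-even nuclei bind tighter
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P / math.sqrt(A)
    else:
        pairing = 0.0
    return volume + surface + coulomb + asymmetry + pairing

# Binding energy per nucleon peaks near iron, matching the trend noted above:
for Z, A, name in [(2, 4, "He-4"), (26, 56, "Fe-56"), (92, 238, "U-238")]:
    print(f"{name}: {binding_energy(Z, A) / A:.2f} MeV per nucleon")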
Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert-Mayer. Nuclei with certain numbers of neutrons and protons (the magic numbers 2, 8, 20, 50, 82, 126, ...) are particularly stable, because their shells are filled.
Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons, analogously to Cooper pairs of electrons.
Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of Rugby balls) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark-gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.
Nuclear fuel
Nuclear fuel is any material that can be consumed to derive nuclear energy, by analogy to chemical fuel that is burned to derive energy. Nuclear fuels are the most dense sources of energy available to humans. Nuclear fuel in a nuclear fuel cycle can refer to the material or to physical objects (for example fuel bundles composed of fuel rods) composed of the fuel material, perhaps mixed with structural, neutron moderating, or neutron reflecting materials.
The most common type of nuclear fuel contains heavy fissile elements that can be made to undergo nuclear fission chain reactions in a nuclear fission reactor. The most common fissile nuclear fuels are 235U and 239Pu. The actions of mining, refining, purifying, using, and ultimately disposing of these elements together make up the nuclear fuel cycle, which is important for its relevance to nuclear power generation and nuclear weapons.
Not all nuclear fuels are used in fission chain reactions. Plutonium-238 and some other radioactive nuclides are used to produce small amounts of nuclear power by radioactive decay in radioisotope thermoelectric generators and other atomic batteries. Light nuclides such as 3H (tritium) are used as fuel for nuclear fusion.
The thermal conductivity of uranium dioxide is low; it is affected by porosity and burn-up. Burn-up results in fission products being dissolved in the lattice (such as lanthanides), the precipitation of fission products such as palladium, the formation of fission gas bubbles due to fission products such as xenon and krypton, and radiation damage of the lattice. The low thermal conductivity can lead to overheating of the center part of the pellets during use. Porosity results in a decrease in both the thermal conductivity of the fuel and the swelling which occurs during use.
According to the International Nuclear Safety Center [1], the thermal conductivity of uranium dioxide can be predicted under different conditions by a series of equations.
The bulk density of the fuel can be related to its thermal conductivity through the porosity p:
p = 1 - ρ/ρtd
where ρ is the bulk density of the fuel and ρtd is the theoretical density of the uranium dioxide.
Then the thermal conductivity of the porous phase (Kf) is related to the conductivity of the perfect phase (Ko, no porosity) by the following equation, where s is a term for the shape factor of the holes:
Kf = Ko(1 - p)/(1 + (s - 1)p)
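A minimal sketch combining the two relations above; the bulk density, shape factor s and fully dense conductivity Ko are assumed example values, while 10.96 g/cm³ is the commonly quoted theoretical density of UO2.

def porosity(bulk_density, theoretical_density):
    """p = 1 - rho / rho_td, from the bulk and theoretical densities."""
    return 1.0 - bulk_density / theoretical_density

def porous_conductivity(k0, p, s):
    """Kf = Ko(1 - p)/(1 + (s - 1)p); s is the shape factor of the pores."""
    return k0 * (1.0 - p) / (1.0 + (s - 1.0) * p)

p = porosity(bulk_density=10.4, theoretical_density=10.96)  # g/cm^3
kf = porous_conductivity(k0=4.0, p=p, s=1.5)                # k0, s assumed
print(f"porosity = {p:.3f}, Kf = {kf:.2f} (same units as Ko)")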
Rather than measuring the thermal conductivity using traditional methods such as Lees' disc, Forbes' method or Searle's bar, it is common to use a laser flash method, in which a small disc of fuel is placed in a furnace. After being heated to the required temperature, one side of the disc is illuminated with a laser pulse; the time required for the heat wave to flow through the disc, together with the density and the thickness of the disc, can then be used to calculate the thermal conductivity:
λ = ρCpα
where λ is the thermal conductivity, ρ is the density, Cp is the heat capacity and α is the thermal diffusivity.
If t1/2 is defined as the time required for the non-illuminated surface to experience half of its final temperature rise, then
α = 0.1388 L²/t1/2
where L is the thickness of the disc.
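A minimal sketch of the flash-method arithmetic just described; the disc dimensions, half-rise time, density and heat capacity are invented sample values.

def thermal_diffusivity(thickness_m, t_half_s):
    """alpha = 0.1388 L^2 / t_1/2, in m^2/s, from the flash relation above."""
    return 0.1388 * thickness_m ** 2 / t_half_s

def thermal_conductivity(rho_kg_m3, cp_j_kgk, alpha_m2_s):
    """lambda = rho * Cp * alpha, in W/(m K)."""
    return rho_kg_m3 * cp_j_kgk * alpha_m2_s

alpha = thermal_diffusivity(thickness_m=2e-3, t_half_s=0.5)  # 2 mm disc
lam = thermal_conductivity(rho_kg_m3=10400.0, cp_j_kgk=300.0, alpha_m2_s=alpha)
print(f"alpha = {alpha:.3e} m^2/s, lambda = {lam:.2f} W/(m K)")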
This may be getting quite difficult to follow, so let us close this topic by simply listing the main types of oxide fuel, which are:
UOX (Uranium Oxide)
MOX (Mixed Oxide)
Atomic physics
Atomic physics (or atom physics) is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. This includes ions as well as neutral atoms and, unless otherwise stated, for the purposes of this discussion it should be assumed that the term atom includes ions.
The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics, which deals with the atom as a system comprising a nucleus and electrons, and nuclear physics, which considers atomic nuclei alone.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics always considers atoms in isolation. Atomic models will consist of a single nucleus which may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical) nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that we are concerned with. This means that the individual atoms can be treated as if each were in isolation because for the vast majority of the time they are. By this consideration atomic physics provides the underlying theory in plasma physics and atmospheric physics even though both deal with huge numbers of atoms.
Electronic configuration
Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
In the event the electron absorbs a quantity of energy less than the binding energy, it will transition to an excited state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the excited atom has been previously ionized, particularly if one of its inner shell electrons has been removed, a phenomenon known as the Auger effect may take place where the quantity of energy is transferred to one of the bound electrons causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon.
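To see this energy bookkeeping in numbers, the sketch below uses the Bohr-model levels of hydrogen, E_n = -13.6 eV / n²; real multi-electron atoms require a full quantum treatment, so this is purely illustrative.

RYDBERG_EV = 13.605693  # ionization energy of ground-state hydrogen, eV

def level_energy(n):
    """Energy of hydrogen level n in eV (negative means bound)."""
    return -RYDBERG_EV / n ** 2

def photon_energy(n_upper, n_lower):
    """Energy carried off by the photon when the electron drops levels."""
    return level_energy(n_upper) - level_energy(n_lower)

# Binding energy of the ground state (energy needed to remove the electron):
print(f"binding energy: {-level_energy(1):.2f} eV")
# De-excitation from n=3 to n=2 emits the H-alpha photon (about 1.89 eV, red):
print(f"3 -> 2 photon: {photon_energy(3, 2):.2f} eV")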
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light—however there are no such rules for excitation by collision processes.
Nuclear engineers and radiological scientists are interested in the development of more advanced ionizing radiation measurement and detection systems, and using these to improve imaging technologies. This includes detector design, fabrication and analysis, measurements of fundamental atomic and nuclear parameters, and radiation imaging systems, among other things.
Nuclear materials research focuses on two main subject areas, nuclear fuels and irradiation-induced modification of materials. Improvement of nuclear fuels is crucial for obtaining increased efficiency from nuclear reactors. Irradiation effects studies have many purposes, from studying structural changes to reactor components to studying nano-modification of metals using ion-beams or particle accelerators.
An important field is medical physics, and its subfields nuclear medicine, radiation therapy, health physics, and diagnostic imaging. From x-ray machines to MRI to PET, among many others, medical physics provides most of modern medicine's diagnostic capability, along with many treatment options.
Research areas in nuclear fusion and plasma physics include high-temperature plasma dynamics and radiation-resistant materials. Internationally, research is currently directed at building a prototype tokamak called ITER. The research at ITER will primarily focus on plasma instabilities and divertor design refinement. Researchers in the USA are also building an inertial confinement experiment called the National Ignition Facility (NIF). NIF will be used to refine neutron transport calculations for the US stockpile stewardship initiative.
Nuclear fission
Nuclear fission is the disintegration of a fissionable atom's nucleus into the nuclei of two or more lighter elements. On average, about 2.4 neutrons are released per fission. There are two types of nuclear fission: fast fission and thermal fission.
Generally, thermal fission is used in commercial reactors, if we disregard the Fast Breeder Type of Nuclear Reactors.
The United States gets about 20% of its electricity from nuclear power. This is a massive industry and keeping the supply of nuclear engineers plentiful will ensure its stability. Nuclear engineers in this field generally work, directly or indirectly, in the nuclear power industry or for government labs. Current research in industry is directed at producing economical, proliferation resistant reactor designs with passive safety features. Although government labs research the same areas as industry, they also study a myriad of other issues such as: nuclear fuels and nuclear fuel cycles, advanced reactor designs, and nuclear weapon design and maintenance. A principal pipeline for trained personnel for US reactor facilities is the Navy Nuclear Power Program.
Nuclear engineering is the branch of engineering concerned with the application of the breakdown of atomic nuclei and/or other sub-atomic physics, based on the principles of nuclear physics. It includes, but is not limited to, the interaction and maintenance of nuclear fission systems and components— specifically, nuclear reactors, nuclear power plants, and/or nuclear weapons. The field may also include the study of nuclear fusion, medical and other applications of (generally ionizing) radiation, nuclear safety, heat/thermodynamics transport, nuclear fuel and/or other related (e.g., waste disposal) technology, nuclear proliferation, and the effect of radioactive waste or radioactivity in the environment.
Typical areas of metallurgical engineering practice include:
• Mine/mill integration
• Geo-metallurgical engineering
• Ore evaluation and sorting technologies
• Integrated pre-concentration and composite fill systems design
• Process design
• Process testwork: protocol design, supervision & evaluation of results
• HPGR testwork and circuit design
• Process design & equipment selection
• Mineral deposit evaluation
• NI 43-101 compliant studies
• Plant and infrastructure cost estimation & economic evaluation
Metallurgical Engineering is a broad field that deals with all sorts of metal-related areas. The three main branches of this major are physical metallurgy, extractive metallurgy, and mineral processing. Physical metallurgy deals with problem solving: you’ll develop the sorts of metallic alloys needed for different types of manufacturing and construction. Extractive metallurgy involves extracting metal from ore. Mineral processing involves gathering mineral products from the earth’s crust.
As a Metallurgical Engineering major, you’ll learn the fundamentals of all three fields, as well as the basics of engineering in general. We need metals to make our society function—metals make up important parts of cars, bikes, planes, buildings, even toothpaste tubes. Your knowledge of the production, design, and manufacturing of these metals and mineral products can be rewarding and exciting.
Most Metallurgical Engineering programs will offer the opportunity to participate in a cooperative education program, an arrangement in which students spend a semester or more doing engineering work with a metallurgical company. Many of these co-op jobs can become actual jobs after graduation, and the experience will make you a more valuable prospective employee.
Metallurgical engineers develop ways of processing metals and converting them into useful products. Metallurgy, the science of metals, is one of the materials sciences. Other materials sciences include physical metallurgy, ceramics, and polymer chemistry, or plastics. Metallurgical engineers, a subspecialty of materials engineers, work primarily in industrial areas, particularly in the iron and steel industries. Some work with other metals such as aluminum or copper. Metallurgical engineers are also employed in industries that make machinery and other products using metal, such as automobiles and electrical equipment. Some work for government agencies or colleges and universities.
The work of metallurgical engineers is similar to the work of metallurgical scientists, or metallurgists. Metallurgical engineers use complex equipment, including electron microscopes, X-ray machines, and spectrographs. They use the latest scientific and technological findings in their work. Metallurgical engineers are often assisted by metallurgical technicians.
There are two main branches of metallurgy: extractive metallurgy and physical metallurgy. Extractive metallurgy involves the separation, or extraction, of metals from ores. Ores are mixtures of metals and other substances. Once the ore has been mined, many steps are needed to extract the metal and refine it to a relatively pure form. Metallurgical engineers design and supervise the processes that separate the metals from their ores. They often cooperate with mining engineers in the early steps of the extraction process. After metallic compounds have been separated from the rock and other waste materials, metallurgical engineers can use a number of different processes to refine the metals. These processes might involve the use of heat, electric current, or chemicals dissolved in water to produce a pure and usable metal.
Metallurgical engineers involved in extractive metallurgy work in laboratories, ore treatment plants, refineries, and steel mills. They are concerned with finding new and better ways of separating relatively small amounts of metal from huge quantities of waste rock. They must consider the effects that the process has on the environment, the conservation of energy, and the proper disposal of the waste rock.
Physical metallurgy is the study of the structure and physical properties of metals and alloys. It also involves the many processes used to convert a refined metal into a finished product. Most metals are not useful in their pure form. They must be made into alloys, or mixtures of a metal and one or more other elements. Steel is an example of an alloy. It is made from iron and small amounts of carbon and other elements. Copper and zinc are combined to form another alloy, brass. Scientists and metallurgical engineers work in physical metallurgy to develop new alloys to meet many needs. These alloys include radiation shielding for nuclear reactors, lightweight but high-strength steel for automobile bodies, and special metals used in electronic equipment. Physical metallurgical engineers also develop production processes that include melting, casting, alloying, rolling, and welding. They design and supervise the processes that produce such goods as structural steel, wire, or sheets of aluminum. Sometimes they are involved in processes that use these metal goods in the manufacture of other finished products. Physical metallurgists often work in laboratories or in manufacturing plants.
Archaeometallurgy
Archaeometallurgy is the study of the history and prehistory of metals and their use by humans. It is a sub-discipline of archaeology and archaeological science. After initial sporadic work, archaeometallurgy was more widely institutionalised in the 1960s and 70s, with research groups in Britain (the British Museum, the UCL Institute of Archaeology, the Institute for Archeo-Metallurgical Studies (iams)), Germany (Deutsches Bergbau Museum) and the US (MIT and Harvard). Specialisations within archaeometallurgy focus on the metallography of finished objects, the mineralogy of waste products such as slag, and manufacturing studies.
Cupellation
Cupellation is a metallurgical process in which ores or alloyed metals are treated at high temperatures under carefully controlled operations in order to separate noble metals, like gold and silver, from base metals like lead, copper, zinc, arsenic, antimony or bismuth that might be present in the ore. The process is based on the principle that precious metals do not oxidise or react chemically the way base metals do; when the mixture is heated at high temperatures, the precious metals remain apart while the base metals react to form slags or other compounds.
Since the Early Bronze Age, the process of cupellation was used to obtain silver from smelted lead ores; by the Middle Ages and the Renaissance it had become one of the most common and important processes for refining metals. By then, fire assays were used for assaying minerals, for testing the purity of recycled metals (as in jewelry), and for minting. The principles of cupellation have always remained the same, changing only in the amount of material processed, and the process is still in use today.
Cupellation Process
Cupellation Hearths
Native silver is rare, although it does occur. Most of the time silver is found in nature combined with other metals, or in minerals that contain silver compounds, generally in the form of sulfides such as galena (lead sulfide) or cerussite (lead carbonate). So the primary production of silver must have been done by the smelting of argentiferous lead ores.
Lead melts at 327°C while silver melts at 960°C, so when the two are mixed, as in galena, the most common argentiferous lead ore, they have to be smelted at high temperatures under reducing conditions to produce argentiferous lead. This argentiferous lead then has to be smelted again, at temperatures of the order of 900°C to 1000°C, in a hearth or blast furnace where the air flow makes possible the oxidation of the lead. The lead transforms into lead oxide (PbO), known as litharge, a liquid when melted, which captures the oxides of the other metals, while silver and gold remain floating on top of the liquid litharge. The litharge is removed or absorbed by capillary action into the hearth linings. This chemical reaction may be viewed as (Ag + Cu) + Pb + O2 → (CuO + PbO) + Ag.
The base of the hearth was dug in the form of a saucepan and covered with an inert and porous material rich in calcium or magnesium (shells or lime) or with bone ash. The lining had to be calcareous because lead reacts with silica (clay compounds) to form a glassy lead silicate that prevents the litharge from being absorbed in the correct way; calcareous materials do not react with lead.[14] Some of the litharge fumes evaporate, and the rest, in a liquid state, is absorbed by the porous earth linings, which form what is known as litharge cakes.
(Illustration: cupellation furnaces, after Agricola, 1556/1950.)
Litharge cakes are usually circular, concavo-convex cakes of approximately 15 cm in diameter, and they are the most common archaeological evidence of the cupellation process in the Early Bronze Age. From their chemical composition (archaeometry), archaeometallurgists may learn what kind of ore was treated, what its main components were, and which steps might have been followed in the process. This information can give insights about the production process, trade, social needs or economic situations, among others.
Small scale cupellation
Small scale cupellation, or fire assay, is based on the same principle as the one done in a cupellation hearth; the main difference lies in the amount of material to be tested or obtained. The samples had to be crushed, roasted and smelted to concentrate the metallic components in order to separate the noble metals. By the Renaissance the uses of cupellation were diverse: assaying ores from the mines, testing the amount of silver in jewels or coins, and experimental purposes. It was carried out in small shallow vessels known as cupels.
As the main purpose of small scale cupellation was to assay and test minerals, the matter to be tested had to be carefully weighed. The assays were made in a cupellation or assay furnace, which needed windows and bellows to ensure that the air oxidized the lead, and to allow the cupel to be taken away when the process was over. Pure lead had to be added to the matter being tested to guarantee the further separation of the impurities. After the litharge was absorbed by the cupel, buttons of silver formed and settled in the middle of the cupel. If the alloy also contained a certain amount of gold, it settled with the silver and the two had to be separated by parting.
Cupels
(Illustration: bone-ash cupel with silver prill.)
The primary tool for fire assay or cupellation was the cupel, and cupels were manufactured with great care. They used to be small vessels shaped in the form of an inverted truncated cone, made out of bone ashes. According to Georg Agricola, the best material was obtained from the burned horns of deer, although fish spines could work well. The ashes had to be ground into a fine and homogeneous powder and mixed with some sticky substance to mould the cupels. Moulds were made out of brass with no bottoms so that the cupels could be taken off. A shallow depression in the centre of the cupel was made with a rounded pestle. Cupel sizes depended on the amount of material to be assayed. This same shape has been maintained until the present.
Archaeological investigations, as well as archaeometallurgical analysis and written texts from the Renaissance, have demonstrated the existence of different materials for their manufacture; they could also be made with mixtures of bone and plant ashes, which were not of very high quality, or moulded with a mixture of this kind in the bottom and an upper layer of bone ashes. Different recipes depended on the expertise of the assayer or on the special purpose for which the cupel was made (assays for minting, jewelry, testing the purity of recycled material or coins). It is thought that at the beginnings of small scale cupellation, potsherds or clay cupels were used.
Metallurgists study the microscopic and macroscopic properties of metals using metallography, a technique invented by Henry Clifton Sorby. In metallography, an alloy of interest is ground flat and polished to a mirror finish. The sample can then be etched to reveal the microstructure and macrostructure of the metal. The sample is then examined in an optical or electron microscope, and the image contrast provides details on the composition, mechanical properties, and processing history.
Crystallography, often using diffraction of x-rays or electrons, is another valuable tool available to the modern metallurgist. Crystallography allows identification of unknown materials and reveals the crystal structure of the sample. Quantitative crystallography can be used to calculate the amount of phases present as well as the degree of strain to which a sample has been subjected.
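As a small worked example of diffraction-based identification, the sketch below applies Bragg's law, n·λ = 2·d·sin θ, to turn a peak position into an interplanar spacing; the Cu K-alpha wavelength is a standard laboratory figure, while the peak angle is an assumed input.

import math

def d_spacing(wavelength_nm, two_theta_deg, order=1):
    """Interplanar spacing d (nm) for a diffraction peak at angle 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

CU_K_ALPHA = 0.15406  # nm, standard Cu K-alpha X-ray wavelength
d = d_spacing(CU_K_ALPHA, two_theta_deg=44.7)
print(f"d = {d:.4f} nm")
# A peak near 2-theta = 44.7 degrees gives d of roughly 0.203 nm, close to
# the (110) spacing of body-centred cubic iron.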
In production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. This involves the production of alloys, the shaping, the heat treatment and the surface treatment of the product. The task of the metallurgist is to achieve balance between material properties such as cost, weight, strength, toughness, hardness, corrosion and fatigue resistance, and performance in temperature extremes. To achieve this goal, the operating environment must be carefully considered. In a saltwater environment, ferrous metals and some aluminium alloys corrode quickly. Metals exposed to cold or cryogenic conditions may endure a ductile to brittle transition and lose their toughness, becoming more brittle and prone to cracking. Metals under continual cyclic loading can suffer from metal fatigue. Metals under constant stress at elevated temperatures can creep.
Metalworking processes
Metals are shaped by processes such as casting, forging, flow forming, rolling, extrusion, sintering, metalworking, machining and fabrication. With casting, molten metal is poured into a shaped mould. With forging, a red-hot billet is hammered into shape. With rolling, a billet is passed through successively narrower rollers to create a sheet. With extrusion, a hot and malleable metal is forced under pressure through a die, which shapes it before it cools. With sintering, a powdered metal is heated in a non-oxidizing environment after being compressed into a die. With machining, lathes, milling machines, and drills cut the cold metal to shape. With fabrication, sheets of metal are cut with guillotines or gas cutters and bent into shape.
Cold working processes, where the product’s shape is altered by rolling, fabrication or other processes while the product is cold, can increase the strength of the product by a process called work hardening. Work hardening creates microscopic defects in the metal, which resist further changes of shape.
Various forms of casting exist in industry and academia. These include sand casting, investment casting (also called the “lost wax process”), die casting and continuous casting.
Heat treatment
Metals can be heat treated to alter the properties of strength, ductility, toughness, hardness or resistance to corrosion. Common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering. The annealing process softens the metal by allowing recovery of cold work and grain growth. Quenching can be used to harden alloy steels, or in precipitation hardenable alloys, to trap dissolved solute atoms in solution. Tempering will cause the dissolved alloying elements to precipitate, or in the case of quenched steels, improve impact strength and ductile properties.
Often, mechanical and thermal treatments are combined in what is known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high alloy special steels, super alloys and titanium alloys.
Plating
Electroplating is a common surface-treatment technique. It involves bonding a thin layer of another metal such as gold, silver, chromium or zinc to the surface of the product. It is used to reduce corrosion as well as to improve the product's aesthetic appearance.
Thermal spraying
Thermal spraying techniques are another popular finishing option, and often have better high-temperature properties than electroplated coatings.
Common engineering metals include aluminium, chromium, copper, iron, magnesium, nickel, titanium and zinc. These are most often used as alloys. Much effort has been placed on understanding the iron-carbon alloy system, which includes steels and cast irons. Plain carbon steels are used in low-cost, high-strength applications where weight and corrosion are not a problem. Cast irons, including ductile iron, are also part of the iron-carbon system.
Stainless steel or galvanized steel are used where resistance to corrosion is important. Aluminium alloys and magnesium alloys are used for applications where strength and lightness are required.
Copper-nickel alloys (such as Monel) are used in highly corrosive environments and for non-magnetic applications. Nickel-based superalloys like Inconel are used in high temperature applications such as turbochargers, pressure vessels, and heat exchangers. For extremely high temperatures, single crystal alloys are used to minimize creep.
Extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. In order to convert a metal oxide or sulfide to a purer metal, the ore must be reduced physically, chemically, or electrolytically.
Extractive metallurgists are interested in three primary streams: feed, concentrate (valuable metal oxide/sulfide), and tailings (waste). After mining, large pieces of the ore feed are broken through crushing and/or grinding in order to obtain particles small enough that each particle is either mostly valuable or mostly waste. Concentrating the particles of value in a form that supports separation enables the desired metal to be removed from the waste products.
Mining may not be necessary if the ore body and physical environment are conducive to leaching. Leaching dissolves minerals in an ore body and results in an enriched solution. The solution is collected and processed to extract valuable metals.
Ore bodies often contain more than one valuable metal. Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents.
Metallurgy is a domain of materials science that studies the physical and chemical behavior of metallic elements, their intermetallic compounds, and their mixtures, which are called alloys. It is also the technology of metals: the way in which science is applied to their practical use. Metallurgy is commonly used in the craft of metalworking.
Employment Prospects
With quality and productivity improvement now recognized as fundamental to achieving long-term success, today's business or organization requires people with state-of-the-art knowledge and experience with quality principles and methods. The quality engineering specialization prepares students to help a wide variety of businesses and organizations in developing and implementing quality systems to improve their productivity and competitiveness, and the quality of life.
Managers of today's organizations and enterprises are faced with an enormous number of competitive pressures as well as a revolution in philosophy and methodologies for improving their systems, whether they be in manufacturing, health care, business agencies or government. In all cases, the demand is for better products and/or services at lower costs.
Total Quality Control, Design for Assembly, Design for Manufacturability, Quality Function Deployment, Kaizen, Statistical Process Control, Taguchi Methods, and a host of additional tools and methodologies have all proven to provide substantial improvements in quality, reduction in cost, increased productivity, or improved responsiveness when the concepts are applied correctly in appropriate settings. It is the job of the quality engineer to understand and apply these new methodologies to guide the improvement of the organization. This job may be done working as a quality engineer or manager in an industrial environment, in a health care organization, in a consulting company, in the education field, in government or in other areas of the service sector. The job opportunities are varied and plentiful.
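As a small taste of one of these methodologies, the sketch below computes simplified Shewhart-style 3-sigma limits for Statistical Process Control; a production individuals chart would usually estimate sigma from moving ranges rather than the sample standard deviation, and the measurements here are invented.

import statistics

def control_limits(samples):
    """Return (lower limit, centre line, upper limit) at +/- 3 sigma."""
    centre = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return centre - 3 * sigma, centre, centre + 3 * sigma

measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
lcl, cl, ucl = control_limits(measurements)
print(f"LCL = {lcl:.2f}, CL = {cl:.2f}, UCL = {ucl:.2f}")
# Points outside [LCL, UCL] signal a special cause worth investigating.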
The need for quality engineering specialists to design and operate more productive systems that improve both competitiveness and the quality of work and life is rapidly increasing as worldwide economic and population growth accelerates. In the United States, there is a demand for continuous improvement of product designs and manufacturing systems to help our industries meet intense competition from abroad. Likewise, needs for improvements in health care delivery and workplace design call for quality professionals who can meet these new demands. This challenge will require a large number of quality systems engineers in industry, business and academia, and this need will exist well into the next century.
Laboratory Facilities and Research Centers
The interdisciplinary curriculum in the quality engineering specialization draws on the sophisticated computer equipment and laboratory resources of the many outstanding departments in the College of Engineering and throughout UW-Madison. For example, the School of Business adds strength in the areas of total quality management and organization design and behavior. The Department of Statistics provides excellent resources for training in the fundamental methodologies necessary to solve problems through data collection and analysis. In the College of Engineering, cutting-edge technologies and equipment, like the advanced coordinate measuring machine and software, allow for hands-on research and experience. Likewise, high technology classrooms and industry-based projects provide opportunities for learning by doing and for working with people in teams. Several other facilities provide the opportunity for advanced study, including Computer-Aided Engineering.
Laboratory for Manufacturing System Realization and Synthesis (MA/RS)
The goal of this laboratory is to develop a science base for manufacturing system realization and quality improvement. It will bring together research on manufacturing system CAD/CAM models and statistics-based methods for design, control, and diagnostics of multistage manufacturing process behavior/quality. In doing so it addresses the following areas: (i) system decomposition and analysis using the concept of product/process key characteristics and their causalities; (ii) development of statistical methods driven by engineering models to achieve quality improvement, i.e., integrating models of data sets with efficient CAD/CAM models of manufacturing systems instead of identifying model(s) of a data set alone as in traditional SPC; and (iii) application of the developed models to root cause diagnosis of manufacturing variability, distributed sensing systems/networks, and manufacturing system design evaluation and optimization in early design phases. The information generated is further applied to study the convertibility, scalability and diagnosability of reusable/reconfigurable multistage manufacturing systems. Resources available include PCs, a laser tracker, and various software (CAM, VSA, …).
For more details please see the MA/RS website at: www.cae.wisc.edu/~darek.
Center for Quality and Productivity Improvement
It is widely recognized that quality is fundamental to achieving long-term success. A renewed focus on customers and processes sets the stage for continuous improvement for industry, government, educational institutions, healthcare, and businesses. All have benefited from higher quality and productivity as well as reduced time and cost to develop, produce, deliver products and services, and improved safety. Data-based total quality methods are the catalyst to help people achieve these benefits.
To rise to the challenge of the international quality revolution, the Center for Quality and Productivity Improvement (CQPI) was founded in October of 1985 by Professor George E.P. Box and the late Professor William G. Hunter. Since its inception, CQPI has been at the forefront in the development of new techniques for improving the quality of products and processes. Today, the Center is also at the forefront of methods aimed at improving the quality of work processes, quality of working life, and quality of healthcare.
The mission of the Center is to create, integrate, and transfer knowledge to improve the quality and performance of industrial, service, governmental, healthcare, educational, social, and other organizations.
The vision of the Center is to excel in the creation, development, and integration of knowledge through research on theories, concepts, and methodologies of quality and productivity measurement, management and improvement, innovation and organizational change.
Areas of expertise include quality engineering, quality management, quality improvement in healthcare, safety applications and research, and quality of working life, human factors and ergonomics.
Major research support has come from the National Science Foundation, the Agency for Healthcare Research and Quality, the National Institute for Occupational Safety and Health, the UW Graduate School, the State of Wisconsin, and private industry.
The industrial engineering PhD degree with concentration in quality engineering seeks to qualify students for leadership positions in research, consulting, government and industry as well as for positions on university faculties in industrial engineering, business and related fields.
The curriculum for the quality engineering specialization is designed to provide students with a balance and breadth of understanding of industrial engineering disciplines that contribute to designing and delivering high-quality products or services safely and efficiently. To accomplish this, courses can be selected from each of four groups: 1) foundation courses; 2) organizational dynamics/change strategies and business; 3) statistical methods; and 4) an elective grouping consisting of engineering systems, sociotechnical engineering, and measurement/evaluation.
In the case of the latter grouping and specialization, students may want to sample broadly from these disciplines or specialize in the application of quality principles in one of them. Flexibility is built into the curriculum to accommodate a wide range of interests and application opportunities.
The industrial engineering MS degree with concentration in quality engineering is designed to provide necessary background for professional careers in industry or government. Emphasis will be placed on the foundations of quality improvement, organizational dynamics/change strategies, and business and statistical methods. There is a flexible elective list of courses to enable students to specialize with these skills in manufacturing systems, sociotechnical engineering, health systems, and decision sciences. To complete the MS program, a GPA of 3.20 or above in graduate-level courses and 30 degree credits are required with 15 degree credits in the IE department.
Quality Engineering's Heritage and Diversity
This program is based on more than 25 years of quality research and teaching at UW-Madison in such diverse areas as applied engineering statistics in production, design for quality of life in workplace systems, and quality for health care delivery. This rich heritage is evident today in the broadness of faculty interests and research activities comprising the program. Current research activities encompass such areas as:
■ Design of experiments
■ Applied statistical methods
■ Quality in product design and development
■ Quality assurance systems design & ISO 9000
■ Product and system reliability
■ Quality in health care systems improvement and cost reduction
■ Quality design of work systems and jobs
■ Human environmental design
■ Quality improvement for manufacturing systems design and control
Industrial engineering, in its current form, began in the early 20th century, when the first engineers began to apply scientific theory to manufacturing. Factory owners labeled their new specialists 'industrial' or management engineers.
Industrial engineering is commonly defined as the integration of machines, staff, production materials, money, and scientific methods. While many current industrial engineers do still deal in these areas, the scope of their work has become more general. Today's industrial engineers work in many more settings than just factories; in recent years, fields like energy and IT have become particularly reliant on the skills of industrial engineers. These flexible professionals may also be employed by:
• Hospitals and other health-care operations
• Transportation
• Food processing
• Media
• Banking
• Utilities
• Local, regional and national governments
Industrial engineers determine the most effective ways to use the basic factors of production -- people, machines, materials, information, and energy -- to make a product or to provide a service. They are the bridge between management goals and operational performance. They are more concerned with increasing productivity through the management of people, methods of business organization, and technology than are engineers in other specialties, who generally work more with products or processes. Although most industrial engineers work in manufacturing industries, they may also work in consulting services, healthcare, and communications.
To solve organizational, production, and related problems most efficiently, industrial engineers carefully study the product and its requirements, use mathematical methods such as operations research to meet those requirements, and design manufacturing and information systems. They develop management control systems to aid in financial planning and cost analysis and design production planning and control systems to coordinate activities and ensure product quality. They also design or improve systems for the physical distribution of goods and services. Industrial engineers determine which plant location has the best combination of raw materials availability, transportation facilities, and costs. Industrial engineers use computers for simulations and to control various activities and devices, such as assembly lines and robots. They also develop wage and salary administration systems and job evaluation programs. Many industrial engineers move into management positions because the work is closely related.
The work of health and safety engineers is similar to that of industrial engineers in that it deals with the entire production process. Health and safety engineers promote worksite or product safety and health by applying knowledge of industrial processes, as well as mechanical, chemical, and psychological principles. They must be able to anticipate, recognize, and evaluate hazardous conditions as well as develop hazard control methods. They also must be familiar with the application of health and safety regulations.
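As one concrete instance of the operations-research methods mentioned above, the sketch below computes the classic economic order quantity used in inventory planning; the demand and cost figures are invented for illustration.

import math

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """EOQ = sqrt(2 D S / H): the order size that minimizes the sum of
    ordering and holding costs under the basic EOQ assumptions."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

eoq = economic_order_quantity(annual_demand=12000, order_cost=50.0,
                              holding_cost_per_unit=2.0)
print(f"order about {eoq:.0f} units at a time")  # roughly 775 units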