Nanotechnology ("nanotech") is manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology[1][2] referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter which occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research. Until 2012, through its National Nanotechnology Initiative, the USA has invested 3.7 billion dollars, the European Union has invested 1.2 billion and Japan 750 million dollars.[3]Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, molecular engineering, etc.[4] The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based uponmolecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.
Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in nanomedicine, nanoelectronics, biomaterials, energy production, and consumer products. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials,[5] their potential effects on global economics, and speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" that would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications. Thus, the emergence of nanotechnology as a field in the 1980s occurred through the convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter.

In the 1980s, two major breakthroughs sparked the growth of nanotechnology in the modern era. First, the invention of the scanning tunneling microscope in 1981 provided unprecedented visualization of individual atoms and bonds, and the instrument was successfully used to manipulate individual atoms in 1989. The microscope's developers, Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, received the Nobel Prize in Physics in 1986.[6][7] Binnig, Quate and Gerber also invented the analogous atomic force microscope that year. Second, fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry.[8][9] Buckminsterfullerene C60, also known as the buckyball, is a representative member of the carbon structures known as fullerenes, and members of the fullerene family are a major subject of research falling under the nanotechnology umbrella. C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related graphene tubes (called carbon nanotubes and sometimes called Bucky tubes), which suggested potential applications for nanoscale electronics and devices. In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress.
Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology.[10] Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.[11] Meanwhile, commercialization of products based on advancements in nanoscale technologies began emerging. These products are limited to bulk applications of nanomaterials and do not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based transparent sunscreens, carbon fiber strengthening using silica nanoparticles, and carbon nanotubes for stain-resistant textiles.[12][13] Governments moved to promote and fund research into nanotechnology, such as in the U.S. with the National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established funding for research on the nanoscale, and in Europe via the European Framework Programmes for Research and Technological Development. By the mid-2000s, new and serious scientific attention began to flourish. Projects emerged to produce nanotechnology roadmaps[14][15] which center on atomically precise manipulation of matter and discuss existing and projected capabilities, goals, and applications.

Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers. Nanoscience and nanotechnology are the study and application of extremely small things and can be used across all the other science fields, such as chemistry, biology, physics, materials science, and engineering. The ideas and concepts behind nanoscience and nanotechnology started with a talk entitled "There's Plenty of Room at the Bottom" by physicist Richard Feynman, often called the father of nanotechnology, at an American Physical Society meeting at the California Institute of Technology (Caltech) on December 29, 1959, long before the term nanotechnology was used. In his talk, Feynman described a process in which scientists would be able to manipulate and control individual atoms and molecules. Over a decade later, in his explorations of ultraprecision machining, Professor Norio Taniguchi coined the term nanotechnology. It wasn't until 1981, with the development of the scanning tunneling microscope that could "see" individual atoms, that modern nanotechnology began. Medieval stained glass windows are an example of how nanotechnology was used in the pre-modern era. It's hard to imagine just how small nanotechnology is. One nanometer is a billionth of a meter, or 10⁻⁹ of a meter. Here are a few illustrative examples:
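A rough arithmetic sketch in Python can stand in for such comparisons; the conversion 1 nm = 10⁻⁹ m is exact, while the everyday lengths below are approximate, commonly cited figures rather than values taken from this text.

```python
# Rough scale comparisons for the nanometer (1 nm = 1e-9 m).
# The everyday lengths below are approximate, commonly cited figures.

NM_PER_M = 1e9  # nanometers in one meter

lengths_m = {
    "1 inch": 0.0254,                    # exactly 2.54 cm
    "sheet of paper (thickness)": 1e-4,  # ~0.1 mm
    "human hair (diameter)": 9e-5,       # ~0.09 mm
    "red blood cell (diameter)": 7e-6,   # ~7 micrometers
}

for name, meters in lengths_m.items():
    print(f"{name}: about {meters * NM_PER_M:,.0f} nm")

# For comparison, the conventional nanoscale range is 1-100 nm, so all of
# these everyday objects are many thousands of nanometers across.
```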
But something as small as an atom is impossible to see with the naked eye. In fact, it's impossible to see with the microscopes typically used in a high school science class. The microscopes needed to see things at the nanoscale were invented relatively recently, about 30 years ago. Once scientists had the right tools, such as the scanning tunneling microscope (STM) and the atomic force microscope (AFM), the age of nanotechnology was born. Although modern nanoscience and nanotechnology are quite new, nanoscale materials have been used for centuries. Differently sized gold and silver particles created the colors in the stained glass windows of medieval churches hundreds of years ago. The artists back then just didn't know that the process they used to create these beautiful works of art actually led to changes in the composition of the materials they were working with. Today's scientists and engineers are finding a wide variety of ways to deliberately make materials at the nanoscale to take advantage of their enhanced properties, such as higher strength, lighter weight, greater control of the light spectrum, and greater chemical reactivity than their larger-scale counterparts.
The kinetic energy of particles of non-ionizing radiation is too small to produce charged ions when passing through matter. For non-ionizing electromagnetic radiation (see types below), the associated particles (photons) have only sufficient energy to change the rotational, vibrational, or electronic valence configurations of molecules and atoms. The effects of non-ionizing forms of radiation on living tissue have only recently been studied. Nevertheless, different biological effects are observed for different types of non-ionizing radiation.[3][5]
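To make these energy scales concrete, the sketch below uses the standard photon-energy relation E = hc/λ to compare a few wavelengths against the roughly 10 eV ionization threshold cited later in this text; the chosen wavelengths are arbitrary illustrative examples, not values from the source.

```python
# Photon energy E = h*c / wavelength, expressed in electron volts (eV),
# compared against a ~10 eV ionization threshold (a commonly used figure).

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron volt

IONIZATION_THRESHOLD_EV = 10.0  # rough threshold; some authorities use 33 eV

wavelengths_m = {
    "microwave (12 cm)": 0.12,
    "infrared (10 um)": 10e-6,
    "visible green (550 nm)": 550e-9,
    "UV-C (250 nm)": 250e-9,
    "X-ray (1 nm)": 1e-9,
}

for name, lam in wavelengths_m.items():
    energy_ev = H * C / lam / EV
    kind = "ionizing" if energy_ev >= IONIZATION_THRESHOLD_EV else "non-ionizing"
    print(f"{name}: {energy_ev:.3g} eV -> {kind}")
```

Run as written, this shows microwave, infrared, visible, and even 250 nm ultraviolet photons falling well below the threshold, consistent with the statement that only the highest-frequency ultraviolet, X-rays, and gamma rays are ionizing.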
Even "non-ionizing" radiation is capable of causing thermal-ionization if it deposits enough heat to raise temperatures to ionization energies. These reactions occur at far higher energies than with ionization radiation, which requires only single particles to cause ionization. A familiar example of thermal ionization is the flame-ionization of a common fire, and the browning reactions in common food items induced by infrared radiation, during broiling-type cooking. The electromagnetic spectrum is the range of all possible electromagnetic radiation frequencies.[3] The electromagnetic spectrum (usually just spectrum) of an object is the characteristic distribution of electromagnetic radiation emitted by, or absorbed by, that particular object. The non-ionizing portion of electromagnetic radiation consists of electromagnetic waves that (as individual quanta or particles, see photon) are not energetic enough to detach electrons from atoms or molecules and hence cause their ionization. These include radio waves, microwaves, infrared, and (sometimes) visible light. The lower frequencies of ultraviolet light may cause chemical changes and molecular damage similar to ionization, but is technically not ionizing. The highest frequencies of ultraviolet light, as well as all X-rays and gamma-rays are ionizing. The occurrence of ionization depends on the energy of the individual particles or waves, and not on their number. An intense flood of particles or waves will not cause ionization if these particles or waves do not carry enough energy to be ionizing, unless they raise the temperature of a body to a point high enough to ionize small fractions of atoms or molecules by the process of thermal-ionization (this, however, requires relatively extreme radiation intensities). Radiation with sufficiently high energy can ionize atoms; that is to say it can knock electrons off atoms and create ions. Ionization occurs when an electron is stripped (or "knocked out") from an electron shell of the atom, which leaves the atom with a net positive charge. Because living cells and, more importantly, the DNA in those cells can be damaged by this ionization, exposure to ionizing radiation is considered to increase the risk of cancer. Thus "ionizing radiation" is somewhat artificially separated from particle radiation and electromagnetic radiation, simply due to its great potential for biological damage. While an individual cell is made of trillions of atoms, only a small fraction of those will be ionized at low to moderate radiation powers. The probability of ionizing radiation causing cancer is dependent upon the absorbed dose of the radiation, and is a function of the damaging tendency of the type of radiation (equivalent dose) and the sensitivity of the irradiated organism or tissue (effective dose).
If the source of the ionizing radiation is a radioactive material or a nuclear process such as fission or fusion, there is particle radiation to consider. Particle radiation consists of subatomic particles accelerated to relativistic speeds by nuclear reactions. Because of their momenta they are quite capable of knocking out electrons and ionizing materials, but since most carry an electrical charge, they do not have the penetrating power of ionizing electromagnetic radiation; the exception is the neutron. There are several different kinds of these particles, but the majority are alpha particles, beta particles, neutrons, and protons. Roughly speaking, photons and particles with energies above about 10 electron volts (eV) are ionizing (some authorities use 33 eV, the ionization energy for water). Particle radiation from radioactive material or cosmic rays almost invariably carries enough energy to be ionizing.

Much ionizing radiation originates from radioactive materials and space (cosmic rays), and as such is naturally present in the environment, since most rock and soil has small concentrations of radioactive materials. The radiation is invisible and not directly detectable by human senses; as a result, instruments such as Geiger counters are usually required to detect its presence. In some cases, it may lead to secondary emission of visible light upon its interaction with matter, as in the case of Cherenkov radiation and radioluminescence.

Ionizing radiation has many practical uses in medicine, research, and construction, but presents a health hazard if used improperly. Exposure to radiation causes damage to living tissue; high doses result in acute radiation syndrome (ARS), with skin burns, hair loss, internal organ failure, and death, while any dose may result in an increased chance of cancer and genetic damage. A particular form of cancer, thyroid cancer, often occurs when nuclear weapons and reactors are the radiation source, because of the biological proclivities of the radioactive iodine fission product, iodine-131.[3] However, calculating the exact risk and chance of cancer forming in cells caused by ionizing radiation is still not well understood, and current estimates are loosely based on population data from the atomic bombings of Japan and from reactor accident follow-up, such as with the Chernobyl disaster. The International Commission on Radiological Protection states that "The Commission is aware of uncertainties and lack of precision of the models and parameter values", "Collective effective dose is not intended as a tool for epidemiological risk assessment, and it is inappropriate to use it in risk projections" and "in particular, the calculation of the number of cancer deaths based on collective effective doses from trivial individual doses should be avoided."[4]

A pendulum is a weight suspended from a pivot so that it can swing freely.[1] When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back toward the equilibrium position. When released, the restoring force combined with the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period.
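For the idealized small-angle case described next, the period follows the familiar relation T = 2π√(L/g); the short sketch below evaluates it for a couple of arbitrary example lengths.

```python
# Small-angle period of a simple gravity pendulum: T = 2*pi*sqrt(L/g).
# Valid for small swing amplitudes; real pendulums deviate slightly.
import math

G = 9.81  # standard gravitational acceleration, m/s^2

def period(length_m):
    """Period in seconds of an idealized simple pendulum of the given length."""
    return 2 * math.pi * math.sqrt(length_m / G)

for L in (0.25, 1.0):  # arbitrary example lengths in meters
    print(f"L = {L} m -> T = {period(L):.2f} s")
# A pendulum roughly 0.994 m long has a 2-second period (the "seconds pendulum").
```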
The period depends on the length of the pendulum, and also to a slight degree on the amplitude, the width of the pendulum's swing. From its first scientific investigation, around 1602 by Galileo Galilei, the regular motion of pendulums was used for timekeeping, and pendulums were the world's most accurate timekeeping technology until the 1930s.[2] Pendulums are used to regulate pendulum clocks, and are used in scientific instruments such as accelerometers and seismometers. Historically they were used as gravimeters to measure the acceleration of gravity in geophysical surveys, and even as a standard of length. The word "pendulum" is New Latin, from the Latin pendulus, meaning 'hanging'.[3] The simple gravity pendulum[4] is an idealized mathematical model of a pendulum.[5][6][7] This is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. When given an initial push, it will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines.

A supersonic aircraft is an aircraft able to fly faster than the speed of sound (Mach 1). Supersonic aircraft were developed in the second half of the twentieth century and have been used almost entirely for research and military purposes. Only two, Concorde and the Tupolev Tu-144, were ever designed for civil use as airliners. Fighter jets are the most common example of supersonic aircraft, although they don't always travel at supersonic speed.
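Because "supersonic" is defined relative to the local speed of sound, which varies with air temperature, the Mach-number bookkeeping can be sketched as follows; the ideal-gas formula a = √(γRT) is standard, while the aircraft speeds and the 220 K temperature are round illustrative numbers, not figures from this text.

```python
# Mach number M = v / a, where the local speed of sound in air is
# a = sqrt(gamma * R * T) for an ideal gas.
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for air, J/(kg*K)

def speed_of_sound(temp_k):
    """Speed of sound in air (m/s) at the given absolute temperature."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def mach(speed_ms, temp_k):
    """Mach number for a given true airspeed and ambient temperature."""
    return speed_ms / speed_of_sound(temp_k)

# Illustrative round numbers at a typical high-altitude temperature of ~220 K.
for label, v in (("250 m/s (typical airliner cruise)", 250.0),
                 ("600 m/s (fast jet)", 600.0)):
    print(f"{label}: Mach {mach(v, 220.0):.2f}")
```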
The aerodynamics of supersonic flight is called compressible flow because of the compression associated with the shock waves or "sonic boom" created by any object travelling faster than sound. Aircraft flying at speeds above Mach 5 are often referred to as hypersonic aircraft. Supersonic flight brings with it substantial technical challenges, as the aerodynamics of supersonic flight are dramatically different from those of subsonic flight (i.e., flight at speeds slower than that of sound). In particular, aerodynamic drag rises sharply as the aircraft passes the transonic regime, requiring much greater engine power and more streamlined airframes.

Wings: To keep drag low, wing span must be limited, which also reduces the aerodynamic efficiency when flying slowly. Since a supersonic aircraft must take off and land at a relatively slow speed, its aerodynamic design must be a compromise between the requirements for both ends of the speed range. One approach to resolving this compromise is the use of a variable-geometry wing, commonly known as the "swing-wing," which spreads wide for low-speed flight and then sweeps sharply, usually backwards, for supersonic flight. However, swinging affects the longitudinal trim of the aircraft, and the swinging mechanism adds weight and cost, so it is not often used. Another technique that has been used is the delta-wing design, as on Concorde. This has the advantage that it can attain a high angle of attack at low speeds, which generates a vortex on the upper surface that greatly increases lift and gives a lower landing speed. Other kinds of wings are the short thin wing, the swept-back wing, and the swept-forward wing.

Heating: Another problem is the heat generated by friction as the air flows over the aircraft. Most supersonic designs use aluminium alloys such as Duralumin, which are cheap and easy to work but lose their strength quickly at high temperatures. This limits maximum speed to around Mach 2.2. Most supersonic aircraft, including many military fighter aircraft, are designed to spend most of their flight at subsonic speeds, and only to exceed the speed of sound for short periods such as when intercepting an enemy aircraft or dropping a bomb on a ground target. A smaller number, such as the Lockheed SR-71 Blackbird military reconnaissance aircraft and the Concorde supersonic civilian transport, are designed to cruise continuously at speeds above the speed of sound, and with these designs the problems of supersonic flight are more severe.

Engines: Many early supersonic aircraft, including the very first, relied on rocket power to provide the necessary thrust, although rockets burn a lot of fuel and so flight times were short. Early turbojets were more fuel-efficient but did not have enough thrust, and some experimental aircraft were fitted with both a turbojet for low-speed flight and a rocket engine for supersonic flight. The invention of the afterburner, in which extra fuel is burned in the jet exhaust, made these mixed powerplant types obsolete and none entered production. The turbofan engine passes additional cold air around the engine core, further increasing its fuel efficiency, and most supersonic aircraft have been powered by turbofans fitted with afterburners.
Supersonic aircraft usually use low-bypass turbofans, as they give good efficiency below the speed of sound as well as above it; alternatively, if extended supercruise is needed, turbojet engines are desirable, as they give less nacelle drag at supersonic speeds. Another high-speed powerplant is the ramjet, which needs to be flying fairly fast before it will work at all. The Pratt & Whitney J58 engines of the Lockheed SR-71 Blackbird operated in mixed modes, taking off and landing as turbojets, then using the afterburner to accelerate to higher speeds, at which point the inner jet core was shut down and all the air was fed around the bypass duct to the afterburner, so that the engine operated as a ramjet. This allowed the Blackbird to fly at well over Mach 3, faster than any other production aircraft. The heating effect of friction at these speeds meant that a special fuel had to be developed which did not break down in the heat and clog the fuel pipes on its way to the burner.

Transonic flight: Airflow can speed up or slow down locally at different points over an aircraft. In the region around Mach 1, some areas may experience supersonic flow while others are subsonic. This regime is called transonic flight. As the aircraft speed changes, pressure waves will form or move around. This can affect the trim, stability, and controllability of the aircraft, and the designer needs to ensure that these effects are taken into account at all speeds.

Hypersonic flight: Flight at speeds above about Mach 5 is often referred to as hypersonic. In this region the problems of drag and heating are even more acute. It is difficult to make materials which can withstand the forces and temperatures generated by air resistance at these speeds, and hypersonic flight for any significant length of time has not yet been achieved.

Gravity, or gravitation, is a natural phenomenon by which all things are brought (or gravitate) towards one another, irrespective of size: stars, planets, galaxies, and even light and subatomic particles. Gravity has an infinite range, and it cannot be absorbed, transformed, or shielded against. Gravity is responsible for the formation of structures within the universe (namely by drawing together spheres of hydrogen, igniting them under pressure to form stars, and then grouping them together into galaxies); without gravity, the universe would be composed only of equally spaced particles. On Earth, gravity is commonly recognized in the form of weight, where physical objects are harder to pick up and carry the 'heavier' they are.

Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force but as a consequence of the curvature of spacetime caused by the uneven distribution of mass and energy, resulting in time dilation, where time lapses more slowly in strong gravitation. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which postulates that gravity is a force by which two bodies of mass are directly drawn to each other, with an attractive force proportional to the product of their masses and inversely proportional to the square of the distance between them. This is considered to occur over an infinite range, such that all bodies (with mass) in the universe are drawn to each other no matter how far they are apart.
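As a concrete sketch of that inverse-square law, F = G·m₁·m₂/r², the snippet below evaluates the Earth-Moon attraction and the attraction between two 1 kg masses; the masses and separation used are standard approximate values, not figures taken from this text.

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1_kg, m2_kg, r_m):
    """Attractive force in newtons between two point masses r meters apart."""
    return G * m1_kg * m2_kg / r_m**2

# Earth-Moon attraction (approximate standard values), roughly 2e20 N
print(f"Earth-Moon: {gravitational_force(5.972e24, 7.348e22, 3.844e8):.3e} N")

# Two 1 kg masses held 1 m apart: a force far too small to feel
print(f"Two 1 kg masses, 1 m apart: {gravitational_force(1.0, 1.0, 1.0):.3e} N")
```

The second figure, about 7×10⁻¹¹ N, illustrates why gravity is negligible between everyday objects even though it dominates at astronomical scales, as the next paragraph discusses.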
Gravity is the weakest of the four fundamental interactions of nature. The gravitational attraction is approximately 10⁻³⁸ times the strength of the strong force (i.e., gravity is 38 orders of magnitude weaker), 10⁻³⁶ times the strength of the electromagnetic force, and 10⁻²⁹ times the strength of the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles and plays no role in determining the internal properties of everyday matter (but see quantum gravity). On the other hand, gravity is the dominant force at the macroscopic scale and is the cause of the formation, shape, and trajectory (orbit) of astronomical bodies, including those of asteroids, comets, planets, stars, and galaxies. It is responsible for causing the Earth and the other planets to orbit the Sun; for causing the Moon to orbit the Earth; for the formation of tides; for natural convection, by which fluid flow occurs under the influence of a density gradient and gravity; for heating the interiors of forming stars and planets to very high temperatures; for the formation and evolution of the Solar System, stars, and galaxies; and for various other phenomena observed on Earth and throughout the universe. In pursuit of a theory of everything, the merging of general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity has become an area of research.

An aurora is a natural light display in the sky, predominantly seen in the high-latitude (Arctic and Antarctic) regions.[nb 1] Auroras are produced when the magnetosphere is sufficiently disturbed by the solar wind that the trajectories of charged particles in both the solar wind and magnetospheric plasma, mainly in the form of electrons and protons, precipitate them into the upper atmosphere (thermosphere/exosphere), where their energy is lost. The resulting ionization and excitation of atmospheric constituents emits light of varying colour and complexity. The form of the aurora, occurring within bands around both polar regions, is also dependent on the amount of acceleration imparted to the precipitating particles. Precipitating protons generally produce optical emissions as incident hydrogen atoms after gaining electrons from the atmosphere. Proton auroras are usually observed at lower latitudes.[2]