Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2012 June 23
Explanation: As seen from Frösön island in northern Sweden, the Sun set a day after the summer solstice. From that location below the Arctic Circle, it settled slowly behind the northern horizon. During the sunset's final minute, this remarkable sequence of 7 images follows the distorted edge of the solar disk as it just disappears against a distant tree line, capturing both a green and a blue flash. Not a myth even in a land of runes, the colorful but elusive glints are caused by atmospheric refraction enhanced by long, low sight lines and strong atmospheric temperature gradients.
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman. Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U.
|
ASL Literature and Art
This section is a collection of ASL storytelling, poetry, works of art, and other creative works. It also consists of posts on literary aspects of ASL.
Spoken language can convey sound effects in storytelling, whereas sign language can convey cinematic effects in storytelling.
Poetry in sign language has its own rhymes, rhythms, meters, and other features that characterize poetry, showing that poetry is not limited to speech.
Explore ASL literary arts in this section including some visual-linguistic literary works in ASL and discussion.
Selected works of interest
Deconstruct W.O.R.D.: an original poetry performance.
Knowing Fish: poetic narrative video.
Compare three versions of the poem "Spring Dawn," originally written by Meng Hao-jan. The poem was translated into ASL by the literary artist Jolanta Lapiak, both in video and in a one-of-a-kind photographic print. Watch how ASL rhymes arise in this signed poem.
|
May 19, 2008 A vaccine created by University of Rochester Medical Center scientists prevents the development of Alzheimer's disease-like pathology in mice without causing inflammation or significant side effects.
Vaccinated mice generated an immune response to the protein known as amyloid-beta peptide, which accumulates in what are called "amyloid plaques" in brains of people with Alzheimer's. The vaccinated mice demonstrated normal learning skills and functioning memory in spite of being genetically designed to develop an aggressive form of the disease.
The Rochester scientists reported the findings in an article in the May issue of Molecular Therapy, the journal of The American Society of Gene Therapy.
"Our study demonstrates that we can create a potent but safe version of a vaccine that utilizes the strategy of immune response shaping to prevent Alzheimer's-related pathologies and memory deficits," said William Bowers, associate professor of neurology and of microbiology and immunology at the Medical Center and lead author of the article. "The vaccinated mice not only performed better, we found no evidence of signature amyloid plaque in their brains."
Alzheimer's is a progressive neurodegenerative disease associated with dementia and a decline in performance of normal activities. Hallmarks of the disease include the accumulation of amyloid plaques in the brains of patients and the loss of normally functioning tau, a protein that stabilizes the transport networks in neurons. Abnormal tau function eventually leads to another classic hallmark of Alzheimer's, neurofibrillary tangles in nerve cells. After several decades of exposure to these insults, neurons ultimately succumb and die, leading to progressively damaged learning and memory centers in the brain.
The mice that received the vaccines were genetically engineered to express large amounts of amyloid beta protein. They also harbored a mutation that causes the tau-related tangle pathology. Prior to the start of the vaccine study, the mice were trained to navigate a maze using spatial clues. They were then tested periodically during the 10-month study on the amount of time and distance traveled to an escape pod and the number of errors made along the way.
"What we found exciting was that by targeting one pathology of Alzheimer's -- amyloid beta -- we were able to also prevent the transition of tau from its normal form to a form found in the disease state," Bowers said.
The goal of the vaccine is to prompt the immune system to recognize amyloid beta protein and remove it. To create the vaccine, Bowers and the research group use a herpes virus that is stripped of the viral genes that can cause disease or harm. They then load the virus container with the genetic code for amyloid beta and interleukin-4, a protein that stimulates immune responses involving type 2 T helper cells, which are lymphocytes that play an important role in the immune system.
The research group tested several versions of a vaccine. Mice were given three injections of empty virus alone, a vaccine carrying only the amyloid beta genetic code, or a vaccine encoding both amyloid beta and interleukin-4, which was found to be the most effective.
"We have learned a great deal from this ongoing project," Bowers said. "Importantly, it has demonstrated the combined strengths of the gene delivery platform and the immune shaping concept for the creation of customized vaccines for Alzheimer's disease, as well as a number of other diseases. We are currently working on strategies we believe can make the vaccine even safer."
Bowers expects the vaccine eventually to be tested in people, but due to the number of studies required to satisfy regulatory requirements, it could be three or more years before human trials testing this type of Alzheimer's vaccine occur.
Grants from the National Institutes of Health supported the study. In addition to Bowers, authors of the Molecular Therapy article include Maria E. Frazer, Jennifer E. Hughes, Michael A. Mastrangelo and Jennifer Tibbens of the Medical Center and Howard J. Federoff of Georgetown University Medical Center.
|
Nephila jurassica: The biggest spider fossil ever found
Spiders are small arthropods, famous for their elasticity, strength and web-making abilities. For some people, spiders are not welcome in the home; as soon as they see one crawling on the ceiling, the first thought that comes to mind is to swat it at once.
But spiders predate us humans by a long way. And while sometimes spiders are tiny creatures, a team of scientists has discovered the largest spider fossil ever found, in a layer of volcanic ash in Ningcheng County, Inner Mongolia, China. The research was carried out by paleontologist Paul Selden of the University of Kansas and his team.
Named Nephila jurassica, this 165-million-year-old fossil is 2.5 cm in length and has a leg span of almost 9 cm. It is currently the largest known fossilized spider, and is from the family known as Nephilidae, the largest web-weaving spiders alive today.
According to research published online on 20 April 2011 in Biology Letters, this prehistoric spider was female and shows characteristics of the golden orb weaver. Widespread in warmer regions, the golden silk orb weavers are well-known for the fabulous webs they weave. Females of this family weave the largest orb webs known.
"When I first saw it, I immediately realized that it was very unique not only because of its size, but also because the preservation was excellent," said ChungKun Shih, study co-author, and a visiting professor at Capital Normal University in Beijing, China.
According to a press release: “This fossil finding provides evidence that golden orb-webs were being woven and capturing medium to large insects in Jurassic times, and predation by these spiders would have played an important role in the natural selection of contemporaneous insects.”
|
A fossilised little finger discovered in a cave in the mountains of southern Siberia belonged to a young girl from an unknown group of archaic humans, scientists say.
The missing human relatives are thought to have inhabited much of Asia as recently as 30,000 years ago, and so shared the land with early modern humans and Neanderthals.
The finding paints a complex picture of human history in which our early ancestors left Africa 70,000 years ago to rub shoulders with other distant relatives in addition to the stocky, barrel-chested Neanderthals.
The new ancestors have been named “Denisovans” after the Denisova cave in the Altai mountains of southern Siberia where the finger bone was unearthed in 2008.
A “Denisovan” is thus a member of an archaic human group.
|
Tips to Facilitate Workshops Effectively
Facilitators play a very important role in the creation of a respectful, positive learning environment during a workshop. Here you will find some tips to facilitate workshops effectively.
- Make sure everybody has a chance to participate, for example through small group activities or direct questions to different participants. Help the group avoid long discussions between two people that may isolate the rest of the participants. Promote the importance of sharing the space and listening to different voices and opinions.
- Be prepared to make adjustments to the agenda – sometimes you have to cross out activities, but the most important thing is to achieve the general goals of the workshop.
- Do everything possible to have all the logistics ready beforehand so that you can focus on the workshop’s agenda.
- Pay attention to the group’s energy and motivation – Plan activities where everyone is able to participate and to stay active and engaged.
- Provide space for the participants to be able to share their own experiences and knowledge. Remember that each one of us has a lot to learn and a lot to teach.
- Relax and have fun! Be a part of the process. You are learning, too, so you don’t have to know it all or do everything perfectly.
- Be prepared for difficult questions. Familiarize yourself with the topic and know the content of the workshop, but remember you don’t have to know all the answers! You can ask other participants what they know about the topic, or you can find the answers later and share them with the participants after the workshop.
- Focus on giving general information – Avoid answering questions about specific cases. Usually, this can change the direction of the conversation and might be considered as providing legal advice without a license to do so.
- Your work as facilitator is to help the group learn together, not necessarily to present all the information and be the “expert” in the topic.
- Try to be as clear as possible – especially when you are giving the exercises’ instructions. Work as a team with the other facilitators during the whole workshop.
|
A new tool to identify the calls of bat species could help conservation efforts.
Because bats are nocturnal and difficult to observe or catch, the most effective way to study them is to monitor their echolocation calls. These sounds are emitted in order to hear the echo bouncing back from surfaces around the bats, allowing them to navigate, hunt and communicate.
Many different measurements can be taken from each call, such as its minimum and maximum frequency, or how quickly the frequency changes during the call, and these measurements are used to help identify the species of bat.
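As a rough illustration, the sketch below pulls such measurements from a synthetic frequency-modulated chirp. The frame size, hop, and threshold are made-up values for the example; real bat-call analysis software is far more careful about noise and call detection.

```python
import numpy as np

def call_features(signal, sample_rate, frame=256, hop=128, threshold_db=-30.0):
    """Estimate simple call features: minimum frequency, maximum
    frequency, and duration of the active portion of the signal."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
    active = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        ref = spectrum.max()
        if ref <= 0:
            continue
        db = 20 * np.log10(spectrum / ref + 1e-12)
        bins = freqs[db > threshold_db]  # bins near the frame's spectral peak
        if len(bins):
            active.append((bins.min(), bins.max()))
    lows, highs = zip(*active)
    return {"f_min_hz": min(lows), "f_max_hz": max(highs),
            "duration_s": len(active) * hop / sample_rate}

# Synthetic 5 ms downward chirp, loosely bat-like (sampled at 250 kHz so
# the 80 kHz start frequency sits well below Nyquist).
sr = 250_000
t = np.arange(0, 0.005, 1 / sr)
f = 80_000 - (80_000 - 40_000) * t / t[-1]     # sweep 80 kHz -> 40 kHz
call = np.sin(2 * np.pi * np.cumsum(f) / sr)
print(call_features(call, sr))
```

The same per-frame measurements (plus shape of the frequency sweep) are the kind of inputs an identification tool can feed to a classifier.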
However, a paper by an international team of researchers, published in the Journal of Applied Ecology, asserts that poor standardisation of acoustic monitoring limits scientists’ ability to collate data.
Kate Jones, chairwoman of the UK-based Bat Conservation Trust, told the BBC: “Without using the same identification methods everywhere, we cannot form reliable conclusions about how bat populations are doing and whether their distribution is changing. Because many bats migrate between different European countries, we need to monitor bats at a European - as well as country - scale.”
The team selected 1,350 calls from 34 different European bat species from EchoBank, a global echolocation library containing more than 200,000 bat call recordings. This raw data has allowed them to develop the identification tool, iBatsID, which can identify 34 out of 45 European bat species. This free online tool works anywhere in Europe, and its creators claim it can identify most species correctly more than 80% of the time.
There are 18 species of bat residing in the UK, including the common pipistrelle and greater horseshoe bat.
Monitoring bats is vital not just for these species, but for the whole ecosystem. Bats are extremely sensitive to changes in their environment, so if bat populations are declining, it can be an indication that other species might be affected in the future.
|
The common name for sedums is Stonecrop. There is a Stonecrop Nursery in eastern New York which was the first garden created by Frank Cabot. Frank created the Garden Conservancy, an organization which strives to preserve some of our exceptional gardens for posterity. Each year it also runs its Open Days Program which opens gardens to the public throughout the country. Frank Cabot went on to create Les Quatre Vents, an outstanding garden at his family home in Quebec.
There are two sedums that most gardeners grow. The first is Sedum acre, a tiny low-growing groundcover plant with bright yellow flowers; it is being used effectively in the Peace Garden in the plaza between the library and city hall.
The other is Autumn Joy, which is in bloom now and will continue to provide color for months to come. Some references say it requires full sun. Not so! I have it in three locations in my garden. I have several plants growing out of a south-facing wall, but there are tall oaks and maples to the south, so the only time it gets direct sun is in spring before the oaks leaf out; the rest of the year it is in dappled light. Another plant is in the east-facing bed on top of my long stone wall, where it gets only morning sun. The third plant is in my shrub-perennial border, where it gets a bit of sun at midday. Mine is the ordinary run-of-the-mill Autumn Joy, but there are several cultivars offered in nurseries. Among these are Crimson; Iceberg, which has white flowers; Autumn Fire; and Chocolate Drop, which grows only eight inches tall with brown leaves and pink flowers.
There are two native sedums: Roseroot, Sedum rosea, and Wild Stonecrop, Sedum ternatum. A third, Wild Live-forever, Sedum telephioides, grows on cliffs and rocks in Pennsylvania and southward.
|
A Soyuz rocket launched two Galileo satellites into orbit on Friday, marking a crucial step for Europe’s planned navigation system, operator Arianespace announced.
The launch took place at the Kourou space base in French Guiana, at 3:15pm (6:15pm GMT).
Three and three-quarter hours later, the 700kg satellites were placed into orbit.
The new satellites add to the first two in the Galileo navigation system, which were launched on Oct. 21 last year.
Together they create a “mini-constellation.” Four is the minimum number of satellites needed to gain a navigational fix on the ground, using signals from the satellite to get a position for latitude, longitude, altitude and a time reference.
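The reason four is the minimum is that a receiver has four unknowns: three position coordinates plus its own clock bias relative to satellite time. A hedged sketch of the underlying math, with hypothetical satellite positions loosely based on Galileo's orbit radius, solves for all four from four pseudoranges by Gauss-Newton iteration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_fix(sat_positions, pseudoranges, iterations=10):
    """Solve for receiver position (x, y, z) and clock bias b from
    pseudoranges rho_i = |sat_i - p| + C*b, via Gauss-Newton."""
    x = np.zeros(4)  # initial guess: Earth's center, zero clock bias
    sats = np.asarray(sat_positions, float)
    rho = np.asarray(pseudoranges, float)
    for _ in range(iterations):
        diffs = sats - x[:3]
        ranges = np.linalg.norm(diffs, axis=1)
        residual = rho - (ranges + C * x[3])
        # Jacobian: d(range)/d(position) = -unit vector; d/d(bias) = C.
        J = np.hstack([-diffs / ranges[:, None], np.full((len(sats), 1), C)])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x[:3], x[3]

# Hypothetical satellites near Galileo's ~29,600 km orbit radius and a
# receiver on the surface with a 1-microsecond clock error.
r_orbit = 29_600_000.0
sats = [(r_orbit, 0, 0), (0, r_orbit, 0), (0, 0, r_orbit),
        (-r_orbit / np.sqrt(3),) * 3]
truth = np.array([6_371_000.0, 0.0, 0.0])
bias = 1e-6
pr = [np.linalg.norm(np.array(s) - truth) + C * bias for s in sats]
pos, est_bias = solve_fix(sats, pr)
print(pos, est_bias)
```

With fewer than four satellites the system is underdetermined, which is why a fourth signal is needed even though only three spatial coordinates are wanted.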
Galileo will ultimately consist of 30 satellites, six more than the US Global Positioning System.
By 2015, 18 satellites should be in place, which is sufficient for launching services to the public, followed by the rest in 2020, according to the European Space Agency.
It is claimed that the system will be accurate to within one meter. The US Global Positioning System, which became operational in 1995 and is currently being upgraded, is currently accurate to between three and eight meters.
In May, the European Commission said the cost by 2015 would be 5 billion euros (US$6.45 billion).
As a medium-sized launcher, Soyuz complements Europe’s heavyweight Ariane 5 and lightweight Vega rockets.
|
Emacs Lisp uses two kinds of storage for user-created Lisp objects: normal storage and pure storage. Normal storage is where all the new data created during an Emacs session are kept (see Garbage Collection). Pure storage is used for certain data in the preloaded standard Lisp files—data that should never change during actual use of Emacs.
Pure storage is allocated only while temacs is loading the standard preloaded Lisp libraries. In the file emacs, it is marked as read-only (on operating systems that permit this), so that the memory space can be shared by all the Emacs jobs running on the machine at once. Pure storage is not expandable; a fixed amount is allocated when Emacs is compiled, and if that is not sufficient for the preloaded libraries, temacs allocates dynamic memory for the part that didn't fit. The resulting image will work, but garbage collection (see Garbage Collection) is disabled in this situation, causing a memory leak. Such an overflow normally won't happen unless you try to preload additional libraries or add features to the standard ones. Emacs will display a warning about the overflow when it starts. If this happens, you should increase the compilation parameter SYSTEM_PURESIZE_EXTRA in the file src/puresize.h and rebuild Emacs.
This function makes a copy in pure storage of object, and returns it. It copies a string by simply making a new string with the same characters, but without text properties, in pure storage. It recursively copies the contents of vectors and cons cells. It does not make copies of other objects such as symbols, but just returns them unchanged. It signals an error if asked to copy markers.
This function is a no-op except while Emacs is being built and dumped; it is usually called only in preloaded Lisp files.
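As a loose analogy only (Python stand-in classes, not actual Emacs internals), the copy rules just described read roughly like this:

```python
class Symbol:
    """Stands in for an interned Lisp symbol; never copied."""
    def __init__(self, name):
        self.name = name

class Marker:
    """Stands in for a buffer marker, which cannot go into pure storage."""

def purecopy_like(obj):
    # Strings: a real pure copy re-creates the string without its text
    # properties; plain Python strings have none, so return as-is here.
    if isinstance(obj, str):
        return obj
    # Lists stand in for Lisp vectors: contents copied recursively.
    if isinstance(obj, list):
        return [purecopy_like(x) for x in obj]
    # Two-element tuples stand in for cons cells: copied recursively.
    if isinstance(obj, tuple) and len(obj) == 2:
        return (purecopy_like(obj[0]), purecopy_like(obj[1]))
    # Markers cannot be copied; signal an error, as described above.
    if isinstance(obj, Marker):
        raise TypeError("attempt to copy a marker to pure storage")
    # Everything else (symbols, numbers, ...) is returned unchanged.
    return obj

car = Symbol("car")
tree = ["doc", (car, "tail")]
copy = purecopy_like(tree)
print(copy is not tree, copy[1][0] is car)  # prints: True True
```

The key point the analogy captures is that containers are duplicated while identity-bearing objects like symbols pass through unchanged.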
The value of this variable is the number of bytes of pure storage allocated so far. Typically, in a dumped Emacs, this number is very close to the total amount of pure storage available—if it were not, we would preallocate less.
This variable determines whether defun should make a copy of the function definition in pure storage. If it is non-nil, then the function definition is copied into pure storage. This flag is t while loading all of the basic functions for building Emacs initially (allowing those functions to be shareable and non-collectible). Dumping Emacs as an executable always writes nil in this variable, regardless of the value it actually has before and after dumping.
You should not change this flag in a running Emacs.
|
35 - The Philosopher's Toolkit: Aristotle's Logical Works
Peter discusses Aristotle’s pioneering work in logic, and looks at related issues like the ten categories and the famous “sea battle” argument for determinism.
• J. Hintikka, Time and Necessity. Studies in Aristotle's Theory of Modality (Oxford:1973).
• W. Leszl, “Aristotle's Logical Works and His Conception of Logic,” Topoi 23 (2004), 71–100.
• R. Smith, "Logic," in J. Barnes (ed.), The Cambridge Companion to Aristotle (Cambridge: 1995), 27-65.
• S. Waterlow, Passage and Possibility (Oxford: 1982).
On the "sea battle" problem:
• G.E.M. Anscombe, “Aristotle and the Sea Battle,” in J.M.E. Moravcsik (ed.), Aristotle: a Collection of Critical Essays, (1967), reprinted from Mind 65 (1956).
• D. Frede, “The Sea-Battle Reconsidered: a Defence of the Traditional Interpretation,” Oxford Studies in Ancient Philosophy 3 (1985).
• J. Hintikka, “The Once and Future Sea Fight: Aristotle’s Discussion of Future Contingents in de Interpretatione 9,” in his Time and Necessity (see above).
|
scintillation counter
scintillation counter, radiation detector that is triggered by a flash of light (or scintillation) produced when ionizing radiation traverses certain solid or liquid substances (phosphors), among which are thallium-activated sodium iodide, zinc sulfide, and organic compounds such as anthracene incorporated into solid plastics or liquid solvents. The light flashes are converted into electric pulses by a photoelectric alloy of cesium and antimony, amplified about a million times by a photomultiplier tube, and finally counted. Sensitive to X rays, gamma rays, and charged particles, scintillation counters permit high-speed counting of particles and measurement of the energy of incident radiation.
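The "about a million times" amplification follows from the photomultiplier's cascaded dynodes: each dynode emits several secondary electrons per incoming electron, so the overall gain is that ratio raised to the number of dynodes. A quick sketch with typical values (the numbers here are assumptions, not from the article):

```python
# Overall photomultiplier gain from cascaded dynodes: gain = delta ** n,
# where delta is the secondary-emission ratio per dynode and n is the
# number of dynode stages.
delta = 4        # secondary electrons per incident electron (typical)
n_dynodes = 10   # typical dynode count in a photomultiplier tube
gain = delta ** n_dynodes
print(f"{gain:.2e}")  # about a million-fold amplification
```

With these assumed values the gain is 4^10 = 1,048,576, which matches the order of magnitude quoted above.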
|
Digital Audio Networking Demystified
The OSI model helps bring order to the chaos of various digital audio network options.
Credit: Randall Fung/Corbis
Networking has been a source of frustration and confusion for pro AV professionals for decades. Fortunately, the International Organization for Standardization, more commonly referred to as ISO, created the Open Systems Interconnection (OSI) Reference Model in the early 1980s, a seven-layer framework that defines network functions, to help simplify matters.
Providing a common understanding of how to communicate with each layer, the OSI model (Fig. 1) is basically the foundation of what makes data networking work. Although it's not important for AV professionals to know the intricate details of each layer, it is vital to at least have a grasp of the purpose of each layer as well as general knowledge of the common protocols in each one. Let's take a look at some key points.
The Seven Layers
Starting from the bottom up, the seven layers of the OSI Reference Model are Physical, Data Link, Network, Transport, Session, Presentation, and Application. The Physical layer is just that: the hardware's physical connection, described by its electrical characteristics. The Data Link layer is the logical connection, defining the type of network. For example, the Data Link layer defines whether it is an Ethernet or Asynchronous Transfer Mode (ATM) network; there is more than one data network transport protocol. The Data Link layer is divided into two sub-layers: the Media Access Control (MAC) and the Logical Link Control (above the MAC as you move up the OSI Reference Model).
The seven layers of the Open Systems Interconnection (OSI) Reference Model for network functions.
Here is one concrete example of how the OSI model helps us understand networking technologies. Some people assume that any device with a CAT-5 cable connected to it is an Ethernet device. But it is Ethernet's Physical layer that defines an electrical specification and physical connection — CAT-5 terminated with an RJ-45 connector just happens to be one of them. For a technology to fully qualify as an Ethernet standard, it requires full implementation of both the Physical and Data Link layers.
The Network layer — the layer at which network routers operate — “packetizes” the data and provides routing information. The common protocol for this layer is the Internet Protocol (IP).
Layer four is the Transport layer. Keep in mind that this layer has a different meaning in the OSI Reference Model compared to how we use the term “transport” for moving audio around. The Transport layer provides protocols to determine the delivery method. The most popular layer four protocol is Transmission Control Protocol (TCP). Many discuss TCP/IP as one protocol, but actually they are two separate protocols on two different layers. TCP/IP is usually used as the data transport for file transfers or audio control applications.
Comparison of four digital audio technologies using the OSI model as a framework.
TCP provides a scheme in which the receiving device sends an acknowledgement for each packet it receives. If it senses that it is missing a packet of information, it sends a message back to the sender asking for a resend. This feature is great for applications that are not time-dependent, but it is not useful in real-time applications like audio and video.
Streaming media technologies most common on the Web use another method called User Datagram Protocol (UDP), which simply streams the packets; the sender never knows whether they actually arrive. Professional audio applications have historically not used UDP because they are typically Physical layer or Data Link layer technologies, not Transport layer. However, a newcomer to professional audio networking, Australia-based Audinate, has recently become the first professional audio networking technology to use UDP/IP over Ethernet with its product called Dante.
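The fire-and-forget character of UDP is easy to see in code. This minimal sketch (loopback address, arbitrary frame size and sequence header, no relation to Dante's actual wire format) streams three dummy "audio" packets with no acknowledgement of any kind:

```python
import socket

# Receiver: bind a UDP socket on loopback; let the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]
receiver.settimeout(1.0)

# Sender: fire and forget. A sequence number lets the receiver notice
# loss, but unlike TCP nothing is ever acknowledged or retransmitted.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame_bytes = 256 * 2  # e.g. 256 16-bit samples per packet (illustrative)
for seq in range(3):
    frame = seq.to_bytes(4, "big") + bytes(frame_bytes)
    sender.sendto(frame, ("127.0.0.1", port))

received = []
for _ in range(3):
    packet, _ = receiver.recvfrom(4096)
    received.append(int.from_bytes(packet[:4], "big"))
print("frames received:", received)

sender.close()
receiver.close()
```

A real audio-over-UDP system layers timing, clock recovery, and loss concealment on top of this bare datagram service; the point here is only that delivery is unconfirmed.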
The Session and Presentation layers are not commonly used in professional audio networks; therefore, they will not be covered in this article. Because these layers can be important to some integration projects, you may want to research the OSI model further to complete your understanding of this useful tool.
The purpose of the Application layer is to provide the interface tools that make networking useful. It is not used to move audio around the network. It controls, manages, and monitors audio devices on a network. Popular protocols are File Transfer Protocol (FTP), Telnet, Hypertext Transfer Protocol (HTTP), Domain Name System (DNS), and Virtual Private Network (VPN), to name just a few.
Now that you have a basic familiarity with the seven layers that make up the OSI model, let's dig a little deeper into the inner workings of a digital audio network.
Breaking Down Audio Networks
Audio networking can be broken into two main concepts: control and transport. Configuring, monitoring, and actual device control all fall into the control category and use several standard communication protocols. Intuitively, getting digital audio from here to there is the role of transport.
Control applications can be found in standard protocols of the Application layer. Application layer protocols that are found in audio are Telnet, HTTP, and Simple Network Management Protocol (SNMP). Telnet is short for TELetype NETwork and was one of the first Internet protocols. Telnet provides command-line style communication to a machine. One example of Telnet usage in audio is the Peavey MediaMatrix, which uses this technology, known as RATC, as a way to control MediaMatrix devices remotely.
SNMP is a protocol for monitoring devices on a network. There are several professional audio and video manufacturers that support this protocol, which provides a method for managing the status or health of devices on a network. SNMP is a key technology in Network Operation Center (NOC) monitoring. It is an Application layer protocol that communicates to devices on the network through UDP/IP protocols, which can be communicated over a variety of data transport technologies.
Control systems can be manufacturer-specific, such as Harman Pro's HiQnet, QSC Audio's QSControl, or third party such as Crestron's CresNet, where the control software communicates to audio devices through TCP/IP. In many cases, TCP/IP-based control can run on the same network as the audio signal transport, and some technologies (such as CobraNet and Dante) are designed to allow data traffic to coexist with audio traffic.
The organizing and managing of audio bits is the job of the audio Transport. This is usually done by the audio protocol. Aviom, CobraNet, and EtherSound are protocols that organize bits for transport on the network. The transport can be divided into two categories: logical and physical.
Purely physical layer technologies, such as Aviom, use hardware to organize and move digital bits. More often than not, a proprietary chip is used to organize and manage them. Ethernet-based technologies packetize the audio and send it to the Data Link and Physical layers to be transported on Ethernet devices. Ethernet is both a logical and physical technology that packetizes or “frames” the audio in the Data Link layer and sends it to the Physical layer to be moved to another device on the network. Ethernet's Physical layer also has a Physical layer chip, referred to as the PHY chip, which can be purchased from several manufacturers.
Comparing Digital Audio Systems
The more familiar you are with the OSI model, the easier it will be to understand the similarities and differences of the various digital audio systems. For many people, there is a tendency to gloss over the OSI model and just talk about networking-branded protocols. However, understanding the OSI model will bring clarity to your understanding of digital audio networking (Fig. 2).
Due to the integration of pro AV systems, true networking schemes are vitally important. A distinction must be made between audio networking and digital audio transports. Audio networks are defined as those meeting the commonly used standard protocols, where at least the Physical and Data Link layer technologies and standard network appliances (such as hubs and switches) can be used. There are several technologies that meet this requirement using IEEE 1394 (Firewire), Ethernet, and ATM technologies, to name a few. However, because Ethernet is widely deployed in applications ranging from large enterprises to the home, this will be the technology of focus. All other technologies that do not meet this definition will be considered digital audio transport systems, and not a digital audio network.
There are at least 15 schemes for digital audio transport systems and audio networking. Three of the four technologies presented here have been selected because of their wide acceptance in the industry, based on the number of manufacturers that support them.
Let's compare four CAT-5/Ethernet technologies: Aviom, EtherSound, CobraNet, and Dante. This is not to be considered a “shoot-out” between technologies but rather a discussion to gain understanding of some of the many digital system options available to the AV professional.
As previously noted, Aviom is a Physical layer–only technology based on the classifications outlined above. It does use an Ethernet PHY chip, but doesn't meet the electrical characteristics of Ethernet. Therefore, it cannot be connected to standard Ethernet hubs or switches. Aviom uses a proprietary chip to organize multiple channels of audio bits to be transported throughout a system, and it falls in the classification of a digital audio transport system.
EtherSound and CobraNet are both 802.3 Ethernet– compliant technologies that can be used on standard data Ethernet switches. There is some debate as to whether EtherSound technology can be considered a true Ethernet technology because it requires a dedicated network. EtherSound uses a proprietary scheme for network control, and CobraNet uses standard data networking methods. The key difference for both the AV and data professional is that EtherSound uses a dedicated network, and CobraNet does not. There are other differences that may be considered before choosing between CobraNet and EtherSound, but both are considered to be layer two (Data Link) technologies.
Dante uses Ethernet, but it is considered a layer four technology (Transport). It uses UDP for audio transport and IP for audio routing on an Ethernet transport, commonly referred to as UDP/IP over Ethernet.
At this point you may be asking yourself why does the audio industry have so many technologies? Why can't there be one standard like there is in the data industry?
The answer to the first question relates to time. Audio requires synchronous delivery of bits, but early Ethernet networks weren't much concerned with time. Ethernet is asynchronous, meaning there is no concern about when and how data arrives as long as it gets there. Therefore, putting digital audio on a data network requires a way to add a timing mechanism. Time is an issue in another sense, in that your options depend on the technology or market knowledge available at the time you develop your solution. When and how you develop your solution leads to the question of a single industry standard.
Many people don't realize that the data industry does in fact have more than one standard: Ethernet, ATM, FiberChannel, and SONET. Each layer of the OSI model has numerous protocols for different purposes. The key is that developers follow the OSI model as a framework for network functions and rules for communicating between them. If the developer wants to use Ethernet, he or she is required to have this technology follow the rules for communicating to the Data Link layer, as required by the Ethernet standard.
Because one of the key issues for audio involves time, it's important to use time wisely.
There are two types of time we need to be concerned with in networking: clock time and latency. Clock time in this context is a timing mechanism broken down into measurable units, such as milliseconds. In digital audio systems, latency is the time from when a bit of audio enters a system until it comes out the other side. Latency has many causes, but arguably the root cause in audio networks is the design of the timing mechanism. In addition, there is a tradeoff between the timing method and bandwidth: as a general rule of thumb, as the resolution of the timing mechanism increases, so does the bandwidth required from the network.
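The tradeoff can be made concrete with a little arithmetic. In this sketch the sample rate, word size, and header figures are illustrative assumptions, not taken from any of the technologies discussed: smaller packets give finer timing and lower latency, but spend a larger fraction of the wire on per-packet headers.

```python
SAMPLE_RATE = 48_000      # samples per second (assumed)
BYTES_PER_SAMPLE = 3      # 24-bit audio (assumed)
HEADER_BYTES = 42         # rough Ethernet + IP + UDP header cost (assumed)

def packet_latency_ms(samples_per_packet):
    """Time to fill one packet with audio -- a floor on network latency."""
    return 1000 * samples_per_packet / SAMPLE_RATE

def overhead_ratio(samples_per_packet):
    """Fraction of wire bandwidth spent on headers rather than audio."""
    payload = samples_per_packet * BYTES_PER_SAMPLE
    return HEADER_BYTES / (HEADER_BYTES + payload)

for n in (16, 64, 256):
    print(n, round(packet_latency_ms(n), 3), round(overhead_ratio(n), 3))
```

With these assumptions, a 64-sample packet takes about 1.333 ms to fill (close to CobraNet's figure below), while a 16-sample packet cuts latency fourfold but nearly half of its bytes on the wire are headers.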
Ethernet, being an asynchronous technology, requires a timing method to be added to support the synchronous nature of audio. The concepts and methodology of clocking networks for audio are key differences among the various technologies.
CobraNet uses a timing mechanism called a beat packet. This packet is sent at 1.33-millisecond intervals and communicates with CobraNet devices; therefore, the latency of a CobraNet audio network can't be less than 1.33 milliseconds. CobraNet was introduced in 1995, when large-scale DSP-based digital systems started replacing analog designs in the market. Because the "sound system in a box" was new, these systems faced great scrutiny. Delay was noticeable in some time-critical applications and was considered a drawback of digital systems. However, many believe latency is an exaggerated issue in most applications where digital audio systems are deployed. In fact, this topic could be an article unto itself.
A little history of digital systems and networking provides some insight into why several networking technologies are available today. In the late '90s, there were two "critical" concerns in the digital audio industry: Year 2000 compliance (Y2K) and latency. To many audio pros, using audio networks like CobraNet seemed impossible because of the delay, at that time approximately 5 milliseconds, or in video terms, less than a frame of video.
Enter EtherSound. Introduced in 2001, it addressed the issue of latency by providing an Ethernet networking scheme with lower latency, greater bit depth, and a higher sampling rate than CobraNet. The market timing and concern over latency gave EtherSound an excellent entry point. But because reducing latency to 124 microseconds limits the bandwidth available for data traffic, a 100-Mbps EtherSound network requires a dedicated network. Later, to meet market demands for lower latency, CobraNet introduced variable latency, with 1.33 milliseconds as the minimum. As the discussion so far shows, for these Ethernet technologies the bit depth and sample rate are tied to the clocking system.
Audio is not the only industry that needs real-time clocking schemes. Communications, military, and industrial applications also require multiple devices to be connected on a network and function in real time. A group formed from these markets took on the issue of real-time clocking while leveraging the widely deployed Ethernet technology. The outcome, in 2002, was the IEEE 1588 standard for a real-time clocking system on Ethernet networks.
As a late entry to the networking party, Audinate's Dante comes to the market with the advantage of using newer technologies like IEEE 1588 to solve many of the current challenges in networking audio. Using this clocking technology on Ethernet allows Dante to provide sample-accurate timing and synchronization while achieving latency as low as 34 microseconds. Coming to the market later also has the benefit of Gigabit networking being widely supported, which provides the increased bandwidth that ultra-low latency requires. It should be noted that EtherSound does have a Gigabit version, and CobraNet works on Gigabit infrastructure with added benefits, but it is currently a Fast Ethernet technology.
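For a sense of how IEEE 1588 recovers a common clock, the sketch below implements the standard's basic two-way timestamp exchange. The timestamp values are invented for illustration, and real implementations timestamp in hardware for accuracy, but the arithmetic is the textbook PTP offset/delay calculation:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it.
    Assumes a symmetric network path (a core PTP assumption)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock error vs. master
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Example: slave clock runs 50 microseconds fast; true path delay is 10 us.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=60e-6, t3=100e-6, t4=60e-6)
print(offset, delay)   # ~5e-05 and ~1e-05 seconds
```

Once each device knows its offset from the master, it can discipline its local sample clock, which is what makes sample-accurate playback across separate boxes possible.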
Dante provides a flexible solution to many of the current tradeoffs between latency and bandwidth, because it can support different latencies, bit depths, and sample rates in the same system. For example, a user can assign low-latency, higher-bandwidth transport to in-ear monitoring while using a higher-latency assignment where latency is less of a concern (such as front of house), thereby reducing the overall network bandwidth requirement.
The developers of CobraNet and Dante are both working to advance software so that AV professionals and end users can configure, route, and manage audio devices on a network. The goal is to make audio networks "plug-and-play" for those who don't want to know anything about networking technologies. One advance to note is "device discovery," where the software finds all of the audio devices on the network so you don't have to configure them in advance. The software also has advanced features for those who want to dive into the details of their audio system.
Advances in digital audio systems and networking technologies will continue to meet market applications and their specific requirements. Aviom's initial focus was to create a personal monitoring system, and it developed a digital audio transport to better serve this application; that low-latency transport made it a perfect fit for many live applications. CobraNet provides the AV professional with a solution to integrate audio, video, and data systems on an enterprise switched network. EtherSound came to the market by providing a low-latency audio transport using standard 802.3 Ethernet technology. Dante comes to a market shaped by significant change and growth, leveraging Gigabit networking and new technologies like IEEE 1588 to solve many of the challenges of using Ethernet in real-time systems.
Networking audio and video can seem chaotic, but an understanding of the OSI model helps bring order to the chaos. It not only clarifies the various types of technology, but also gives AV and data professionals a common language. Keeping it simple by using the OSI model as the foundation and breaking audio networking down into two functional parts, control and transport, will help you determine which networking technology best suits your particular application.
Brent Harshbarger is the founder of m3tools located in Atlanta. He can be reached at [email protected].
|
"From the creators of Animachines, a playful way for kids to learn actions, animals, and opposites."
Opposites are everywhere in our world. A day at the park will reveal busy kids and active animals doing opposite actions. There, on the slide: Sam goes up as Hiroko zips down. And there, in the tree: one squirrel climbs up and another scurries down. Not far away, Daddy is quiet but Simon is LOUD, while a mother bird is silent as she feeds her chirping babies.
In Kids Do, Animals Too, ten pairs of opposites are explained to young children. Each pair features people and a different animal:
- fast, slow – dogs
- in, out – mice
- up, down – squirrels
- on, off – frogs
- ahead, behind – ants
- quiet, loud – birds
- under, over – spiders
- wet, dry – ducks
- toward, away – butterflies
- asleep, awake – squirrels, bats
The softly colored artwork features familiar park settings, and includes look-and-find details that cleverly connect each active scene. Meanwhile, the aptly crafted language keeps the action moving and the learning fun.
|
Here we present a chronological tour of dialysis from the beginning.
All photos by Jim Curtis; descriptions courtesy of Baxter.
The first practical artificial kidney was developed during World War II by the Dutch physician Willem Kolff. The Kolff kidney used a 20-meter long tube of cellophane sausage casing as a dialyzing membrane. The tube was wrapped around a slatted wooden drum. Powered by an electric motor, the drum revolved in a tank filled with dialyzing solution. The patient’s blood was drawn through the cellophane tubing by gravity as the drum revolved. Toxic molecules in the blood diffused through the tubing into the dialyzing solution. Complete dialysis took about six hours. The Kolff kidney effectively removed toxins from the blood, but because it operated at low pressure, it was unable to remove excess fluid from the patient’s blood. Modern dialysis machines are designed to filter out excess fluid while cleansing the blood of wastes.
Blood was drained from the patient into a sterile container. Anticlotting drugs were added, and the filled container was hung on a post above the artificial kidney and connected to the cellulose acetate tubing that was wound around the wooden drum. A motor turned the drum, pulling the blood through the tubing by gravity.
The tank underneath the drum was filled with dialyzing fluid. As the blood-filled tubing passed through this fluid, waste products from the blood diffused through the tubing into the dialyzing fluid. The cleansed blood collected in a second sterile container at the other end of the machine. When all of the blood had passed through the machine, this second container was raised to drain the blood back into the patient.
George Thorn, MD, of the Peter Bent Brigham Hospital in Boston, MA, invited Willem Kolff, MD, to meet with Carl Walters, MD, and John Merrill, MD, to redesign and modify the original Kolff Rotating Drum Kidney. The artificial kidney was to be used to support the first proposed transplant program in the United States. The device was built by Edward Olson, an engineer, who would produce over forty of them, shipped all over the world.
Cellulose acetate tubular membrane, the same type of membrane that is used as sausage casing, was wrapped around the drum and connected to latex tubing that would be attached to the patient’s bloodstream. The drum would be rotating in the dialyzing fluid bath that is located under the drum.
The patient’s blood was propelled through the device by the “Archimedes screw principle” and a pulsatile pump. A split coupling was developed to connect the tubing to the membrane, a component necessary to prevent the tubing and membrane from twisting. This connection is at the inlet and outlet of the rotating drum.
The membrane surface area could be adjusted by increasing or decreasing the number of wraps of tubing. The Plexiglas™ hood was designed to control the temperature of the blood. The cost of this device was $5,600 in 1950.
Murphy WP Jr., Swan RC Jr., Walter CW, Weller JM, Merrill JP. Use of an artificial kidney. III. Current procedures in clinical hemodialysis. J Lab Clin Med. 1952 Sep; 40(3): 436-44.
Leonard Skeggs, PhD, and Jack Leonards, MD, developed the first parallel flow artificial kidney at Case Western Reserve in Cleveland, OH. The artificial kidney was designed to have a low resistance to blood flow and to have an adjustable surface area.
Two sheets of membrane were sandwiched between two rubber pads to reduce the blood volume and to ensure uniform distribution of blood across the membrane, maximizing efficiency. Multiple layers were used. The device required a great deal of time to construct, and it often leaked; leaks were stopped with bone wax.
The device had a very low resistance to blood flow and it could be used without a blood pump. If more than one of these units were used at a time, a blood pump was required. Skeggs was able to remove water from the blood in the artificial kidney by creating a siphon on the effluent of the dialyzing fluid. This appears to be the first reference to negative pressure dialysis.
This technology was later adapted by Leonard Skeggs to do blood chemistries. It was called the SMA 12-60 Autoanalyzer.
This artificial kidney was developed to reduce the amount of blood outside of the body and to eliminate the need for pumping the blood through the device.
Guarino used cellulose acetate tubing. The dialyzing fluid was directed inside the tubing, and the blood, entering the device from the top, cascaded down the membrane. The metal tubing inside the membrane gave it support.
The artificial kidney had a very low blood volume, but it had limited use because of concern that the dialyzing fluid might leak into the blood.
Von Garrelts had constructed a dialyzer in 1948 by wrapping a cellulose acetate membrane around a core. The layers of membrane were separated by rods. It was very bulky and weighed over 100 pounds.
William Inouye, MD, took this concept and miniaturized it by wrapping the cellulose acetate tubing around a beaker and separating the layers with fiberglass screening. He placed this “coil” in a Presto Pressure Cooker in order to enclose it and control the temperature. In addition, he made openings in the pot for the dialyzing fluid. With the use of a vacuum on the dialysate leaving the pot, he was able to draw the excess water out of the patient’s blood. A blood pump was required to overcome resistance within the device.
This device was used clinically and when it was used in a closed circuit, the exact amount of fluid removed could be determined.
Inouye WY, Engelberg J. A simplified artificial dialyzer and ultrafilter. Surg Forum. Proceedings of the Forum Sessions, Thirty-ninth Clinical Congress of the American College of Surgeons, Chicago, Illinois, October, 1953; 4: 438-42.
|
Protractors - Innovation
While only one patent model for a protractor survives in the Smithsonian collections—from an inventor with a colorful personal history—several of the other objects also provide examples of technical innovation. For instance, some are manufactured versions of patented inventions. Others were named for the person with whom they were associated, even if that engineer or craftsman laid no claim to designing that protractor.
- This brass semicircular protractor is divided by single degrees and marked by tens from 10° to 90° to 10°. It is attached with metal screws to a set of brass parallel rules. Brass S-shaped hinges connect the rules to each other. The bottom left screw on the parallel rules does not attach to the bottom piece. A rectangular brass arm is screwed to the center of the protractor. A thin brass piece screwed to the arm is marked with a small arrow for pointing to the angle markings. The protractor is stored in a wooden case, which also contains a pair of metal dividers (5-1/4" long).
- The base of the protractor is signed: L. Dod, Newark. Lebbeus Dod (1739–1816) manufactured mathematical instruments in New Jersey and is credited with inventing the "parallel rule protractor." He served as a captain of artillery during the Revolutionary War, mainly by making muskets. His three sons, Stephen (1770–1855), Abner (1772–1847), and Daniel (1778–1823), were also noted instrument and clock makers. The family was most associated with Mendham, N.J. (where a historic marker on N.J. Route 24 indicates Dod's house), but Dod is known to have also lived at various times in Newark.
- ID number MA*310890 is a similar protractor and parallel rule.
- References: Bethuel Lewis Dodd and John Robertson Burnet, "Biographical Sketch of Lebbeus Dod," in Genealogies of the Male Descendants of Daniel Dod . . . 1646–1863 (Newark, N.J., 1864), 144–147; Alexander Farnham, "More Information About New Jersey Toolmakers," The Tool Shed, no. 120 (February 2002), http://www.craftsofnj.org/Newjerseytools/Alex%20Farnham%20more%20Jeraey%20Tools/Alex%20Farnham.htm; Deborah J. Warner, “Surveyor's Compass,” National Museum of American History Physical Sciences Collection: Surveying and Geodesy, http://americanhistory.si.edu/collections/surveying/object.cfm?recordnumber=747113; Peggy A. Kidwell, "American Parallel Rules: Invention on the Fringes of Industry," Rittenhouse 10, no. 39 (1996): 90–96.
- date made: late 1700s
- Dod, Lebbeus
- ID Number
- accession number
- catalog number
- Data Source: National Museum of American History, Kenneth E. Behring Center
|
Simple technique appears to be safe and effective, review suggests
By Robert Preidt
MONDAY, Oct. 15 (HealthDay News) -- A technique called the "mother's kiss" is a safe and effective way to remove foreign objects from the nostrils of young children, according to British researchers.
In the mother's kiss, a child's mother or other trusted adult covers the child's mouth with their mouth to form a seal, blocks the clear nostril with their finger, and then blows into the child's mouth. The pressure from the breath may expel the object in the blocked nostril.
Before using it, the adult should explain the technique to the child so that he or she is not frightened. If the first attempt is unsuccessful, the technique can be tried several times, according to a review published in the current issue of the CMAJ (Canadian Medical Association Journal).
For their report, researchers in Australia and the United Kingdom examined eight case studies in which the mother's kiss was used on children aged 1 to 8 years.
"The mother's kiss appears to be a safe and effective technique for first-line treatment in the removal of a foreign body from the nasal cavity," Dr. Stephanie Cook, of the Buxted Medical Centre in England, and colleagues concluded. "In addition, it may prevent the need for general anesthesia in some cases."
Further research is needed to compare various positive-pressure techniques and test how effective they are in different situations where objects are in various locations and have spent different lengths of time in the nasal passages, the researchers noted in a journal news release.
The U.S. National Library of Medicine has more about foreign objects in the nose.
SOURCE: CMAJ (Canadian Medical Association Journal), news release, Oct. 15, 2012
Copyright © 2012 HealthDay. All rights reserved.
|
The Atlas of Climate Change: Mapping the World's Greatest Challenge
University of California Press, 2007. Science. 112 pages.
Today's headlines and recent events reflect the gravity of climate change. Heat waves, droughts, and floods are bringing death to vulnerable populations, destroying livelihoods, and driving people from their homes.
Rigorous in its science and insightful in its message, this atlas examines the causes of climate change and considers its possible impact on subsistence, water resources, ecosystems, biodiversity, health, coastal megacities, and cultural treasures. It reviews historical contributions to greenhouse gas levels, progress in meeting international commitments, and local efforts to meet the challenge of climate change.
With more than 50 full-color maps and graphics, this is an essential resource for policy makers, environmentalists, students, and everyone concerned with this pressing subject.
The Atlas covers a wide range of topics, including:
* Warning signs
* Future scenarios
* Vulnerable populations
* Renewable energy
* Emissions reduction
* Personal and public action
Copub: Myriad Editions
|
"One death is a tragedy. A million deaths is a statistic." -Joseph Stalin
When figuring out how people will respond to a foreign tragedy, it comes down to three things: location, location, location.
And TV cameras, too.

The September 11, 2001 homicide attacks killed about 3,000 people, yet they have had more impact on American politics and foreign policy than anything since World War II. And to the great extent that American foreign policy affects the rest of the world, they had a huge impact on international affairs as well.

While 3,000 is a pretty big death toll for a single incident, there have been other wars and attacks with greater loss of life that had a relatively minuscule influence on American or international affairs. Why? Because those attacks didn't occur in the heart of New York City. The international response would've been significantly less if the attack had been launched in Kathmandu, Bogotá, or Algiers (all in countries with homegrown terrorist problems).

The Asian tsunami of 2004 had a devastating effect, claiming an estimated 283,000 lives and displacing over a million people. It generated an international response that was probably unprecedented in scale. As someone who regularly reads articles on underfunded international crisis appeals, I was heartened by the response to the tsunami. That it hit easily accessible coastal regions, including many tourist areas, made it easier for TV crews to get images. That Europeans and Americans were among the victims, if a tiny fraction, ensured coverage in the western media.
But if I told you there was a conflict that has cost almost 15 times as many lives as the tsunami, could you name it? If I told you there was a crisis that, in mortality terms, was the equivalent of three 9/11s every week for the last 7 years, would you know which one I'm talking about? I bet few westerners could, even though it's by far the deadliest conflict of the last 60 years.
The war in the Democratic Republic of the Congo (formerly Zaire) is killing an estimated 38,000 people each month, according to the British medical journal The Lancet. And if not for the involvement of humanitarian non-governmental organizations and UN relief agencies, the toll would be much higher. Most of the deaths are not caused by violence but by malnutrition and preventable diseases after the collapse of health services, the study said, notes the BBC. Since the war began in 1998, some 4 million people have died, making it the world's deadliest war since 1945, it said.
A peace deal has ended most of the fighting but armed gangs continue to roam the east, killing and looting.
The political process in the DRC is slowly inching in the right direction. Voters in the country recently approved a new constitution to replace the one imposed on it by the outgoing Belgian colonialists. EU officials praised the referendum as free and fair, probably the first truly open poll in the country's history. Elections are scheduled for June of this year.
However, instability reigns in much of the country, particularly the east, and the central government has never been strong in this gigantic country. There are 17,000 UN peacekeepers doing the best they can, but the country is the size of Western Europe. (By contrast, the Americans and British have ten times as many troops in Iraq, a country less than one-fifth the size of the DRC. And we know how many problems they're having there.)
And this shows why war should ALWAYS be a last resort. Most of the deaths have not been directly caused by war (bullet wounds, landmines, etc.). Most have been caused by factors provoked by war's instability and destruction: the destruction of infrastructure like roads and medical clinics, the inability to get to sources of clean water, the fear of leaving the house to tend the fields or go to the market.
38,000 people a month. If you get pissed off at Howard Dean or Pat Robertson, spare a little outrage for this.
And maybe a few bucks.
WANNA HELP? TAKE YOUR PICK
-Doctors Without Borders
-World Food Program
-Catholic Relief Services
|
Sedimentary rock covers 70% of the Earth's surface. Erosion is constantly changing the face of the Earth: weathering agents (wind, water, and ice) break rock into smaller pieces that flow down waterways until they settle to the bottom permanently. These sediments (pebbles, sand, clay, and gravel) pile up and form new layers. After hundreds or thousands of years, the layers become pressed together to form sedimentary rock.
Sedimentary rock can form in two different ways. When layer after layer of sediment forms it puts pressure on the lower layers which then form into a solid piece of rock. The other way is called cementing. Certain minerals in the water interact to form a bond between rocks. This process is similar to making modern cement. Any animal carcasses or organisms that are caught in the layers of sediment will eventually turn into fossils. Sedimentary rock is the source of quite a few of our dinosaur findings.
There are four common types of sedimentary rock: sandstone, limestone, shale, and conglomerate. Each is formed in a different way from different materials. Sandstone is formed when grains of sand are pressed together. Sandstone may be the most common type of rock on the planet. Limestone is formed by the tiny pieces of shell that have been cemented together over the years. Conglomerate rock consists of sand and pebbles that have been cemented together. Shale forms under still waters like those found in bogs or swamps. The mud and clay at the bottom is pressed together to form it.
Sedimentary rock has the following general characteristics:
- it is classified by texture and composition
- it often contains fossils
- it occasionally reacts with acid
- it has layers that can be flat or curved
- it is usually composed of material that is cemented or pressed together
- it shows a great variety of color
- its particle size varies
- it has pores between pieces
- it can have cross bedding, worm holes, mud cracks, and raindrop impressions
This is only meant to be a brief introduction to sedimentary rock. There are many more in-depth articles, and entire books have been written on the subject. Here is a link to a very interesting introduction to rocks. Here on Universe Today there is a great article on how sedimentary rocks show very old signs of life. Astronomy Cast has a good episode on the Earth's formation.
|
by George Heymann | @techeadlines
Google has started a new page on Google Plus to share its vision of what its augmented reality glasses could be. It is soliciting suggestions from users on what they would like to see from Project Glass.
“We think technology should work for you—to be there when you need it and get out of your way when you don’t.
A group of us from Google[x] started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment. We’re sharing this information now because we want to start a conversation and learn from your valuable input. So we took a few design photos to show what this technology could look like and created a video to demonstrate what it might enable you to do.”
January 18, 2012 • 11:07 am
$3 million grant from the Bill & Melinda Gates Foundation will fund development
by Eric Klopfer
MIT Education Arcade
With a new $3 million grant from the Bill & Melinda Gates Foundation, the MIT Education Arcade is about to design, build and research a massively multiplayer online game (MMOG) to help high school students learn math and biology.
In contrast to the way that Science, Technology, Engineering and Math (STEM) are currently taught in secondary schools — which often results in students becoming disengaged and disinterested in the subjects at an early age — educational games such as the one to be developed give students the chance to explore STEM topics in a way that deepens their knowledge while also developing 21st-century skills.
by George Heymann
No sooner had BlackBerry made its official BlackBerry Bold 9900 announcement than T-Mobile tweeted that it would be carrying the device. The 9900 will be T-Mobile's first 4G-capable BlackBerry. It is rumored to be available on T-Mobile in the June/July timeframe.
The Blackberry Bold 9900/9930 will be a touch screen device with a 1.2 GHz processor, 8 GB of onboard memory, expandable to 32GB, HSPA+ 14.4 capable, 5 megapixel camera with flash, 720p HD video recording, dual-band WiFi, a built-in compass (magnetometer) and Near Field Communication (NFC) technology featuring the new Blackberry 7 OS.
|
Culinary arts is the art of preparing and cooking foods. The word "culinary" is defined as something related to, or connected with, cooking. A culinarian is a person working in the culinary arts; a culinarian working in restaurants is commonly known as a cook or a chef. Culinary artists are responsible for skilfully preparing meals that are as pleasing to the palate as to the eye. They are required to have a knowledge of the science of food and an understanding of diet and nutrition. They work primarily in restaurants, delicatessens, hospitals and other institutions. Kitchen conditions vary depending on the type of business: restaurant, nursing home, etc. The table arts, or the art of dining, can also be called culinary arts.
Careers in culinary arts
Related careers
Below is a list of the wide variety of culinary arts occupations.
- Consulting and Design Specialists – Work with restaurant owners in developing menus, the layout and design of dining rooms, and service protocols.
- Dining Room Service – Manage restaurants, cafeterias, clubs, etc. Diplomas and degree programs in restaurant management are offered by colleges around the world.
- Food and Beverage Controller – Purchase and source ingredients in large hotels as well as manage the stores and stock control.
- Entrepreneurship – Develop and invest in businesses, such as bakeries, restaurants, or specialty foods (such as chocolates, cheese, etc.).
- Food and Beverage Managers – Manage all food and beverage outlets in hotels and other large establishments.
- Food Stylists and Photographers – Work with magazines, books, catalogs and other media to make food visually appealing.
- Food Writers and Food Critics – Communicate with the public on food trends, chefs and restaurants through newspapers, magazines, blogs, and books. Notables in this field include Julia Child, Craig Claiborne and James Beard.
- Research and Development Kitchens – Develop new products for commercial manufacturers and may also work in test kitchens for publications, restaurant chains, grocery chains, or others.
- Sales – Introduce chefs and business owners to new products and equipment relevant to food production and service.
- Instructors – Teach aspects of culinary arts in high school, vocational schools, colleges, recreational programs, and for specialty businesses (for example, the professional and recreational courses in baking at King Arthur Flour).
Occupational outlook
The occupational outlook for chefs, restaurant managers, dietitians, and nutritionists is fairly good, with "as fast as the average" growth. Increasingly, a college education with formal qualifications is required for success in this field. The culinary industry continues to be male-dominated, with the latest statistics showing that only 19% of all 'chefs and head cooks' are female.
Notable culinary colleges around the world
- JaganNath Institute of Management Sciences, Rohini, Delhi, India
- College of Tourism & Hotel Management, Lahore, Punjab, Pakistan
- Culinary Academy of India, Hyderabad, Andhra Pradesh, India
- ITM School of Culinary Arts, Mumbai, Maharashtra, India
- Welcomgroup Graduate School of Hotel Administration, Manipal, Karnataka, India
- Institute of Technical Education (College West) – School of Hospitality, Singapore
- ITM (Institute of Technology and Management) – Institute of Hotel Management, Bangalore, Karnataka, India
- Apicius International School of Hospitality, Florence, Italy
- Le Cordon Bleu, Paris, France
- École des trois gourmandes, Paris, France
- HRC Culinary Academy, Bulgaria
- Institut Paul Bocuse, Ecully, France
- Mutfak Sanatlari Akademisi, Istanbul, Turkey
- School of Culinary Arts and Food Technology, DIT, Dublin, Ireland
- Scuola di Arte Culinaria Cordon Bleu, Florence, Italy
- Westminster Kingsway College (London)
- University of West London (London)
- School of Restaurant and Culinary Arts, Umeå University (Sweden)
- Camosun College (Victoria, BC)
- Canadore College (North Bay, ON)
- The Culinary Institute of Canada (Charlottetown, PE)
- Georgian College (Owen Sound, ON)
- George Brown College (Toronto, ON)
- Humber College (Toronto, ON)
- Institut de tourisme et d'hôtellerie du Québec (Montreal, QC)
- Niagara Culinary Institute (Niagara College, Niagara-on-the-Lake, ON)
- Northwest Culinary Academy of Vancouver (Vancouver, BC)
- Nova Scotia Community College (Nova Scotia)
- Pacific Institute of Culinary Arts (Vancouver, BC)
- Vancouver Community College (Vancouver, BC)
- Culinary Institute of Vancouver Island (Nanaimo, BC)
- Sault College (Sault Ste. Marie, ON)
- Baltimore International College, Baltimore, Maryland
- California Culinary Academy, San Francisco, California
- California School of Culinary Arts, Pasadena, California
- California State, Pomona, California
- California State University Hospitality Management Education Initiative
- Chattahoochee Technical College in Marietta, Georgia
- Cooking and Hospitality Institute of Chicago
- Coosa Valley Technical College, Rome, Georgia
- Culinard, the Culinary Institute of Virginia College
- Cypress Community College Hotel, Restaurant Management, & Culinary Arts Program in Anaheim
- Classic Cooking Academy, Scottsdale, Arizona
- Center for Kosher Culinary Arts, Brooklyn, New York
- Culinary Institute of America in Hyde Park, New York
- Culinary Institute of America at Greystone in St. Helena, California
- The Culinary Institute of Charleston, South Carolina
- L'Ecole Culinaire in Saint Louis, Missouri and Memphis, Tennessee
- Glendale Community College (California)
- International Culinary Centers in NY and CA
- Institute for the Culinary Arts at Metropolitan Community College, Omaha, Nebraska
- Johnson & Wales University, College of Culinary Arts
- Kendall College in Chicago, Illinois
- Lincoln College of Technology
- Manchester Community College in Connecticut
- New England Culinary Institute in Vermont
- Orlando Culinary Academy
- Pennsylvania Culinary Institute
- The Restaurant School at Walnut Hill College, Philadelphia, Pennsylvania
- Scottsdale Culinary Institute
- Secchia Institute for Culinary Education: Grand Rapids Community College, Grand Rapids, MI
- The Southeast Culinary and Hospitality College in Bristol, Virginia
- Sullivan University Louisville, Kentucky
- Los Angeles Trade–Technical College
- Texas Culinary Academy
- Central New Mexico Community College, Albuquerque, NM
- AUT University (Auckland University of Technology)
- MIT (Manukau Institute of Technology)
- Wintec, Waikato Institute of Technology
References
- McBride, Kate, ed. The Professional Chef / The Culinary Institute of America, 8th ed. Hoboken, NJ: John Wiley & Sons, Inc., 2006.
Further reading
- Beal, Eileen. Choosing a career in the restaurant industry. New York: Rosen Pub. Group, 1997.
- Institute for Research. Careers and jobs in the restaurant business: jobs, management, ownership. Chicago: The Institute, 1977.
External links
|
With the arrival of cold weather, the Occupational Safety and Health Administration is reminding employers to take necessary precautions to protect workers from the serious, and sometimes fatal, effects of carbon monoxide exposure.
Recently, a worker in a New England warehouse was found unconscious and seizing, suffering from carbon monoxide poisoning. Several other workers at the site also became sick. All of the windows and doors were closed to conserve heat, there was no exhaust ventilation in the facility, and very high levels of carbon monoxide were measured at the site.
Every year, workers die from carbon monoxide poisoning, usually while using fuel-burning equipment and tools in buildings or semi-enclosed spaces without adequate ventilation. This can be especially true during the winter months when employees use this type of equipment in indoor spaces that have been sealed tightly to block out cold temperatures and wind. Symptoms of carbon monoxide exposure can include everything from headaches, dizziness and drowsiness to nausea, vomiting or tightness across the chest. Severe carbon monoxide poisoning can cause neurological damage, coma and death.
Sources of carbon monoxide can include anything that uses combustion to operate, such as gas generators, power tools, compressors, pumps, welding equipment, space heaters and furnaces.
To reduce the risk of carbon monoxide poisoning in the workplace, employers should install an effective ventilation system, avoid the use of fuel-burning equipment in enclosed or partially-enclosed spaces, use carbon monoxide detectors in areas where the hazard is a concern and take other precautions outlined in OSHA's Carbon Monoxide Fact Sheet. For additional information on carbon monoxide poisoning and preventing exposure in the workplace, see OSHA's Carbon Monoxide Poisoning Quick Cards (in English and Spanish).
Under the Occupational Safety and Health Act of 1970, employers are responsible for providing safe and healthful workplaces for their employees. OSHA's role is to ensure these conditions for America's working men and women by setting and enforcing standards, and providing training, education and assistance. For more information, visit www.osha.gov.
|
Sitting around a table at Meyers Center BOCES in Saratoga, students, teaching assistants, and teachers were busy crocheting. They weren’t making afghans or shawls, but rather turning plastic into possibility. Little did they know they were also making history.
Profiles of 17 human rights defenders from around the globe, with links to accompanying lesson plans.
Browse the curriculum. Download the complete 158-page guide or individual lesson plans.
Speak Truth to Power: Voices from Beyond the Dark brings the voices of human rights defenders into your classroom.
Find out how you can support your students in their efforts to defend human rights.
The Speak Truth to Power project will enter the video documentary world with a contest challenging middle and high school students in New York to create a film based on the experience of courageous human rights defenders around the world.
It is with great sadness that the family of Professor Wangari Maathai announces her passing away on 25th September, 2011, at the Nairobi Hospital, after a prolonged and bravely borne struggle with cancer.
New to the STTP site this month is a lesson plan based on the work of Congressman John Lewis, who has dedicated his life to protecting human rights, securing civil liberties, and building what he described as “The Beloved Community” in America.
|
The Ancient Forests of North America are extremely diverse. They include the boreal forest belt stretching between Newfoundland and Alaska, the coastal temperate rainforest of Alaska and Western Canada, and the myriad of residual pockets of temperate forest surviving in more remote regions.
Together, these forests store huge amounts of carbon, helping to stabilise the climate. They also provide a refuge for large mammals such as the grizzly bear, puma and grey wolf, which once ranged widely across the continent.
In Canada it is estimated that ancient forest provides habitat for about two-thirds of the country's 140,000 species of plants, animals and microorganisms. Many of these species are yet to be studied by science.
The Ancient Forests of North America also provide livelihoods for thousands of indigenous people, such as the Eyak and Chugach people of Southcentral Alaska, and the Hupa and Yurok of Northern California.
Of Canada's one million indigenous people (First Nation, Inuit and Métis), almost 80 percent live in reserves and communities in boreal or temperate forests, where historically the forest provided their food and shelter, and shaped their way of life.
Through the Trees - The truth behind logging in Canada (PDF)
On the Greenpeace Canada website:
Interactive map of Canada's Boreal forest (Flash)
Fun animation that graphically illustrates the problem (Flash)
Defending America's Ancient Forests
|
This week’s illustration post focuses on perhaps the most popular illustrated genre of the Renaissance: the emblem book. Emblem books were a genre developed in the early 16th century as digestible and curious works combining an image, a motto and an explanatory text. The genre remained popular on the continent well into the 17th century, although, strangely, it never found any real footing in the U.K. The illustrations were most commonly woodcuts and the mottoes Latin, although Greek and vernacular mottoes were common as well.
The intention of most emblem books was to deliver a moral lesson through text and image, but often the connection between these two elements is obscure. Many projects have developed over the last decade to provide digital repositories for these books, and they are worth exploring (Emblematica Online [University of Illinois and Herzog August Bibliothek], the University of Glasgow and the Emblem Project Utrecht are good examples).
Although St Andrews 17th century holdings remain largely unexplored, I did come across a wonderful example of a Spanish emblem book published at the height of the genre’s popularity. This week’s post features the emblems from Sebastián de Covarrubias y Orozco’s Emblemas morales (1610). This work was published under the direction of Don Francisco Gómez de Sandoval, 1st Duke of Lerma, shortly after Covarrubias had recovered from a serious illness. This is perhaps Covarrubias’s most popular publication on emblems, but he is most well-known for his Tesoro de la lengua castellana o española (1611).
From the beginning of this book it is quite obvious that the author’s roots, both as a canon in the Catholic Church and as a keen lexicographer, are quite influential. However, Covarrubias does stray off into the weirder, more esoteric world of the emblem books of the day: dragons, serpents, snake-eating deer and oversized tops (above, which look like the ship from Flight of the Navigator to this blogger). Covarrubias also draws on everyday scenes of fishermen and farmers toiling in the field to root this emblem book in the real world.
I’ve sampled here some of my favourites from this book; however, a full scan is available at Emblematica Online. This work was not Covarrubias’s first foray into the world of emblem books: in 1589 he published a work, also entitled Emblemas morales, which provides almost 100 pages of text about emblems and their origins and then provides 100 examples afterward (none of which are repeated in his 1610 work). This work was republished in 1604 with the same text but with new woodcuts.
Covarrubias’s 1610 Emblemas morales, however, is a completely new work, featuring new emblems and mottoes with shorter verse explanations.
|
The latest weapon unleashed to battle California's growing energy demand comes in the form of free software. The State of California Energy Commission's Public Interest Energy Research (PIER) program is responsible for the development of a free software package dubbed the Sensor Placement and Orientation Tool (SPOT). The purpose of this software is to help designers establish correct photosensor placement relative to a given daylighting and electric lighting design.
Daylighting systems, which use natural lighting to supplement electric lighting, are sensitive to photosensor placement and performance. However, until now, there have been no easy-to-use tools to help designers predict performance and determine optimum sensor positioning. Using an Excel worksheet interface, SPOT accounts for variables such as room geometry, surface reflectances, solar orientation, electric lighting layout, and window design to help determine the best location for photosensors. It also helps designers comply with the daylighting requirements in California's Title 24 energy code, which calls for separate controls for daylit areas and offers substantial energy budget credits for automatic daylighting controls.
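The placement principle behind such a tool can be sketched simply: a good photosensor location is one whose signal tracks workplane illuminance across varying sky conditions, so a dimming controller driven by that sensor responds to the daylight the occupants actually receive. The snippet below is only a toy illustration of that idea, not SPOT's actual model; the function name and input arrays are hypothetical.

```python
import numpy as np

def best_sensor_position(sensor_readings, workplane_lux):
    """Toy photosensor-placement scorer (not SPOT's algorithm).

    sensor_readings: (n_positions, n_times) array of simulated or
                     measured signals at candidate ceiling positions.
    workplane_lux:   (n_times,) array of daylight illuminance at the
                     workplane over the same sky conditions.
    Returns the index of the candidate whose signal correlates best
    with workplane illuminance, plus that correlation.
    """
    corrs = [np.corrcoef(reading, workplane_lux)[0, 1]
             for reading in sensor_readings]
    best = int(np.argmax(corrs))
    return best, corrs[best]
```

A real tool like SPOT additionally folds in room geometry, surface reflectances, solar orientation, window design, and the electric lighting layout when generating the candidate signals it compares.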
This software may be downloaded for free at www.archenergy.com/SPOT/index.html.
|
The Handel and Haydn Society is a chorus and period instrument orchestra in the city of Boston, Massachusetts. Founded in 1815, it is one of the oldest performing arts organizations in the USA. Most widely known for its performances of George Frideric Handel's Messiah, the group gave the work's American premiere in 1818 and has performed the piece annually since 1854.
The Handel and Haydn Society was founded as an oratorio society in Boston on April 20, 1815, by Gottlieb Graupner, Thomas Smith Webb, Amasa Winchester, and Matthew S. Parker, a group of Boston merchants and musicians who were eager to improve the performance of choral music in a city that, at the time, offered very little music of any kind. The name of the Society reflects the founders' wish to bring Boston audiences the best of the old (G.F. Handel) and the best of the new (Haydn) in concerts of the highest artistic quality. The first performance by the Society was held on Christmas night in 1815 at King's Chapel, and included a chorus of 90 men and 10 women.
From its earliest years, the Handel and Haydn Society established a tradition of innovation, performing the American premieres of G.F. Handel's Messiah in 1818, Haydn's The Creation in 1819, Verdi's Requiem in 1878, Amy Beach's Mass in 1892, and numerous other works by G.F. Handel, Mozart, J.S. Bach, and others.
The Society was also an early promoter of composer Lowell Mason, publishing his first collection of hymns and later electing him as the group's President. Mason's music was extremely influential and much of it is still performed today. He is best known for composing the music for the popular carol, Joy to the World. Mason was also instrumental in establishing music education in the USA.
Throughout the 19th and 20th centuries, Handel and Haydn staged music festivals to commemorate its own anniversaries and such significant events as the end of the Civil War. The Society organized America's first great music festival in 1857, and in later years gave benefit concerts to aid the Union Army, victims of the Chicago fire in 1871, and Russian Jewish refugees in 1882. Over the years, the Handel and Haydn Society has performed for such luminaries as President James Monroe, Grand Duke Alexis of Russia, Admiral Dewey, and Queen Elizabeth II.
By the mid 20th century, the Handel and Haydn Society had begun to move toward vocal and instrumental authenticity. In 1967, an acknowledged expert in Baroque performance practice, Thomas Dunn, became the Society's Artistic Director and transformed the group's large amateur chorus into one of approximately 30 professional singers. In 1986, Christopher Hogwood succeeded Thomas Dunn as Artistic Director and added period-instrument performances and a new verve to the high choral standards of the Society. In October 1986, Handel and Haydn presented its first period instrument orchestra concert under Christopher Hogwood's baton, and by the 1989-1990 season all of the Society's concerts were performed on period instruments. The Society has remained committed to historically informed performance following the end of Christopher Hogwood's tenure as Artistic Director in the spring of 2001.
Handel and Haydn Society announced the appointment of Harry Christophers as Artistic Director on September 26, 2008. Harry Christophers, a regular guest conductor of the Society, began his tenure as Artistic Director with the 2009-2010 season and is the organization's thirteenth artistic leader since its founding in 1815. The initial term of Harry Christophers' contract with the Society extends through the 2011-2012 season.
Harry Christophers has conducted the Handel and Haydn Society each season since his first appearance in September 2006, when he led a sold-out performance in the Esterházy Palace at the Haydn Festival in Eisenstadt, Austria. Held in the same location where Haydn lived and worked for nearly 40 years, this Austrian appearance marked the Society's first in Europe in its then 191-year history. Harry Christophers returned to conduct the Society in Boston in a critically acclaimed performance of G.F. Handel's Messiah in December 2007, followed by an appearance at Symphony Hall in January 2008. Founder and Music Director of the renowned UK-based choir and period-instrument orchestra The Sixteen, he is also in demand as a guest conductor for leading orchestras and opera companies worldwide and in the USA.
Welsh conductor Grant Llewellyn joined Handel and Haydn in the 2001-2002 season as Music Director. Grant Llewellyn did not have a background in period-instrument performance prior to joining the Society, but has won wide acclaim from critics and musicians for his energetic and compelling conducting. He has been noted for his charming personality and for his ability to produce exceptional performances from the Society's musicians.
During his tenure as Music Director, the Society produced several recordings that met with considerable commercial success, including Peace and All Is Bright, both of which appeared on Billboard Magazine's Classical Top 10 chart. The Handel and Haydn Society was also awarded its first Grammy Award for a collaboration with the San Francisco choral ensemble Chanticleer on the 2003 recording of Sir John Tavener's Lamentations and Praises.
The Society also entered into a multi-year relationship with Chinese director Chen Shi-Zheng starting in 2003. This has yielded fully-staged productions of Monteverdi's Vespers (in 2003) and Orfeo (in 2006) that Chen sees as the start of a cycle of Monteverdi's surviving operas and his Vespers. The 2006 Orfeo was co-produced by the English National Opera. Chen also directed a production of Purcell's Dido and Aeneas in 2005 for Handel and Haydn. Grant Llewellyn concluded his tenure in 2006.
In July 2007, the ensemble made a historic appearance at London's Royal Albert Hall as part of the BBC Proms concert series, presenting Haydn's oratorio Die Jahreszeiten (The Seasons), with Sir Roger Norrington conducting.
|
Capital and oldest city of the kingdom of Navarra, Spain. Next to Tudela, it possessed the most important Jewish community. The Jewry was situated in the Navarreria, the oldest quarter of the city. When Navarra came under the guardianship of Philip the Fair, and the Pamplonians refused to pay him homage, the Jewry was destroyed by the French troops, the houses were plundered, and many Jews were killed (1277). In 1280, upon the complaint of the Jews, the city was directed to restore to them the confiscated properties and to assign to them other ground for building purposes. In 1319 the city council, in conjunction with the bishop, to whom the Jews were tributary, had resolved, in compliance with the wish of King Charles I., to rebuild the Jewry; but this was not done until 1336.
The new Jewry was near the Puente de la Magdalena, and was surrounded with strong walls to guard it against invasion. In the Jewry was the Alcaceria, where the Jews carried on considerable traffic in silk goods, while in a separate street were stores in which they sold jewelry, shoes, etc. Some of the Jews were artisans, and were employed by the royal court; others practised medicine. The physician Samuel, in recognition of his services as surgeon to the English knight Thomas Trivet, was presented by King Charles in 1389 with several houses situated in the Jewry which had formerly been in the possession of Bonafos and his son Sento, two jugglers. In 1433 the physician Maestre Jacob Aboazar, who had his house near the Great Synagogue, accompanied the queen on a journey abroad. Contemporary with him was the physician Juce (Joseph).
In 1375 the Jews of Pamplona numbered about 220 families, and paid a yearly tax of 2,592 pounds to the king alone. They had, as in Estella and Tudela, their independent magistracy, consisting of two presidents and twenty representatives. Gradually the taxes became so burdensome that they could no longer be borne. In 1407 King Charles III. issued an order that the movable property of the Jews should be sold at auction, and the most notable members placed under arrest, unless they paid the tax due to him. To escape these frequent vexations many of the Jews resolved to emigrate; and a part of the Jewry was thus left uninhabited. No sooner had Leonora ascended the throne as coregent (1469) than she issued an order to the city magistrate to require the Jews to repair the dilapidated houses.
The policy of Ferdinand and Isabella triumphed in the Cortes of Pamplona in 1496. Two years later the Jews were expelled from Navarra. Many emigrated; and those who were unable to leave the city embraced Christianity. Ḥayyim Galipapa was rabbi of Pamplona in the fourteenth century; and the scholar Samuel b. Moses Abbas was a resident of the city.
|
Most exercise-related injuries have the same basic cause - the overstressing of muscles, tendons, ligaments, bones, and other tissue. With sufficient precautions and care, risks can be minimized. Warming up slowly and cooling down properly can help prevent many stress injuries. To be effective, your warm-up and cool-down exercises should use the same muscles as your main exercise. For example, if you jog, begin by walking for several minutes, then jog slowly, before breaking into a full stride. Do this before and after your regular exercise. Every athlete should include a 15-minute warm up and cool down program as part of the workout. This will increase flexibility, reduce muscle soreness, and improve overall performance. Other good principles to follow during exercise are: know your body's limitations and warning signals; drink plenty of water; and never combine heavy eating with heavy exercising. For more information on the benefits of warming up and cooling down, consult a physician.
|
A group of researchers has developed a way to identify pirated movies by reducing the original to a signature genetic code. The system can match videos to their source even when they have been altered or had their colors changed, an area where many video piracy mechanisms fall short.
Drs. Alex and Michael Bronstein and Professor Ron Kimmel have come up with a way to isolate a certain subset of data from video files that serves an analogous role to a fingerprint at a crime scene. While the creators haven't published research on this exact project in order to guard the proprietary technology, it works by applying a series of grids over the film to reduce it to a series of numbers.
Once the film has been systematically reduced, copyright holders can take the "DNA signature" of the video and scan sites that host pirated videos for it. According to the three researchers, the signature should be able to find correct matches even if the videos' borders have been changed, commercials have been added, or scenes have been edited out, which is a capability that sites that patrol for piracy, like YouTube, currently lack.
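Though the researchers' exact method is unpublished, the grid-reduction idea resembles standard perceptual hashing. The sketch below is a generic illustration of that family of techniques, assuming frames arrive as 2-D grayscale NumPy arrays; none of these names or parameters come from the actual system.

```python
import numpy as np

def frame_signature(frame, grid=(4, 4)):
    """Reduce one grayscale frame to a coarse bit signature.

    The frame is divided into a grid of cells; each cell's mean
    intensity is compared against the frame-wide median, giving one
    bit per cell. Because the bits depend only on the ordering of
    cell means, the signature survives uniform brightness and
    contrast changes.
    """
    h, w = frame.shape
    gh, gw = grid
    # Crop to a multiple of the grid, then partition into cells.
    cells = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    means = cells.mean(axis=(1, 3))
    return (means > np.median(means)).astype(np.uint8).ravel()

def similarity(sig_a, sig_b):
    """Fraction of bits two signatures share (1.0 = identical)."""
    return float(np.mean(sig_a == sig_b))
```

A full system would hash frames throughout the video, concatenate the per-frame signatures, and search hosted videos for long runs of high similarity, which is what would make matching robust to inserted commercials or deleted scenes.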
There are no details on the limitations of the system, such as video length, style, or quality. "We have a fully working prototype and have established a company that commercializes it," Dr. Bronstein told Ars.
While the website is no more revealing about how the video DNA matching works, Bronstein adds that they've already had a few companies interested in licensing the technology.
|
In the era of Facebook and Twitter, it's clear we love talking about ourselves—indeed, the topic makes up 40% of our everyday conversation. Now scientists can explain why: Doing so stimulates the brain the same way sex, food, and cash do. Researchers scanned subjects' brains and found that parts linked to those pleasures lit up when subjects chatted about themselves.
Researchers also offered study subjects up to 4 cents per question to talk about topics other than themselves, like President Obama. Turns out "people were even willing to forgo money in order to talk about themselves," says a researcher. Subjects would part with 17% to 25% of the potential cash for an opportunity for self-disclosure, the Wall Street Journal reports.
|
by Nate Jones, Vertebrate Ecology Lab
(still in the Bering Sea) … Of course the bad weather I’ve been writing about was nothing compared to what happens on the Bering during the months of February or March, and the Gold Rush fishes regularly during that time of year, so I had complete faith in the seaworthiness of the ship and the judgment and skill of the crew. I took comfort in that thought, and stumbled down to my bunk for what became a grueling 72 hours of bumps, rolls, and queasy stomachs. During this stormy time the crew exchanged watches at the helm, keeping the ship pointed into the fury.
We all hoped for the best, but by the time the seas had calmed to (a more manageable?) 8-10’, the hungry ocean had damaged and ripped off much of our scientific equipment, snapping several ¼” steel bolts and ripping welds clean apart!
The Gold Rush itself weathered this storm in fine shape (wish we could say the same of our scientific equipment!), and there were no major injuries to anyone on board. It really was quite a minor event in the context of the Bering Sea; just another blowy, bumpy day or two out on the water.
But, it impressed me and I couldn’t help contemplating darker scenarios – what happens when there is a true emergency? What if someone had been swept overboard, or, worse yet, what if the ship itself had been damaged or taken on water and started to go down? Such things do happen, although not as frequently now as they have in the past (coast guard regulations and improvements in technology and crew training have contributed to much increased safety).
In my next post I’ll put up some images from training exercises that are routinely undertaken to help prepare crew and passengers (scientists) for emergencies at sea…
|
Teachers, register your class or environmental club in our annual Solar Oven Challenge! Registration begins in the fall, and projects must be completed in the spring to be eligible for prizes and certificates.
Who can participate?
GreenLearning's Solar Oven Challenge is open to all Canadian classes. Past challenges have included participants from grade 3 through to grade 12. Older students often build solar ovens as part of the heat unit in their Science courses. Other students learn about solar energy as a project in an eco-class or recycling club.
How do you register?
1. Registration is now open to Canadian teachers. To register, send an email to Gordon Harrison at GreenLearning. Include your name, school, school address and phone number, and the grade level of the students who will be participating.
2. After you register, you will receive the Solar Oven Challenge Teacher's Guide with solar oven construction plans. Also see re-energy.ca for construction plans, student backgrounders, and related links on solar cooking and other forms of renewable energy. At re-energy.ca, you can also see submissions, photos and recipes from participants in past Solar Oven Challenges.
3. Build, test and bake with solar ovens!
4. Email us photos and descriptions of your creations by the deadline (usually the first week of June).
5. See your recipes and photos showcased at re-energy.ca. Winners will be listed there and in GreenLearning News.
|
Rurik Dynasty
Rurik Dynasty, princes of Kievan Rus and, later, Muscovy who, according to tradition, were descendants of the Varangian prince Rurik, who had been invited by the people of Novgorod to rule that city (c. 862); the Rurik princes maintained their control over Kievan Rus and, later, Muscovy until 1598.
Rurik’s successor Oleg (d. 912) conquered Kiev (c. 882) and established control of the trade route extending from Novgorod, along the Dnieper River, to the Black Sea. Igor (allegedly Rurik’s son; reigned 912–945) and his successors—his wife, St. Olga (regent 945–969), and their son Svyatoslav (reigned 945–972)—further extended their territories; Svyatoslav’s son Vladimir I (St. Vladimir; reigned c. 980–1015) consolidated the dynasty’s rule.
Vladimir compiled the first Kievan Rus law code and introduced Christianity into the country. He also organized the Kievan Rus lands into a cohesive confederation by distributing the major cities among his sons; the eldest was to be grand prince of Kiev, and the brothers were to succeed each other, moving up the hierarchy of cities toward Kiev, filling vacancies left by the advancement or death of an elder brother. The youngest brother was to be succeeded as grand prince by his eldest nephew whose father had been a grand prince. This succession pattern was generally followed through the reigns of Svyatopolk (1015–19); Yaroslav the Wise (1019–54); his sons Izyaslav (1054–68; 1069–73; and 1077–78), Svyatoslav (1073–76), and Vsevolod (1078–93); and Svyatopolk II (son of Izyaslav; reigned 1093–1113).
The successions were accomplished, however, amid continual civil wars. In addition to the princes’ unwillingness to adhere to the pattern and readiness to seize their positions by force instead, the system was upset whenever a city rejected the prince designated to rule it. It was also undermined by the tendency of the princes to settle in regions they ruled rather than move from city to city to become the prince of Kiev.
In 1097 all the princes of Kievan Rus met at Lyubech (northwest of Chernigov) and decided to divide their lands into patrimonial estates. The succession for grand prince, however, continued to be based on the generation pattern; thus, Vladimir Monomakh succeeded his cousin Svyatopolk II as grand prince of Kiev. During his reign (1113–25) Vladimir tried to restore unity to the lands of Kievan Rus; and his sons (Mstislav, reigned 1125–32; Yaropolk, 1132–39; Vyacheslav, 1139; and Yury Dolgoruky, 1149–57) succeeded him eventually, though not without some troubles in the 1140s.
Nevertheless, distinct branches of the dynasty established their own rule in the major centres of the country outside Kiev—Halicz, Novgorod, and Suzdal. The princes of these regions vied with each other for control of Kiev; but when Andrew Bogolyubsky of Suzdal finally conquered and sacked the city (1169), he returned to Vladimir (a city in the Suzdal principality) and transferred the seat of the grand prince to Vladimir. Andrew Bogolyubsky’s brother Vsevolod III succeeded him as grand prince of Vladimir (reigned 1176–1212); Vsevolod was followed by his sons Yury (1212–38), Yaroslav (1238–46), and Svyatoslav (1246–47) and his grandson Andrew (1247–52).
Alexander Nevsky (1252–63) succeeded his brother Andrew; and Alexander’s brothers and sons succeeded him. Furthering the tendency toward fragmentation, however, none moved to Vladimir but remained in their regional seats and secured their local princely houses. Thus, Alexander’s brother Yaroslav (grand prince of Vladimir, 1264–71) founded the house of Tver, and Alexander’s son Daniel founded the house of Moscow.
After the Mongol invasion (1240) the Russian princes were obliged to seek a patent from the Mongol khan in order to rule as grand prince. Rivalry for the patent, as well as for leadership in the grand principality of Vladimir, developed among the princely houses, particularly those of Tver and Moscow. Gradually, the princes of Moscow became dominant, forming the grand principality of Moscow (Muscovy), which they ruled until their male line died out in 1598.
|
By A. M. Sullivan
CHAPTER IX. (continued)
From the Atlas and Cyclopedia of Ireland (1900)
It was to the rugged and desolate Hebrides that Columba turned his face when he accepted the terrible penance of Molaise. He bade farewell to his relatives, and, with a few monks who insisted on accompanying him whithersoever he might go, launched his frail currochs from the northern shore. They landed first, or rather were carried by wind and stream, upon the little isle of Oronsay, close by Islay; and here for a moment they thought their future abode was to be. But when Columba, with the early morning, ascended the highest ground on the island, to take what he thought would be a harmless look toward the land of his heart, lo! on the dim horizon a faint blue ridge—the distant hills of Antrim! He averts his head and flies downward to the strand! Here they cannot stay, if his vow is to be kept. They betake them once more to the currochs, and steering further northward, eventually land upon Iona, thenceforth, till time shall be no more, to be famed as the sacred isle of Columba! Here landing, he ascended the loftiest of the hills upon the isle, and "gazing into the distance, found no longer any trace of Ireland upon the horizon." In Iona accordingly he resolved to make his home. The spot from whence St. Columba made this sorrowful survey is still called by the islesmen in the Gaelic tongue, Carn-cul-ri-Erinn, or the Cairn of Farewell—literally, The back turned on Ireland.
Writers without number have traced the glories of Iona. Here rose, as if by miracle, a city of churches; the isle became one vast monastery, and soon much too small for the crowds that still pressed thither. Then from the parent isle there went forth to the surrounding shores, and all over the mainland, off-shoot establishments and missionary colonies (all under the authority of Columba), until in time the Gospel light was ablaze on the hills of Albyn; and the names of St. Columba and Iona were on every tongue from Rome to the utmost limits of Europe!
"This man, whom we have seen so passionate, so irritable, so warlike and vindictive, became little by little the most gentle, the humblest, the most tender of friends and fathers. It was he, the great head of the Caledonian Church, who, kneeling before the strangers who came to Iona, or before the monks returning from their work, took off their shoes, washed their feet, and after having washed them, respectfully kissed them. But charity was still stronger than humility in that transfigured soul. No necessity, spiritual or temporal, found him indifferent. He devoted himself to the solace of all infirmities, all misery and pain, weeping often over those who did not weep for themselves.
"The work of transcription remained until his last day the occupation of his old age, as it had been the passion of his youth; it had such an attraction for him, and seemed to him so essential to a knowledge of the truth that, as we have already said, three hundred copies of the Holy Gospels, copied by his own hand, have been attributed to him."
But still Columba carried with him in his heart the great grief that made life for him a lengthened penance. "Far from having any prevision of the glory of Iona, his soul," says Montalembert, "was still swayed by a sentiment which never abandoned him—regret for his lost country. All his life he retained for Ireland the passionate tenderness of an exile, a love which displayed itself in the songs which have been preserved to us, and which date perhaps from the first moment of his exile. . . . 'Death in faultless Ireland is better than life without end in Albyn.' After this cry of despair follow strains more plaintive and submissive."
"But it was not only in these elegies, repeated and perhaps retouched by Irish bards and monks, but at each instant of his life, in season and out of season, that this love and passionate longing for his native country burst forth in words and musings; the narratives of his most trustworthy biographers are full of it. The most severe penance which he could have imagined for the guiltiest sinners who came to confess to him, was to impose upon them the same fate which he had voluntarily inflicted on himself—never to set foot again upon Irish soil! But when, instead of forbidding to sinners all access to that beloved isle, he had to smother his envy of those who had the right and happiness to go there at their pleasure, he dared scarcely trust himself to name its name; and when speaking to his guests, or to the monks who were to return to Ireland, he would only say to them, 'you will return to the country that you love.' "
"We are now," said Dr. Johnson, "treading that illustrious island which was once the luminary of the Caledonian regions; whence savage clans and roving barbarians derived the benefits of knowledge and the blessings of religion. ... Far from me and from my friends be such frigid philosophy as may conduct us indifferent and unmoved over any ground which has been dignified by wisdom, bravery, or virtue. That man is little to be envied whose patriotism would not gain force upon the plain of Marathon, or whose piety would not grow warmer among the ruins of Iona."—Boswell's "Tour to the Hebrides."
From a sad, comfortless childhood Giles Truelove developed into a reclusive and uncommunicative man whose sole passion was books. For so long they were the only meaning to his existence. But when fate eventually intervened to have the outside world intrude upon his life, he began to discover emotions that he never knew he had.
A story for the genuine booklover, penned by an Irish bookseller under the pseudonym of Ralph St. John Featherstonehaugh.
|
Diabetic Ketoacidosis Symptoms
The body's cells need two things to function, and the blood stream delivers both to the front door of the cell: oxygen and glucose. The oxygen is invited in, but the glucose needs a key to open the door. The insulin molecule is that key. When we eat, the body senses the level of glucose in the blood stream and secretes just the right amount of insulin from the pancreas so that the cells and the body can function.
People with diabetes don't have the luxury of that auto-sensing. They need to balance the amount of glucose intake with the amount of insulin that needs to be injected. Not enough insulin and the glucose levels in the blood stream start to rise; too much insulin, and they plummet.
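The auto-sensing described above can be sketched as a toy feedback loop. This is purely illustrative: the numbers, the function name, and the proportional-response rule are all invented for the example and have nothing to do with real physiology or dosing. The point it shows is only the qualitative one made in the text: a pancreas that senses glucose brings it back to baseline, while a fixed injected dose can undershoot or overshoot.

```python
# Toy model of glucose regulation -- illustrative only, not a medical model.
def simulate(meal_glucose, insulin_dose=None, steps=10):
    """Return blood glucose over time (arbitrary units).

    If insulin_dose is None, insulin is "secreted" automatically in
    proportion to how far glucose is above a set point (healthy case).
    Otherwise a fixed dose is applied each step (injected insulin).
    """
    glucose = 90.0 + meal_glucose          # baseline plus the meal
    history = [glucose]
    for _ in range(steps):
        if insulin_dose is None:
            insulin = max(0.0, 0.5 * (glucose - 90.0))  # auto-sensing pancreas
        else:
            insulin = insulin_dose                       # fixed injection
        glucose = max(glucose - insulin, 0.0)  # insulin lets cells take up glucose
        history.append(glucose)
    return history

healthy = simulate(meal_glucose=60)          # settles back toward ~90
too_little = simulate(60, insulin_dose=1.0)  # glucose stays high
too_much = simulate(60, insulin_dose=25.0)   # glucose plummets
```

The "healthy" run converges to the set point because the response scales with the error; the fixed-dose runs drift high or crash, mirroring the rise and plummet described above.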
The consequences of hypoglycemia (hypo=low, glycemia=glucose in the blood) are easy to understand. No energy source, no function - and the first organ to go is the brain. It needs glucose to function, and without it the brain shuts down quickly; confusion, lethargy, and coma follow. Interestingly, brain cells don't need insulin to open their doors to glucose, so when people develop coma from low blood sugar, they waken almost instantaneously upon treatment. Blood sugar is one of the first things checked at the scene of a comatose patient, because it's so easy to fix and very embarrassing for an EMT to miss.
|
History of Rapa Nui
Hotu Matu'a was the highest-ranking leader of Easter Island; he is believed to have brought his people to the island on two boats more than 1,000 (perhaps even 1,700) years ago.
Western literature refers to Hotu Matu'a as "Rapa Nui's first king". Although it is known that there were no real kings in Polynesia, only tribal rulers, the term remains in use. The locals considered Hotu Matu'a the "Ariki Mau", meaning something like a "major leader" or "highest ruler".
The settlement of the island
We can affirm with certainty that Easter Island has been inhabited for over 1,200 years. But specialists still debate when the first settlers, led by the legendary Hotu Matu'a, arrived.
Specialists consider that the island was colonized sometime between AD 300 and 800. Pollen analysis, DNA analysis and studies of local legends point to various periods within this interval. Of course, some people might have arrived earlier and others later, but it is generally accepted by the wider public that the island was uninhabited before AD 300, despite scientific evidence suggesting that Easter Island was inhabited before Hotu Matu'a's arrival.
According to the legends, the Ariki Mau, Hotu Matu'a arrived from an island or group of island called Hiva. Linguistic analysis of the Rapa Nui language suggests that the place of origin was the Marquesas Islands.
Legends say that a person called Hau-Maka (Haumaka) had a dream in which his spirit travelled to an island located far away in order to look for new land for the ruler Hotu Matu'a.
Hau-Maka's dream trip took him to Mata Ki Te Rangi, meaning "Eyes that Look to the Sky", an island located in the center of the Earth. This piece of land was called "Te Pito 'o te Kainga", meaning "center of the Earth".
After Hau-Maka woke up, he told Hotu Matu'a, the supreme leader, about his dream, and the ruler ordered 7 men to travel to the island. So they did, and they returned to Hiva with the news that there was indeed new land far away. Following this discovery, Hotu Matu'a traveled with two boats of settlers and colonized what we today call "Easter Island".
Several hundred years ago a bloody conflict broke out on Easter Island. This is attributed to a variety of factors: remoteness, overpopulation, deforestation and tribal rivalry.
Easter Island is one of the most isolated islands in the world. Even today, flying in a modern airplane from Santiago, Chile, it takes 5-6 hours to get there. Imagine how difficult the journey was some 1,500 years ago!
It is believed that this island was formed by ancient volcanic eruptions.
Roggeveen, the Dutch discoverer of the island, estimated 2,000 to 3,000 inhabitants in 1722. But specialists who have analyzed the bones and the legends have concluded that between the 1500s and the 1700s there could have been as many as 10,000 to 15,000 people living on the island.
Overpopulation could have been the primary reason why the locals started fighting each other. This is believed to have led to the splitting of the population into several tribes and families. Some think there were two tribes fighting; others believe there were multiple families.
During the fights, many moai statues and ahu platforms were destroyed, magnificent statues pulled down. Perhaps it was revenge against the god(s), or simply anger at the builders, or anger towards the ancestors who had cut down so many trees in order to move the statues.
The tribal wars even led to cannibalism.
During Roggeveen's visit it was noticeable that life on the island had degenerated due to deforestation and the depletion of the island's natural resources.
Today Easter Island has very few trees. The locals used up the wood for firewood and for boat and house construction, and cut down large numbers of trees to make the tools used to move the moai and put them into place.
Once there were forests of palm trees on Rapa Nui; now the only palm trees that exist were planted, as were all the other trees, which were brought here from other islands and from the Americas.
The disappearance of the forests coincided with the conflict on the island. There was not enough wood to make fishing boats, so the islanders could forget about going out to fish, and about leaving the island. The loss of the trees also led to a decline in the number of birds, which could no longer build nests. The locals found themselves stuck for good on what they believed to be the "Center of the Earth".
The discovery of Easter Island
On Sunday, April 5th, 1722, the first Europeans arrived at the island its inhabitants called "Te Pito 'o Te Henua".
Because it was discovered on Easter, it was named "Easter Island".
The discoverer was Jacob Roggeveen, a Dutch captain.
The name we hear so often, "Rapa Nui", is a newer one, given to the island by Polynesians in the mid-1800s.
The oldest name known for this island is "Te Pito 'o Te Henua".
Over 800 statues stand on the island today. When Roggeveen discovered Easter Island they were in fairly good shape, many still in place; afterwards many fell. It is generally believed that the islanders pulled them down during revolts and conflicts. There are even theories that point to tsunami waves, which could have toppled moai statues; it is strongly suggested, for example, that the site of Ahu Tongariki was destroyed by such a force coming from the ocean.
Recovery from the conflicts, colonization and more tragedy
Following the drastic decrease in population caused by the tribal violence and famine, Rapa Nui recovered only by the mid-1800s, when about 4,000 people lived there. But during the 1800s and 1900s, more and more Europeans and South Americans arrived on Easter Island, which became part of Chile in 1888.
Tragically, many Rapa Nui people were forcibly deported to Peru and Chile, and many others died of diseases brought in by the Europeans.
All this almost led to the extermination of the entire population: in 1877 only 111 Rapa Nui people remained on the island.
Later the island's population took a positive turn, as Polynesians, Amerindians and settlers from Chile and Peru came to live here.
Today tourism, fishing and some agriculture are the island's main economic resources. In fact tourism, which has so far helped the island, may be its biggest threat, as more and more people flock to this tiny triangular land every week.
|
In a commentary in the February issue of the journal Nature, a team of scientists from University of California, San Francisco has suggested that sugar should be regulated like alcohol and cigarettes.
In an article entitled ‘The toxic truth about sugar’, the UCSF group suggests that like alcohol and tobacco, sugar is a toxic, addictive substance that should be highly regulated with taxes, laws on where and to whom it can be advertised, and even age-restricted sales.
In response, the American Beverage Association issued the following statement:
"The authors of this commentary attempt to address the critical global health issue of non-communicable diseases such as heart disease and diabetes. However, in doing so, their comparison of sugar to alcohol and tobacco is simply without scientific merit. Moreover, an isolated focus on a single ingredient such as sugar or fructose to address health issues noted by the World Health Organization to be caused by multiple factors, including tobacco use, harmful alcohol use, an unhealthy diet and lack of physical activity, is an oversimplification.
“There is no evidence that focusing solely on reducing sugar intake would have any meaningful public health impact. Importantly, we know that the body of scientific evidence does not support that sugar, in any of its various forms - including fructose, is a unique cause of chronic health conditions such as obesity, diabetes, hypertension, cardiovascular disease or metabolic syndrome."
Source: American Beverage Association
|
Blocking production of a pyruvate kinase splice-variant shows therapeutic promise
Cold Spring Harbor, N.Y. – Cancer cells grow fast. That’s an essential characteristic of what makes them cancer cells. They’ve crashed through all the cell-cycle checkpoints and are continuously growing and dividing, far outstripping our normal cells. To do this they need to speed up their metabolism.
CSHL Professor Adrian Krainer and his team have found a way to target the cancer cell metabolic process and in the process specifically kill cancer cells.
Nearly 90 years ago the German chemist and Nobel laureate Otto Warburg proposed that cancer’s prime cause was a change in cell metabolism – i.e., in cells’ production and consumption of energy. In particular, cancer cells have a stubborn propensity to take up glucose without fully oxidizing it to generate energy. This is known as the Warburg effect.
While metabolic changes are an important feature of the transformation of normal cells into cancer cells, they are no longer thought to be cancer’s primary cause. Despite this, metabolic changes remain an attractive target for cancer therapy, as Krainer and colleagues show in a paper published online today in Open Biology, the open-access journal of Great Britain’s Royal Society.
This image compares glioblastoma cells untreated or treated with antisense oligonucleotides (ASOs) that modulate splicing of PK-M. The cells are visible under light microscopy in the left column, and the DNA in their nuclei shows up with the blue dye DAPI in the second column. PK-M2 is visualized with a red stain in the third column, and the merge of the images in each row appears in the fourth column. The second and third rows show cells that have been treated with ASOs; the red dye is nearly all gone, indicating that there is less PK-M2 and that the ASOs have worked. Image courtesy of Zhenxun Wang and Adrian Krainer.
One difference between metabolism in cancer and normal cells is the switch in cancer to the production of a different version, or isoform, of a protein produced from the pyruvate kinase-M (PK-M) gene. The protein version produced in normal cells is known as PK-M1, while the one produced by cancer cells is known as PK-M2.
PK-M2 is highly expressed in a broad range of cancer cells. It enables the cancer cell to consume far more glucose than normal, while using little of it for energy. Instead, the rest is used to make more material with which to build more cancer cells.
PK-M1 and PK-M2 are produced in a mutually exclusive manner -- one-at-a-time, from the same gene, by a mechanism known as alternative splicing. When a gene’s DNA is being copied into the messenger molecule known as mRNA, the intermediate template for making proteins, a cellular machine called the spliceosome cuts and pastes different pieces out of and into that mRNA molecule.
The non-essential parts that are edited out are known as introns, while the final protein-coding mRNA consists of a string of parts pasted together known as exons. The bit that fits into the PK-M1 gene-coding sequence is known as exon 9, while it is replaced in PK-M2 by exon 10. In this way alternative splicing provides the cell with the ability to make multiple proteins from a single gene.
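The cut-and-paste logic of mutually exclusive splicing can be sketched in a few lines of code. This is a toy illustration only: the segment names and the list representation are invented, and real splicing is governed by sequence signals and regulatory proteins, not a lookup. It captures just the rule stated above: introns are dropped, and each mature mRNA contains exon 9 or exon 10, never both.

```python
# Toy sketch of mutually exclusive alternative splicing of the PK-M pre-mRNA.
# Segment names are invented for illustration; only the exon-9-vs-exon-10
# choice mirrors the mechanism described in the text.

# A pre-mRNA as an ordered list of (kind, name) segments.
PRE_MRNA = [
    ("exon", "exon8"),
    ("intron", "i8"),
    ("exon", "exon9"),    # included in PK-M1
    ("intron", "i9"),
    ("exon", "exon10"),   # included in PK-M2
    ("intron", "i10"),
    ("exon", "exon11"),
]

def splice(pre_mrna, isoform):
    """Drop all introns, then keep exon 9 (PK-M1) or exon 10 (PK-M2)."""
    skipped = "exon10" if isoform == "PK-M1" else "exon9"
    return [name for kind, name in pre_mrna
            if kind == "exon" and name != skipped]

print(splice(PRE_MRNA, "PK-M1"))  # ['exon8', 'exon9', 'exon11']
print(splice(PRE_MRNA, "PK-M2"))  # ['exon8', 'exon10', 'exon11']
```

Both outputs come from the same "gene", which is the point of alternative splicing: one DNA sequence, multiple proteins.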
Krainer, an authority on alternative splicing, previously published research on the protein regulators that facilitate the splicing mechanism for PK-M. His team showed that expression of PK-M2 is favored in cancer cells by these proteins, which act to repress splicing for the PK-M1 isoform. In the study published today the team explains that it decided to target the splicing of PK-M using a technology called antisense, rather than target the proteins that regulate the splicing mechanism.
Using a panel of antisense oligonucleotides (ASOs), small bits of modified DNA designed to bind to mRNA targets, they screened for new splicing regulatory elements in the PK-M gene. The idea was that one or more ASOs would bind to a region of the RNA essential for splicing in exon 10 and reveal that site by preventing splicing of exon 10 from occurring.
Indeed, this is what happened. “We found we can force cancer cells to make the normal isoform, PK-M1,” sums up Krainer. In fact, a group of potent ASOs were found that bound to a previously unknown enhancer element in exon 10, i.e., an element that predisposes for expression of the PK-M2 isoform, thus preventing its recognition by splicing-regulatory proteins. This initiated a switch that favored the PK-M1 isoform.
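The tiling idea behind such a screen is simple to sketch: slide a window along the exon and make an oligonucleotide complementary to each window. The sketch below is a simplification under stated assumptions; the exon sequence is made up, and real ASO design also involves chemical modifications and empirical testing, none of which is modeled here.

```python
# Toy sketch of an antisense-oligonucleotide (ASO) tiling screen.
# The exon sequence is invented; only the tiling/complementarity idea
# reflects the screening strategy described in the text.

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

def tile_asos(exon_seq, length=15, step=5):
    """Return ASOs complementary to overlapping windows along the exon."""
    return [reverse_complement(exon_seq[i:i + length])
            for i in range(0, len(exon_seq) - length + 1, step)]

exon10 = "ATGGCCTTCATCCAGACGCAGCAGCTGCAC"  # made-up 30-nt sequence
for aso in tile_asos(exon10):
    print(aso)
```

In an actual screen, each such oligo would be tested for its effect on exon 10 inclusion; the ones that flip splicing toward exon 9 mark the regulatory elements.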
When they then deliberately targeted the PK-M2 isoform for repression in cells derived from a glioblastoma, a deadly brain cancer, all the cells died. They succumbed through what is known as programmed cell death or apoptosis -- a process whereby the cell shuts down its own machinery and chops up its own DNA in committing a form of cellular suicide.
As to why the cells die when PK-M2 is repressed: the team found it was not due to the concomitant increase in PK-M1 (the cells survived even when extra PK-M1 was introduced). Rather, it was the loss of the PK-M2 isoform that was associated with the death of the cancer cells. How this works is still unclear but a subject of investigation in the Krainer laboratory.
The next step will be to take their ASO reagents into mouse models of cancer to see if they behave the same way there. While there are some technical and methodological obstacles to overcome, Krainer is optimistic.
“PK-M2 is preferentially expressed in cancer cells, a general feature of all types of cancer -- it’s a key switch in their metabolism,” he says. Thus targeting the alternative splicing mechanism of PK-M2 using ASOs has the potential to be a cancer therapeutic with many applications.
The paper can be obtained online at the following link: Zhenxun Wang, Hyun Yong Jeon, Frank Rigo, C. Frank Bennett and Adrian R. Krainer. 2012 Manipulation of PK-M mutually exclusive alternative splicing by antisense oligonucleotides. Open Biology 2: 120133. http://rsob.royalsocietypublishing.org/content/2/10/120133.full
The research described in this release was supported by the National Cancer Institute grant CA13106, the St. Giles Foundation, and a National Science Scholarship from the Agency for Science, Technology and Research, Singapore.
About Cold Spring Harbor Laboratory
Founded in 1890, Cold Spring Harbor Laboratory (CSHL) has shaped contemporary biomedical research and education with programs in cancer, neuroscience, plant biology and quantitative biology. CSHL is ranked number one in the world by Thomson Reuters for impact of its research in molecular biology and genetics. The Laboratory has been home to eight Nobel Prize winners. Today, CSHL's multidisciplinary scientific community is more than 360 scientists strong and its Meetings & Courses program hosts more than 12,500 scientists from around the world each year to its Long Island campus and its China center. Tens of thousands more benefit from the research, reviews, and ideas published in journals and books distributed internationally by CSHL Press. The Laboratory's education arm also includes a graduate school and programs for undergraduates as well as middle and high school students and teachers. CSHL is a private, not-for-profit institution on the north shore of Long Island. For more information, visit www.cshl.edu.
Written by: Edward Brydon, Science Writer
|
Some folks laugh at the notion of Uncle Sam reaching his hand literally into our backyards and regulating almost every drop of water. But, a bill in Congress would do just that. And if it passes, not just farmers and ranchers would be affected, but all landowners.
The Clean Water Restoration Act, or S. 787, gives the government the right to extend its reach to any body of water from farm ponds, to storm water retention basins, to roadside ditches, to desert washes, even to streets and gutters. The legislation leaves no water unregulated and could even impact standing rainwater in a dry area.
Private property owners beware.
While it has "restoration" in its title, it does anything but restore. The Clean Water Restoration Act is not a restoration of the Clean Water Act at all. It is a means for activists to remove any bounds from the scope of Clean Water Act jurisdiction and extend the government's regulatory reach. But what the activists won't tell you is that the Clean Water Act is working, and has been for the last 36 years.
Put simply, this legislation would replace the term "navigable waters" in the Clean Water Act with "all interstate and intrastate waters." Farm Bureau supports the protection of U.S. navigable waters, as well as rivers and streams that flow to navigable waters -- all of which are already protected under current law. But if the Clean Water Act is applied to all waters, farmers and ranchers would be significantly affected due to the number of farming activities that would require permits.
Under this new law, areas that contain water only during a rain would be subject to full federal regulation. Further, not only would many areas not previously regulated require federal permits; those permits would be subject to challenge in federal court, delaying or halting these activities and taking a huge toll on rural economies.
Farmers and ranchers do a good job taking care of the land. As I often say, they are America's first environmentalists. They use modern conservation practices to protect our nation's water supplies. Many times these efforts are put in place voluntarily because farmers are driven by a strong stewardship ethic.
However, the restoration bill largely disregards the positive conservation role farmers and ranchers are playing. It replaces good works with strict rules. Rather than restore the Clean Water Act, it just brings a new truckload of restrictions for the people who do most to protect our water.
The Clean Water Restoration Act is regulatory overkill. It is written to give the federal government control of structures such as drainage ditches, which are only wet after rainfall. Taking these changes one step further, it would likely give federal regulators the ability to control everyday farming activities in adjacent fields.
Hard-working farm families can't afford, nor do they deserve, Uncle Sam's hand reaching into their backyards, their fields or even their puddles of rainwater.
Bob Stallman is president of the American Farm Bureau Federation.
|
The following chronology looks back at the problem of xenophobia since South Africa’s first democratic elections in 1994.
The Zulu-based Inkatha Freedom Party (IFP) threatens to take “physical action” if the government fails to respond to the perceived crisis of undocumented migrants in South Africa.
IFP leader and Minister of Home Affairs Mangosutho Buthelezi says in his first speech to parliament: “If we as South Africans are going to compete for scarce resources with millions of aliens who are pouring into South Africa, then we can bid goodbye to our Reconstruction and Development Programme.”
In December gangs of South Africans try to evict perceived “illegals” from Alexandra township, blaming them for increased crime, sexual attacks and unemployment. The campaign, lasting several weeks, is known as “Buyelekhaya” (Go back home).
A report by the Southern African Bishops’ Conference concludes: “There is no doubt that there is a very high level of xenophobia in our country … One of the main problems is that a variety of people have been lumped together under the title of ‘illegal immigrants’, and the whole situation of demonising immigrants is feeding the xenophobia phenomenon.”
Defence Minister Joe Modise links the issue of undocumented migration to increased crime in a newspaper interview.
In a speech to parliament, Home Affairs Minister Buthelezi claims “illegal aliens” cost South African taxpayers “billions of rands” each year.
A study co-authored by the Human Sciences Research Council and the Institute for Security Studies reports that 65 percent of South Africans support forced repatriation of undocumented migrants. White South Africans are found to be most hostile to migrants, with 93 percent expressing negative attitudes.
Local hawkers in central Johannesburg attack their foreign counterparts. The chairperson of the Inner Johannesburg Hawkers Committee is quoted as saying: “We are prepared to push them out of the city, come what may. My group is not prepared to let our government inherit a garbage city because of these leeches.”
A Southern African Migration Project (SAMP) survey of migrants in Lesotho, Mozambique and Zimbabwe shows that very few would wish to settle in South Africa. A related study of migrant entrepreneurs in Johannesburg finds that these street traders create an average of three jobs per business.
Three non-South Africans are killed by a mob on a train travelling between Pretoria and Johannesburg in what is described as a xenophobic attack.
In December The Roll Back Xenophobia Campaign is launched by a partnership of the South African Human Rights Commission (SAHRC), the National Consortium on Refugee Affairs and the United Nations High Commissioner for Refugees (UNHCR).
The Department of Home Affairs reports that the majority of deportations are of Mozambicans (141,506), followed by Zimbabweans (28,548).
A report by the SAHRC notes that xenophobia underpins police action against foreigners. People are apprehended for being “too dark” or “walking like a black foreigner”. Police also regularly destroy documents of black non-South Africans.
Sudanese refugee James Diop is seriously injured after being thrown from a train in Pretoria by a group of armed men. Kenyan Roy Ndeti and his room mate are shot in their home. Both incidents are described as xenophobic attacks.
In Operation Crackdown, a joint police and army sweep, over 7,000 people are arrested on suspicion of being illegal immigrants. In contrast, only 14 people are arrested for serious crimes.
A SAHRC report on the Lindela deportation centre, a holding facility for undocumented migrants, lists a series of abuses at the facility, including assault and the systematic denial of basic rights. The report notes that 20 percent of detainees claimed South African citizenship or that they were in the country legally.
According to the 2001 census, out of South Africa’s population of 45 million, just under one million foreigners are legally resident in the country. However, the Department of Home Affairs estimates there are more than seven million undocumented migrants.
Protests erupt at Lindela over claims of beatings and inmate deaths, coinciding with hearings into xenophobia by SAHRC and parliament’s portfolio committee on foreign affairs.
Cape Town’s Somali community claim that 40 traders have been the victims of targeted killings between August and September.
Somali-owned businesses in the informal settlement of Diepsloot, outside Johannesburg, are repeatedly torched.
In March UNHCR notes its concern over the increase in the number of xenophobic attacks on Somalis. The Somali community claims 400 people have been killed in the past decade.
In May more than 20 people are arrested after shops belonging to Somalis and other foreign nationals are torched during anti-government protests in Khutsong township, a small mining town about 50km southwest of Johannesburg. According to the International Organization for Migration, 177,514 Zimbabweans deported from South Africa have passed through its reception centre across the border in Beitbridge since its opening in May 2006.
In March human rights organisations condemn a spate of xenophobic attacks around Pretoria that leave at least four people dead and hundreds homeless.
Sources include: IRIN, Human Rights Watch, SAMP, SAHRC, Centre for the Study of Violence and Reconciliation
|
Bipolar disorder, also known as manic-depressive illness, is a brain disorder that causes unusual shifts in mood, energy, activity levels, and the ability to carry out day-to-day tasks. Symptoms of bipolar disorder are severe. They are different from the normal ups and downs that everyone goes through from time to time. Bipolar disorder symptoms can result in damaged relationships, poor job or school performance, and even suicide. But bipolar disorder can be treated, and people with this illness can lead full and productive lives.

Bipolar disorder often develops in a person's late teens or early adult years. At least half of all cases start before age 25. Some people have their first symptoms during childhood, while others may develop symptoms late in life.

Bipolar disorder is not easy to spot when it starts. The symptoms may seem like separate problems, not recognized as parts of a larger problem. Some people suffer for years before they are properly diagnosed and treated. Like diabetes or heart disease, bipolar disorder is a long-term illness that must be carefully managed throughout a person's life.
|
Girl Scout Preserves Florida's Wildlife
March 23, 2012 – Miami, Fla. – Senior Girl Scout Caitlin Kaloostian earned the Gold Award, the highest award a Girl Scout can receive, by completing a new butterfly garden and wildlife themed mural for The Florida Fish and Wildlife Conservation Commission (FWC).
The FWC works with community and youth organizations to demonstrate the importance of safeguarding Florida’s natural resources and to encourage the next generation of conservationists. To further this mission, Kaloostian held fundraisers such as garage sales and solicited help from fellow high school students to raise the funds necessary to complete the project. With the help of her troop, Girl Scout Troop 305, Kaloostian painted a mural at the FWC’s Division of Law Enforcement Office featuring native fish and wildlife, including a Florida panther, a manatee, an alligator, and many other animals. The butterfly garden uses native plants to attract butterflies and birds. Both the mural and the butterfly garden will help to preserve Florida’s future.
|
Commercially pressed CDs and CD-R or CD-RW discs are fundamentally different technologies, which is why a commercial CD will continue to be readable long after a CD-R has become unusable.
A CD drive uses a focused laser beam that is reflected from the media surface of the disc onto a sensor, which detects changes in the amount of reflected energy. The original (commercial) process used perforated aluminum as the media surface. The term "pressed" is borrowed from vinyl records, but the production process is much the same: a "master" disk is put into a press, which is filled with polycarbonate. The master disk has little pins sticking up everywhere there is to be a hole in the aluminum. The disk is cooled, and aluminum is then applied to it. This results in an aluminum layer with holes in it.
When the disc is played, the laser reflects strongly from the shiny aluminum or less strongly (or not at all) from the holes. The reflection/non-reflection is translated into the ones and zeros of the binary data stored on the disk.
Over time the aluminum can oxidize or there can be other changes in the plastic and other materials that make the disc unusable. These are long term effects and the ultimate statistical life of a commercial CD is often debated, without conclusion, by the experts.
The CD-R and CD-RW do not use an aluminum media surface. Instead, they use a dye. When the disc is written, a high-powered laser causes spots on the disk to turn dark (hence the term "burning"). When played back, the sensor in the player sees the difference in reflectivity of the dark and not-so-dark spots as the binary data.
Unfortunately, because the dye is a light-sensitive chemical, over time it will fade. This can happen from the heat of the reading laser, from ambient light, and from chemical degradation in the dye and support media.
CD-R/RW media is fine for backup, for creating alternate media (for example, copying music files to play in your car, so that if a disc is damaged by heat or wears out you can make another and preserve your originals elsewhere), and for similar purposes. However, it is not stable enough to be safe for archival storage.
Side note: when burning CDs for use in a car, for best results get "music CDs," which are designed for that application, or slow your burning speed down to 12x or 16x to get a darker spot from your high-speed burner. The car will read the disc more reliably.
Insofar as tape storage is concerned, tape is also not a good archival choice of media. It's generally better than CD-R, although I haven't seen any comparative studies.
Major data centers who use tape storage refresh the storage periodically. Their Tape Management System (TMS) remembers the date the tape was recorded, and will call it up to be copied periodically. The old tape is then erased and reused until it reaches end of life (sometimes a fixed usage or time interval, sometimes when the number of recoverable errors reaches a threshold) at which time it is scrapped.
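That refresh-and-scrap cycle is easy to picture in code. Below is a minimal Python sketch of the bookkeeping a TMS does; the interval, usage limit, and error threshold are hypothetical illustration values, not figures from any real product.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical policy values for illustration only.
REFRESH_INTERVAL = timedelta(days=365 * 2)  # recopy data every ~2 years
MAX_USES = 50                               # scrap after a fixed usage count
MAX_SOFT_ERRORS = 10                        # scrap when recoverable errors pile up

@dataclass
class Tape:
    label: str
    recorded: date    # date this copy was written
    uses: int         # total write cycles so far
    soft_errors: int  # recoverable read errors seen so far

def needs_refresh(tape: Tape, today: date) -> bool:
    """A tape is due for recopying once its data is older than the interval."""
    return today - tape.recorded >= REFRESH_INTERVAL

def should_scrap(tape: Tape) -> bool:
    """End of life: fixed usage limit reached, or too many recoverable errors."""
    return tape.uses >= MAX_USES or tape.soft_errors >= MAX_SOFT_ERRORS

today = date(2006, 6, 1)
t1 = Tape("VOL001", recorded=date(2003, 1, 15), uses=12, soft_errors=1)
t2 = Tape("VOL002", recorded=date(2006, 1, 15), uses=51, soft_errors=0)

print(needs_refresh(t1, today))  # True: the data is over two years old
print(should_scrap(t2))          # True: the usage limit has been exceeded
```

A real TMS tracks far more (volume ownership, retention rules, off-site vaulting), but the core life-cycle decision reduces to checks like these.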
The whole issue of long-term archiving is complex, and goes beyond media. On the media side: if a data center stored its files on 9-track magnetic tape twenty years ago, how would it retrieve that data today, when you cannot find working 9-track drives? What if it had used an early Magneto-Optical (MO) drive? Small businesses run into the same trouble when their tape drive fails and they can't buy another drive in that old format.
File formats are another problem. I have word processing documents that deceased family members created years ago. I no longer have word processing software that will import some of those formats. I can (sometimes) extract the raw text and then try to reformat it in a current program, but if I don't have a printed original I don't know how it was intended to be formatted.
The only archival format that has stood the test of time is paper.
Submitted by: Kevin G. of Dallas, TX
Well, Carl, that so-called expert sure has stirred the waters and a LOT of people are wondering about the same question. However, your friendly Federal Government has studied the problem even longer.
To be specific, the National Archives and Records Administration, in charge of all record archiving for the government, has no standard on media storage, and asked NIST, the National Institute of Standards and Technology, to write a new standard on media durability.
If you have never heard of NIST, you're not alone, as NIST is more of a background organization. Suffice to say, they are the ones who create the standards, references, and accuracy tests for all industries, from DNA to timekeeping (in fact, NIST operates one of the Internet "clocks" you can calibrate your PC to). NIST DNA reference material improves forensic DNA test accuracy. NIST also had a hand in closed captioning and many other technologies, but enough about NIST.
A gentleman by the name of Fred Byers spent a whole year testing various media and wrote a guide for NIST, aimed at librarians who need to archive information, on how to care for optical media such as CD-Rs and DVD-Rs. In the guide, he basically states that with proper handling (stored in low humidity, no scratching, stored vertically, etc.) a DVD-R should last 30 years with no fear of losing any information. However, that is NOT an absolute number, as it depends on a LARGE NUMBER OF FACTORS, some of which are in your control, and some not:
Factors that affect disc life expectancy include the following:
type -- recordable media is more durable than rewritable media
manufacturing quality -- you get what you pay for
condition of the disc before recording -- obvious
quality of the disc recording -- garbage in, garbage out
handling and maintenance -- scratches are bad for any disc
environmental conditions -- humidity and temperature can warp the disc, ruining the reflective layer in the media; light, especially UV light, can destroy the dye used in recordable media
Let us discuss each factor in a bit more detail
All types of media can be damaged through warpage (disc bending), scratches, and reflective layer breakdown due to oxygen leakage.
Recordable media, in addition, is susceptible to UV rays, which affects the dye used in the process.
Rewritable media, with phase-change recording, is even more susceptible to UV ray and temperature.
It is generally acknowledged that certain brands of media are better than others, and often the stuff on sale is not the stuff you may want to buy and keep around.
What you may not know is that there are only about 16 media manufacturers in the world. They make the media for all the brands you see in the market, and some brands/factories are known to make high-grade media (i.e., they tested best at maintaining data integrity, even when the media was subjected to aging tests). While few independent labs have done comprehensive tests, a test in Europe a while back on CD-Rs revealed that Taiyo Yuden (Verbatim), Kodak (Kodak), and TDK (TDK) kept the most data intact.
Condition of the disc before recording
A disc should be brand new when used. While the shelf life of blank media is up to 5 years, why take chances? Buy them as you need them.
Handling and maintenance
Scratches are bad for any disc, as they break open the substrate layer and allow air to tarnish the silver reflective layer inside.
Scratches also can make information on the media unreadable by interrupting the laser's path.
Environmental conditions -- humidity and temperature can warp the media, and exposure to UV light can destroy the dye used in CD-R's and DVD-R's.
Hope that answers your questions.
Submitted by: Kasey C. of San Francisco, CA
In the '80s, the CD was introduced to the market and portrayed as "THE" successor to vinyl records.
You could throw a CD in a mud pool, step on it, scratch it; nothing would harm the CD.
Now we all know that CDs have a shorter lifetime than their vinyl counterparts and are more susceptible to errors. This is also true of the CD as a medium for recording software.
The early CDs were recorded at a maximum of 640 MB; mostly not even that, but something like 528 MB. This made them less susceptible to scratches.
But as CD technology kept evolving, overburning a CD to 800 MB and more became common practice. The DVD was also introduced, offering 4.7 GB single-layer or about 8.5 GB dual-layer on a wafer the size of a CD.
It is obvious that the tracks are becoming so small that the finest scratch, the smallest fault, can ruin the CD/DVD forever.
To answer your question: there is no miracle solution to keep a CD/DVD from deteriorating through age. But with a little bit of care, you can have many years of pleasure from your recordings.
1. Buy only CDs/DVDs from a good brand.
Buying low-priced CDs/DVDs will mostly result in very disappointing experiences.
2. Don't overburn a CD/DVD.
While the overburn technique is now widely supported by most software, it is still not fully reliable and mostly not approved by the CD manufacturers.
3. Put every CD back in the jewel case after use, clean them as prescribed by the manufacturer, and avoid as much as possible touching the reading surface of the CD.
As a final remark: CDs/DVDs are not expensive nowadays, so if you can make a backup of them, do so and store it in a safe place.
I use an external hard disk of 250 GB (enough for some 300 CDs) to store a backup of the CDs/DVDs I have. The price of the hard disk is about $100.
Many times this hard disk has let me rescue recordings that would otherwise have been lost forever.
Hope this helps,
Submitted by: Carlos
You have just discovered what most people don't discover until they actually lose data: commercial CDs and home-burned CDs are not the same. While a commercial premade CD will last a very long time if it is cared for properly, a home-burned CD will begin to deteriorate. The reason is that the home-burned version uses dyes to accomplish what the premade CD does by having it built into the disk. This is, of course, an oversimplified explanation, but it will suffice.
There are a few ways to maximize the amount of time a CD will last. First of all, buy good quality CDs to begin with. Stick with brand names that you are familiar with and have used successfully in the past.
Do not assume that just because a blank CD is made by a well-known company that it will be high quality.
Test them out by actually using them. One of the best ways to do this is to use them for your regular system backups. Be sure to actually restore from those backups periodically (easier if you have another computer handy that you can wipe out data on) or else use a backup program that allows you to mount the backup as a "virtual drive" and retrieve data from it.
This lets you know if there is a problem with a brand deteriorating unusually fast.
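One lightweight way to run that kind of periodic check without a full restore is to keep a checksum manifest. This is a minimal sketch, assuming the backup is simply a directory of files; it uses only the Python standard library, and the file name in the demo is made up.

```python
import hashlib
import tempfile
from pathlib import Path

def make_manifest(backup_dir):
    """Hash every file under backup_dir before burning; keep this list safe."""
    root = Path(backup_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify(backup_dir, manifest):
    """Re-hash the burned copy and list every file that no longer matches."""
    root = Path(backup_dir)
    return [
        rel for rel, digest in manifest.items()
        if not (root / rel).is_file()
        or hashlib.sha256((root / rel).read_bytes()).hexdigest() != digest
    ]

# Self-contained demo: a throwaway directory stands in for the mounted disc.
demo = Path(tempfile.mkdtemp())
(demo / "photos.zip").write_bytes(b"originals")
manifest = make_manifest(demo)
print(verify(demo, manifest))   # [] -- everything still matches
(demo / "photos.zip").write_bytes(b"bit rot!!")
print(verify(demo, manifest))   # ['photos.zip'] -- time for a fresh burn
```

Storing the manifest somewhere other than the disc itself, and re-running the check every few months, catches a brand that deteriorates fast before the data is actually needed.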
Second, never use labels on CDs. I found this out the hard way. Labels cause the CD to deteriorate much more rapidly than it otherwise would. Certain inks used in pens have been reported to do the same, but I have never encountered this problem, so it shouldn't be too severe. Do be certain, however, that you are gentle when marking CDs. Use a felt tip and do not press hard.
Third, put a note somewhere on the CD that tells you when you made it. This lets you monitor how long it has been since the CD was burned. If the data is irreplaceable, burn it to a new CD every 2 years.
As for the recommendation to use magnetic tapes, that has its own set of problems. Magnetic tapes also deteriorate, and they are subject to some damage that CDs are immune to, notably damage from electrical or magnetic fields.
In short, CDs are good for long term storage-- but don't assume that "long term" means forever. Check them regularly and burn them to new media when problems develop or even before if you can't replace what's on them.
As for storage, that is pretty much common sense. Keep the CDs in a case or an envelope if they are not actually being used. Avoid temperature extremes and handle gently. I also recommend making two copies of every important CD. This practice just saved my data when I discovered that the labels on my CDs had wiped out some irreplaceable family photos. It costs twice as much, but if the data is important to you then it isn't really very expensive, is it?
Submitted by: Denise R. of Lebanon, Missouri
Hi Carl N,
Your question has been asked by a lot of people over the last 10 years. I've burned CDs by the hundreds over the years and found only 2 discs with missing information. The lifetime of a CD is not limited by one parameter alone; several issues set the limit. One is the storage conditions and how you handle the discs; in other words, how careless or careful you are as the user. Then there is the material used in the CD (how cheap a blank CD did you buy?). And lastly your burning equipment, that is, the laser diode.
When pressed CDs were introduced at the beginning of the '80s, lovers of vinyl records claimed that CDs would last only 2 years. But as you have experienced, CDs from this period can still be played. I remember one report from about 1990 which claimed a lifetime of only 3-4 years. Looking into the report, it turned out that the storage condition was -30C (about -22F) and the reading/playing equipment had a worn laser diode. Most of us can only say: I don't store my CDs in the freezer, and today's laser diodes don't wear out as they used to.
Turning to recordable CDs, the whole issue is a matter of having a bunch of clear marks placed in circles in a foil. The readability of these marks depends on a number of things. How clear are the marks? Are their edges sharp? Is the reflectivity of the materials sufficient? Is the laser still as effective as it was, or has the surface become matte? For the early CD burners this was jeopardized by increasing burning speeds, and some blank CDs had foil material of doubtful quality. Adding to this, some CD burners were even sold with writing speeds beyond their capability. Many blank CDs were rejected in this period due to bad burners rather than bad discs. My conclusion is that this interim period has given us some doubtful discs.
You have to be careful with pressed CDs and a little bit extra careful with your own burned ones. They don't like heat, bright light, or bending, and writing on them with aggressive pens is also nasty. Pens with unknown chemicals in particular may etch the CD; it is just like burning, but this time controlled by the chemicals.
If you want to increase the lifetime of your recordings, you can buy CDs that are claimed to have a 300+ year lifetime. These are referred to as gold CDs. They have a special reflective layer that includes some 24 ct gold, and their advantage is the ability to record clear marks with reduced oxidation and corrosion over time. Amazing, almost unbelievable: 300 years. Just 100 years would be great for me; in 10-20 years everything will be transferred to a new media type anyway. I saw a report on the 300 years at
The price of these gold CDs is 10-20x that of the usual ones.
You have been advised to use magnetic tape, but tapes do not last forever either. As a matter of fact, with analog tape the sound quality decays over time; the frequency range is reduced with each use. That loss cannot be restored, unlike with a digital medium. Only digital storage keeps its audio frequency range over time and use. Like R&R, DIGITAL media is here to stay. You may call it CD, DVD, MPx, Blu-ray or whatever, but it's digital.
I believe that today's discs and equipment can provide a disc with sufficient lifetime for most of us, and may even let you restore your more doubtful discs from the early burning days with success. Even discs registered as 'No disc' may be recoverable by copying them today. If you want to reassure yourself, let the PC verify the burned disc; this option is normally disabled by default.
What should you do with precious discs from the early days? My best recommendation is to make a new copy while the old one is still readable. This is easy and cheap for most of us today, as having two drives in a PC is not uncommon. Lastly, the quality and lifetime of recorded discs today likely depends most on your own care.
Submitted by: Leif M. of Helsingor, Denmark
Regarding problems developing over time with recordable CD-R media, I've run into some of this myself, but I also have quite a few discs that were made back when the very first 1x CD burners were made available to the public, and they still read just fine for me.
I suspect that there are several factors involved here.
1. I'm certain there's a difference in quality between brands of CD-R media. A number of my really old CD-Rs that still read flawlessly today were Kodak branded, and were considered expensive "premium quality" discs at the time. They're even physically a little bit thicker than most other media I've handled. By contrast, some of the generic media I purchased because of the low price on 100-pack spindles has actually developed "bubbles" where you can see the dye that's sandwiched between the layers of plastic is disintegrating. (Of course it won't read if small spots are completely gone!) There were/are several different types of dye used for CD-R media, as well, and I wouldn't be surprised if it's turning out that some types have better longevity than others. For example, Verbatim was known for using their trademark blue-tinted dye, while others were shades of green or gold.
2. From what I've read and observed, handling makes a big difference too. Leaving your CD-R's exposed to sunlight (as folks tend to do with music CDs used in their cars or trucks) probably shaves years off of their lifespan. Putting them in some type of jewel-case or sleeve when not in use is a very good idea. Boxes of empty jewel-cases can be purchased fairly inexpensively at most office supply and electronics chain stores.
3. A CD-R holding computer data is inherently more "fragile" and subject to data loss than a CD-R recorded as an audio disc. The standard used for recording audio CDs incorporates quite a bit of error correction information to handle small scuffs and scratches on the media, but besides that, audio data is spread out over a much larger portion of the CD-R. If you have a .ZIP file stored on a CD-R, for example, a pinhole-sized mark someplace on the disc where that .ZIP file is stored can easily be enough to prevent the whole archive from extracting properly. By contrast, the same sized mark might only cause a very brief "stutter" at one point of a song on a music disc (or not pose a discernible problem at all, due to the error correction).
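The error-correction point can be illustrated with a toy example. Real audio CDs use CIRC (cross-interleaved Reed-Solomon coding), which is far more sophisticated; the single XOR parity block sketched below is only meant to show how stored redundancy lets a reader rebuild data that a small blemish has wiped out.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(blocks: list) -> bytes:
    """Parity block: the XOR of all data blocks, stored alongside them."""
    parity = bytes(len(blocks[0]))
    for blk in blocks:
        parity = xor_bytes(parity, blk)
    return parity

def recover(blocks: list, parity: bytes, lost: int) -> bytes:
    """Rebuild one unreadable block from the surviving blocks plus parity."""
    rebuilt = parity
    for i, blk in enumerate(blocks):
        if i != lost:
            rebuilt = xor_bytes(rebuilt, blk)
    return rebuilt

data = [b"chunk-A!", b"chunk-B!", b"chunk-C!"]
parity = add_parity(data)
# Pretend a pinhole mark wiped out block 1: we can still get it back.
print(recover(data, parity, lost=1))  # b'chunk-B!'
```

A plain data file on CD-R has no such application-level redundancy of its own, which is why the same pinhole that a song shrugs off can make a whole .ZIP archive unextractable.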
If your audio discs are already deteriorating to the point where players are rejecting them as "unreadable" or they're skipping badly, it sounds to me like things have gotten pretty bad. The only recommendation I'd have is to re-record your music to fresh, good-quality CD-R media and throw out the old ones - and in the future, make a habit of transferring your music to fresh discs every few years or so.
Luckily, in the case of computer software backups, they tend to become so outdated, you no longer really need to keep them by the time the media they're recorded on starts failing. But for those trying to preserve digital photos and the like, I'd recommend this same procedure. Make a fresh set of backups every so often and discard the old media - before it fails on you and you lose something priceless!
Submitted by: Tom W.
Burned CD media fails over time.
I am a practicing technician and this is not a new complaint. It is my firm belief that most consumers burn their media at the fastest speed possible for both their software and the media they use. This is fine but there may well be a trade-off in doing this.
What most consumers perhaps do not understand is that commercially produced CDs have actual pits pressed into them that represent the digital data of the original sound. A burned CD, on the other hand, is made by using a photosensitive layer to mimic the pits found in pressed media.
I have found three major causes of this consumer's problem. They are as follows:
1. A slower burn makes a stronger image in the photosensitive layer of a burned CD. A faster burn, while successful, may not impress the photosensitive layer as effectively as a slow burn. Over time the burn fails as the photosensitive layer deteriorates.
2. Sunlight and other forms of intense light can affect a burned CD because they can distort the burned media's photosensitive layer.
3. Scratches are by far more evident on burned media, and more easily caused, than on pressed media. Most consumers seem to ignore the manufacturer's warnings and suggestions. Handling the disc carefully, as advised by the manufacturer, is the best policy here. I use a camera lens cloth to clean the surface of all my media; a camera lens cloth will not scratch the disc surface, while paper and regular household cloths will cause scratches.
Observe the above and I do believe you will have better results.
One more thing: always use the media recommended by the burner manufacturer. It is endorsed and guaranteed to work; many of the cheap no-name discs out there are just not up to par. It's just like the old cassette tape days.
Most audiophiles went for tapes like Maxell, JVC, Sony, etc., but as everyone knows there were a lot of bogus brands out there for the uninformed to purchase.
Submitted by: Peter K.
> I recently read an article by a data storage expert who claimed that
> burned CD-Rs and CD-RWs can be expected to last only two to five years
> and not a whole lot more. I personally have commercially pressed CDs
> from the 1980s that still play fine, but I have begun to notice that
> some of my burned CD-Rs are beginning to skip
You mention that there are basically two types of CDs: those that are created with all information in place and those you can buy and write on.
The first type is quite robust as the information has been "engraved" into the surface just below the reflector. The most critical part of such a CD is the reflector, most often a very thin layer of aluminum.
The second type of CD works a bit differently: There is a dye layer below the reflector and the information is written onto the CD-R(W) by "burning"
and thereby locally changing the optical properties of the dye. The most critical part is the dye, besides the reflector as above. If the dye degrades, the CD easily becomes unreadable. The dye of a CD-RW is even more critical, as it must be "resettable" (another constraint).
> The expert suggests that for secure long-term storage, high -quality
> magnetic tape is the way to go.
This solution is quite expensive, as you need a tape drive and enough tape cartridges, but it has the advantage of a much larger storage capacity. If the manufacturers say their tape cartridges are reliable for a very long time, they have one advantage over CD-R: this type of storage device has been around long enough to prove it. CD-R has been on the market for no more than 10 years.
The best strategy for the private user is: Have a good archive strategy, save often and store the media carefully in a dry, dark, cool place. If you store every file more than once you have a better chance to retrieve it.
There is no real alternative to CD-R. Use high-quality ones. Do not use any DVD variety, as their reliability is much lower. DVD may be used for an image backup of your boot drive, so you can restore your present configuration in the months to come.
Submitted by: Alexander V.
Unlike pressed original CDs, burned CDs have a relatively short life span of between two and five years, depending on the quality of the CD. There are a few things you can do to extend the life of a burned CD, like keeping the disc in a cool, dark space, but not a whole lot more.
The problem is material degradation. Optical discs commonly used for burning, such as CD-R and CD-RW, have a recording surface consisting of a layer of dye that can be modified by heat to store data. The degradation process can result in the data "shifting" on the surface and thus becoming unreadable to the laser beam.
Many of the cheap burnable CDs available at discount stores have a life span of around two years, while some of the better-quality discs offer a longer life span, of at most five years. Distinguishing high-quality burnable CDs from low-quality discs is difficult, I think, because few vendors use life span as a selling point.
I've had good luck with Verbatim media, and bad results with TDK. Playback with the TDK discs I used degraded steadily over time, in spite of very little use, and not much in the way of scratches or other blemishes on the disc. On the other hand, the Verbatim discs I've used have held up well over time, and under more use than the TDK ones I used.
Opinions vary on how to preserve data on digital storage media such as optical CDs and DVDs. I have my own view: to overcome the preservation limitations of burnable CDs, I suggest using magnetic tapes, which, from what I have read, can have a life span of 30 to 100 years, depending on their quality. Even though magnetic tapes are also subject to degradation, they are still the superior storage medium.
But I want to point out that no storage medium lasts forever and, consequently, consumers and businesses alike need to have a migration plan to new storage technologies.
A good question to raise on this subject is: does burning speed make a difference in the quality of CDs? Someone told me that it does: the lower the speed, the deeper the burn, and therefore the better the quality. I have heard that some audio technicians burn masters at 2x and copies at 4x because higher burn rates introduce digital noise. It might just depend on the burner quality and the burning program...
I hope you get the point of my explanation.
Submitted by: Sameer T.
Everyone who owns a business is constantly being enticed with the security and longevity of magnetic tape. And although I'm apt to agree about its durability, I don't use it to back up important data in my business. I have two problems with magnetic tape vs. CD or DVD. The first problem is hardware: data backed up on a CD or DVD can be loaded into any computer with a drive capable of handling the media. The same can be said of tape backup; however, you are far more likely to find an off-the-shelf computer with a compatible CD or DVD drive than with a magnetic tape drive. The other problem is the need for long-term storage.
As a business owner, I'm backing up my important data every one to three days. I've been using CD-RW media to do this for years. If a disk gets corrupt, you can reformat it using your burning software, then use it again. If you are concerned about your CD becoming corrupt, simply burn two or three. The cost of three CD or even DVD media is much more reasonable than the cost of one of those tapes. And if my server dies, I can buy any computer at any store, and load the data onto the new computer right away and I'm back in the game. I have several people trying to convince me of the benefit of a paper backup system. I find it easier (and cheaper) to have multiple electronic backups. My business server has a RAID 1 card and two hard drives which mirror each other. I have the CD backup, and I then take this data and save it to a secure partition on another machine in a separate location.
It is highly unlikely that all of these systems would fail at one time. And if they do, I'm taking the day off, because that's just real bad luck. As far as long-term storage goes (like music), I've noticed that CD-RW can lose data in the long haul, but I haven't had any problems with CD-R media. I have some that skip, but not for a reason I didn't know about: I buy the large spools of CD-Rs, which don't come with jewel cases, so those disks get abused. If you know you are going to keep something for a really long time, make sure the disks you buy have jewel cases. And you can apply the multiple-disk system here as well. The media is easy on the wallet, and the more backups you have, the lower the risk that you will actually lose the data.
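The "more backups, lower risk" claim is easy to quantify. Here is a back-of-envelope sketch, assuming each copy fails independently and using a made-up 5% per-copy failure rate purely for illustration:

```python
# If each independent copy is unreadable with probability p, losing
# every copy at once has probability p ** n.
def risk_of_total_loss(p_single: float, copies: int) -> float:
    return p_single ** copies

p = 0.05  # assumed 5% chance any one copy is unreadable when needed
for n in (1, 2, 3):
    print(n, round(risk_of_total_loss(p, n), 6))
# With three copies the chance of losing everything drops to 0.000125,
# i.e. about 1 in 8,000, assuming the failures really are independent.
```

The independence assumption is the weak point: copies stored in the same place, or burned from the same bad spindle, tend to fail together, which is why keeping one copy at a separate location matters.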
I don't think I answered all the questions, but that's my take.
Submitted by: Dave K.
|
'Old Earth Scientists'... I've never heard that before... You aren't suggesting that there are 'new earth scientists', are you?
Well, sort of. There are commonly two camps of scientists: old earth (who believe that the earth is billions of years old) and young earth (who believe that the earth is around 6,000 years old).

As far as science is concerned, the big bang occurred between approximately 14 and 18 billion years ago.
As stated above, there are both old earth scientists and young earth scientists. Old earth scientists believe in the big bang theory and that the age of the earth is on the order of billions of years. Having said that, perhaps the above statement should read "As far as old earth scientists believe, the big bang occurred between approx 14 - 18 billion years ago." Furthermore, when you say "concerned" it makes the assumption that the big bang actually did happen. The big bang is a theory, and unless scientists can replicate it, it will forever remain a theory.

That's not a theory formed by 'old earth scientists'; that is calculated using every method we have at our disposal.
Unfortunately your statement falls short from the beginning. Remember, the big bang theory is just that: a theory.

Measuring the expansion rate of the universe, measuring light from distant stars, etc.; there are too many methods to mention.
The Bible also confirms that the universe is expanding. Isaiah 40:22 teaches that God "stretches out the heavens like a curtain, and spreads them out like a tent to dwell in." This verse was written thousands of years before secular scientists accepted an expanding universe. It was only more recently that scientists changed their minds, from the universe being constant to it actually expanding.
There are a few theories floating around with respect to the apparent red shift of stellar objects. Old earth scientists believe it to be a result of bodies moving away from earth. As such, they have suggested that there should be no fully formed stellar bodies further away than about 8 billion light years. Astronomers have pointed telescopes into supposed redshift deserts (i.e. locations in space where there should be no fully formed bodies) and they found a sky full of fully formed galaxies.

Measuring light from distant stars relies on the assumption that light has always moved at a constant rate, which unfortunately has not been proven.

1. The moon moves away from the earth at around 4cm per year. If the earth was billions of years old, the moon could not be as close to the earth as it is.
That suggests that the moon has been in orbit around the earth for the whole 4.5 billion years... it hasn't.

Unfortunately, this is not what old earth scientists believe. They believe that the earth and moon have been around for over 4 billion years.

2. Oil deposits in the earth are under extreme pressure. If the earth was billions of years old, this pressure would have caused the oil to seep through the rock layers, and eventually the pressure would all be gone, i.e. there would be no oil under pressure today.
The oil deposits aren't 4.5 billion years old either... they are from rotting animal/vegetable sources from much later... millions of years, not billions.

I should have written this statement differently, i.e. millions of years. The problem still stands, however: if oil was around millions of years ago, then it could not be under pressure today.

3. The sun is shrinking at a rate of five feet per hour. This means that the sun would have been touching the earth a mere 11 million years ago (let alone billions of years ago).
No, that assumes a constant-state universe... the universe is very far from constant. It's expanding and has been since the beginning. Nobody has ever suggested that the earth-moon-sun arrangement has been in existence, let alone constant, since the big bang.

Don't old earth scientists make assumptions also? If you look above, old earth scientists make the assumption that the speed of light is constant. Furthermore, they still hold to the assumption that the earth, moon and sun have been around for over 4 billion years.

4. Helium is added to the atmosphere every day. Basically, there is not enough helium in the atmosphere to support billions of years.
Helium hasn't been added for 4.5 billion years... again, the earth wouldn't have had an atmosphere until recently (recent relative to its 4.5 billion year age).

According to old earth scientists, the oxygen-enriched atmosphere (basically as we know it today) was formed around 2.7 billion years ago. The amount of helium contained within our atmosphere today is only enough to support thousands of years, certainly not billions.

5. Comets lose mass over time; there would be no comets left if the universe was billions of years old (because comets were apparently a by-product of the big bang).
That's misleading. The origin and time of origin of comets is not claimed to be the big bang. That's a straw man.

(I am guessing that a straw man is another way of saying clutching at straws?)

Again, with this one I should not have just skimmed over it but should have elaborated. Comets have long been good evidence due to their fragile nature and life expectancy. Comets are commonly huge chunks of ice traveling at tremendous speeds through space; when they come close to a star, they begin to melt and so form a trail of moisture. This can't last forever, and a comet will eventually disintegrate. Herein lies a problem for old earth scientists, because there should be no comets left; they should all have disintegrated by now (given the billions of years). And if we are talking about clutching at straws, here's a good one for you.

Old earth scientists have come up with another theory to try and explain why we still have comets today. So in comes the Oort Cloud. The Oort Cloud is a hypothetical spherical cloud of comets which may lie roughly 1 light year away from our sun. Apparently, these comets become dislodged from the Oort Cloud by the gravitational pull of passing stars and the Milky Way itself (due to the cloud apparently being at the outer edges of our Milky Way). These comets are then free to move about and disintegrate (which is how we see comets today). Now, this Oort Cloud has not been detected or seen; it is another theory, just a hypothetical cloud to try and fit in with the mold of an old universe.

6. The earth's magnetic field decays by approximately 5% every century. This means that a mere 10,000 years ago, the earth's magnetic field would have been so strong that the heat it produced would have made life on earth impossible.
No doubt taken from Barnes's magnetic field argument of 1973. The decay rate he stated has been debunked and shown to be flawed.

How has it been debunked?

7. Fossilized dinosaur bones: these bones have been found, and it is impossible for them to have lasted for millions of years.
Why not? They have.
The evidence available suggests an asteroid hit the earth approx 65 million years ago, leading to a catastrophic global event. There is a layer of iridium in the earth's stratigraphy that supports this theory.

Speaking of clutching at straws: "Why not? They have." This goes against what old earth scientists have been telling us for years! Blood cells decay at a much faster rate than the rate at which bones fossilize. How then can you have a fossilized dinosaur bone which contains blood cells?

If we are talking about debunking theories or practices, radiocarbon dating techniques have terrible flaws and rely on many assumptions. Therefore, how can you be sure that your 65 million years is accurate?

8. Salt is added every day to the Dead Sea by inflows. Since it has no outlet, the salt content continues to grow. The amount of salt contained within it is not enough to support billions of years.
The Dead Sea didn't spring into existence billions of years ago. It's a result of millions of years of constant change on the earth by volcanic, tectonic and atmospheric activity. The Dead Sea is a baby compared to the age of the earth.

I would have thought that you would line up the forming of the seas as we know them now with the catastrophic global event that wiped out the dinosaurs. If not that, then what are you basing your idea on that the Dead Sea is a baby compared to the age of the earth? Are we talking thousands of years, hundreds of thousands, millions or perhaps billions?

9. The earth's population doubles every 50 years (approx). It would take about 4,000 years to reach the number of people that are on earth today (which lines up nicely with the worldwide flood of Noah's day). If we use this figure over millions of years, the earth could not contain the number of people.
That also matches the evolution model. The expansion in the earth's population is also linked to the expansion of civilisation... not just the existence of humans and their descendants.

Could you expand on which evolution model you're referring to?

10. Spiral galaxies appear this way due to their 'rotation'. This rotation would eventually cause them to straighten out, i.e. lose their spiral. There should be no spiral galaxies if the universe was actually billions of years old.
That again is a straw man. The big bang theory doesn't suggest spiral galaxies popping into existence at the moment of the big bang. They are formed over many millions of years.

Why not? The big bang suggests that everything else popped into existence at the moment of the big bang. If this is not the case, then how did they form?

The earth, the universe and everything in it was brought about in creation week. It was a divine event brought about by a supernatural creator.
No it wasn't ( that which can be asserted without evidence can also be dismissed without evidence )
We have just been discussing a page full of evidence!

And faith.....
Would you build an electronic project based on faith? Would you cross the road by faith?
But you yourself are obviously a man of great faith. You believe that the universe and all it contains was brought about by a supposed big bang. To put it lightly: 'Nothing became something, and the something exploded.'

Where did this matter come from in the first place? Doesn't the big bang go against the law of conservation of mass and energy?

If you are dismissing faith, then you must have proof of the big bang. You obviously weren't there when the supposed big bang took place, so it would stand to reason that you can replicate the big bang; after all, we are dismissing faith here.

If I am sick I see a doctor; if I have trouble seeing I go to an optician, etc. Faith would not heal me or make me see. Rather, countless selfless individuals who over thousands of years have devoted their lives to bettering mankind.
Yes indeed! Isn't it interesting how even though we apparently all stemmed from a common singularity we are all unique and have our own special gifts and talents? If we look to God's word though, we find that we all have been given these unique gifts and talents - some to be doctors, some to be opticians, some to make super pong tables and some to be astronauts!
But back on topic, isn't there an underlying reason that you go to a doctor? You go specifically to a doctor because you have faith in him. If you didn't have faith in him and all his years of training, then you would just go to anyone, wouldn't you?

It's just not the case at all. For a start, evolution doesn't need a set of ready-to-be-assembled parts lying around. It's a process beginning with the smallest building blocks at the chemical level and taking millions and millions of years to progress.

Fair enough. Let's walk through this one step at a time, starting from the beginning: how did the very first building block get here?

Also, a 747 (or an LED pong table) isn't carrying around obsolete parts of earlier, less successful aircraft in its frame like we are.

Could you list these supposed obsolete parts and explain why they are not required? (I think you'll find that every part of our body plays its own important role.)
You say that you have faith in fellow humans. Why is that? If we are just a result of random chemical reactions then why do you trust in them?
On that note, why does anyone have morals? Why do we have laws and rules? If we are the by-product of natural selection, in that it is survival of the fittest, who is to say that I can't go out and kill someone? After all, this is how we supposedly came to be!

Do you feel sorrow when a family member or close friend dies? I am guessing that you would, but hold on a second: why on earth would you get sad if this is simply what you are arguing for, in motion? To expand: if we are brought about by the strongest cells living on and the weaker ones dying off, isn't it good that your family member or friend has died, because it means that the strong have survived and the weak are now dead? You should be sitting there giving high fives to everyone, shouting "Way to go, natural selection!"

And finally, why on earth would scientists use evidence from the past to predict the future? If the universe came about by disorder and random chemical reactions, then how on earth could we use this information to reliably predict the future? Uniformity does not make any sense in a universe created by random chance and disorder.

Of course this is not the case; we find that the universe's history is very much ordered because God designed it that way.
|
Cancer of the cervix (SIR-vix) is one of the most common cancers in women. There seems to be a connection between cervical cancer and sexual activity at an early age, especially when multiple partners are involved. Cervical cancer grows without symptoms, which is why a yearly Pap smear is so important. A Pap smear can detect the presence of cancer cells at an early stage. When precancerous cells are found, usually called dysplasia (dis-PLAY-zha), they can be removed in the doctor's office using various procedures that burn or freeze the cells off the cervix. If the cancer has advanced, the recommended treatment usually includes a combination of chemotherapy, radiation and surgery, which will prevent a woman from bearing children. For more information about cervical cancer, contact your healthcare provider.
|
Race for survival
On the brink of extinction, Honu'ea struggle to the sea
October 23, 2008

A full moon floods the wide swath of sand that conceals Orion's nest. Big Beach is lit up so bright we can see where the first ones emerged, the football-sized divot on a small mound of sand cordoned off with yellow caution tape.
This is the third and final full moon to hit the still-gravid mound.
Sheryl King, a biologist with the Hawaii Wildlife Fund, sits at the head of a circle consisting of a dozen or so of us. She explains what we are to do should more baby honu'ea—hawksbill turtles—dig their way out on one of our shifts: make sure they head in the right direction—toward the sea. Keep cats, mongooses, and crabs away. If one flips over in a footprint, push up sand beneath it so it can right itself, but don't ever touch a hatchling.
We determine who stakes out when, then hit the hay, or rather the sand.
Hawksbill nests typically gestate for around 60 days, King said, but she adjusts a nest's "due date" according to various factors, among them temperature and shade.
King spotted Orion, the mother, depositing this particular clutch around 64 days prior to the first hatchlings' emergence. She was watching for nesting hawksbills as part of the dawn patrol, a U.S. Fish and Wildlife Service effort to spot nesting females as part of the Honu'ea Recovery Project. She estimated the nest would begin to hatch on October 11. She was only 2 days off.
The project takes place with help from several entities, including FWS, the Hawaii Department of Land and Natural Resources and the National Oceanic and Atmospheric Administration. The aim is to get the honu'ea population up to a more stable number.
Orion herself likely hatched very close to the spot where she dropped off her most recent batch.
"They tend to return to their natal beach," said HWF co-founder Hannah Bernard. "We don't really understand how they find their way."
Yet Orion doesn't stick around for very long after nesting. According to King, she spends most of her days off the coast of Oahu and comes to the vicinity of Big Beach every three to four years just to nest.
"I first tracked her, and named her, in 2001," King said. "We've tracked her with satellite transmitters so we have a good handle on her movements."
This is Orion's third or fourth nest this season. Two other nests were laid on island this year by an as yet unidentified female, which Bernard says is a good thing—one more nesting female adding to the species' extremely small gene pool.
This is one of only ten or so nesting areas archipelago-wide. There are three on Maui. Other sites include Kameahame Beach on the Big Island and a black sand beach at the mouth of Moloka'i's Halawa River. Ninety percent of honu'ea nesting occurs on the Ka'u Coast of the Big Island.
Nests contain an average of 140 eggs. But while a single hawksbill may lay nearly a thousand eggs in a given year, Hawaii's honu'ea aren't exactly thriving. King said that they have a one in 10,000 chance of making it to adulthood. Volunteers stake out the nest for 24 hours a day as the due date approaches to help ensure the hatchlings' instinctual seaward striving goes without predatory incident.
HWF volunteer coordinator Angie Hofmann compares the hatching of a sea turtle nest to childbirth. Everyone was antsy in the days leading up to the hatching. A handful of volunteers parked nest-side in beach chairs day and night, eyes locked on the mound for even the tiniest movement. One volunteer called it a "watched pot."
Only this one boils.
The first batch emerged at around 5am on Monday, October 13. Forty-eight hatchlings made their way to the water that morning, but Orion's nest was still far from empty.
Glimpsing these tiny hatchlings, bellies full of yolk, as they march toward the sea is an extraordinary sight on its own, but there is a particular sense of urgency for the little ones whose prolific mama chose to deposit them in the shade of a keawe tree at Big Beach.
Honu'ea are not the enormous green guys that bob up beside you when you're snorkeling at Black Rock or Molokini. Honu'ea are smaller—they grow to be up to 270 pounds, whereas the greens round out at 400. Honu'ea have a beak rather than a rounded snout—hence the Anglo name, hawksbill.
Most importantly, honu'ea are endangered under the U.S. Endangered Species Act and most people you ask will say they're critically endangered; greens are not.
Though their plight is severe and stems from the same sources, green sea turtles are listed only as threatened, which means their numbers are much higher than those of the honu'ea.
Statewide, according to King and Bernard, there are fewer than 100 nesting female honu'ea. Fewer than ten of these will nest throughout the isles in any given year. Only five or six total dig their nests on Maui's coastline.
"That's critically low," Bernard said, adding that the entire Hawaii hawksbill population is extremely vulnerable. "The greater your numbers, the greater your resilience."
They cite anthropogenic—human—causes for the species' alarmingly low numbers: runoff, traffic, lights that disorient nesting turtles, introduced predators, habitat loss and more.
Hawksbills across the globe were once plundered for their shells, which were made into combs, jewelry and even guitar picks. In Japan, according to the 1999 Jay April documentary Red Turtle Rising, they were seen as a sign of longevity, and thus stuffed and hung on the walls in many homes. In Hawaii their shells were used to make dinnerware, jewelry and medicine, though a kapu (taboo) barred honu'ea meat from being consumed (they dine primarily on poisonous sponges, which makes their meat toxic). The tortoiseshell pattern that may or may not constitute your sunglass frames was inspired by the hawksbill. In 1973 real tortoiseshell was banned worldwide under the Convention on International Trade in Endangered Species (CITES).
|Researchers have tracked Orion's (the mama turtle) route and found she likes to hang out on Oahu, but comes to Maui to nest.|
It may be illegal to mess with them these days, but they're not exactly bouncing back.
That's why the 140 or so hatchlings here at Big Beach, barely larger than your big toe, need to make it to the ocean.
So far the turnout has been outstanding. The first night saw 48 turtles scamper into the tide. The next night more than 100 came out. Tonight we'll see the stragglers to the shore, if there are any.
The next day King will excavate the nest carefully with her hands for any that didn't make it out, dead or alive. Live hatchlings will be placed in the water after dusk. Eggshells will be counted and unhatched eggs will be sent to a NOAA lab in Honolulu for DNA testing.
My 1-to-2am shift comes and goes without a peep. I've been instructed to shine a red flashlight on the nest every few minutes, but the mound is frozen.
I fall asleep after my shift with few expectations.
At some bleary hour a voice startles me awake.
"There's a turtle!" King says as she passes my tent. "A turtle just hatched!"
It's barely a quarter past five in the morning. Volunteers climb out of sleeping bags and tents and flood the area around the nest. One hatchling moves slowly toward the sea in the moonlight, almost a silhouette at this dark hour. Its tracks look like tire tread from a mountain bike. We inch along behind it, awestruck.
After 20 minutes the turtle is at the edge of the sea. Although its flippers have just had a killer workout, the hatchling takes to the waves effortlessly after the lapping water swallows it whole.
Any number of things could have thrown off the hatchling and its siblings. Had this been a beach up the road, they may have gone toward bright lights. They may have gone toward South Kihei Road and gotten smashed, which has happened before with nesting mothers: once in 1993 and once in 1996, thanks to speeding motorists. A feral cat (of which there are many) could have gotten to them. King says that even ghost crabs prey on sea-bound hatchlings, gouging out their eyes in a horrific display King herself has witnessed in the northwest Hawaiian Isles.
Hofmann said her major concern is the long-term impact of development on nesting. While Big Beach is a state park and thus can't be built upon, two proposed developments—Wailea 670 and the expansion of Makena Resort—could increase the volume of beachgoers that may, inadvertently or otherwise, disturb the nests.
"If they both get their way there'd be another city down here," she said.
The proposed development sites may be pretty far mauka of where the turtles nest, but storm runoff has an obvious impact on their ability to successfully hatch and make it to the sea, as does lighting.
Hofmann said that, given how close honu'ea are to extinction, developers should reconsider how they determine appropriateness when choosing a building site.
"The turtles have chosen this as their nesting place," she said.
While there are several well-documented hawksbill nesting sites statewide, there is no bureaucratic mechanism that can designate them as a critical habitat.
Bernard said that the only defense for sites with impending developments so far has been a lighting ordinance that the county adopted in 2007, which she said was watered-down.
"It's not the bill that we hoped for," she said, "but it's a start."
Just after six in the morning the camp gets jostled awake once again. Three more babies have come out, a volunteer says. I hop to my feet. The last ones to emerge on their own are making it to sea in the new daylight, each on a separate trajectory, seemingly unaware of one another but probably very aware of us.
We scare away the looming ghost crabs. We clear the path of debris, as the turtles' tiny flippers hoist them along the final stretch of sand.
It takes one honu'ea a few tries to take to the water; the oncoming surf pushes it off course. The other two swim off almost instantly.
Nobody knows where they're headed. They return to near shore areas after about five to 10 years, but the time in between is known as the lost years. One theory is that they attach themselves to little clumps of seaweed, floating wherever the current takes them. Those ready to nest, of course, eventually make it back to the beach of their birth using some mysterious sense that we don't yet understand. The hope is they'll stick around long enough for us to find out. MTW
For more information on how you can help hawksbills visit wildhawaii.org. To find out more about the role of honu'ea in Hawaiian history and culture check out the award-winning 1999 documentary Red Turtle Rising, directed by Jay April. The film is available for free on the Web at filmmaui.com and through the World Turtle Trust.
|Entertainment and lifestyle news for Maui, Hawaii and the surrounding islands. Maui Time Weekly is Maui's only independent and locally owned newspaper.
|
Lier Psychiatric Hospital (Lier Psykiatriske Sykehus or Lier Asyl in Norwegian) has a long history as an institution. The sickest people in society were stowed away here, going from being people to being test subjects in the pharmaceutical industry's search for new and better drugs. The massive buildings hold the memory of a grim chapter in Norwegian psychiatric history that the authorities would rather forget.
UPDATE: When you have read this post you might be interested in reading my report one year later!
The buildings welcome you
Many of the patients never came out alive, and many died as a result of the reprehensible treatment. It was said that the treatment was carried out voluntarily, but in reality the patients had no self-determination and no opportunity to make their own decisions.
Must be creepy at night
There is little available information about the former activities at Lier Hospital. On this page (in Norwegian) you can read more about the experiments that were carried out at this Norwegian mental hospital in the postwar period from 1945 to 1975. They involved the use of LSD, electroshock, brain research funded by the U.S. Department of Defense and drug research sponsored by major pharmaceutical companies. It is perhaps not surprising that they try to forget this place and the events that took place here.
Chair in room
One of many rooms
Things left behind, including a bath tub
Lobotomies were also performed here. That's a procedure that involves driving a needle-like object through the eye socket and into the patient's head to cut the connection between the anterior brain lobes and the rest of the brain. Lobotomy was primarily used to treat schizophrenia, but also as a soothing treatment for other disorders. The patients who survived were often quiet, but generally this surgery made the patients worse. Today lobotomy is considered barbaric and it is no longer practiced in Norway.
From a window
Lier Psychiatric Hospital, or Lier Asylum as it was originally called, was built in 1926 and had room for nearly 700 patients at its peak. In 1986, many of the buildings were closed and abandoned, and they still stand empty to this day. Some of the buildings remain in operation for psychiatric patients.
Exterior of the A building
Disinfection bath tub
These photos are from my visit there as a curious photographer. The place was clearly ravaged by the youths, the homeless and the drug addicts who have infiltrated the buildings during their 23 years of abandonment. On net forums, people have written at length about ghost stories and the creepy atmosphere. I was curious how I would experience the place myself, but I found it pretty quiet and peaceful. I went there during the day, though; I understand that at night one would have to look far to find a more sinister place. The floors were covered with broken glass and other debris.
View through window
A pile with electrical boxes or something
These days, money has been allocated to demolish the buildings; 15 million NOK is the price. Neighbors cheer, but historians, photographers and ghost-hunting kids think it's sad. This is the most visited, and just about the only and largest, urban exploration site in Norway.
I have read and recommend Ingvar Ambjørnsen's first novel, "23-Salen", which is about the year he worked as a nurse at Lier Psychiatric Hospital. The book provides insight into what life was like for patients and nurses in one of the worst wards.
The famous motorized wheelchair
Doorways and peeling paint
Top floor, view to the roof and empty windows
Disused stairs outside
|
Three-dimensional printing is being used to make metal parts for aircraft and space vehicles, as well as industrial uses. Now NASA is building engine parts with this technique for its next-generation heavy-lift rocket.
The agency says that its Space Launch System (SLS) will deliver new abilities for science and human exploration outside Earth's orbit by carrying the Orion Multi-Purpose Crew vehicle, plus cargo, equipment, and instruments for science experiments. It will also supply backup transportation to the International Space Station, and it will even go to Mars.
NASA is using 3D printing to build engine parts for its next-generation Space Launch System. Shown here is the first test piece produced on the M2 Cusing Machine at the Marshall Space Flight Center.
(Source: NASA Marshall Space Flight Center/Andy Hardin)
NASA's Marshall Space Flight Center is using a selective laser melting (SLM) process to produce intricate metal parts for the SLS rocket engines with powdered metals and the M2 Cusing machine, built by Concept Laser of Germany. NASA expects to save millions in manufacturing costs and reduce manufacturing time. SLM, a version of selective laser sintering, is known for its ability to create metal parts with complex geometries and precise mechanical properties.
The SLS will weigh 5.5 million pounds, stand 321 feet tall, and provide 8.4 million pounds of thrust at liftoff. Its propulsion system will use liquid hydrogen and liquid oxygen. Its first mission will launch Orion without a crew in 2017; the second will launch Orion with up to four astronauts in 2021. NASA's goal is to use SLM to manufacture parts that will be used on the first mission.
The rocket's development and operations costs will be reduced using tooling and manufacturing technology from programs such as the space shuttle. For example, the J-2X engine, an advanced version of J-2 Saturn engines, will be used as the SLS upper stage engine. Some SLM-produced engine parts will be structurally tested this year and used in J-2X hot-fire tests.
In a NASA video, Andy Hardin, engine integration hardware lead for the Marshall Space Flight Center SLS engines office, discusses the initial testing and building stages:
We do a lot of engineering builds first to make sure we have the process [worked] out. There's always weld problems that you have to deal with, and there's going to be problems with this that we will have to work out, too. But this has the potential to eliminate a lot of those problems, and it will have the potential to reduce the cost by as much as half in some cases on a lot of parts.
Since final parts won't be welded, they are structurally stronger and more reliable, which also makes for a safer vehicle.
Ken Cooper, advanced manufacturing team lead at the Marshall Space Flight Center, says in the video that the technique is especially useful for making very complex shapes that can't be built in other ways, or for simplifying the building of complex shapes. But geometry is not the deciding factor; whether the machine can produce a part is determined by the part's size.
|