Slide and Roll
Through our joints we spread information all over our body. The movement of bones in the human body occurs as a result of a combination of slides and rolls in the joints. The joints can get sticky and stuck. We benefit so much from warming and oiling these communication channels.
- Slide: a translatory movement, the sliding of one joint surface over another.
- Roll: a rotary movement, one bone rolling on another.
In class today we will pay extra loving attention to the movements of the bones. We will practice the proper body mechanics for each joint. Much love, Marielle
Zoom Yoga Class, 5/25, 2PM and 7PM, Central Time
Join URL: https://zoom.us/j/522828578
Meeting ID: 522-828-578
Contribution: 10 dollars
Coffee cultivation and trade began in the Arabian Peninsula. Until the 15th century, coffee was grown in present-day Yemen, and by the 16th century it was known in Persia, Egypt, Syria and Turkey. European travelers returned from the east with stories about the unusual black drink, “Arab wine”, as it was also called. At first, it was viewed with suspicion, and the clergy in Venice had to ask the consent of Pope Clement VIII to use it.
Who discovered coffee?
The best-known legend about coffee says it was discovered by chance by an Ethiopian pastor named Kaldi, in around 500 AD. He noticed that after eating the fruit of a certain bush, his goats became so energetic that they couldn’t sleep at night. Kaldi told this to the abbot of the local monastery, who decided that these “berries” were unclean and had them burned. The berries released a pleasant and intense aroma that surprised the abbot. He drank an infusion of the roasted beans and noticed that he became much more alert during the long hours of evening prayer. The news spread quickly, and coffee began its journey around the world.
The journey of coffee
In 1645, the first coffee shop opened in Venice. At around the same time, coffee reached England, Austria, France, Germany, and the Netherlands. Beginning around 1700, European missionaries, pilgrims, merchants, and sailors brought coffee with them to the colonies and to the Americas. By the end of the 18th century, coffee had become one of the most profitable crops in the world. To this day coffee remains, after oil, the most sought-after commodity in the world, and the second-most traded good on commodity exchanges. Coffee is grown in over 80 countries around the world, with the largest producers being Brazil, Vietnam, Colombia, Indonesia, and Ethiopia.
What is coffee?
In 1737, the Swedish botanist Linnaeus included the plant in the genus Coffea, part of the Rubiaceae family. Although there are many species belonging to the genus Coffea, from the consumer’s point of view only two species are important: Coffea arabica and Coffea canephora (or Robusta). Coffee is an evergreen shrub that grows in the equatorial areas between the two tropics, Cancer and Capricorn. The coffee bean we know is the seed of a red or yellowish fruit called a cherry, in which there are two beans placed face to face on the flat side. They are covered by a thin and delicate shell (called silverskin), which is covered by a hard membrane (called parchment), which in turn is surrounded by a sweet gelatinous layer. Today, 2/3 of the world’s coffee production is Arabica, with the largest producers being Brazil and Colombia. It is more expensive and finer than Robusta coffee.
- It grows at altitudes between 800 and 2,000 meters, in fertile volcanic soils
- 15–24 °C temperatures; not frost-resistant
- Caffeine content averages 1.2%
- The green bean is less bulging and has an irregular crease, which in washed Arabica turns golden when roasted
- It has a bluish-green color and an oval, elongated shape
- The fruit reaches maturity in 6–8 months
- 44 chromosomes
- Pleasant bitter taste, distinctive acidity, and aromas ranging from chocolate to fruity-floral
Cultivation of Robusta started in West Africa and Southeast Asia. This strain is more resistant to diseases, has higher productivity, and a lower price than Arabica.
- It grows from sea level to 800 meters, in warm areas with less fertile soil
- 24–30 °C temperatures
- Caffeine content averages 2.3%
- The bean is darker and rounder, with a straight crease, and turns dark when roasted
- It has an uneven, greenish-brown color
- The fruit reaches maturity in 9–11 months
- 22 chromosomes
- Raw bitter taste; intense, full body; spicy, earthy, woody, and chocolate aromas
The 3 steps of processing
Manual harvesting – specific to Arabica beans. The fruits can be found at different stages of development on the same branch at the same time, from flower to ripe cherry. Harvesters choose only the properly ripe cherries, because overripe ones will have a fermented taste and unripe ones will have an unpleasant, astringent green-pea taste. A single defective bean can spoil the taste of a cup of coffee. Manual harvesting is also practiced on farms in steep, mountainous areas, where access for harvesting machines is impossible. Harvesting by shaking – a typical method for Robusta, and preferred in Brazil for Arabica varieties as well. It is possible in large, flat areas where the trees are grown in rows, and it is a much faster and cheaper method of collection. The machines harvest all the cherries, both ripe and unripe, along with branches and leaves. The harvest then goes through a process of selection and removal of foreign bodies. A picker collects between 50 and 100 kg of cherries per day. Of these, only 20% are usable beans (seeds), the rest being shells and husks. Therefore, a harvester collects on average 10–20 kg per day, or a 60 kg bag per week. Harvesting is done every 2 weeks. It is estimated that the cost of harvesting represents half the annual cost of a coffee plantation. Cherry processing should begin as soon as possible after harvest, to prevent damage. Depending on the location and resources, coffee can be processed in two ways: the wet (washed) method or the dry (natural) method. The wet or washing method only applies to hand-picked cherries. After the red coating is mechanically removed, the beans in the parchment are placed in tanks of water to remove the pulp, producing a coffee with a higher aromatic profile. The beans in the parchment are dried in the sun on clean surfaces and turned daily for about 28 days, until their moisture content reaches 11%, after which they are ready for storage. The dry or natural method consists of drying the cherries in the sun for 3–4 weeks, after which mills are used to remove the dry skin and extract the green bean. It is a cheaper, simpler method, used for Robusta and, traditionally, in Brazil and Ethiopia.
Sorting, storage and export
The parchment on wet-processed beans is removed only right before export, for better storage and preservation of coffee quality. Green beans are sorted and classified according to size, shape, type (washed or mild Arabica, natural Arabica, Robusta, washed Robusta), number of imperfections, density, time of harvest, and taste characteristics. Defective beans are removed. In many countries, this process is done both mechanically and manually, ensuring that only the best quality coffee beans are exported. Generally, coffee is sold in 60 kg bags.
The final destination: the Cafea Fortuna manufacturing plant
The Cafea Fortuna brand has over 25 years of experience in coffee processing. Dedicated employees and innovative technology are the two most important components on which we have built our brand and on which we rely every day to ensure superior product quality.
The production process is divided into several stages: tasting and selection of green coffee in the laboratory, transport to the factory, storage, roasting, blending according to our recipe, grinding, packaging, and storage of the finished product. The whole process of transport, roasting and primary packaging is automated, supervised by our specialized personnel. Cafea Fortuna specialists have the great responsibility of choosing, defining and permanently checking the quality of the purchased green coffee. The primary criterion that guides and differentiates Cafea Fortuna is the taste of the coffee. Incoming batches are analyzed both physically and organoleptically. The tasting process focuses on the impressions at the beginning and the end, the flavor profile, the acidity level, the body, and the balance, and it identifies and rejects imperfect beans. The journey to the Cafea Fortuna factory can take up to three months. To maintain the quality of the beans, we transport them in double bags placed in containers lined with moisture-absorbing bags. Even so, the coffee must be tested on arrival, and several more times before it goes into the blend. Because the taste of coffee is so precious, we take care of it throughout production. Roasting coffee is, at the same time, an art and a science. Green coffee has no coffee flavor at all, but during roasting it develops between 800 and 1,000 different aroma compounds, changes color, loses 10–20% of its weight, and increases in volume. Roasting is a complex chemical process that determines the coffee's aroma, body, and level of acidity. Depending on the roasting parameters, these components can be created, balanced, or destroyed. Cafea Fortuna pays special attention to this process. Through the experience we have gained, we developed our own roasting recipe that gives each blend its unique character. To highlight all the flavors, we chose to roast the coffee lightly. If roasted too lightly, the aroma would remain undeveloped and raw; if roasted more intensely, the fruity-floral aromas would disappear and the coffee would become more bitter. After removal from the roaster, the coffee is cooled and stored in the roasted-coffee silo. During roasting, the volume of the coffee bean increases by up to 60%; it expands due to the pressure of the heated gases that form inside the bean. The blending of coffee origins is essential in creating a quality taste. For example, Arabica blends are sweeter and more fragrant, while Robusta blends are more full-bodied and intense. After roasting, the various origins of coffee are mixed in controlled quantities, according to the standard recipe kept carefully in the Cafea Fortuna laboratory. Coffees of different origins have distinct flavors and characteristics. In addition, as with other crops (grapes, for example), the taste of cherries of the same variety may vary from one harvest to another, depending on climate conditions. As a result, our great challenge is to ensure the same taste and the same quality of products year in, year out. In order to keep the aroma profile intact for as long as possible, the coffee is packaged in a low-oxygen modified atmosphere. The coffee beans are packed immediately after roasting, in special bags with one-way valves that allow the roasting gases to escape without letting oxygen in. For the ground version, the coffee beans are transported to the mill.
Here it is ground to an optimal particle size: not too fine, so that it remains filter-compatible, but not too coarse, so that all the flavors are released from the beans. After grinding, the coffee is held in special bunkers just long enough to release the gases that would otherwise inflate the package, but no longer, so that it does not lose its flavor. Each step is carefully monitored by specialists assisted by modern equipment. The ground coffee is packed in 100 g sachets or vacuum-packed in 250 g and 500 g packs. Cafea Fortuna products are then ready for delivery across the country and beyond.
Background: Checking diagnostic and management decisions can help reduce medical error; however, little literature explores how this is best taught. Aims: To provide practical advice to direct teaching practices. Methods: The authors conducted a literature review using Medline and PsycINFO with the search terms "check" or "checklist" and "medical error" or "diagnostic error", supplemented by a manual search of the cited literature. Conclusion: Twelve tips for teaching how to check diagnostic and management decisions are presented. Publication status: Published – Feb 2014
A computer simulation that relates muscle activation patterns to harmful pressure on the knee helps participants adopt knee-protective strategies as they walk. July 7, 2022 - By Hadley Leggett Researchers at Stanford Medicine have discovered how to reduce force on the knee by teaching study participants to employ different muscles as they walk. Using results from a detailed computer simulation, called a “digital human,” participants in a small study were able to reduce the load on their knees by an average of 12%, a benefit equivalent to a person losing about 20% of their total body weight. The lighter load may alleviate pain from osteoarthritis or prevent joint injuries. “We now have sufficiently realistic mathematical and computational models of human movement that we can change how the brain excites muscles in a simulation, and see how that affects joint loads,” said Scott Delp, PhD, professor of bioengineering, director of the Wu Tsai Human Performance Alliance and senior author of the study published July 7 in Scientific Reports. “For instance, if your knee hurts, how can we change the forces on your knee so you feel better?” he said. Knee-protective walking strategies Knee pain is rampant: Almost a quarter of Americans age 45 and older suffer from osteoarthritis of the knee, and knee pain accounts for nearly 4 million visits to primary care physicians each year. “We’ve known for a while that the majority of force that compresses the knee is actually caused by muscle forces, from muscles that cross the knee and generate force when they contract,” said Stanford research engineer Scott Uhlrich, PhD, the lead author on the paper. As a person walks, the force pushing their bones together in the knee is equivalent to two to four times their body weight; during running or playing sports, force on the knee is even higher. This repetitive compression can wear away at the cartilage, causing osteoarthritis. Traditional treatments for osteoarthritis include weight loss, knee bracing and joint replacement, but none of these treatments target the muscle forces, Uhlrich said. Unlike a robot, which is built with a single motor to power each joint, humans have multiple muscles crossing their knees, ankles and hips. “We have way more muscles than we need to walk,” Uhlrich said, “which gives our brain a lot of options for which muscles to use.” For example, two different calf muscles — the gastrocnemius and the soleus — can do the same action of pushing the foot down, but only the gastrocnemius crosses the knee and creates a compression force. “People don’t realize how strong the forces from these muscles are,” Delp said. “For instance, if you hooked your Achilles tendon [which attaches the gastrocnemius and the soleus to the heel] to the back of a small car, exciting those muscles could actually lift the car.” All that force can be damaging to the knee. But with the digital human, the researchers were able to find muscle coordination strategies that generated less force on the knee joint. They discovered that by increasing activation of the soleus muscle and decreasing activation of the gastrocnemius muscle, they could drastically reduce force on the knee without changing a person’s gait. Retraining your muscles through biofeedback What the researchers didn’t know was whether real people could employ these muscle coordination strategies during a complex task like walking. 
“You might not think of walking as a complicated activity,” Delp said, “but there’s a lot going on under the hood.” Each time the brain tells a muscle to move, it sends a small electrical signal that can be measured in the muscle through electromyography (EMG), similar to the way an electrocardiogram measures electrical activity in the heart. Previous studies showed that giving participants a visual representation of their muscle activity — EMG biofeedback — could help them relax those muscles, but only while sitting still and doing a simple task, such as moving a finger up and down. No one had tested it for walking. Beginning with himself as the first test subject, Uhlrich attached EMG electrodes to his leg muscles while walking on a treadmill so he could visualize the muscles being used. “At first I tried it with just the raw EMG data,” he said, “but the patterns were way too complicated. I realized we needed to simplify the feedback.” Changing muscle coordination reduces force on the knee After much experimentation, the researchers settled on a single bar graph that would teach people to reduce activation of their gastrocnemius muscle, while increasing the use of their soleus muscle. “As soon as I was able to simplify the signal and do it successfully myself, I got Dr. Delp onto the treadmill,” Uhlrich said. “When he did it pretty quickly as well, we got excited and brought in more people for the study.” With each step, participants tried to shrink the bar on a graph showing the ratio of their gastrocnemius-to-soleus muscle activation. After just five minutes of this biofeedback, all 10 participants in the study were able to significantly reduce the use of their gastrocnemius compared with their soleus while walking. When researchers removed the biofeedback and tested whether participants could retain the new muscle coordination strategy after six minutes of walking without biofeedback, 8 out of 10 participants were able to do so, resulting in decreased load on the knee in six of these cases. “We were amazed,” Delp said. “We really didn’t know if we could train people to do this, which is what made the discovery so exciting.” A paradigm shift for rehabilitation and injury prevention Because the initial study included only healthy volunteers, the next step will be to test the muscle coordination strategies in patients with osteoarthritis to determine whether the reduced force on the knee translates into a reduction in pain and other symptoms. Another question: How long after a biofeedback session can the participants maintain the new pattern of muscle activation? Making the changes permanent would likely require more than one or two sessions, Delp said. “To fully incorporate the new coordination pattern into your daily life, you may need a wearable device that gives you feedback for a month or so,” he said. The researchers have recently applied for a patent and are planning to coordinate with a bioengineering company to build a wearable feedback system. In conjunction with personalized digital human simulations, wearable biofeedback could revolutionize not just treatment for osteoarthritis, but all kinds of joint pain, including overuse injuries in athletes. “There are a lot of cases where if we were to activate our muscles differently, we could reduce stress on important structures,” Uhlrich said. For example, baseball pitchers are prone to ligament tears in their elbows, which can require career-ending surgery. 
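To make the biofeedback idea concrete, here is a minimal Python sketch of how a per-step gastrocnemius-to-soleus activation ratio could be computed from EMG and summarized as a single bar value. The signal-processing steps (rectification, low-pass envelope) and all numbers are illustrative assumptions, not the pipeline used in the Stanford study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # EMG sampling rate in Hz (assumed)

def emg_envelope(raw, fs=FS, cutoff=6.0):
    """Rectify the EMG and smooth it with a low-pass Butterworth filter."""
    rectified = np.abs(raw - np.mean(raw))       # remove offset, full-wave rectify
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)             # zero-phase smoothed envelope

def step_activation_ratio(gastroc_raw, soleus_raw):
    """Ratio of mean gastrocnemius to mean soleus envelope over one step."""
    gastroc = emg_envelope(gastroc_raw)
    soleus = emg_envelope(soleus_raw)
    return np.mean(gastroc) / (np.mean(soleus) + 1e-9)

# Toy example: one synthetic one-second "step" of noisy EMG for each muscle.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, FS)
burst = np.exp(-((t - 0.6) ** 2) / 0.01)         # activity burst late in stance
gastroc_raw = 0.8 * burst * rng.normal(size=FS)
soleus_raw = 1.2 * burst * rng.normal(size=FS)

ratio = step_activation_ratio(gastroc_raw, soleus_raw)
print(f"gastroc/soleus activation ratio this step: {ratio:.2f}")
# In a biofeedback display, the walker tries to shrink this bar step by step.
```

Whether a simple mean over the step or some richer summary is the right feedback signal is exactly the kind of simplification the researchers describe arriving at by trial and error.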
The researchers imagine a future in which baseball pitchers could come into a lab, have a personalized simulation of their elbow made, and use that “digital athlete” to identify muscle coordination strategies to protect their ligaments. Then they could go home with a wearable biofeedback device that would retrain their muscles to prevent injuries. “With the digital human, you can try anything,” Delp said. “And when you combine it with what we discovered in this paper — that you can teach people new strategies to coordinate their muscles — it opens so many doors.” Funding for this study was provided by the National Institutes of Health (grant EB027060), a fellowship from the National Science Foundation, and the Sang Samuel Wang Stanford Graduate Fellowship from the Stanford Office of the Vice Provost for Graduate Education. About Stanford Medicine Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.
How can actuaries help in human services?
A story about Jack
Jack is a young boy with two little brothers. His mum and dad were teenagers when he was born. Jack’s mum loves her boys and Jack’s dad has a job. Jack’s grandma helps daily. This young family doesn’t have much, but Jack and his brothers are happy and well cared for. Over the next few years, things change for this family. Jack’s dad loses his job after a round of cutbacks at work and starts going out drinking with his mates to cope. He is violent when he’s been drinking and often takes out his anger on Jack’s mum. Jack’s grandma dies, and the combination of grief and the loss of practical support leads to Jack’s mum becoming depressed. Jack’s teacher notices he is having trouble concentrating in class and often comes to school without lunch. Concerned neighbours have called the police several times after hearing loud arguments between Jack’s parents. Jack’s family is evicted from their home because they are so overdue on the rent. This is a family in crisis, with lots of issues, such as alcohol abuse, mental health, and lack of family support. Things weren’t always bad, but a few setbacks, such as losing a job and a key support at home, tipped this family from frugally surviving into crisis. While Jack’s family is fictitious, this story is not uncommon. There are many families in similarly complex situations. But who is best placed to assist this family? Can actuaries help? I’m not suggesting that all actuaries would make good front-line workers (although some of them might). However, I strongly believe that actuaries have valuable skills, such as data analysis, problem-solving and scenario modelling, that can help in human services. At Guardian Actuarial, we specialise in bringing our actuarial skills to help multi-disciplinary teams supporting families like Jack’s.
Our society has “wicked” problems
You may be familiar with the concept of “wicked problems”. Here’s one definition: wicked problems are hard to define and solve, and they often involve interconnected issues. You may be familiar with some of the stats on these issues:
- 45% of Australians will experience a mental health condition in their lifetime.
- On any given night, 1 in 200 Aussies are homeless, with more than a quarter of those being children.
- Reported rates of child abuse, and the numbers of children living away from their families in “out of home care”, have been increasing over time.
These are all issues affecting Jack’s family. Mental health, homelessness and child abuse are complex and often interconnected. We can’t solve them in isolation, and it’s hard to know which issues come first, which is what makes these problems wicked in nature.
We need a multi-disciplinary response
The NSW government has developed an outcomes framework, based on research and evidence, that reflects this complexity. The diagram below shows the outcomes framework as it applies to social housing: But if these issues are all interrelated, who is going to solve them? Which agencies and professions are best placed to help? The answer is not easy, but we will require a range of professionals working together to solve these challenges. We need researchers to gather the best evidence, policy makers to translate the research into policy, practitioners who are adequately resourced and skilled to apply up-to-date policy effectively, and actuaries, statisticians and economists to undertake modelling to inform the planning and service delivery required to support families like Jack’s.
Recently, we were involved in a project where we applied this outcomes framework. Representatives from child protection, health, education and police were working together on joint initiatives to tackle wicked problems for local families. We worked in a multidisciplinary team to help identify appropriate indicators and counting rules to track whether the joint efforts were delivering improvements as expected. Multi-disciplinary teams of professionals are required to address our society’s wicked problems.
How can actuaries help?
So what do actuaries actually do? Actuaries use mathematical techniques to analyse data, perform calculations and provide advice based on predicted future scenarios. Actuaries are trained to give advice about what to do today, based on our professional view of future conditions. For some actuaries, that advice might be what price to charge for an insurance premium, but actuarial techniques can also be used in human services. In Jack’s case, it may be determined that Jack and his brothers are not safe at home and need to be placed in foster care. But what if Jack’s mum can get treatment for her mental health and Jack’s dad gets help finding a new job? Maybe Jack and his brothers could safely return home. What if this pattern happens again in the future? Even just from a child protection perspective, there are multiple pathways that children can follow, requiring fairly complex modelling to assess the future needs and resources of the system supporting families like Jack’s. This is another example of work where actuaries have helped. Government agencies need to be able to reliably model likely numbers of vulnerable children requiring different services, such as foster care, into the future to support their budget estimates. The skills and capacity required to undertake this complex modelling are not always available within government departments. Even if they are, oversight agencies will often seek independent teams to review and assess these models before they can be used. Not only are actuaries well placed to assist, many already are helping. The diagram below shows some examples of projects that actuaries have been involved in recently. Actuaries can, and want to, help tackle society’s wicked problems.
Obstacles for actuaries working in human services
Although actuaries are well placed to help, I think there are two main obstacles for actuaries wanting to work in human services: skill-set and brand.
Obstacle 1 – Skill-set
As actuaries, our formal education was focused on statistics, probability and economics, so our actuarial professional exams alone may not be enough for us to be effective in our work in human services. This might mean we need to undertake further education, or paid or voluntary work experience, to supplement our analytical skills and learn the language of human services. For example, I trained and served as a telephone crisis counsellor for Lifeline for several years. This experience not only allowed me to give back to my community by helping Aussies in need, but it gave me front-line experience in some of society’s complex problems, as well as an opportunity to learn the language of human services.
Obstacle 2 – Brand
While the actuarial professional brand is strong within insurance circles, many professionals working in human services have never met an actuary, nor do they know what an actuary can do.
Since actuaries need to work in multi-disciplinary teams to tackle some of these complex issues, it is critical that we build strong relationships with other human services professionals and clearly articulate how actuaries can help. It is not always helpful to lead with “Hi, I’m an actuary” and expect everyone to know what that means! Instead, we need to explain what we can do and how we can add value: “Hi, I’m an actuary, and I’m trained to give quality advice about what you might do today, based on our quantitative and qualitative prediction of likely future conditions.” As actuaries, it’s important to remember that we don’t automatically have a “seat at the table” when tackling society’s wicked problems. Recently we conducted a survey in which we asked non-actuaries about their perceptions and experiences of working with actuaries. Nearly two thirds of our survey respondents had never met an actuary before working with our team, although they all agreed that they would be prepared to work with actuaries in the future. Our society has wicked problems, requiring multi-disciplinary teams of professionals to help solve them. While there are obstacles we need to overcome, actuaries are keen and well placed to be part of the teams solving these problems. By thinking differently about what actuaries can do, there is an opportunity to use actuarial skills to bring greater rigour to the analysis and planning in human services. If we can overcome some key obstacles, actuaries can help tackle our society’s wicked problems. See the original article by Julia Lessing here.
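To give a flavour of the pathway modelling described earlier in this article, here is a minimal Python sketch of a cohort projection for children moving between home, foster care and restoration. The states, transition probabilities and starting numbers are invented for illustration only; a real child-protection model would be far richer and calibrated to agency data.

```python
import numpy as np

# Hypothetical annual transition probabilities between states
# (rows: from-state, columns: to-state); purely illustrative values.
states = ["at home with supports", "in foster care", "restored home"]
P = np.array([
    [0.85, 0.10, 0.05],   # at home with supports
    [0.00, 0.70, 0.30],   # in foster care
    [0.05, 0.05, 0.90],   # restored home
])

# Hypothetical starting cohort of 10,000 children known to services.
cohort = np.array([8000.0, 1500.0, 500.0])

for year in range(1, 6):
    cohort = cohort @ P   # project the cohort forward one year
    print(f"Year {year}: " + ", ".join(
        f"{name}: {n:,.0f}" for name, n in zip(states, cohort)))

# A budget estimate might then multiply the projected foster-care
# numbers by an assumed unit cost per placement.
```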
What does it mean to be on a 500-calorie-a-day diet for a month? It is as simple as it sounds: this diet means that the individual aims to consume only 500 calories daily for a period of one month. This amount is about a quarter of an adult’s recommended daily intake. The highest limit this kind of diet permits is 800 calories daily. Diets like this are called very-low-calorie diets, shortened to VLCDs. For several years now, doctors have been prescribing VLCDs as a treatment for some health conditions. For example, obese people who are unable to undergo bariatric surgery may be recommended to go on this 500-calorie daily diet. VLCDs may also be helpful before laparoscopic and bariatric surgery: fat loss can help reduce blood loss, complication risks, and operative time. A VLCD relies on meal replacements such as shakes, drinks, and food bars rather than actual meals. There are some things worth considering before going on this diet, especially when there is no doctor’s supervision. We will discuss these next.
500 Calories a Day for a Month: Considerations
Here are some of the things you should consider before going on this diet.
1. Chances of nutritional deficiencies
By eating 500 calories daily for a month, you might be at risk of developing certain deficiencies. This is especially true for specific groups, such as older adults, whose small intestines absorb nutrients less efficiently.
2. Gallstones may develop
A person on this diet may have a higher chance of developing gallstones. Gallstones form inside the gallbladder and can block the bile ducts, causing abdominal pain. The following also increase gallstone risk:
- An extended period of fasting
- Repeatedly gaining and losing weight
- Previous gallstones
- Rapid weight loss
Past research suggested that combining a VLCD with a higher intake of fat may prevent gallstone formation. Eating high-fiber food and decreasing the intake of refined sugars and carbohydrates may also help prevent gallstone formation.
3. Lack of healthy fats
Fat has the highest calorie content of the three macronutrients. Because of this, it might be difficult to consume enough fat on a 500-calorie daily diet, which means missing out on healthy fats. Unsaturated fats, like those found in avocado and salmon, benefit your body. A very low-fat diet will also raise your risk of deficiencies in fat-soluble vitamins such as vitamin E, and of poor antioxidant absorption.
4. You need doctor supervision
Limited food options and the risk of developing nutritional deficiencies make a 500-calorie daily diet potentially dangerous. You should be supervised by a dietitian and a doctor before going on this diet.
5. Replacement of meals is not a lasting solution
Sometimes people on this diet replace meals for a day or two. Although this may help, doing it for a long time can affect one’s health negatively. Minerals, phytochemicals, macronutrients, and vitamins all interact when a person eats whole foods, and artificial replacement foods cannot reproduce these important interactions.
6. Cost
On a 500-calorie diet, although you might be eating less, the diet can cost more than others. The amount you will spend on replacement foods can exceed what you would naturally spend on whole foods of the same quantity. Some programs will also suggest that you consult a doctor weekly.
7. Quick fix vs. lifestyle changes
Anyone trying a VLCD should also implement other weight-loss strategies, such as physical activity and nutrition counseling. A VLCD is not a sustainable diet, so it does not build good behaviors that will benefit your health; it is better to make small changes that you can easily maintain.
8. Muscle loss
Rapid weight loss can increase the risk of losing muscle rather than fat. A decrease in muscle mass can negatively affect one’s metabolism. This effect is undesirable because it reduces the body’s ability to burn calories and avoid injury. A sustainable approach to weight loss is to build lean muscle alongside a healthy diet.
9. Missing out
When you are on this diet you will likely miss out on some social events. You may not be able to eat out in restaurants, as they do not usually specify the number of calories in their meals. This may bring anxiety when eating out with friends and family.
10. Not suitable for all
A VLCD is not recommended for everyone who is overweight or obese, and people with the following conditions should not use the diet without appropriate supervision and approval from a doctor: type 1 diabetes, heart disease, gout, thyroid disease, gallstones, and kidney disease.
500 Calories a Day for a Month: You Still Need Nutrients
One main problem with a 500-calorie daily diet is that it does not limit the carbs and fats you consume. A cup of milk and a slice of chocolate cake contain about five hundred calories, but such a meal doesn’t give you the necessary nutrients; hence the importance of consciously making sure that your meals contain the required nutrients, like lean proteins, veggies, whole grains, and fruits. A 500-calories-a-day-for-a-month diet can be used when aiming for a short-term goal, but if you will be doing it for an extended period of time you should seek medical advice and monitoring. We already explained that there are some health dangers attached to this diet, so you will want to proceed with care. If you have health conditions that are not compatible with this diet, then you should not go on it. And if you can’t go on this diet, don’t do it; there are a hundred and one different diets, so find one that works well for you.
Since 2013, the Social Progress Index has assessed country performance based on social dimensions, placing emphasis on quality of living rather than economic gains. Initiated by Harvard’s Michael Porter, the Index evaluates achievement based on a comprehensive and relevant set of social and environmental indicators, enabling the assessment of the absolute and relative performance of countries. Overall, the Index shows how well economic progress translates into better social and environmental performance by capturing three social dimensions: Basic Human Needs, Foundations of Wellbeing, and Opportunity. The 2017 Index evaluated 128 countries based on 50 indicators and shows a positive relationship between economic growth and social progress, with high-income countries generally more socially and environmentally progressive. For countries at lower income levels, an increment in national income is associated with larger improvements in social progress. This trend is particularly evident when comparing the performance of African countries on the index. Here’s a look at how a select number of African countries ranked on the index and how they performed across three specific components directly linked to the Sustainable Development Goals:
2017 Social Progress Index Scorecard (Table 1) – columns: Social Progress Rank (out of 128), GDP PPP per Capita ($), Nutrition and Basic Medical Care, Access to Basic Knowledge. Source: Social Progress Imperative.
Table 1 shows the ranking, income level, and relative performance of six African countries based on three components of the Index.
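As a rough illustration of how an index like this rolls component scores up into dimension and overall scores, here is a small Python sketch using a simple unweighted average. The component values are invented, and the real Social Progress Index uses its own weighting and normalisation methodology, so this is only a toy aggregation.

```python
# Toy aggregation of component scores (0-100) into dimension and overall scores.
# All values are invented; the real SPI uses its own weights and normalisation.
dimensions = {
    "Basic Human Needs": {
        "Nutrition and Basic Medical Care": 62.0,
        "Water and Sanitation": 55.0,
        "Shelter": 48.0,
        "Personal Safety": 60.0,
    },
    "Foundations of Wellbeing": {
        "Access to Basic Knowledge": 70.0,
        "Access to Information and Communications": 52.0,
        "Health and Wellness": 58.0,
        "Environmental Quality": 65.0,
    },
    "Opportunity": {
        "Personal Rights": 50.0,
        "Personal Freedom and Choice": 57.0,
        "Tolerance and Inclusion": 49.0,
        "Access to Advanced Education": 35.0,
    },
}

dimension_scores = {
    name: sum(components.values()) / len(components)
    for name, components in dimensions.items()
}
overall = sum(dimension_scores.values()) / len(dimension_scores)

for name, score in dimension_scores.items():
    print(f"{name}: {score:.1f}")
print(f"Overall social progress score: {overall:.1f}")
```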
By Katia Wright Queen Teri’i-maeva-rua II, the daughter of Ari’ifaaite a Hiro and Pomare IV, Queen of Tahiti, was born in May 1841. In July 1860, at the age of 19, Teri’i-maeva-rua became queen of Bora Bora after her step-father, King Tapoa II, died. Unfortunately, very few sources survive detailing Teri’i-maeva-rua’s life. However, the surviving diaries of the British Captain Frederick Byng Montresor do record the difficulties Teri’i-maeva-rua experienced in gaining her throne. According to Montresor, many of the chieftains of Bora Bora refused to accept Teri’i-maeva-rua as queen, as she was not the direct descendant of Tapoa II. Despite Bora Bora and Tahiti remaining under the French Protectorate, it was impressed upon Montresor that he needed to make it clear to the populations of Bora Bora and Tahiti that ‘the Great Queen’, referring to Queen Victoria, was keen to see Teri’i-maeva-rua become queen. Montresor hosted a meeting on his ship, the ‘Calypso’, of the different chieftains, Teri’i-maeva-rua and her mother, Queen Pomare. At this meeting Montresor reminded the gathering of ‘their duty to support the new queen and to stay united if they were to remain independent of foreign interference.’ After private deliberation, they finally agreed to support their new queen. A painting of the ‘Calypso’ can be found at the Royal Museums, Greenwich. After succeeding to her throne, Teri’i-maeva-rua married Temauari’i a Ma’i. Though little information survives of their marriage, including even its date, we do know that they remained childless and adopted Teri’i-maeva-rua’s niece Ari’i-Otare. Teri’i-maeva-rua died in 1873, and was succeeded by her niece, who ruled as Teri’i-maeva-rua III. Image of HMS Calypso leaving Bora Bora at Royal Museums, Greenwich – https://collections.rmg.co.uk/collections/objects/102937.html
Last August, we posted about a clever concept for an asteroid exploration rover from Stanford, JPL, and MIT that uses internal reaction wheels to flip its spiny cubalicious shape around without needing legs, wheels, rocket engines, force fields, tractor beams, or anything else. As of our 2014 article, NASA had funded this thing to Technological Readiness Level 3.5, which is somewhere in between proof-of-concept and laboratory validation, which left us optimistic that something might come of it. A few months ago, we got a chance to check out the latest prototype of this robot, and we’re excited to say that it’s made it all the way to a fully armed and operational prototype: Hedgehog, as it’s called, has its core mobility hardware fully integrated and has been undergoing microgravity testing on parabolic flights. We spoke with Rob Reid from JPL and Ben Hockman and Marco Pavone from Stanford about what they’ve been up to over the last year, and then we definitely didn’t sneak* the robot into Smithsonian’s National Air and Space Museum in Washington, D.C., for a little photoshoot. The reason that Hedgehog exists is to explore planetary bodies where there’s very little gravity, like asteroids, comets, and small moons. For example, on Phobos (one of Mars’ moons), you’d weigh about as much as a tennis ball, and a world-class sprinter could run fast enough to launch themselves off of the moon entirely. An even better example is Itokawa, the asteroid visited by Japan’s Hayabusa spacecraft. There, you’d weigh as much as a paperclip, you could reach escape velocity with a violent sneeze, and it would take you well over 2 minutes to free fall to the surface from a height of one meter. “It’s not like regular gravity,” says Reid. “It’s more like two objects in space, floating side by side. There’s very little attraction pulling these objects together.” Exploring a place like this is incredibly difficult, because the microgravity is seriously micro, and trying to use something like a wheel would most likely either slip ineffectively or just flip whatever robot it was attached to end over end and your mission would be over. Legs aren’t much better, because they need gravity to help them anchor themselves to be effective. Even robots that are specifically designed for microgravity exploration, like the Philae lander on comet 67P, have all kinds of issues even when mobility is the exact opposite of what they’re designed to be doing. Figuring out the best way to do this exploration hasn’t been easy, as Pavone explains: “Over the past 10 years or so, there’s been increasing interest in exploring small bodies—asteroids, comets, anything that’s smaller than a planet. We’ve spent quite a bit of time with scientists at JPL trying to understand what you need for that. The figures that came back from the scientists were that in general, you want to be controllable within 20 percent: if you want to move to a spot 10 meters away, you should be able to get there within 2 meters. Small bodies have surfaces that are heterogeneous, but within small patches, they’re homogenous, so you don’t need exact precision, although you need some sort of control. With that in mind, we designed this robot. Sometimes when you work in robotics, you try to come up with a design that is cool. Here, this design is completely science driven. We tried to design something as simple as possible, given the requirements. But just because it’s a simple robot doesn’t mean it’s an easy robot. Everything about it is complicated. 
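To put numbers on how weak that gravity is, here is a small Python sketch comparing weight and free-fall time on Earth and on a small body. The surface gravity used for an Itokawa-like asteroid (about 1e-4 m/s²) is an assumed order-of-magnitude figure, not one quoted in the article.

```python
import math

G_EARTH = 9.81       # m/s^2
G_SMALL_BODY = 1e-4  # m/s^2, assumed order of magnitude for an Itokawa-like asteroid

def fall_time(height_m, g):
    """Time to free fall from rest through height_m under constant gravity g."""
    return math.sqrt(2 * height_m / g)

mass = 70.0  # kg, a person
print(f"Weight on Earth:      {mass * G_EARTH:8.1f} N")
print(f"Weight on small body: {mass * G_SMALL_BODY:8.4f} N")  # ~0.007 N, roughly a paperclip's weight on Earth

t = fall_time(1.0, G_SMALL_BODY)
print(f"Free fall from 1 m on the small body: {t:.0f} s (~{t / 60:.1f} minutes)")
```

With accelerations that small, almost any push from a wheel or leg exceeds escape velocity, which is why a sealed robot that moves by internal momentum transfer is attractive.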
But in our opinion, it’s the least complicated way to do the job.” Hedgehog is unique because of its form factor and method of mobility. First, the form factor: being a cube (a fetchingly spiky cube), it’s completely symmetrical, and doesn’t care a jot about which of its faces are up. This means that if it’s bouncing around an asteroid, it doesn’t need to worry about landing a certain way: it can just bounce and roll until it comes to a stop and it’s fine. And this is where the mobility comes in: Hedgehog is completely sealed (one of the key reasons why NASA seems to like the design, according to the researchers), and inside, there are three reaction wheels mounted orthogonally, one along each axis. The wheels are electrically driven, and can be abruptly stopped with a band brake (a friction belt that tightens over the surface of the wheel). When you spin one of these wheels up and then stop it abruptly, it imparts torque corresponding to its momentum to whatever it’s attached to. In this case, it forces the body of the robot to rotate around the same axis as the wheel but in the opposite direction, flipping itself with an aggressiveness in proportion to how quickly the wheel was spinning before it was stopped. With just a little momentum, the robot can gently twist or change the face that it’s resting on, and with a lot of momentum, it can violently hurl itself all over the place. This isn’t a terribly efficient way to move around in anything but microgravity, but it takes so little energy that the efficiency doesn’t even matter that much, according to Hockman. “If we’re going to a body where the gravity is very low, like a smaller asteroid, we actually don’t really care about the efficiency of the hop, because it takes almost nothing to get yourself off the ground.” You definitely wouldn’t want to use this method of mobility for an Earthbound rover, but it does work: Beach tumbling is fun, but microgravity is where it’s at for Hedgehog, and it can do some amazing things when you crank down the gravity a whole bunch. Note that in very low gravity, spinning the reaction wheels up also imparts torque on the robot, but if you do it slowly enough, you can make sure that it’s not enough torque to overcome the inertia and friction of the robot while stationary. To test all of this stuff, earlier this year the robot flew on nearly 200 parabolas in a NASA aircraft that, while accelerating terrifyingly towards the ground, can provide about 20 seconds of near zero-g. Of particular interest here is how Hedgehog performs on sand and other granular media, which acts much differently in microgravity. Watch: A few things to note about this. First, you can hear the spinning and braking of the flywheel(s), which is pretty cool. Also, you can see how important the spikes are for both traction and directional control. The version with the black spikes is from Stanford, while JPL’s version is white and slightly spikier. Even in microgravity, the robot possesses a very fine level of control, and it’s easily scalable to whatever the gravity happens to be, making this one design very efficient. That last “tornado” escape maneuver is very cool, and could be used to escape from craters or sinkholes that the robot might accidentally tumble into. Or maybe to dig a bit of a hole, if you want to see what’s under the surface. And that poor little yellow rubber ducky that got so abused? Blame ICRA 2016. 
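The flip itself is just conservation of angular momentum: when the band brake stops the wheel, the wheel's angular momentum is transferred to the cube's body. Below is a minimal Python sketch of that transfer; the wheel and body inertias and spin rates are made-up illustrative values, not Hedgehog's actual specifications.

```python
import math

I_WHEEL = 2.0e-3   # kg*m^2, moment of inertia of one reaction wheel (assumed)
I_BODY  = 1.0e-1   # kg*m^2, moment of inertia of the cube about the same axis (assumed)

def body_rate_after_brake(wheel_rpm):
    """Body angular rate (deg/s) after the spinning wheel is stopped by the band brake."""
    wheel_rad_s = wheel_rpm * 2 * math.pi / 60
    momentum = I_WHEEL * wheel_rad_s          # angular momentum stored in the wheel
    body_rad_s = momentum / I_BODY            # transferred to the body when the wheel stops
    return math.degrees(body_rad_s)

for rpm in (500, 2000, 8000):
    print(f"wheel at {rpm:5d} rpm -> body spins at {body_rate_after_brake(rpm):6.1f} deg/s")
```

A slow spin-up followed by a hard stop is what lets the same hardware produce anything from a gentle change of face to a violent hop.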
It’s hard to get a sense of what kind of long distance mobility Hedgehog has in microgravity, and until we send one to the moon (or farther), the best we can do is watch what happens when a simulated Hedgehog is unleashed on a simulated asteroid Itokawa: Wheeeeeeeee! The simulation is sped up 100x, but still, wheeeeeeeee! There are plans to teach Hedgehog to do its own SLAM and high-level motion planning, which will be necessary if it’s ever to start exploring on its own. If we start thinking about what might be involved in sending Hedgehog somewhere extraterrestrial, the pieces fall into place pretty easily. The robot can be scaled up or down, from CubeSat all the way up to (we assume) Borg. The sides of the cube are covered with solar panels to generate power (or you could just run the thing off of batteries and assume it’s disposable), and they also offer access to a variety of instruments for surface analysis, like a spectrometer, a microscope, and cameras. A Hedgehog, or multiple Hedgehogs, would be deployed from a mothership that would take care of communications back to Earth and potentially help out with localization. The mothership wouldn’t have to be that big, or that expensive, and if you could instead send Hedgehogs piggyback on spacecraft that were heading to somewhere like Mars, the entire mission could be done on the (relatively) cheap, as an add-on to a flagship mission that’s heading there already. It’s hard to say how cheap, because we’re not quite at the point where numbers like that are being discussed, but as you can see from the videos, things look very promising, with the readiness level of the robot maturing rapidly. “In terms of being able to have a mission in the short term, I think we understand enough about the platform and the mobility today that if we were to go out and collect a bunch of relatively high TRL cubesat components, we could stick them together and have a functioning, almost flight-ready piece of hardware,” Reid says. “There are a bunch of other questions, like what science instruments we’d want to put on it, but for having a robot that we could move around on the surface of a small body, we could do that in very short order.” And Pavone agrees: “The general strategy is to make this ready for flight, at least in its most basic configuration, fairly soon, so that whenever a flight opportunity arises, we can raise our hands and say, ‘look, we have an option here with a secondary payload to dramatically increase your science with a minimal cost.’ ” [ Stanford ASL ] via [ JPL ] Special thanks to Ben, Rob, and Marco for meeting with us. * To everyone’s surprise, the robot made it through security without any trouble. Also, we figured that if they didn’t want people touching the Saturn V engine (pictured above), they wouldn’t have made the Saturn V engine so easy to touch, or at least would have put up a sign or something. Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
5.7 In the context of Independence In the run-up to, and following, the 2014 Scottish independence referendum, issues relating to constitutional politics have been a focus for the political parties in Scotland. Is it possible that these issues have also informed the approach to the Scots language? Political opponents of independence have often associated the promotion of the Scots language with the promotion of Scottish nationalism. For example, when the Scottish Government announced initiatives in 2015 to support the Scots language, such as a Scots Language Policy and a Scots Language Ambassadors scheme, Conservative and Unionist MSP Alex Johnstone said: "This is a predictable stunt from a Scottish Government more interested in pandering to patriots than improving education. When it could be trying to push Scotland’s schools up global league tables, or closing the attainment gap, it’s actually trying to stir up the constitution in any way it can. It’s been well proven that our school children would benefit far more from learning international languages which could open all kinds of doors for them." In many nationalist movements, such as in Catalonia, language and culture do play central roles. In Scotland, however, the SNP promote what is called a ‘civic nationalism’, focused on political and economic issues. For example, Nicola Sturgeon, then Deputy First Minister, said in a defining address in 2012: “I don't agree at all that feeling British – with all of the shared social, family and cultural heritage that makes up such an identity – is in any way inconsistent with a pragmatic, utilitarian support for political independence. My conviction that Scotland should be independent stems from the principles, not of identity or nationality, but of democracy and social justice.” In the 2013 White Paper on independence, Scots is mentioned only briefly on page 312, noting: ‘the inspiration and significance we draw from our culture and heritage, including Gaelic and Scots’, which ‘shap[e] our communities and the places in which we live' (Scottish Government, Scotland's future, 2013, p. 312). - You may be from Scotland, or you may be from another country anywhere in the world – to what extent do you think Scots will continue to have a place not only in Scottish cultural life, but also in Scottish politics? This is a model answer. Your answer might be different. The political party in power in Scotland today is the SNP – and has been for a number of years. As explained in this unit, the SNP are supportive of the Scots language, just as they are of Gaelic. The acceptance of the use of Scots in schools, together with evidence that it is good for the pupils, is also a signal that politicians will continue to promote respect for Scots in many aspects of public life. The Scots Language Policy has helped develop a much wider acceptance of Scots as one of the three indigenous languages of Scotland – it has made Scots something politicians and educators cannot push to the side. The most important aspect for me is that the developments outlined in this unit help shape Scots speakers’ identity as bilingual people in a multilingual Scotland.
Electric patch helps some people with PTSD in small study
People suffering from post-traumatic stress disorder (PTSD) could someday be treated with the help of an electric patch worn on their head when they are sleeping, researchers say. However, much further research is needed to confirm whether this treatment is actually effective or not, experts added. In the small new study, 12 people who had been suffering from PTSD and depression for an average of 30 years — and were already being treated with psychotherapy, medication or both — wore the patch each night while sleeping, over an eight-week period. The researchers found that the severity of the participants' PTSD decreased by an average of more than 30 percent, and the severity of their depression dropped by an average of more than 50 percent, over the study period. "Most patients with PTSD do get some benefit from existing treatments, but the great majority still have symptoms and suffer for years from those symptoms," Dr. Andrew Leuchter, senior author of the study and a psychiatrist at the University of California, Los Angeles, said in a statement. "This could be a breakthrough for patients who have not been helped adequately by existing treatments." PTSD is a mental illness marked by severe anxiety, flashbacks and uncontrollable thoughts about a traumatic event. About 3.5 percent of the U.S. population has PTSD, the researchers said, including soldiers who have been in combat, and people who have survived terrifying events. People with PTSD may try to avoid situations that could trigger flashbacks, which sometimes makes them reluctant to socialize or venture from their homes, leaving them isolated, the researchers said. People with the disorder are six times more likely than people who don't have PTSD to die by suicide, and they are at increased risk for marital difficulties and dropping out of school. For the participants in the new study — who were survivors of rape, car accidents, domestic abuse and other traumatic events — the new patch delivered a kind of treatment known as trigeminal nerve stimulation (TNS). Prior research found that TNS can treat people with epilepsy who aren't helped by medication as well as people with depression who aren't helped by therapy, the researchers said. While a patient sleeps, a 9-volt battery powers the patch, which sends a low-level electrical current to nerves that run through the forehead. These nerves send electrical signals to parts of the brain, such as the amygdala and the medial prefrontal cortex, which regulate mood, behavior and cognition, and that previous studies found were linked with PTSD. The study participants had chronic PTSD and severe depression. An average of 30 years had passed since the traumatic events that had left them depressed, anxious, irritable, hypervigilant, unable to sleep well and prone to nightmares. While they continued their regular treatments, they also wore the patch when they slept, for 8 hours a night. The participants completed questionnaires about the severity of their symptoms and the degree to which the disorders affected their work, parenting and socializing at the start and end of the study. "We're excited that we're seeing strong evidence that TNS may be helpful to patients with PTSD," Leuchter said. "This was a group of patients that had been ill for years, and had been through all the best available treatments without significant relief for most of their symptoms.
The fact that we could relieve symptoms in this chronically and seriously ill group was surprising and very encouraging." PTSD symptoms stopped completely for one-quarter of the patients in the study. In addition, participants generally said they felt better able to take part in daily activities. The treatment worked best in patients who used the device consistently for eight weeks — participants who were inconsistent in using the device did not have as good outcomes, Leuchter said. Future research will examine the long-term effects of this treatment, he added. "I recall one woman who came in who was just delighted," Leuchter told Live Science. "After using the device for just a few weeks, she said she was able to sleep through the night for the first time in years without nightmares." This is the first evidence that TNS can help treat people with chronic PTSD, the researchers said. The treatment showed no serious side effects during the course of the study. "Some subjects showed some slight skin irritation on the forehead where the patch was applied, and this was easily addressed by moving the patch or applying some skin cream," Leuchter said. "Some of the study subjects have continued to use the device for months or years as part of the study and have continued to show benefit," Leuchter said. "Some other subjects who stopped using the device also have maintained their improvement." One of the participants died by suicide in the seventh week of the study. The person had denied having any suicidal thoughts at the start of the research and throughout it. The researchers noted the participant's treating psychiatrist, who was not affiliated with the study, concluded the suicide was more likely related to the person's underlying psychiatric illness than to the device or study. Much further research is needed to see whether this strategy is actually effective at treating PTSD, said Dr. Paul Rosch, a clinical professor of medicine and psychiatry at New York Medical College who was not involved in the new study. He noted this preliminary study was small, and no sham treatment was given to participants to examine whether any benefits of the study were due to the device itself or just the placebo effect, "which is not uncommon in electric and magnetic stimulation studies," Rosch told Live Science. The researchers are now testing the patch in a larger study — they are recruiting 74 veterans who have served in the military since 9/11. PTSD affects a greater percentage of military veterans than civilians — an estimated 17 percent of active military personnel experience symptoms, and about 30 percent of veterans who have returned home from service in Iraq and Afghanistan have had signs of the disorder, the researchers said. In this larger study, half of the veterans will get TNS, and half will receive a fake TNS patch. At the end of this study, volunteers who got the fake patch will receive the option of undergoing actual TNS. "PTSD is one of the invisible wounds of war," study lead author Dr. Ian Cook, of the University of California, Los Angeles, said in a statement. "The scars are inside, but they can be just as debilitating as visible scars. So it's tremendous to be working on a contribution that could improve the lives of so many brave and courageous people who have made sacrifices for the good of our country."
Cook, who co-invented TNS, is now on leave from his position at UCLA and is serving as chief medical officer at NeuroSigma in Los Angeles, which is licensing the technology and funding the research. NeuroSigma is already marketing the patch overseas and has plans to make it available to patients in the United States. The scientists detailed their findings online today (Jan. 28) in the journal Neuromodulation: Technology at the Neural Interface.
How long does it take for ice to freeze?

Several factors affect the freezing process, including the temperature, the material the ice tray is made of, and the Mpemba effect. In this article we will look at these factors and how they relate to each other. This information is useful when you need ice for your next drink, and especially if you want an ice-cold drink in a hurry.

Size of ice cubes

When you're freezing food, it's important to keep track of how much is in the freezer and to check the freezer's temperature: a freezer that is set too warm will take much longer to make ice. To cut down on freezing time, set the freezer to a temperature below zero. If your freezer is older than a year, you may want to consider repairing or replacing the ice maker.

Whether a freezer is ideal for your needs also depends on temperature and humidity: in a dry climate, ice takes longer to freeze than in a humid one. To speed things up, freeze a large batch of ice at one time and transfer the cubes to a smaller container for storage. Keep the cubes in an airtight container once they have frozen completely.

Besides the size of the cubes, the type of ice influences freezing time. A solid block of ice takes the longest to freeze. Blocks are easy to make, but they chill a drink more slowly than small cubes, and whatever their size, cubes melt faster once they are dropped into a liquid.

A scale is also a good way to teach children about the density of ice: because water expands as it freezes, a given volume of ice weighs less than the same volume of liquid water, which is why ice floats.

The thickness of an ice sheet depends on several factors: air temperature, wind, the amount of snow on top, and radiational cooling. Of these, temperature is the easiest to gauge. Ice growth is often estimated from accumulated freezing degree days (FDD), the number of degrees by which the average daily air temperature stays below freezing, summed over successive days. As a rough rule of thumb, an ice sheet grows by about an inch for every fifteen FDD, and the growth rate slows as the ice thickens.

When making ice in a standard-sized ice tray, the cubes take about three to four hours to freeze, although the amount of water in the tray and the freezer temperature both affect the time; if you are preparing ice for a party, allow at least three to four hours. Freezing time therefore varies with the size and shape of the tray and how much water it holds.

Out on a lake, a temperature of two degrees below freezing can take about four days to form ten centimetres of ice, and a minimum of 10 cm of ice is required for foot and vehicle travel in Yellowknife. At ten degrees below freezing, a ten-centimetre layer has been estimated to take about five days. These estimates also assume that the water was already close to freezing when ice began to form; a constant supply of warmer water into the basin will slow the process.

At the boundary between water and ice the two phases sit in equilibrium, and water freezes and melts at the same temperature: 32 °F (0 °C).
However, when rock salt comes into contact with the ice, that equilibrium is disrupted and melting begins: the salt lowers the freezing point, and heat from the surroundings breaks the hydrogen bonds between the water molecules.

Material of ice tray

One thing to keep in mind is how much water the ice tray holds: generally, the more water in the tray, the longer it takes to freeze, while shallower compartments freeze faster. Silicone is an excellent material for ice cube trays because it is flexible and will not crack in the freezer. The exposed surface area matters as well; the more surface in contact with the cold air, the faster the water freezes, which is another reason shallow compartments freeze quickly.

The shape of the tray is also important. Some trays release cubes more easily than others, and a rounded compartment with a wide mouth gives the ice more surface to stick to. Silicone resists cracking and breaking, but metal conducts heat better, so if freezing time is your main concern, try a stainless-steel tray.

A silicone ice tray is one of the most popular types on the market, but plastic, stainless-steel, and wooden trays can all be used to make your own cubes. Simply fill the tray with water, freeze it for around eighteen to twenty-two hours, and then twist the tray to release the cubes. While rectangular trays are the most common, there are other options: plastic ice cube trays are inexpensive, stackable, and hold their shape well; just make sure the material is BPA-free. They are a good choice if you want to save space and make ice without much effort.

The Mpemba effect

The Mpemba effect is the claim that, under some conditions, hot water freezes faster than cooler water. The effect has not been demonstrated convincingly in a laboratory setting: researchers at Imperial College London tested the idea across a range of starting temperatures and cooling conditions and found it absent even when one sample started about 70 °C warmer than the other.

Some indirect evidence has been offered in its support. Experiments with pre-boiled water, in which dissolved gases have been driven off, reported a higher enthalpy of freezing than experiments with un-boiled water. Even so, a couple of informal experiments found no difference when pre-boiled water was used.

The Mpemba effect remains controversial. Scientists disagree about whether it exists at all, and some suspect it is a red herring that comes down to small measurement inaccuracies. If real, it would be an interesting phenomenon with possible industrial applications, but because opinion is divided it is not yet being exploited. Although the effect is widely cited, there is no clear evidence to support it, and none of the proposed explanations has been confirmed by a conclusive experiment. Until that happens, the practical approach is simply to find out which starting temperature actually freezes fastest in your own freezer.

Time it takes for ice to freeze

You might have wondered how long it takes for ice to freeze.
This process of ice formation is complex and involves many factors, and there are ways to speed it up or slow it down. Here are some of them, along with a few tips on how to make and store ice cubes.

Depending on the type of ice you're making, a dedicated ice maker can freeze water in two to four hours; room-temperature water typically takes about two hours. Direct freezing uses a metal tray with compressor coils that pump refrigerant straight into the tray, much like the system used in large commercial ice makers. Using an ice maker is easier than you might think; a dedicated machine isn't as complicated as it seems.

The freezing time also depends on the amount of water in the ice cube tray. A standard tray has space for twelve tapered cubes and takes between three and four hours to freeze. Larger-capacity trays hold more water and therefore need more time in the freezer.

When the water cools to its freezing point, its molecules begin to lock together into ordered ice crystals. This phase transition is what turns liquid water into ice, and it may take up to seven minutes once it begins. If you want faster ice, it helps to know roughly how long your own freezer takes, and the process can be nudged along with a little patience and persistence.
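As a worked illustration of the freezing degree day (FDD) rule of thumb quoted earlier in this article, the short Python sketch below converts a run of daily mean air temperatures into an approximate gain in ice thickness. The growth rate, the sample temperatures, and the function names are illustrative assumptions; this is a rough back-of-the-envelope estimate, not a safety calculation.

```python
# Rough sketch of the "about an inch of new ice per 15 freezing degree days"
# rule of thumb mentioned above. Illustrative only -- not a safety tool.

FREEZING_F = 32.0            # freezing point of water, degrees Fahrenheit
INCHES_PER_FDD = 1.0 / 15.0  # assumed growth rate from the rule of thumb


def freezing_degree_days(daily_mean_temps_f):
    """Sum the degrees below freezing over a run of daily mean temperatures."""
    return sum(max(0.0, FREEZING_F - t) for t in daily_mean_temps_f)


def estimated_ice_growth_inches(daily_mean_temps_f):
    """Convert accumulated FDD into an approximate gain in ice thickness."""
    return INCHES_PER_FDD * freezing_degree_days(daily_mean_temps_f)


# Example: a hypothetical week of daily mean temperatures (degrees Fahrenheit).
week = [20, 22, 18, 25, 30, 15, 10]
print(round(estimated_ice_growth_inches(week), 1), "inches of new ice (rough estimate)")
```

For the invented week above, about 84 FDD accumulate, suggesting roughly five and a half inches of new ice; real growth depends heavily on wind, snow cover, and the starting water temperature, as noted earlier.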
Crimea is a peninsula on the northern coast of the Black Sea in Eastern Europe, almost completely surrounded by the Black Sea and bordered to the northeast by the Sea of Azov. For much of its history the region was ruled by outside powers, and many wars have been fought over it.

HISTORY OF THE CRIMEAN PENINSULA

As mentioned earlier, Crimea was ruled for centuries by a succession of empires. The southern coast was colonized at various times by the Greeks, the Persians, the Romans, the Byzantine Empire, the Crimean Goths, Genoa, and the Ottoman Empire, while the interior was occupied by various steppe tribes and steppe empires. From the 15th to the 18th centuries, Crimea and the adjacent lands were united under the Crimean Khanate, a successor state of the Mongol Golden Horde. In 1783, following a war between Russia and Turkey, Crimea was annexed by the Russian Empire.

After the Russian Revolution of 1917, Crimea became an autonomous republic within the Soviet Union. During World War II, the Soviet government deported virtually the entire indigenous population, the Crimean Tatars, from Crimea to Central Asia; many countries, including Ukraine, have since recognized this deportation as genocide. In 1954, the peninsula was transferred from the Russian SFSR to the Ukrainian SSR within the Soviet Union. With the collapse of the Soviet Union, Ukraine became an independent state in 1991, and Crimea was reorganized as an autonomous republic within it.

During the 2014 Ukrainian Revolution, which ousted then-President Viktor Yanukovych, Russia intervened and deployed troops on the Crimean peninsula. Later that year a referendum on joining Russia was held, with a reported 90 percent voting in favor. The referendum was widely regarded as illegal and was rejected by the United Nations, but Russia nonetheless formally annexed the peninsula. The status of Crimea remains disputed: Ukraine still claims it, and many other countries continue to consider Crimea part of Ukraine.

TOURISM ON THE CRIMEAN PENINSULA

Crimea has been a holiday destination since the second half of the 19th century, when developing transportation networks brought large numbers of tourists from the central regions of the Russian Empire. At the beginning of the twentieth century major construction of palaces and villas began, and most of them still stand today. These palaces are among Crimea's main attractions, drawing many visitors every year, and the many legends attached to them only add to their appeal.

The Soviet government later promoted Crimea as a health resort, establishing a camp for young people that hosted about 300,000 children on holiday between 1925 and 1969. During those years Crimea was above all a destination for improving one's health; in the 1990s it became a general tourist destination once again. Popular places on the peninsula include the cities of Yalta and Alushta, the west-coast resorts of Yevpatoria and Saki, and the southeast coast around Feodosia and Sudak. According to National Geographic, Crimea was among the top 20 tourist destinations of 2013.

TOURIST ATTRACTIONS OF THE CRIMEAN PENINSULA

As mentioned earlier, Crimea is best known for its many beautiful palaces, and its beaches also attract many tourists every year.
In general, the most famous tourist destinations of Crimea include the following.

The last residence and retreat of the Russian tsar, Nicholas II, was in this magnificent palace (the Livadia Palace); today it is sometimes used for international meetings.

One of the most popular tourist and recreational cities is located in the southeast of Crimea.

A very large hill stands in the center of the city of Kerch, in the eastern part of the peninsula; from its top there is a beautiful view of the surrounding landscape.

TREASURE OF THE SCYTHIANS

Kul-Oba is a historical site in Crimea where treasures and relics of the Scythians have been identified, and the site has been turned into a historical museum.

CASTLE SWALLOW'S NEST

A decorative castle in a small spa town between Yalta and Alupka, built in 1911 on the edge of a 40-meter cliff. Perched above the sea, it looks precarious, and earthquakes have threatened it in the past.

One of the oldest palaces in Crimea, and one of its most popular tourist attractions, is located on the southern coast. Bakhchisarai Palace, or the Khan's Palace, built in the 16th century, is one of the most famous Muslim palaces.

OTHER TOURIST ATTRACTIONS OF THE CRIMEAN PENINSULA

- Novy Svet area
- Nikitsky Botanical Garden
- Aivazovsky National Gallery
- Balaklava Naval Museum Complex
- Valley of Ghosts

The Crimean peninsula is one of the most popular tourist regions on the Black Sea, famous for its old and beautiful palaces and for the varied architecture of the different peoples who have lived there. The climate is temperate but somewhat cool, so it is best to plan a visit for mid-spring or summer.
Whole-genome haplotyping approaches and genomic medicine

Genome Medicine, volume 6, Article number: 73 (2014). Open Access.

Genomic information reported as haplotypes rather than genotypes will be increasingly important for personalized medicine. Current technologies generate diploid sequence data that is rarely resolved into its constituent haplotypes. Furthermore, paradigms for thinking about genomic information are based on interpreting genotypes rather than haplotypes. Nevertheless, haplotypes have historically been useful in contexts ranging from population genetics to disease-gene mapping efforts. The main approaches for phasing genomic sequence data are molecular haplotyping, genetic haplotyping, and population-based inference. Long-read sequencing technologies are enabling longer molecular haplotypes, and decreases in the cost of whole-genome sequencing are enabling the sequencing of whole-chromosome genetic haplotypes. Hybrid approaches combining high-throughput short-read assembly with strategies that enable physical or virtual binning of reads into haplotypes are enabling multi-gene haplotypes to be generated from single individuals. These techniques can be further combined with genetic and population approaches. Here, we review advances in whole-genome haplotyping approaches and discuss the importance of haplotypes for genomic medicine. Clinical applications include diagnosis by recognition of compound heterozygosity and by phasing regulatory variation to coding variation. Haplotypes, which are more specific than less complex variants such as single nucleotide variants, also have applications in prognostics and diagnostics, in the analysis of tumors, and in typing tissue for transplantation. Future advances will include technological innovations, the application of standard metrics for evaluating haplotype quality, and the development of databases that link haplotypes to disease.

Technological progress has enabled the routine resequencing of human genomes. These genomes include large numbers of rare variants, the result of exponential human population growth over the past hundred generations. These variants can affect single nucleotides or larger genomic ranges by substitution, insertion, deletion, or change in copy number. Combinations of variants are present in cis on the same physical molecule or in trans on homologous chromosomes. This set of cis and trans relationships between the variants, known as the phase of the variants, affects the interpretation and implications of the relationships between genotypes and phenotypes, including disease phenotypes. To simplify the discussion, we define a haplotype in general terms as a contiguous subset of the information contained in a molecule of DNA (Box 1). An example of a haplotype by this definition, grounded on molecular observation, is the actual sequence inherited from one parent and spanning one or more genes in a specific genomic region of interest. A corollary of this definition is that the longest possible haplotype, the 'chromosome haplotype', is the full sequence of a chromosome that an individual inherited from one parent. Haplotypes have a number of important roles and applications that are listed in Box 2. Analysis of haplotypes falls generally into three categories: (1) elucidating the 'haplotype block' structure of the genome, (2) employing haplotypes as genetic markers, and (3) understanding haplotypes as functional units.
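To make the cis/trans distinction concrete, the short Python sketch below shows how a single unphased genotype at two heterozygous sites within one gene is compatible with two different phasings, only one of which is a compound heterozygote. The example is an illustrative toy, not something taken from the paper; the site names, alleles, and the idea of flagging a "damaging" allele at each site are assumptions made for the demonstration.

```python
# Toy illustration (not from the paper): two heterozygous sites in one gene,
# each with a reference allele and a damaging allele. The unphased genotype
# is the same in both scenarios; only the phase differs.

unphased_genotype = {"site1": ("A", "T"), "site2": ("G", "C")}  # T and C are damaging;
# both phasings below collapse to this same unordered genotype.

# Phase 1: both damaging alleles on the same chromosome (in cis)
cis_haplotypes = (("T", "C"),   # one chromosome carries both damaging alleles
                  ("A", "G"))   # the homologous chromosome is fully functional

# Phase 2: damaging alleles on opposite chromosomes (in trans)
trans_haplotypes = (("T", "G"),
                    ("A", "C"))


def is_compound_heterozygote(haplotypes, damaging={"site1": "T", "site2": "C"}):
    """True if every haplotype copy of the gene carries at least one damaging allele."""
    return all(
        any(allele == damaging[site] for site, allele in zip(("site1", "site2"), hap))
        for hap in haplotypes
    )


print(is_compound_heterozygote(cis_haplotypes))    # False: one intact copy remains
print(is_compound_heterozygote(trans_haplotypes))  # True: both copies are damaged
```

The clinical sections later in the review return to exactly this distinction: the genotype alone cannot tell a healthy carrier from an affected compound heterozygote.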
As the number of observable genetic variants increases, so does the number of observable haplotypes. This increase in observed variants is largely the result of rare variants assayed by whole-genome sequencing (WGS). As a result, there are now many observed haplotypes with frequencies too small to estimate using population-based inference methods. Recent technological advances have enabled the determination of haplotypes by direct observation of molecular data and from genetic information, and with decreased reliance on population-based statistical estimation. Historically, it has been difficult to distinguish between the homologous haplotypes of autosomes inherited from each of the parents. For that reason, allele information from each pair of autosomal chromosomes is typically comingled into one sequence of information: the unphased genotype sequence. Phasing (or haplotyping) describes the process of determining haplotypes from the genotype data. Until recently, cost, lack of data, and computational intractability has limited the availability of phased whole-genome haplotypes . There are three basic methods for phasing: molecular, genetic, and population analysis. Molecular haplotyping is rooted in the analysis of single molecules (Figures1 and 2, Table1). If the molecule haplotyped is shorter than a chromosome, molecular haplotyping can be followed by haplotype assembly. Increasingly clever methods are being deployed to exploit high-throughput parallelization, combining data from measurements of many single molecules to build longer haplotypes. Genetic haplotyping requires pedigrees and can yield chromosome-length haplotypes . Population haplotyping requires models of the population structure of haplotypes and can only phase common variations. These three approaches can also be combined to create hybrid strategies. As a general rule, these methods can be used to phase any combination of single nucleotide variants (SNVs; commonly called single nucleotide polymorphisms (SNPs) when they are frequent enough in the population), insertions and deletions (indels), and copy number variants (CNVs). SNVs and short indels are typically easier to phase because they can be observed within individual sequence reads. Larger variants, such as CNVs, are typically assessed using genotyping arrays ,. In this review, we describe in detail the three main methodologies for phasing variants and their integration into combination strategies, and we provide quality metrics (Box 3). Finally, we provide an overview of the applications of whole-genome haplotyping in genomic medicine. Molecular haplotyping involves the direct observation of alleles on a single molecule. These molecules are often single sequence reads, ranging in size from tens of bases to thousands of bases. When two variants are observed in the same physical read, or in paired reads derived from the same molecule, they can be directly phased. Therefore, same-read molecular haplotyping gains power with sequencing read length, and the major source of error is the sequencing error rate. Although often overlooked because of its simplicity, sequencing is the most common form of molecular haplotyping. Other forms of molecular haplotyping include restriction fragment analysis, optical mapping, and coded fluorescence hybridization approaches . Long-range binning can be achieved by physical separation of the two haploid genomes prior to sequencing. 
Binning methods are able to resolve private and rare haplotypes and can be used to generate personalized genome-resolved haplotypes. Sequencing isolated sperm cell genomes is one simple approach, but applicable only to males. Chromosome isolation methods do not require sequencing coverage to the depth needed to resolve possibly heterozygous positions . Whole-chromosome sorting methods include microdissection, fluorescence-activated cell sorting (FACS) and microfluidics. Chromosomes are individually tagged or separated into pools that tend to contain at most one copy of a chromosome. These are genotyped or sequenced to generate whole-chromosome haplotypes. Microdissection involves arresting cells in metaphase and spreading the chromosomes to isolate them . FACS separates individual chromosomes, which are then amplified and tagged before sequencing . The `direct deterministic phasing method uses microfluidic devices to capture a single metaphase cell and partition each intact chromosome . Semiconductor-based nanotechnologies are being applied to assay single DNA molecules, deriving very long-range haplotype information. NabSys (Providence, RI, USA) tags DNA molecules with probes that are specific to particular chromosomal locations and passes single molecules of DNA with bound tags through solid-state nanodetectors to identify the locations of bound tags . BioNano Genomics (San Diego, CA, USA) labels DNA using nicking endonucleases and fluorescently labeled nucleotides, then visualizes single molecules in linearized nanochannels . Both technologies yield de novo genome-wide maps, informing structural variation and haplotypes. Nevertheless, no current technology captures all variants in the genome; for example, most are unable to assay trinucleotide repeats. These technologies are changing rapidly, and Buermans and den Dunnen have provided a recent review of the types of variants assayed by some of these technologies. The principles and methods of haplotyping described here will apply even as methods change. In some cases, combining a technology that assays large variants (for example, BioNano) with one that assays SNPs (for example, pairwise end sequencing) may best address a particular need. Any set of two or more overlapping haplotypes can be assembled into a single haplotype. Typically, after generation of many individual molecular haplotypes, sequence assembly is used to identify overlapping sequences and thus to infer a longer haplotype -. The haplotypes being assembled may be derived from heterogeneous data sources but haplotype assembly is most commonly based on a set of molecular haplotypes -, so we will discuss this prior to the discussion of genetic and population-inferred haplotypes. Assembly of molecular sequences from fragments predates the ability to sequence DNA. Assembly was originally employed for determining the sequence of proteins . Before the Human Genome Project (HGP), genome maps were assembled from restriction-fragment haplotypes . During the HGP, haplotype reconstruction relied on the assembly of matched-end sequences of clones. As the HGP wound down, for economy of scale, there was a general shift away from long-read towards short-read sequencing. This shift increased the difficulty of haplotype assembly directly from shotgun reads, and resulted in a revival in algorithms for haplotype assembly. Lancia et al. 
describe the `SNPs haplotyping problem by looking at the fundamental constraint shared by the group of algorithms that solve this problem: that all sequence reads must be partitioned into exactly two bins. These algorithms generally allow for sets of raw reads to be constrained to co-occur in the same bin. Such constraints arise either from paired end data or from pooling strategies. Clever experimental designs have maximized the utility of these constraints, particularly those that use statistical or molecular techniques to bin reads from a particular haplotype. In 2007, Levy et al. used single sequence reads together with some mate pairs to build long-range haplotypes for an individual genome, with haplotypes reaching several hundred kilobases. In 2009, McKernan et al. used a ligation-based sequencing assay to phase a single genome physically into blocks averaging several kilobases. In 2011, Kitzman et al. produced 350kb haplotype blocks by subpooling a fosmid library. Suk et al. also used fosmid pool-based sequencing to assemble variants into haplotypes of approximately 1Mb, up to a maximum length of 6.3Mb; fosmids were tiled into contiguous molecular haplotype sequences based on sequence overlaps . In 2013, Kaper et al. also used a dilution, amplification and sequencing approach to compile haplotypes of several hundred kilobases in length. Extreme dilution of genomic DNA can generate long-range haplotypes without requiring the sorting of metaphase chromosomes or cloning. These methods recreate, with twists, the basic method used to sequence the human genome: local haplotypes (in the order of tens of kilobases) are first carefully sequenced and then strung together by aligning overlaps. Dilution methodologies allow long fragments to be shotgun sequenced with short reads . If these long fragments overlap with a sufficient fingerprint , then haplotypes up to 1Mb may be achieved by chromosomal walking . The number of DNA molecules in a pool is small enough that there is little chance that repeated or duplicate sequences will occur within a pool. Therefore, DNA dilution methods simplify both de novo assembly and mapping reads to a reference genome. Nevertheless, these methods can be confounded by the local presence of repetitive sequences. Commercialization of dilution methodologies now includes Complete Genomics `long fragment read (LFR) and Illuminas Moleculo technology . For LFR, long parental DNA fragments are separated into distinct pools and sequenced using pairwise end sequencing. Moleculo implements statistically aided, long-read haplotyping (SLRH) by further phasing initial contigs with population information using the Prism software (Table2). Several algorithms exist to assemble reads into haplotypes (Table2). HASH (haplotype assembly for single human) uses a Markov chain Monte Carlo (MCMC) algorithm and graph partitioning approach to assemble haplotypes given a list of heterozygous variants and a set of shotgun sequence reads mapped to a reference genome assembly . HapCut uses the overlapping structure of the fragment matrix and max-cut computations to find the optimum minimum error correction (MEC) solution for haplotype assembly . There are many other sequence assembly algorithms, reviewed elsewhere ,. Duitama et al. reviewed eight algorithms for the `SNPs haplotyping problem with binned reads as input. 
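To illustrate the minimum error correction (MEC) criterion used by assembly algorithms such as HapCut, the sketch below enumerates all candidate haplotypes over a handful of heterozygous sites and counts how many read observations would have to be flipped for each candidate. It is a brute-force toy, not the HapCut or HASH implementation; the reads and the 0/1 site encoding are invented for the example, and real tools use graph cuts or MCMC rather than exhaustive search.

```python
from itertools import product

# Toy MEC solver (brute force; illustrative only, not HapCut or HASH).
# Heterozygous sites are numbered 0..n-1 and alleles coded 0/1.
# Each read reports alleles at a subset of sites: {site_index: allele}.

reads = [
    {0: 0, 1: 0},          # read spanning sites 0-1
    {1: 0, 2: 1},          # read spanning sites 1-2
    {2: 1, 3: 1},
    {0: 1, 1: 1, 2: 0},    # read from the other haplotype
    {2: 0, 3: 0},
    {1: 1, 3: 1},          # conflicting read: one allele will need correcting
]
n_sites = 4


def mec_cost(haplotype, reads):
    """Total allele flips needed to make every read consistent with the
    haplotype or with its complement (the other homologous chromosome)."""
    complement = tuple(1 - a for a in haplotype)
    cost = 0
    for read in reads:
        to_h = sum(allele != haplotype[site] for site, allele in read.items())
        to_c = sum(allele != complement[site] for site, allele in read.items())
        cost += min(to_h, to_c)  # assign the read to whichever haplotype fits better
    return cost


best = min(product((0, 1), repeat=n_sites), key=lambda h: mec_cost(h, reads))
print("best haplotype pair:", best, tuple(1 - a for a in best),
      "MEC =", mec_cost(best, reads))
```

Exhaustive search is exponential in the number of sites, which is exactly why the published algorithms resort to max-cut heuristics, MCMC sampling, or read binning.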
Duitama et al. concluded that, under a reasonable compromise between accuracy, completeness, and computational resources, ReFHap (Reliable and Fast Haplotyping) yields the best results for a low-coverage fosmid pooling approach, which they term single individual haplotyping (SIH). More recent algorithms claim improvements on parameters such as speed and accuracy (for example, H-BOP) or focus on improving performance in the presence of high error rates. MixSH shows good performance as evaluated by pair consistency, a version of a metric described in Box 3. The process of assembly may introduce phase errors at the joins between component haplotypes, and so is best done when the overlaps between fragments can be inferred with high confidence. Such confidence can be gained either by identification of unique overlapping fingerprints or by physical separation of the original molecules. Haplotype assembly has worked very well when the underlying haplotypes are long, such as those determined by sequencing a clonal source such as a cosmid or bacterial artificial chromosome (BAC). We therefore expect to see increasing development of technologies that generate sequence reads in the range of many thousands of bases to facilitate haplotype assembly. These long sequences will be generated by strobe sequencing, nanopore sequencing, and perhaps other technologies. The existence of chromosome territories in the nucleus can also be exploited for long-range haplotyping. In an innovative approach, pairs of reads that are likely to come from the same haplotype are generated by cross-linking chromatin sites that are potentially distant along a chromosome but spatially close within the nucleus. This technique is known as 'Hi-C', and was simultaneously exploited by three different groups for sequence assembly. Selvaraj et al. focused on haplotyping using Hi-C (which they term 'HaploSeq'), and in their initial report using low-coverage sequencing they phased approximately 81% of sequenced alleles. Disparate sources of haplotyping information and markers can also be assembled. For example, the 1000 Genomes Project Consortium recently produced an integrated haplotype map of SNPs, small indels and larger deletions derived from SNP arrays or from exome and whole-genome sequencing. The principles of Mendelian segregation of alleles in pedigrees can be used to deduce the phasing of variants observed in genotype data. At the simplest level of a family trio (both parents and one child), very simple rules indicate which alleles in the child were inherited from each parent, thus largely separating the two haplotypes in the child. The remaining (not inherited) parental haplotypes can then be reconstructed using a simple exclusion rule. As the locations of recombinations are not known, the inferred parental haplotypes will have a phase error at each recombination. These low-frequency phase errors (Box 1) will have little effect on short-range haplotypes but will scramble chromosomal haplotypes. In the context of a family quartet (two full siblings and their parents), whole-genome sequences from high-throughput paired-end short reads can generate complete chromosomal haplotypes for all family members. The method can be extended to larger pedigrees by tiling or MCMC approaches. Tiling can accumulate small errors with each tile, and so MCMC and similar approaches are likely to be the best methods for pedigrees spanning more than four generations.
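The trio exclusion rule described above can be sketched in a few lines of Python. This is an illustrative simplification, not Haploscribe or any published tool: it handles a single biallelic site, ignores genotyping error, and simply reports when a site cannot be phased (all three individuals heterozygous) or is inconsistent with Mendelian inheritance.

```python
# Toy trio phasing by Mendelian exclusion (illustrative sketch, not Haploscribe).
# Genotypes are unordered pairs of alleles, e.g. ("A", "G").

def phase_child(child, mother, father):
    """Return (maternal_allele, paternal_allele) if exactly one assignment is
    consistent with Mendelian inheritance, 'ambiguous' if both are (all three
    individuals heterozygous for the same alleles), or 'MIE' if neither is."""
    a, b = child
    consistent = []
    for maternal, paternal in ((a, b), (b, a)):
        if maternal in mother and paternal in father:
            consistent.append((maternal, paternal))
    consistent = list(dict.fromkeys(consistent))  # drop duplicates when a == b
    if not consistent:
        return "MIE"          # Mendelian inheritance error (or sequencing error)
    if len(consistent) > 1:
        return "ambiguous"    # cannot phase: everyone heterozygous
    return consistent[0]


print(phase_child(("A", "G"), mother=("A", "A"), father=("G", "G")))  # ('A', 'G')
print(phase_child(("A", "G"), mother=("A", "G"), father=("A", "G")))  # 'ambiguous'
print(phase_child(("T", "T"), mother=("A", "A"), father=("A", "T")))  # 'MIE'
```

Whole-chromosome genetic phasing then amounts to stringing these per-site calls together while modeling recombination and error states, which is where the HMM-based approaches discussed next come in.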
Haploscribe is a suite of software scripts that phase whole-genome data across entire chromosomes by genetic analysis . Haploscribe implements a parsimony approach to generate meiosis-indicator (inheritance state) vectors and uses a hidden Markov model (HMM) to deduce haplotypes spanning entire chromosomes. These haplotypes are nearly 100% accurate and potentially suitable for medical diagnostics. The rule-based nature of genetic phasing has a useful property: some family genotypes are not consistent with the expected patterns of inheritance, and are thus highlighted as probable sequencing errors or, rarely, as de novo mutations . Mendelian inheritance errors (MIEs) are sites in which the genotype of a child is inconsistent with inheritance from one or both parents. In state consistency errors (SCEs), the genotype of each child is consistent with both parents but the combination of offspring genotypes is inconsistent with the prevailing inheritance state around that locus, as determined from neighboring sites. Genetic analysis enables the phasing of rare alleles that cannot otherwise be accomplished by reference to population-based data. Phasing information obtained through the sequencing of the genomes of family members maps recombinations and inheritance states at high resolution, highlighting the regions of the genome where causal variants segregate. The resulting haplotypes are highly accurate and complete. Nevertheless, genetic analysis cannot phase positions in which all family members are heterozygous. Furthermore, it is not always feasible to recruit the required participants for family-based studies. In the absence of a family context, molecular haplotyping is an excellent choice because it does not require DNA samples from other family members. We predict that, in the next decade, molecular haplotyping will largely supplant the need for genetic analysis. Dewey et al. employed family inheritance-state analysis to control sequencing error and inform haplotype phasing to quantify genome-wide compound heterozygosity from high-throughput sequencing data. To define the inheritance states of neighboring SNVs in the family quartet, Dewey et al. first used a heuristic algorithm that binned allele assortments, followed by a HMM in which the hidden states corresponded to the four possible inheritance states in the quartet and the two error states described by Roach et al. . A combination of pedigree data and statistical phasing based on inheritance state analysis was then used to infer phase for the majority of positions in each childs genome. For uniformly heterozygous positions, the minor allele was assigned to the paternal and maternal chromosome scaffolds using pair-wise pre-computed population linkage disequilibrium (LD) data from the SNP Annotation and Proxy Search (SNAP) database . These algorithms successfully determined genome-wide, long-ranging phased haplotypes in the family quartet. Phased variant data were also used to determine parental contribution to each childs disease risk in the context of thrombophilia. Population analysis leverages shared ancestry information to infer the most likely phasing of variants. The reference population can range from the very large (for example, the global human population), to the narrowly defined (for example, an isolated community). Because population relationships may be distant or cryptic, methodologies for population analysis are statistical and not deterministic. 
Also, because many more meioses separate all of the genomes in a large population, the length of haplotypes determined by population analysis is typically limited to thousands or tens of thousands of bases. Population inference methods work well on genotyping panels, which are compilations of common SNPs. As marker density increases, brute-force algorithms become less tractable, and algorithms such as those based on HMMs are employed . Discerning private and rare haplotypes by population-based methods is highly challenging. Population analysis cannot phase de novo mutations, rare variants or structural variants. If a rare variant is assigned to a haplotype by other methods, however, its presence on a haplotype determined by common SNPs can be probabilistically inferred . Parsimony approaches such as Clarks algorithm attempt to find the minimum number of unique haplotypes in a data set. The accuracy of this method depends on the assumption that markers are tightly linked and largely assignable to common haplotypes. Therefore, such algorithms over-predict common haplotypes. Coalescent-based methods and HMMs are also commonly employed to model population haplotype frequencies. The software programs PHASE, fastPHASE, MaCH, IMPUTE2, SHAPEIT, and Beagle implement such methods (Table2). These methods estimate parameters iteratively, so they work well with a small number of genetic markers residing on a short haplotype block. SHAPEIT (segmented haplotype estimation and imputation tool) scales linearly with the number of variants, samples and conditional haplotypes used in each iteration and can be run efficiently on whole chromosomes ,: it was applied to generate a haplotype map of 38 million SNPs for phase 1 of the 1000 Genomes Project ,. This program is versatile for population-based studies as it is able to handle data from combinations of unrelated individuals, duos and trios. SHAPEIT2 adds the ability to incorporate molecular information from sequence reads, incorporating calls and base-quality scores in a probabilistic model. It works best for high-coverage sequence. OConnell et al. have incorporated SHAPEIT2 into a general haplotyping workflow that can also detect state consistency errors in pedigrees . Wang et al. developed a population imputation pipeline, SNPTools, to phase low-coverage data obtained from phase 1 of the 1000 Genomes Project. SNPTools statistically models sequence data, scores polymorphic loci, and generates genotype likelihoods using a Binary sequence map (BAM)-specific Binomial Mixture Model (BBMM). The genotype likelihoods can then be integrated into SNPTools imputation algorithm or other algorithms such as Beagle to estimate phased genotypes and haplotypes. SNPTools haplotype imputation algorithm employs a four-state constrained HMM sampling scheme that assumes that the individual haplotype is a mosaic of the four parental haplotypes. WinHAP estimates multi-SNV haplotypes from large-scale genotype data . This software simplifies the 2SNP algorithm, using pairs of heterozygous SNVs to generate initial haplotypes and subsequently to construct a linear tree that makes it possible to infer a solution for the haplotype phase . These haplotypes are then improved by applying scalable sliding windows. Last, parsimony is used to iteratively restrict the number of haplotypes. The accuracy of population haplotyping can be improved by modeling population substructure and detecting cryptic relatedness. 
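The parsimony idea behind Clark's algorithm can be illustrated with a short sketch (a simplified toy, not the published method or any of the tools named above): genotypes with at most one heterozygous site phase themselves and seed a list of known haplotypes, which are then reused greedily to resolve the remaining ambiguous genotypes.

```python
# Simplified sketch of Clark's parsimony phasing (toy version only).
# A genotype is a tuple per site: 0 = hom ref, 1 = het, 2 = hom alt.
# A haplotype is a tuple of alleles: 0 = ref, 1 = alt.

def unambiguous_haplotypes(genotype):
    """Genotypes with at most one heterozygous site phase themselves."""
    hets = [i for i, g in enumerate(genotype) if g == 1]
    if len(hets) > 1:
        return None
    h1 = tuple(1 if g == 2 else 0 for g in genotype)
    h2 = tuple(1 if g == 2 or i in hets else 0 for i, g in enumerate(genotype))
    return h1, h2


def complement(genotype, haplotype):
    """The other haplotype implied by subtracting one haplotype from a genotype."""
    comp = tuple(g - h for g, h in zip(genotype, haplotype))
    return comp if all(a in (0, 1) for a in comp) else None  # None = incompatible


def clark_phase(genotypes):
    known, solution = set(), {}
    for g in genotypes:                      # seed with unambiguous genotypes
        pair = unambiguous_haplotypes(g)
        if pair:
            solution[g] = pair
            known.update(pair)
    for g in genotypes:                      # resolve the rest by reusing known haplotypes
        if g in solution:
            continue
        for h in sorted(known):
            other = complement(g, h)
            if other is not None:
                solution[g] = (h, other)
                known.add(other)
                break
        # genotypes with no compatible known haplotype are left unphased,
        # a known limitation of the greedy scheme
    return solution


genotypes = [(0, 1, 2), (2, 1, 2), (1, 1, 2)]   # invented example data
for g, pair in clark_phase(genotypes).items():
    print(g, "->", pair)
```

Modern tools replace this greedy reuse with coalescent or HMM models of haplotype frequencies, and, as noted above, their accuracy also depends on how well population substructure and relatedness are modeled.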
Such issues may be overcome by exploiting algorithms originally conceived for identical-by-descent (IBD) detection . Programs such as fastIBD and GERMLINE leverage population level IBD to define haplotypes . The extent of shared IBD between a pair of individuals depends on the number of generations since their last common ancestor as recombination and mutation increase genetic diversity. GERMLINE directly matches portions of haplotypes between individuals from phased genotype data. FastIBD uses a HMM approach for IBD detection of shared haplotypes from unphased genotype data. IBD segments are identified by modeling shared haplotype frequencies that account for background levels of LD. Most of the available algorithms for population inference of haplotypes from WGS require careful balancing of computational speed and accuracy. They also rely on the availability of well-characterized, population-matched reference datasets ; these need to be large enough to sample rare variants. Population-based phasing methods are probabilistic, limited to generation of short haplotype blocks, and will incorrectly phase rare combinations of variants, exactly those combinations most likely to be medically important. Moreover, haplotypes derived from algorithms that include population inference are likely to have an error rate that is unacceptably high for medical purposes. If an individual is a member of a completely characterized isolated population, the accuracy of population-based haplotypes can be very high. Such haplotyping has been demonstrated by Kong et al. for the Icelandic population. Use of such databases in combination with methods to phase de novo mutations and haplotypes resulting from recent recombinations could both permit increased haplotype quality and reduce the need for genetic and molecular haplotyping in patients from these populations. Combinations of molecular, genetic and population-based methods may work better than any single approach, by combining strengths and minimizing weaknesses (Table2). HARSH evolved from Hap-seqX, combining haplotype assembly with a reference population dataset to predict haplotypes from WGS data ,. Prism, mentioned earlier, is another recent hybrid algorithm . PPHS (Perfect Phylogeny Haplotypes from Sequencing) is another combination approach that combines population and molecular analysis by using raw sequence data to construct a phylogenetic tree for every short region . The phylogeny model assumes that there are no recurrent mutations or recombination within each short sequence region. For each set of SNPs in a given window, the algorithm reconstructs a local phylogenetic tree by expectation maximization and assigns haplotypes to individuals. The results for each window are then stitched together using the output of Beagle as a guide. Combination strategies such as these may increase the accuracy of population inference methods by leveraging the information provided by sequence data, or may supplement genetic analyses with population data, as described by Dewey et al.. A combination of genetic, physical and population-based approaches in a quartet yielded complete genome phasing, including phasing of 99.8% of fully heterozygous variants . Clinical applications of whole-genome haplotyping Local haplotyping has been and will remain important for genomic diagnostics. The immediate impact of whole-genome haplotyping will be to provide all local haplotypes. 
Local haplotypes are well known for the major histocompatibility complex (MHC) and several other loci, including the ApoE4 haplotype of the ApoE locus. MHC haplotypes are important for predicting graft compatibility and for prediction of the risks and protectivity of many phenotypes, notably type 1 diabetes . In many cases, the causative variant is not known, and the observed haplotype serves as a proxy for assaying the unknown single variant that lies in or is linked to that haplotype. In other cases, such as ApoE4, multiple coding variants must lie on the same haplotype within a single coding sequence in order to effect a particular phenotypic change. Family-based haplotyping to identify compound heterozygosity as a cause of recessive Mendelian disease is also fairly routine. Fetal and newborn diagnostics will also benefit from haplotyping. Spearheading such an approach in 2012, Kitzman et al. inferred haplotypes of a human fetus using haplotype-resolved genome sequencing of the mother, shotgun sequencing of the father, and deep sequencing of maternal plasma. Pathogenic rare variants will be a significant source of concern when practicing genomic medicine. Thus, an important clinical application of haplotypes will occur at the largely unseen analysis stage - in improving variant calling and avoiding false alarms. Already, software tools such as Platypus (www.well.ox.ac.uk/platypus) are being developed to produce improved base calling as informed by haplotypes . As knowledge and methods improve, understanding the functional interactions between regulatory elements and coding regions will permit medical decision-making that is based not only on the predicted effects of variants on the function of a protein, but also on combining separate predictions of (a) the functions of the two proteins produced by the two alleles of the encoding gene, and (b) the effects of the two cis-regulators of these two proteins. For example, if one of the cis-regulators markedly increases expression while the other decreases expression, and one protein is defective while the other is normal, then one combination of cis-regulators with the protein-coding alleles will produce wellness whereas the other combination will produce disease . Conclusions and future directions High-throughput short-read sequencing enabled rapid advances following the HGP. Unfortunately, haplotyping got left by the wayside, as the long reads characteristic of the HGP gave way to cheaper short reads. Now a combination of new strategies and new technologies is enabling the determination of personal haplotypes that will soon be economical for more routine medical use. The new strategies that we have discussed enable the use of cheap short reads for inferring longer haplotypes, typically by physically or computationally placing these reads into haplotype bins. Some new technologies, such as Hi-C, facilitate this binning process, whereas other new technologies will enable the generation of cheaper long reads. Considering the garbage-in-garbage-out principle, and that most current algorithms perform near perfectly on error-free data, improving sequencing error rate is probably the most critical factor for improving haplotypes . In other words, to improve haplotypes for use in genomic medicine, a focus on phasing algorithms and methodologies is not necessarily the greatest requirement, but rather a focus on improving the input data. More phase errors can arise with whole-genome data than with genotyping chip data. 
The SNPs included in genotyping chips tend to be selected for Hardy-Weinberg equilibrium, and so any SNP with a heterozygote frequency that is unexpected in relation to the allele frequencies is excluded. Such pre-selection is not done for WGS data. The ability to phase WGS data can be confounded by reference sequence errors, reference gaps and centromeres, and long interspersed nuclear elements (LINEs) . Methods are needed for filtering out these regions or for handling them in a probabilistic framework with appropriate confidence statistics. In conjunction with improvements in sequence quality, the generation of long sequence reads (of 10,000 bases or longer) is another key factor for haplotype improvement ,,. Reduction of sequencing error will have the greatest impact on high-frequency switch error, whereas improvements in read length will have a greater impact on low-frequency switch error (Boxes 1 and 3, and Figure3). Because of the importance of compound heterozygote analysis and within-gene phasing for medical applications, high-frequency switch errors must be minimized. Commoditization of haplotyping will also be necessary, and as this occurs, the costs of the various approaches will become less opaque. Other commodity technologies, such as sequencing, are best performed in high-throughput operations because such facilities offer a concentration of expertise, economy of scale, standard operating procedures, and rigorous quality control. Clinical haplotype databases will need to be developed in parallel with haplotype commoditization, much like the ClinVar database for individual variants associated with disease . It is now routine in medical genetics to consider compound heterozygosity in identifying disease risks and causes in patients. Typically, this search is either carried out by genetic haplotyping if the sequences of parents are available or is achieved by considering all possible haplotypes of detrimental variations within a gene. We identified compound heterozygosity causing Miller syndrome in our analysis of the first whole-genome sequenced family , but such analyses have been routine for years in the analysis of candidate genes, such as those for cystic fibrosis and breast cancer risk. Numerous examples include the identification of compound heterozygous causes of diseases, including the gene that encodes protein C in cerebral palsy , Charcot-Marie-Tooth neuropathy , and the gene encoding lysyl-tRNA synthetase in peripheral neuropathy . Currently, much clinical screening for compound heterozygosity is done with exome sequencing, but we predict a shift towards WGS as costs drop. As the understanding and annotation of regulatory variants continues to improve, we will see an increasing number of reports of cis-acting regulatory elements that alter gene expression and cause disease. Examples that have already been reported include a mutation in a RET enhancer that increases Hirschsprung disease risk , as well as mutations that affect thalassemia, hemophilia, hypercholesterolemia, and pyruvate kinase deficiency . Increasingly, the phase of these regulatory elements with respect to the coding variants will be part of routine diagnostics. For example, Ludlow et al. described a patient with a mutation in the promoter of one allele of GP1BB (encoding platelet glycoprotein Ib beta) and a deletion of the other allele, which together resulted in Bernard-Soulier syndrome . DNA diagnostics and prognostics that have clinical applications in oncology are expanding rapidly. 
For example, particular haplotypes of GNMT (encoding glycine N-methyltransferase) differentially predispose individuals to prostate cancer. In many oncological applications, genetic and population phasing is of little value because of the large number of somatic mutations that may be present in tumor cells. Molecular phasing will therefore be the primary tool in this area, and algorithms that allow for multiple ploidy states will be important in handling the complexities of tumor genomes; currently most haplotype assembly algorithms assume diploidy. The MHC/HLA (human leukocyte antigen) locus is the most important haplotype influencing disease, and an understanding of the value of MHC haplotypes is therefore nothing new. It has traditionally been difficult to use molecular techniques that avoid low-frequency switch errors between genes of the MHC. Applications of some of the new long-range haplotyping techniques, particularly those capable of de novo assembly of regions of personal genomes within the MHC that are not in the reference genome, are likely to allow better utility of MHC typing for research, prognostics, diagnostics, and tissue transplantation.

The genomic medicine of the future will rely on accurately mining patient sequence data to identify disease, wellness and actionable genes. Genomics must move beyond simple single allelic and genotypic tests of association and familial segregation to explain phenotypes. At the simplest level, whether two detrimental variants that affect 'the same gene' lie in cis or in trans may spell the difference between a healthy carrier and a diseased compound heterozygote. The paradigm of medical understanding must be shifted from 'the function of a gene in an individual' to 'the functions of each allele of each gene in an individual'. To achieve this, we must transform the conceptualization of the genome in the minds of both clinicians and researchers from one that contains 22 autosomes and two sex chromosomes to one that contains 44 autosomes and two sex chromosomes, each with its own haplotype. Individual genome sequencing is being applied at all stages of life, from preimplantation, prenatal and neonatal diagnosis, to 'no phenotype' personalized genomics. Whole-genome haplotypes will improve the precision of personalized predictive, preventive and participatory medicine.

Abbreviations
BAC: bacterial artificial chromosome; BAM: binary alignment map; BBMM: BAM-specific Binomial Mixture Model; CNV: copy number variant; FACS: fluorescence-activated cell sorting; HASH: haplotype assembly for single human; HGP: Human Genome Project; HLA: human leukocyte antigen; HMM: hidden Markov model; LFR: long fragment read; LINE: long interspersed nuclear element; MCMC: Markov chain Monte Carlo; MEC: minimum error correction; MHC: major histocompatibility complex; MIE: Mendelian inheritance error; PPHS: Perfect Phylogeny Haplotypes from Sequencing; SCE: state consistency error; SIH: single individual haplotyping; SLRH: statistically aided, long-read haplotyping; SNAP: SNP Annotation and Proxy Search; SNP: single nucleotide polymorphism; SNV: single nucleotide variant

References
Tennessen JA, Bigham AW, O'Connor TD, Fu W, Kenny EE, Gravel S, McGee S, Do R, Liu X, Jun G, Kang HM, Jordan D, Leal SM, Gabriel S, Rieder MJ, Abecasis G, Altshuler D, Nickerson DA, Boerwinkle E, Sunyaev S, Bustamante CD, Bamshad MJ, Akey JM: Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science. 2012, 337: 64-69. Gravel S, Henn BM, Gutenkunst RN, Indap AR, Marth GT, Clark AG, Yu F, Gibbs RA, Bustamante CD: Demographic history and rare allele sharing among human populations.
Turfgrasses (Lawn Grasses) – Comparison and Selection

There are usually several varieties of each turfgrass from which to choose. Which is the best for your situation? The information below, taken from a University of Georgia publication, should help.

COOL SEASON GRASSES

Cool-season grasses grow well during the cool months (60 to 75 degrees F) of the year. They may become dormant or injured during the hot months of summer.

Tall Fescue (Festuca arundinacea). Perhaps the most popular grass in the mountain and upper piedmont areas of Georgia is tall fescue. This is a perennial bunch-type grass that grows rapidly and requires frequent mowing in the spring and fall. Tall fescue needs more water than the warm-season grasses to stay green during the summer. It is quickly established from seed and grows well in full sun as well as moderate shade. Tall fescue will tolerate a wide range of soil conditions, but like most turfgrasses grows best with a soil pH between 5.5 and 6.5. Lawns planted in tall fescue tend to thin out and become “clumpy,” thus requiring reseeding every three or more years. Kentucky-31 (K-31) is the old, common cultivar or variety of tall fescue grown in Georgia. Most of the new cultivars, referred to as “turf-type” tall fescues, have slightly narrower leaf blades, slower vertical growth rates, and greater density and shade tolerance than K-31. As a result, if properly managed, most turf-types will produce a better turf than K-31. More information can be obtained from Cooperative Extension Service Leaflet No. 354, Tall Fescue Lawn Management.

Kentucky Bluegrass (Poa pratensis). Kentucky bluegrass has a medium leaf texture and a bright, pleasing color. There are many varieties which grow well in and north of the upper piedmont areas of Georgia. Kentucky bluegrass can become semi-dormant during hot weather, and grows best in a fertile soil with a pH of 6 to 7. While it does best in partial shade, it will grow in open sun if adequate moisture is present.

Ryegrasses. Perennial ryegrass (Lolium perenne) and annual ryegrass (Lolium multiflorum) are suited for use as temporary cool-season turfgrasses throughout Georgia. They can be used as a temporary winter cover on new lawns that have not been permanently established. Ryegrasses are also used for overseeding, that is, to provide a green cover on a warm-season grass during the winter. However, overseeding may damage the warm-season grass unless managed correctly in the spring, because the ryegrass competes for moisture, sunlight and nutrients. There are many varieties of perennial ryegrass, and depending upon the environmental conditions, they may behave as annuals or perennials. As its name suggests, annual ryegrass dies as summer approaches. It is also known as common, winter, domestic, Oregon, and Italian ryegrass.

WARM SEASON GRASSES

Warm-season grasses grow best during the warm months (80 to 95 degrees F) of spring, summer and early fall. They grow vigorously during this time and become brown and dormant in winter.

Bermudagrasses (Cynodon spp.). All bermudas thrive in hot weather but perform poorly in shade. Bermudas spread so rapidly by both above- and below-ground runners that they are difficult to control around flower beds, walks and borders. If fertilized adequately, they require frequent mowing. The bermudagrasses are adapted to the entire state and tolerate a wide soil pH range.

Common Bermudagrass (Cynodon dactylon). Common bermudagrass is drought resistant, grows on many soils, and makes a good turf if properly fertilized and mowed.
Common bermudagrass produces many unsightly seedheads, but in spite of this fault, it is frequently used on home lawns due to the ease and economy of establishment. Common bermuda may be planted from either seed or sprigs and with intensive management will provide a high quality turf. However, the newer hybrid bermudas are generally far superior.

Hybrid Bermudagrasses. Compared with common bermuda, these grasses have more disease resistance, greater turf density, better weed resistance, fewer seedheads, finer and softer texture and more favorable color. They also produce no viable seed and must be planted by vegetative means. The hybrids also require more intensive maintenance for best appearance. Frequent fertilization and close mowing, edging, and dethatching are needed to keep them attractive. All of the improved bermudas described here have been developed and released cooperatively by the University of Georgia Coastal Plain Experiment Station and the U.S. Department of Agriculture. They are products of the grass breeding program of Dr. Glenn W. Burton, Principal Geneticist.

Tifway (Tifton 419) Bermudagrass. Tifway has several outstanding features that make it an ideal turf for lawns and golf fairways and tees. It has a dark green color and stiffer leaves than Tifgreen. Tifway is more frost resistant than other bermudagrasses. Therefore, it will usually remain growing and green longer in the fall and will develop color earlier in the spring. This trait, along with its ruggedness, has led to its use on football fields.

Tifway II Bermudagrass. Tifway II is an improved mutant of Tifway. It looks like Tifway and has the same desirable characteristics, but makes a denser turf, is more frost tolerant, often greens up earlier in the spring and provides slightly better turf quality.

Tifgreen (Tifton 328) Bermudagrass. Tifgreen is a low-growing, rapidly spreading grass. It is relatively disease resistant and makes a dense, weed-resistant turf when properly managed. Its fine texture and soft, green leaves are largely responsible for its excellence as a putting green on golf courses.

Tifdwarf Bermudagrass. This hybrid is thought to be a vegetative mutant from the original Tifgreen nursery at Tifton. Tifdwarf, as the name implies, is a very short grass with tiny leaves that hug the ground very closely. It has softer leaves and fewer seedheads than Tifgreen. These characteristics contribute to its use on golf greens and make it less desirable than the other hybrids for lawn use.

Tifsport (Tifton 94) Bermudagrass. Tifsport is emerald green and keeps its dark color later in the fall. It is low-growing, spreads rapidly, and resists traffic injury.

Carpetgrass (Axonopus affinis). Carpetgrass is a perennial, coarse-leaved, creeping grass which grows in the central and southern regions of the state. It grows better on low, wet soils than do other grasses. It will grow well in either sun or shade but is less shade tolerant than St. Augustine and centipedegrass, which it resembles. Carpetgrass may be planted by seed or sprigs. It is not winter hardy and should not be planted north of middle Georgia. Carpetgrass is recommended only for lawns on wet, low-fertility, acid (pH 4.5-5.5), sandy soils where ease of establishment and care is more important than quality. Its chief disadvantage is rapid seedhead production.

Centipedegrass (Eremochloa ophiuroides). This is a low, medium-textured, slow-growing but aggressive grass that can produce a dense, attractive, weed-free turf. It is more shade tolerant than bermudagrass but less shade tolerant than St.
Augustine and zoysiagrass. Since centipede produces only surface runners, it is easily controlled around borders of flower beds and walks. It is well adapted as far north as Atlanta and Athens. Centipede is the ideal grass for the homeowner who wants a fairly attractive lawn that needs little care. Centipede does not require much fertilizer or mowing, and compared to other lawn grasses, is generally resistant to most insects and diseases. It will, however, respond to good management and provide a very attractive turf. Centipede can be established from either seeds or sprigs. Since it is slow growing, it takes longer than bermuda and St. Augustine to completely cover. Centipede is subject to “decline” problems that can be prevented by proper management. This includes care not to overfertilize, prevention of thatch accumulation, irrigation during drought stress, particularly in the fall, and maintaining a mowing height of 1-1 1/2 inches. Centipede is well adapted to soils of low fertility with a pH of 5.0 to 6.0 but grows best — like most grasses — at a soil pH of 6.0 to 6.5. For additional information see Cooperative Extension Service Leaflets No. 313, Centipede Lawns, and No. 177, Prevent Centipede Decline.

Zoysiagrasses (Zoysia spp.). Several species and/or cultivars of zoysiagrasses are available in Georgia. Most are adapted to the entire state and form an excellent turf when properly established and managed. For the best appearance, most zoysias require cutting with a reel mower, periodic dethatching, and more frequent irrigation than other warm-season turfgrasses. The zoysias form a dense, attractive turf in full sun and partial shade, but may thin out in dense shade. Most zoysias grow very slowly when compared to other grasses. They usually are established by sodding, plugging, or sprigging. Two-inch diameter plugs planted on 6-inch centers will cover completely in 12 months if irrigated and fertilized properly.

Zoysia japonica is sometimes called Japanese or Korean lawngrass or common zoysia. It has a coarse leaf texture and excellent cold tolerance, and, like the bermudas, it can be seeded.

Meyer zoysia, also called “Z-52,” is an improved selection of Zoysia japonica. It has medium leaf texture, good cold tolerance, and spreads more rapidly than the other zoysiagrasses. This is the zoysia often advertised as the “super” grass in newspapers and magazines. These advertising claims are true in part, but do not tell the entire story.

Zoysia matrella, also named Manilagrass, is less cold tolerant than Zoysia japonica or Meyer but more so than Emerald. It also has a finer leaf texture than Zoysia japonica and Meyer, but is coarser than Emerald.

Emerald zoysia is a hybrid between Zoysia japonica and Zoysia tenuifolia that was developed in Tifton, Georgia. It has a dark green color, a very fine leaf texture, good shade tolerance, high shoot density, and a low growth habit. Emerald will develop excess thatch rather quickly if overfertilized, and its lower cold tolerance makes it more susceptible to winter injury in the Atlanta area and northward. After this grass has been mowed, new growth originates largely from the base of the plant, rather than from the branches, thereby leaving very few exposed brown stems. Emerald zoysia is moderately winter-hardy and fairly shade tolerant, but it grows more slowly when planted in a shady yard. Because of its thick growth, it is difficult to overseed.

El Toro is a relatively new zoysia that was developed in California and looks like Meyer.
El Toro is the fastest-growing zoysia, tolerates mowing with a rotary mower, and produces less thatch than Meyer. The winter hardiness of this zoysia is not yet well established.

The zoysiagrasses are (1) slow to cover completely, thus more costly to establish; (2) less drought-tolerant than Common bermudagrass; and (3) recommended for lawn use only when the homeowner is willing to provide the required maintenance. For more information see Cooperative Extension Service Leaflet No. 395, Zoysiagrass Lawns.

St. Augustinegrass (Stenotaphrum secundatum). Compared to finer-textured grasses like the bermudas, St. Augustine has large flat stems and broad coarse leaves. It has an attractive blue-green color and forms a deep, fairly dense turf. It spreads by long above-ground runners or stolons. While it is aggressive, it is easily controlled around borders. It produces only a few viable seeds and is commonly planted by vegetative means. St. Augustine is the most shade tolerant warm-season grass in Georgia. It is very susceptible to winter injury and should be planted only with caution as far north as Atlanta and Athens. Perhaps the greatest disadvantage of this grass is its sensitivity to the chinch bug. While lawn insecticides can control this insect, frequent applications are required.

The more common St. Augustinegrass varieties are Bitter Blue, Floratine and Floratam. Bitter Blue has the best shade tolerance but is sensitive to chinch bugs and St. Augustine Decline Virus (SADV). Floratine has the finest leaf texture but is also susceptible to chinch bugs and SADV. Floratam has the coarsest leaf texture, is resistant to chinch bug and SADV, but is not as shade tolerant as the others.
Following André Borschberg’s record-breaking endurance and distance flight in Solar Impulse last month, the accomplishments of a group of Swiss students and their planned trans-Atlantic, solar-powered autonomous flight are equally worthy of consideration. With a much smaller budget than Solar Impulse, the students are planning a 5,000 kilometer (3,100 mile) flight from Bell Island, Canada to Lisbon, Portugal, a seven-day test of self-guided navigation and autonomous airmanship. Recent achievements by the team suggest that success will come from good design and careful planning.

Last month, their AtlantikSolar 2 Unmanned Aerial Vehicle made its first 24-hour flight. (The team generously acknowledges American Alan Cocconi’s 48-hour, solar-powered flight with his 13-kilogram (28.6 pound) SoLong in 2005.) Only two weeks later, the ETH (Swiss Federal Institute of Technology) Zurich students from the Autonomous Systems Lab managed an 81.5-hour flight that sent their 5.6 meter (18.6 feet) wingspan, 6.8 kilogram (15 pound) UAV 2,316 kilometers (1,436 miles). This broke the world record for flight endurance in the airplane’s class, the team explaining that this is the “longest ever demonstrated continuous flight of all aircrafts below 50kg total mass, and is also the longest-ever continuous flight of a low-altitude long-endurance (LALE) aircraft (the previous record being a 48-hour flight by the 13kg SoLong UAV).” This great effort, the third-longest solar flight ever, trails only Airbus Space’s (formerly QinetiQ’s) 53-kilogram (116.6 pound) Zephyr 7 and the 2,300 kilogram (5,060 pound) Solar Impulse 2. It is the fifth-longest flight ever demonstrated by any aircraft (both manned and unmanned), according to ETH.

The flight, from the radio-controlled model airfield at Rafz, Switzerland, took place between July 14th and 17th, the first three days being sunny. After four days and three nights, the aircraft landed with batteries fully charged – able to have carried the craft through another night. With the hand launch taking place at 09:32 on the 14th, the batteries were fully charged each day thereafter by 13:05 local time, even before maximum sunlight (solar noon) at around 13:30. Philipp Oettershagen, currently a research assistant and PhD student at ETH Zurich’s Autonomous Systems Lab (ASL), reports, “With the exception of takeoff, the aircraft was in fully-autonomous operation 98% of the time, and less than 2% in autopilot-assisted mode via its Pixhawk autopilot.”

AtlantikSolar 2 consumed an average of 35 to 46 Watts in level flight under calm conditions. Maximum input from the 88 SunPower E60 solar cells during each day was around 260 Watts, enabling the plane to fly through the following night and face the sunrise with a minimum state of charge around 35 percent. This would allow the plane to continue with enough energy to make it through longer nights, clouds, or winds, the latter two of which affected the flight’s last day. Thermal updrafts during the first evening and night, followed by downdrafts during the second night, dropped the battery’s state of charge to 32 percent, its lowest level throughout the four days. The airplane survived structural loads imposed by arriving thunderstorms and the strongest winds – up to 60 kilometers per hour (37.2 mph) – experienced on the last day. Gusts were strong enough to partially damage the ground station, but the airplane landed safely in autopilot-assisted mode when the winds dropped.
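The reported figures allow a rough sanity check of the overnight energy budget. The sketch below is a minimal back-of-the-envelope calculation in Python; the battery capacity it uses is a hypothetical placeholder (the article does not state the actual pack size), while the power draw and reserve state of charge are taken from the numbers above.

```python
# Rough night-endurance estimate for a small solar UAV, using figures from the
# AtlantikSolar 2 flight report. The battery capacity is an assumed placeholder;
# the article does not state the actual pack size.

BATTERY_CAPACITY_WH = 700.0    # hypothetical usable capacity in watt-hours
LEVEL_FLIGHT_POWER_W = 46.0    # upper end of the reported 35-46 W draw
MIN_STATE_OF_CHARGE = 0.35     # reported reserve at sunrise (35 percent)


def hours_on_battery(capacity_wh: float, power_w: float, reserve: float) -> float:
    """Hours of level flight available before the battery reaches its reserve level."""
    usable_wh = capacity_wh * (1.0 - reserve)
    return usable_wh / power_w


if __name__ == "__main__":
    hours = hours_on_battery(BATTERY_CAPACITY_WH, LEVEL_FLIGHT_POWER_W, MIN_STATE_OF_CHARGE)
    print(f"Estimated battery-only endurance: {hours:.1f} hours")
    # With these assumptions the aircraft could just cover a short summer night;
    # a larger pack or lower power draw widens the margin.
```

Under these assumed numbers the aircraft would have roughly ten hours of battery-only flight before hitting its reserve, which is consistent with the reported pattern of landing after several nights with charged batteries.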
In the near future, a “swarm” of four AtlantikSolar 2s will be launched from Bell Island into the harsh Atlantic environment, with the hope that all will make it through seven days and nights of unpredictable weather and contrary winds to reach Portugal. After that, the aircraft will test optical and infrared cameras and atmospheric sensors. Some of these flights will take 12 hours and cover 400 kilometers (248 miles) over the Brazilian rain forest. Later developments will include evaluating the aircraft’s ability to provide telecommunications services in large-scale disasters, or live images during industrial sensing and inspection missions. We wish the ETH ASL team all the best in their endeavors, and hope to hear of a successful Atlantic crossing soon.
Physical distancing measures may need to be in place intermittently until 2022, scientists have warned in an analysis that suggests there could be resurgences of Covid-19 for years to come. The paper, published in the journal Science, concludes that a one-time lockdown will not be sufficient to bring the pandemic under control and that secondary peaks could be larger than the current one without continued restrictions. One scenario predicted a resurgence could occur as far in the future as 2025 in the absence of a vaccine or effective treatment.

Marc Lipsitch, a professor of epidemiology at Harvard and co-author of the study, said: “Infections spread when there are two things: infected people and susceptible people. Unless there is some enormously larger amount of herd immunity than we’re aware of … the majority of the population is still susceptible.

“Predicting the end of the pandemic in the summer [of 2020] is not consistent with what we know about the spread of infections.”

In its daily briefings, the UK government has not outlined plans beyond the current restrictions, but the latest paper adds to a building scientific consensus that physical distancing may be required for considerably longer in order to keep case numbers within hospitals’ critical care capacity. Papers released by the government’s scientific advisory group for emergencies (Sage) in March suggested that the UK would need to alternate between periods of more and less strict physical distancing measures for a year to have a plausible chance of keeping the number of critical care cases within capacity.
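The underlying logic (not the specific model used in the Science paper) can be illustrated with a toy SIR simulation in which distancing is switched on and off around a prevalence threshold. All parameter values below are illustrative assumptions, not figures from the study.

```python
# Toy SIR model with intermittent distancing: transmission drops whenever the
# infected fraction crosses a trigger threshold. All parameters are illustrative
# assumptions, not values from the cited study.

def simulate(days=730, beta=0.3, gamma=0.1, distancing_factor=0.25,
             trigger_on=0.01, trigger_off=0.002):
    s, i, r = 0.999, 0.001, 0.0          # population fractions
    distancing = False
    history = []
    for day in range(days):
        if i > trigger_on:
            distancing = True            # impose restrictions
        elif i < trigger_off:
            distancing = False           # relax restrictions
        b = beta * distancing_factor if distancing else beta
        new_infections = b * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        history.append((day, i, distancing))
    return history


if __name__ == "__main__":
    history = simulate()
    waves = sum(1 for prev, cur in zip(history, history[1:])
                if not prev[2] and cur[2])
    print(f"Distinct distancing periods triggered over two years: {waves}")
    # Because most of the population remains susceptible after each suppressed
    # wave, infections rebound whenever restrictions are lifted.
```

Even in this simplified sketch, each controlled wave leaves most of the population susceptible, so prevalence climbs again whenever restrictions relax, which is the mechanism the researchers describe.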
Bones of 325-million-year-old amphibian-like creature discovered on beach in Co Clare

THE fossilised bones of a tiny amphibian-like creature that scurried around Co Clare 325 million years ago have been found. Two 10mm bones, believed to be a leg and a hip bone, were discovered in shale on a beach near Doolin by Dr Eamon Doyle. It is believed the creature predated the first lizards, which would eventually evolve into dinosaurs 100 million years later. According to Dr Doyle, who outlines details of the discovery in the Irish Journal of Earth Sciences, the amphibian lived during a geological period called the Carboniferous Period, which lasted from 360 to 299 million years ago. This period was when amphibians evolved from fish and first began to colonise land. The creature in question could fit into the palm of a hand and probably lived along a swampy coastline, in an estuary or on a river further inland. Researchers believe the amphibian's remains may have been washed out to sea during a storm or flood. This caused the bones to eventually settle onto the muddy seafloor, where they were buried and turned to fossils. Clare County Council hailed the discovery saying: "The fact that amphibian bones are rare finds in rocks of this age highlights the importance of Dr. Doyle’s discovery." Last year, Dr Doyle discovered a 435-million-year-old starfish in Maam Valley in Connemara. The discovery was described by boffins as “an exceptional fossil”. It was immortalised with the new name “Crepidosoma Doyleii” in honour of Dr Doyle.
Health Promotion with Sesame Seeds

Sesame seeds are tiny, oil-rich seeds that develop in pods on the Sesamum indicum plant. Seeds with the outer skin (hull) intact are also fit for human consumption; the skin gives the seeds a golden-brown hue. Hulled seeds have an off-white coloration but turn brown when roasted. These seeds have many possible health benefits and have been used in medicine for hundreds of years. They may protect against heart disease, diabetes, and arthritis. You can also eat them in sizable quantities as food, although a small handful per day is enough to obtain health benefits.

Good Source of Fiber

Three tablespoons (30 grams) of whole sesame seeds yield 3.5 grams of fiber, which is 12% of the Reference Daily Intake (RDI). Fiber is essential in the diet for supporting digestive health. Additionally, growing evidence suggests that fiber may play a role in lowering your risk of heart disease, certain cancers, obesity, and type 2 diabetes. In short, a 3-tablespoon (30-gram) serving of sesame seeds supplies 12% of the RDI for fiber, which is important for your digestive health.

May Lower Cholesterol and Triglycerides

Some research suggests that regularly eating sesame seeds may help reduce high cholesterol and triglycerides, lowering risk factors for heart disease. Sesame seeds contain the following fats:
- 15% saturated fat
- 41% polyunsaturated fat
- 39% monounsaturated fat
Research shows that eating more polyunsaturated and monounsaturated fat relative to saturated fat may help lower your cholesterol and reduce heart disease risk. Sesame seeds may therefore help reduce heart disease risk factors, including elevated triglyceride and “bad” LDL cholesterol levels.

Nutritious Source of Plant Protein

Sesame seeds contain 5 grams of protein per 3-tablespoon (30-gram) serving. To maximize protein availability, choose hulled, roasted sesame seeds: the hulling and roasting processes reduce oxalates and phytates, compounds that hinder your digestion and absorption of protein. Hulled seeds in particular are a good source of protein, an essential building block for your body. Sesame seeds are high in methionine and cysteine, two amino acids that legumes usually do not provide in large amounts.

May Help Lower Blood Pressure

High blood pressure is a major risk factor for heart disease and stroke. Sesame seeds are high in magnesium, which may help lower blood pressure. Lignans, vitamin E, and other antioxidants in these seeds may also help prevent plaque buildup in your arteries, potentially supporting healthy blood pressure. In short, the magnesium in sesame seeds may help lower blood pressure, and their antioxidants may help prevent plaque buildup.

May Support Healthy Bones

Sesame seeds, with or without the hull, are rich in several nutrients that support bone health, although the calcium is mostly in the hull. The seeds also contain oxalates and phytates, which reduce the absorption of these minerals; to limit their impact, try soaking, roasting, or sprouting the seeds. One study found that sprouting reduced phytate and oxalate concentrations. Unhulled seeds are particularly rich in nutrients essential to bone health, including calcium, and soaking, roasting, or sprouting can enhance absorption of these minerals. In addition to the benefits above, sesame seeds may also fight inflammation.
Long-term, low-level inflammation may play a role in many chronic conditions, including obesity and cancer, as well as heart and kidney disease. Preliminary research suggests that sesame seeds and their oil may have anti-inflammatory effects.

Sesame Seeds Are a Source of B Vitamins

Sesame seeds are a good source of certain B vitamins, which are distributed in both the hull and the seed. They supply thiamine, niacin, and vitamin B6, which are essential for proper cellular function and metabolism.

Aid Blood Cell Formation

To make red blood cells, your body needs several nutrients, including ones found in sesame seeds. The seeds are rich in iron, copper, and vitamin B6, which are needed for blood cell formation and function.

Rich in Antioxidants

Animal and human studies suggest that eating sesame seeds may increase the overall amount of antioxidant activity in your blood. The lignans in sesame seeds function as antioxidants, which help fight oxidative stress, a chemical reaction that can damage your cells and increase your risk of many chronic diseases. Plant compounds and vitamin E in sesame seeds likewise act as antioxidants that combat oxidative stress in your body.
Although the SDM process often delivers “win-wins,” most decisions will still involve trade-offs of some kind; hence, the next step involves evaluating these trade-offs and making value-based choices. For example, it may be possible to deliver different levels of environmental protection (environmental flows, for example) at different levels of investment, or it may be necessary to set priorities among different development objectives (e.g., irrigation versus rural electrification or drinking water provision). These trade-offs will be exposed and efforts will be made to gain an understanding of how the people most affected view them. Who is consulted and who participates in making choices may vary by the decision – with the involvement of senior government officials and national/international civil society organizations for strategic decisions and with their local counterparts for project-level decisions. Under SDM, it is not the method itself or some external analysis that does the evaluation, but those seen as legitimate stakeholders, based on their own values and their understanding of the values of those affected.

The SDM process requires that decision makers make explicit choices about which alternative is preferred. This can be done holistically by reviewing the trade-offs in the consequence table and assigning ranks or preferences to the alternatives directly. In this approach, participants implicitly think about which impacts are more or less important, and which set of trade-offs is more or less acceptable. Alternatively, structured methods for more explicitly weighting the evaluation criteria, making trade-offs, and scoring and ranking the alternatives may be used. The SDM process is designed to support, but not require, such structured preference assessment methods. When they are used, they should be designed to provide insight and guidance to decision makers, rather than to prescribe a formulaic answer. They can be used to focus deliberations on productive areas and maintain a performance-based dialogue, rather than a positional one. Structured methods can be demanding, but participants are generally enthusiastic about exploring their own trade-offs, learning about the values and choices of others, and knowing that (in the case of stakeholders) their input has been systematically recorded and taken to decision makers.

At minimum, an emphasis on deliberative quality requires that stakeholders and decision makers involved at this stage should be expected to:
- demonstrate an understanding of the decision scope and context, how it is related to other decisions, why the problem matters, and for whom the consequences are most relevant;
- demonstrate an understanding of the evaluation criteria, the alternatives and the key trade-offs among the alternatives;
- demonstrate an understanding of key uncertainties and their impact on the performance of the alternatives;
- articulate their preferences for the alternatives in terms of the trade-offs that are presented in the consequence table.

While stakeholder consensus is desirable in the SDM process, it is not mandatory. Areas of agreement and disagreement among stakeholders and the reasons for disagreement should be documented and presented to decision makers. To the extent that there is a significant difference between the views of technical specialists and the views of non-technical stakeholders, these differences and the reasons for them should be highlighted.
- SDM helps find ‘win-wins’ but also highlights (and therefore obliges people to consider) trade-offs between alternatives - SDM requires that decision makers be explicit about the choices that they make - SDM enables (but does not require) structured preference assessment techniques that help participants understand their preferences when considering complex trade-offs
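As a rough illustration of the structured preference assessment described above, the sketch below applies a simple weighted-sum score to a small, hypothetical consequence table. The alternatives, criteria, weights, and scores are invented for the example; in practice they would come from stakeholders and technical analysis, and the resulting ranking would inform deliberation rather than dictate a choice.

```python
# Minimal weighted-sum scoring of a hypothetical SDM consequence table.
# Each alternative is scored 0-1 on each criterion (1 = best performance);
# weights express stakeholder priorities and must sum to 1.

consequence_table = {
    # alternative:        {criterion: normalized score}
    "Irrigation focus":   {"env_flows": 0.3, "rural_power": 0.4, "drinking_water": 0.5},
    "Electrification":    {"env_flows": 0.5, "rural_power": 0.9, "drinking_water": 0.4},
    "Drinking water":     {"env_flows": 0.6, "rural_power": 0.3, "drinking_water": 0.9},
}

weights = {"env_flows": 0.4, "rural_power": 0.3, "drinking_water": 0.3}


def rank_alternatives(table, weights):
    """Return alternatives sorted by weighted score, highest first."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    scores = {alt: sum(weights[c] * s for c, s in crits.items())
              for alt, crits in table.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    for alt, score in rank_alternatives(consequence_table, weights):
        print(f"{alt:18s} {score:.2f}")
    # Changing the weights and re-running makes the value trade-offs explicit:
    # the ranking is an input to deliberation, not a prescribed answer.
```

Re-running the scoring with different stakeholder weightings is one way to keep the dialogue performance-based rather than positional, consistent with the role the text assigns to structured methods.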
Types of Script Writing

Script writing or, more commonly, scriptwriting can be broadly defined as writing the dialogue and relevant directions for a production. As scripts are used for a variety of purposes in a number of settings, there are specific criteria or formal structures that are often unique to a given type of script. For example, a screenplay for a film might include camera-specific terminology, such as pan, zoom or deep focus, that would not appear in the script for a play.

Screenplays are scripts written specifically to be produced for a visual medium, such as film or television. For the most part, screenplays are fictional in nature and designed to tell a story. Screenplays typically include a variety of information including setting, dialogue and camera instructions, and may include editing instructions. It should be noted that most screenplays are not produced as written. The director, production crew and even the actors may all directly or indirectly alter the script during production, reports Screenwriting.info.

Plays are productions that occur live, on a physical stage rather than the metaphoric stage of film or television. Like a screenplay, a play script includes dialogue and directions. According to Script Frenzy, a play script will include stage and scene instructions, as well as provide character names and descriptions.

Scripts for audio dramas share a number of components with screenplays and the scripts for stage plays, often sharing terminology. There tends to be much more extensive use of the so-called narrator to provide third-person perspective than in other fictional scripts. The dialogue is also different in that it includes more descriptive language about the surroundings to help establish setting, reports Crazy Dog Audio Theatre. Instructions lean toward the necessary audio components that need to accompany a given scene and may also give direction to the voice actor about how a line should be delivered.

While appearing natural on screen, most news anchors are provided with scripts to read via teleprompters. News scripts tend to be bare-bones affairs that provide informational content. The components of the script the anchors do not read aloud generally include directions for the production staff about when to run a clip or to cut to a live anchor in the field.

Other types of scriptwriting include producing story and dialogue for video games, educational films, online content such as podcasts or marketing materials, and even commercials. These scripts tend to follow the same general patterns as plays, audio dramas and screenplays.
At first glance the Boxer, with its protruding lower jaw and frequently docked tail and cropped ears, seems a majestic, if not imposing, figure. Beneath that formidable facade, however, beats the heart of a dog that is not only faithful, but filled with puppy-like playfulness – so much so that Boxers are referred to as the “Peter Pan of dogs” – and celebrated on their own holiday: National Boxer Day!

When is National Boxer Day?

National Boxer Day is held every year on January 17. The pet holiday was launched in 2020 by TheWoofBook, a Facebook group for dog lovers.

Fun Facts About Boxers

In celebration, we’ve fetched a few interesting facts about Boxers:

The Boxer was developed in Germany in the late 19th century through the breeding of Bulldogs from Great Britain and the Bullenbeisser, a powerful mastiff-type breed which was lost to the world in the early 20th century.

How did the name “Boxer” come to be? There are two schools of thought. Some believe that the name was inspired by one of the breed’s ancestors, the now-extinct Bullenbeisser, or German Bulldog. Others subscribe to the theory that the dogs obtained their pugilistic moniker from the breed’s propensity to stand on their hind legs and “box” with their free front paw, and that “boxer” derived from the word “baxer,” a now antiquated German spelling of boxer.

In 1946 two brave Boxers named Punch and Judy received one of the most prestigious honors to be awarded to an animal – the Dickin Medal. Presented by the British veterinary charity the People’s Dispensary for Sick Animals (PDSA), the medallion was given in recognition of the dogs’ courage in saving the lives of two British officers by attacking a terrorist in Palestine.

Over the years the Boxer has worn many hats, with work ranging from that of an aide to hunters and butchers, to taking on the role of police dog, guard dog, military messenger and service dog.

Boxer coats come in two AKC-recognized colors: brindle and fawn. Up to 25 percent of boxers are “white,” which means that “flash” (a term meaning white markings) covers the majority of their body. Sadly, boxer puppies born white were once euthanized by breeders so this trait, which was looked upon as flawed, would not perpetuate.

Boxers are born with floppy ears and a long tail, which in some countries are often docked so the dog will appear to possess a more formidable physique.

Here’s a fact that will leave dog lovers’ mouths agape! In 2002 a Boxer named Brandy became a Guinness World Record holder for the longest tongue ever for a canine. Measuring 17 inches (43 cm), Brandy’s tongue set a record that still stands today.

The Boxer is number one in the eyes of many dog lovers, and the breed comes close to obtaining the top spot on the American Kennel Club’s annual list of the most popular dog breeds, ranking 14th in 2021. The Boxer bounded in popularity in the United States after World War II, when many members of the breed were brought to their new home by returning soldiers.

Since its unveiling in 2017, a sculpture of a Boxer has stood among a pack of 27 pooches representing various breeds that can be seen playfully squirting streams of water at a three-tiered, Fido-centric fountain in Toronto’s Berczy Park.

Although a Boxer has never won Best in Show at either the National Dog Show or Crufts, the breed has been dubbed top dog several times at the Westminster Kennel Club Dog Show. The first Boxer to win Best in Show at the annual New York City event was Warlord of Mazelaine in 1947.
Success struck again in 1949, when Mazelaine Zazarac Brandy took home the title, and in 1951, when Bang Away of Sirrah Crest earned top dog status. A Boxer’s name was not called again until 1970, when Arriba’s Prima Donna took the title.

Famous Boxers On the Big Screen

Good Boy!— Two Boxers named Lita and Lexi showed off their acting chops when they portrayed a cookie-loving canine named Wilson in this 2003 comedy, which featured “Scrubs” star Donald Faison providing the voice for the character.

Dog Park— A Boxer is among the cast of canine characters who star alongside Luke Wilson, Natasha Henstridge and Janeane Garofalo in this 1998 romantic comedy.

Famous Pet Parents of Boxer Dogs

Following are just a few of the many famous faces who have been lucky enough to have had Boxer fur babies:

Lauren Bacall and Humphrey Bogart— Bogey and Bacall began their life together as husband and wife alongside their first Boxer, Harvey, who had been presented to the couple as a wedding gift. Their love of the breed continued when they welcomed Boxers Baby (an homage to Bacall’s nickname) and George.

Miley Cyrus— When the “Wrecking Ball” singer adopted a rescue Boxer in 2020, a dog that had been used for breeding before escaping that fate, she saw her beauty and dubbed her Kate Moss.

Clark Gable and Carole Lombard— A Boxer named Tuffy starred in the lives of one of the 1930s top Hollywood couples. The Gone with the Wind star even shared a cover of a 1939 issue of Movie Mirror with his canine companion.

Billie Holiday— The true tale of the jazz icon and her barking Boxer buddy is told in the children’s book Mister and Lady Day.

Alan Ladd— When he wasn’t starring in a movie, this Hollywood great co-starred in real life with a Doxie and two Boxers, named Jezebel and Brindie.

Luke Perry— The late Beverly Hills 90210 star shared a home with two Boxers named Casey and Mac.

Pablo Picasso— Known as the father of cubist art, Picasso was also a father to several four-legged family members, including a goat named Esmerelda, Afghans called Kabul, Kasbek and Sauterelle, a Dachshund dubbed Lump, and a Boxer named Jan.

Justin Timberlake— The singer/actor was once a pet parent to two Boxer/Lab mixes named Buckley and Brennan.

Kate Upton— The Sports Illustrated supermodel (who is a role model as an animal advocate, having hosted several pet adoption events) was a proud pet parent to a Boxer named Harley, who crossed Rainbow Bridge in 2021.

Adopting A Boxer

If you would like to welcome a Boxer into your heart and home, you can check online at Across America Boxer Rescue, a foster-based non-profit which finds forever homes for members of the breed who wind up in shelters or who have been abandoned. You can also search for adoptable Boxers online at:

More Pet Holidays You Might Like
- National Cook For Your Pets Day
- Thankful for My Dog Day
- Celebrate Shelter Pets Day
- And don’t miss our full calendar of Pet Holidays!
A book on supporting a child’s inner life wouldn’t be complete without a section on stories, ceremonies, and music. Down through the ages these activities have provided three wonderful means of teaching and inspiring children. Stories continue to have a great capacity to move and transform children. One year I had several girls in my first and second grade class whose parents had been reading to them about the visions of Mary that appeared to three children in Fatima. The girls were so inspired by those stories that for a period of time they spent their recesses going off alone, praying to Mary. The radiance in their faces as they returned to class attested to the inner joy they were receiving from their prayers. Another time, after being introduced to the life of Saint Francis of Assisi, many of my third and fourth grade students were very moved by St. Francis’s total reliance on God for everything, even his food. They asked if they could leave their lunch boxes at home for a week, and at noon go begging for food in a spot near our school where many of their parents and adult friends ate their lunches each day! Being a brand new teacher at the school, I just didn’t have enough nerve to write a note to the parents asking them not to send lunches for a week so we could go begging. Now I wish I had let the children try it. What an opportunity that would have been. The children were so ready to joyously accept as coming from God anything that was given to them or withheld, and were ripe to practice Christ’s teaching, “Take no thought for your life, what ye shall eat, or what ye shall drink.” This is the only idea in the book that I haven’t actually tried, but I wanted to include the story both to show how far you can go, as well as to illustrate how all of us, even authors, sometimes miss special opportunities. Stories have a way of drawing children deeply into the situations being described and lifting them up to new ways of thinking and being. Folk tales from around the world, fables, stories from the Bible and other scriptures, and writings about saints, heroes and ordinary people who have lived exemplary lives or behaved in inspiring ways are all wonderful sources for sharing high values with children. There are many modern children’s books that can be inspiring as well. (See Appendix.) These books can be found in your local Christian or metaphysical bookshop, and don’t overlook the bookstores in certain churches and temples. By exchanging your favorite titles with friends who share your values, you may be pleasantly surprised at the number of inspiring books you can find. The human interest sections in newspapers and magazines offer another source of stories about people who have excelled in some meaningful way. As mentioned earlier, monthly magazines like Guideposts and other church publications often include stories that are uplifting and inspiring for young people. True stories about children are particularly meaningful. It can be helpful to talk about ways to make use of the positive qualities demonstrated in a story, as long as the discussion doesn’t get preachy. Pointing out to your children how they have behaved well in similar, if less dramatic, situations can help them feel that such achievements are actually within their reach. Even little everyday acts of kindness, giving, and stretching beyond past limitations are important steps along one’s inner journey and deserve to be recognized. Ceremonies can open a very special, almost magical world to children. 
One time at school we were studying Native Americans. We had learned about one of their ceremonies for getting rid of negativity, which is called “smudging.” A bundle of fragrant, dried herbs is lit and then made to smolder by blowing out the flames. A feather is used to fan the smoldering herbs, directing the smoke around each person. The person being smudged visualizes the smoke purifying him and ridding him of all negativity. One day the children were having a terrible time getting along with each other. It was just one of those days when everyone seemed to “push everyone else’s buttons.” Finally, one of the children said, “I think we all need to be smudged.” We all sat in a circle while each child went around and smudged every other child. When we were done, several moments passed when no one could even speak. The atmosphere was so changed! The act of smudging had been like both giving and receiving blessings, and indeed all of the negativity was gone!

When ceremonies are performed several times, they become infused with past feelings and memories. Those memories enhance the experience of the ceremony each time it is performed. Of course, performing a ceremony so often that boredom sets in must be avoided. Some ways to charge the atmosphere surrounding the ceremony with a sense of sacredness and meaningfulness include: a darkened room lit only with candles; a cloth draped around the shoulders as on a priest or priestess; a central point of focus such as an altar, a candle or a special picture or object; and waiting outside before entering the room one by one and going to a special, designated spot. Ceremonies might be for special achievements, holidays, or times of need and can involve self-offering, getting rid of an undesirable quality, or taking on a new attitude or type of behavior. An added benefit of developing ceremonies for your family is the sense of belonging and the family traditions that they can establish.

For example, once a month, family members can get together and decide on a special quality or deed that each member of the family has displayed that month. Each person leaves the room while he is being discussed. In this way what he will be recognized for remains a surprise. At a designated time, the family gathers together. A bell might be rung to let everyone know that the time has arrived. The room is lit with candles. Everyone enters the room silently and gathers around a bouquet of flowers which represents all that is beautiful in our world. Around the bouquet is a gold star for each family member with his name on it. On the back of the star is written what each person is being recognized for. One person picks up a star and goes to the one whose name is written on it. He then announces what that person is being recognized for. The person who was just given a star then picks up someone else’s star and proceeds in the same manner as the first person did. When everyone has received recognition, they all join hands and sing an uplifting song together to end the ceremony.

The fire ceremony is a very old and well-known ceremony to help rid oneself of an undesirable quality. Each person writes down on a piece of paper what it is he wants to rid himself of. A small fire is built and, one by one, the papers are thrown in. Each one visualizes the trait leaving him and burning up in the fire along with the paper. Many old, familiar ceremonies are already steeped in a sense of sacredness.
Doing them with your family at home or wherever you interact with children makes them your own. You can perform them as they’ve been done in the past or alter them to fit your particular situation. Communion, simplified vision quests, a wedding ceremony modified to include the whole family and done on the parents’ anniversary are just a few examples. If your child is especially drawn to ceremonies, but you feel at a loss as to how to develop them, look into such cultures as the Native American and East Indian and adapt some of their ideas to fit your particular goals and situation. Music is another powerful tool that you can use to touch and uplift your child or to create the right mood for an activity. When the children in my class are having a hard time calming down, frequently someone will say, “Could we have some quiet music on?” At other times, someone will ask for some particularly lively music when we’re cleaning the room or doing some other energetic activity. Recently I was parked behind a pick-up while waiting for a child to finish his baseball practice. In the back of the truck were four children chatting and quietly playing with each other. All of a sudden someone in the cab turned on some fairly loud rock music. I could hardly believe what I observed! In mere seconds that quiet group of children were jumping around, yelling and screaming, and dancing wildly. They were the very picture of chaos. It was amazing to see how strongly the music affected them. Music has a special way of entering the heart and reaching one’s consciousness. In Education for Life, Donald Walters writes, “By rhythm and melody the mind can be inspired with devotion, or fired to risk life in battle; softened to sentiments of kindness and love; tickled to laughter, soothed to relaxation; or kindled to anger and violence.” Uplifting words set to music will stay with the child long after the singing or listening has stopped. The care that you take in selecting the right music to use with your child will reap rich rewards. It is essential to keep in mind the end result that you are striving towards when you select music to use with your child. If relaxation is your goal, then calm, soothing music is what you need to use. To help your child feel close to nature, a recording such as bird songs with a soft musical background, or the sound of the ocean surf, can be useful. Gregorian chanting and other sacred music may be used to inspire. Just as with books, a good collection of music can come in very handy. If you do not have a background in music, find someone reliable who does have such knowledge, and ask for assistance in deciding upon and locating various pieces.
Agglomeration effects in the regional economy have both advantages and disadvantages. In business administration, agglomeration refers to the accumulation of various companies at one location (so-called locational agglomeration). This clustering gives rise to agglomeration effects. Economies of agglomeration are studied in commerce, in particular in site evaluation for bricks-and-mortar retail. From an operational perspective, the agglomeration advantage is the attractiveness, understood as the increased value, that a (micro) site gains from the spatial concentration of trade and service enterprises at one location. From the point of view of a single business, the increase in sales that a shop gains from its proximity to shops with a similar range of goods is referred to as its agglomeration advantage. Benefits arise from the spatial concentration of physical capital, companies, consumers and workers. Specifically, agglomeration leads to:
- Low transport costs
- A large (local) market
- A large supply of labor, and thus a better chance of matching labor supply and demand quickly, particularly for specialists, with lower search costs
- An accumulation of knowledge and human capital, which leads to knowledge spillovers between firms
The disadvantages of agglomerations include:
- Heavy environmental pressures
- High land prices
- Bottlenecks in public goods (e.g. poor or overburdened infrastructure)
- High competitive pressure
- A lack of reserve areas
Includes texts translated from Wikipedia.
World Thrombosis Day (WTD), a one-day event, is recognized around the world on October 13. The event is dedicated to focusing attention on the often overlooked and misunderstood disease burden caused by thrombosis globally… but this is not a one-time observance. In fact, Minnesota Vein Center fully supports WTD, and Dr. Pal provides in-office patient education sessions to discuss the health dangers of potential blood clots in the legs. WTD is an annual observance which energizes a collective drive to increase awareness and action through educational activities for the public and for health professionals throughout the year, and year to year. At the heart of WTD are dozens of thrombosis and hemostasis societies, patient advocacy groups, medical/scientific organizations and other interested parties from around the world who seek to:
- Increase awareness of the prevalence of and risks from thrombosis
- Reduce the number of undiagnosed cases
- Increase the implementation of evidence-based prevention
- Encourage health care systems to implement strategies to ensure “best practices” for prevention, diagnosis and treatment
- Advocate for adequate resources for these efforts and increased support for research to reduce the disease burden from thrombosis
- And ultimately save lives
To find out more about World Thrombosis Day, visit their website.
Blood Sugar, Gut Health, and Functional Nutrition The Blood Sugar-Gut Connection By: Rachel Scheer, CFM, BS Nutrition & Dietetics We talked earlier this week about how blood sugar balance plays a direct role in weight loss, but did you know that poor blood sugar balance can also be part of the root cause of poor gut health or irritable bowel syndrome?! For one, eating a diet full of simple carbohydrates and sugar can increase inflammation. This inflammation can cause leaky gut, where food, bacteria, and toxins “leak” through the gut lining, triggering immune dysregulation, skin issues, metabolic issues, thyroid problems, and even neurological issues like brain fog, anxiety, or depression. Two, a high-sugar or processed-food diet also feeds gram-negative bacteria in the gut. These bacteria produce lipopolysaccharide (LPS), an endotoxin that promotes further inflammation in the gut. When these LPS-producing bacteria cross through the intestinal epithelium (gut lining) in excessive amounts due to a leaky gut, we experience even more full-body inflammation, hormone imbalances, increases in cortisol (the stress hormone), metabolic syndrome, and insulin resistance. Three, high blood sugar also feeds pathogenic bacteria in the gut. This can lead to candida (yeast overgrowth), SIBO (small intestinal bacterial overgrowth), and dysbiosis (an imbalance of good and bad bacteria) in the gut. This is why it is so important to work on diet and balancing blood sugar so we can properly heal the gut and promote optimal gut health. You can start balancing your blood sugar by making sure each meal includes:
- A high-quality animal protein such as salmon, grass-fed beef or bison, pastured chicken, or whole eggs.
- 1-2 servings of mono- or poly-unsaturated fats such as avocado, olives or olive oil, walnuts, pumpkin seeds, or fatty fish like salmon.
- 1-2 cups of non-starchy vegetables or low-sugar fruit for fiber.
Working with one of our RSN Coaches can help customize a plan that is specific to your body, root cause(s) and goals. Ready to start your gut-healing journey and get into the best health and shape of your life? Book a free 30-minute call today to learn more. Rachel Scheer is a Certified Nutritionist who received her degree from Baylor University in Nutrition Science and Dietetics. Rachel has her own private nutrition and counseling practice located in McKinney, Texas. Rachel has helped clients with a wide range of nutritional needs enhance their athletic performance, improve their physical and mental health, and make positive lifelong eating and exercise behavior changes.
(This discussion followed the lecture on the Vedanta Philosophy delivered by the Swami at the Graduate Philosophical Society of Harvard University, U. S. A., on March 25, 1896. (Vol. I. )) Q. — I should like to know something about the present activity of philosophic thought in India. To what extent are these questions discussed? A. — As I have said, the majority of the Indian people are practically dualists, and the minority are monists. The main subject of discussion is Mâyâ and Jiva. When I came to this country, I found that the labourers were informed of the present condition of politics; but when I asked them, “What is religion, and what are the doctrines of this and that particular sect?” they said, “We do not know; we go to church.” In India if I go to a peasant and ask him, “Who governs you?” he says, “I do not know; I pay my taxes.” But if I ask him what is his religion, he says, “I am a dualist”, and is ready to give you the details about Maya and Jiva. He cannot read or write, but he has learned all this from the monks and is very fond of discussing it. After the day’s work, the peasants sit under a tree and discuss these questions. Q. — What does orthodoxy mean with the Hindus? A. — In modern times it simply means obeying certain caste laws as to eating, drinking, and marriage. After that the Hindu can believe in any system he likes. There was never an organised church in India; so there was never a body of men to formulate doctrines of orthodoxy. In a general way, we say that those who believe in the Vedas are orthodox; but in reality we find that many of the dualistic sects believe more in the Purânas than in the Vedas alone. Q. — What influence had your Hindu philosophy on the Stoic philosophy of the Greeks? A. — It is very probable that it had some influence on it through the Alexandrians. There is some suspicion of Pythagoras’ being influenced by the Sânkhya thought. Anyway, we think the Sankhya philosophy is the first attempt to harmonise the philosophy of the Vedas through reason. We find Kapila mentioned even in the Vedas: “ऋषिं प्रसूतं कपिलं यस्तमग्रे — He who (supports through knowledge) the first-born sage Kapila.” Q. — What is the antagonism of this thought with Western science? A. — No antagonism at all. We are in harmony with it. Our theory of evolution and of Âkâsha and Prâna is exactly what your modern philosophies have. Your belief in evolution is among our Yogis and in the Sankhya philosophy. For instance, Patanjali speaks of one species being changed into another by the infilling of nature — “जात्यन्तरपरिणामः प्रकृत्यापूरात्”; only he differs from you in the explanation. His explanation of this evolution is spiritual. He says that just as when a farmer wants to water his field from the canals that pass near, he has only to lift up the gate — “निमित्तमप्रयोजकं प्रकृतीनां वरणभेदस्तु ततः क्षेत्रिकवत्” — so each man is the Infinite already, only these bars and bolts and different circumstances shut him in; but as soon as they are removed, he rushes out and expresses himself. In the animal, the man was held in abeyance; but as soon as good circumstances came, he was manifested as man. And again, as soon as fitting circumstances came, the God in man manifested itself. So we have very little to quarrel with in the new theories. For instance, the theory of the Sankhya as to perception is very little different from modern physiology. Q. — But your method is different? A. — Yes. We claim that concentrating the powers of the mind is the only way to knowledge.
In external science, concentration of mind is — putting it on something external; and in internal science, it is — drawing towards one’s Self. We call this concentration of mind Yoga. Q. — In the state of concentration does the truth of these principles become evident? A.— The Yogis claim a good deal. They claim that by concentration of the mind every truth in the universe becomes evident to the mind, both external and internal truth. Q. — What does the Advaitist think of cosmology? A. — The Advaitist would say that all this cosmology and everything else are only in Maya, in the phenomenal world. In truth they do not exist. But as long as we are bound, we have to see these visions. Within these visions things come in a certain regular order. Beyond them there is no law and order, but freedom. Q. — Is the Advaita antagonistic to dualism? A. — The Upanishads not being in a systematised form, it was easy for philosophers to take up texts when they liked to form a system. The Upanishads had always to be taken, else there would be no basis. Yet we find all the different schools of thought in the Upanishads. Our solution is that the Advaita is not antagonistic to the Dvaita (dualism). We say the latter is only one of three steps. Religion always takes three steps. The first is dualism. Then man gets to a higher state, partial non-dualism. And at last he finds he is one with the universe. Therefore the three do not contradict but fulfil. Q. — Why does Maya or ignorance exist? A. — “Why” cannot be asked beyond the limit of causation. It can only be asked within Maya. We say we will answer the question when it is logically formulated. Before that we have no right to answer. Q. — Does the Personal God belong to Maya? A. — Yes; but the Personal God is the same Absolute seen through Maya. That Absolute under the control of nature is what is called the human soul; and that which is controlling nature is Ishvara, or the Personal God. If a man starts from here to see the sun, he will see at first a little sun; but as he proceeds he will see it bigger and bigger, until he reaches the real one. At each stage of his progress he was seeing apparently a different sun; yet we are sure it was the same sun he was seeing. So all these things are but visions of the Absolute, and as such they are true. Not one is a false vision, but we can only say they were lower stages. Q. — What is the special process by which one will come to know the Absolute? A. — We say there are two processes. One is the positive, and the other, the negative. The positive is that through which the whole universe is going — that of love. If this circle of love is increased indefinitely, we reach the one universal love. The other is the “Neti”, “Neti” — “not this”, “not this” — stopping every wave in the mind which tries to draw it out; and at last the mind dies, as it were, and the Real discloses Itself. We call that Samâdhi, or superconsciousness. Q. — That would be, then, merging the subject in the object! A. — Merging the object in the subject, not merging the subject in the object. Really this world dies, and I remain. I am the only one that remains. Q. — Some of our philosophers in Germany have thought that the whole doctrine of Bhakti (Love for the Divine) in India was very likely the result of occidental influence. A. — I do not take any stock in that — the assumption was ephemeral. The Bhakti of India is not like the Western Bhakti. The central idea of ours is that there is no thought of fear. It is always, love God. 
There is no worship through fear, but always through love, from beginning to end. In the second place, the assumption is quite unnecessary. Bhakti is spoken of in the oldest of the Upanishads, which is much older than the Christian Bible. The germs of Bhakti are even in the Samhitâ (the Vedic hymns). The word Bhakti is not a Western word. It was suggested by the word Shraddhâ. Q. — What is the Indian idea of the Christian faith? A. — That it is very good. The Vedanta will take in every one. We have a peculiar idea in India. Suppose I had a child. I should not teach him any religion; I should teach him breathings — the practice of concentrating the mind, and just one line of prayer — not prayer in your sense, but simply something like this, “I meditate on Him who is the Creator of this universe: may He enlighten my mind!” That way he would be educated, and then go about hearing different philosophers and teachers. He would select one who, he thought, would suit him best; and this man would become his Guru or teacher, and he would become a Shishya or disciple. He would say to that man, “This form of philosophy which you preach is the best; so teach me.” Our fundamental idea is that your doctrine cannot be mine, or mine yours. Each one must have his own way. My daughter may have one method, and my son another, and I again another. So each one has an Ishta or chosen way, and we keep it to ourselves. It is between me and my teacher, because we do not want to create a fight. It will not help any one to tell it to others, because each one will have to find his own way. So only general philosophy and general methods can be taught universally. For instance, giving a ludicrous example, it may help me to stand on one leg. It would be ludicrous to you if I said every one must do that, but it may suit me. It is quite possible for me to be a dualist and for my wife to be a monist, and so on. One of my sons may worship Christ or Buddha or Mohammed, so long as he obeys the caste laws. That is his own Ishta. Q. — Do all Hindus believe in caste? A. — They are forced to. They may not believe, but they have to obey. Q. — Are these exercises in breathing and concentration universally practiced? A. — Yes; only some practice only a little, just to satisfy the requirements of their religion. The temples in India are not like the churches here. They may all vanish tomorrow, and will not be missed. A temple is built by a man who wants to go to heaven, or to get a son, or something of that sort. So he builds a large temple and employs a few priests to hold services there. I need not go there at all, because all my worship is in the home. In every house is a special room set apart, which is called the chapel. The first duty of the child, after his initiation, is to take a bath, and then to worship; and his worship consists of this breathing and meditating and repeating of a certain name. And another thing is to hold the body straight. We believe that the mind has every power over the body to keep it healthy. After one has done this, then another comes and takes his seat, and each one does it in silence. Sometimes there are three or four in the same room, but each one may have a different method. This worship is repeated at least twice a day. Q. — This state of oneness that you speak of, is it an ideal or something actually attained? A. — We say it is within actuality; we say we realise that state. If it were only in talk, it would be nothing.
The Vedas teach three things: this Self is first to be heard, then to be reasoned, and then to be meditated upon. When a man first hears it, he must reason on it, so that he does not believe it ignorantly, but knowingly; and after reasoning what it is, he must meditate upon it, and then realise it. And that is religion. Belief is no part of religion. We say religion is a superconscious state. Q. — If you ever reach that state of superconsciousness, can you ever tell about it? A. — No; but we know it by its fruits. An idiot, when he goes to sleep, comes out of sleep an idiot or even worse. But another man goes into the state of meditation, and when he comes out he is a philosopher, a sage, a great man. That shows the difference between these two states. Q. — I should like to ask, in continuation of Professor —’s question, whether you know of any people who have made any study of the principles of self-hypnotism, which they undoubtedly practiced to a great extent in ancient India, and what has been recently stated and practiced in that thing. Of course you do not have it so much in modern India. A. — What you call hypnotism in the West is only a part of the real thing. The Hindus call it self-hypnotisation. They say you are hypnotised already, and that you should get out of it and de-hypnotise yourself. “There the sun cannot illume, nor the moon, nor the stars; the flash of lightning cannot illume that; what to speak of this mortal fire! That shining, everything else shines” (Katha Upanishad, II ii. 15). That is not hypnotisation, but de-hypnotisation. We say that every other religion that preaches these things as real is practicing a form of hypnotism. It is the Advaitist alone that does not care to be hypnotised. His is the only system that more or less understands that hypnotism comes with every form of dualism. But the Advaitist says, throw away even the Vedas, throw away even the Personal God, throw away even the universe, throw away even your own body and mind, and let nothing remain, in order to get rid of hypnotism perfectly. “From where the mind comes back with speech, being unable to reach, knowing the Bliss of Brahman, no more is fear.” That is de-hypnotisation. “I have neither vice nor virtue, nor misery nor happiness; I care neither for the Vedas nor sacrifices nor ceremonies; I am neither food nor eating nor eater, for I am Existence Absolute, Knowledge Absolute, Bliss Absolute; I am He, I am He.” We know all about hypnotism. We have a psychology which the West is just beginning to know, but not yet adequately, I am sorry to say. Q. — What do you call the astral body? A. — The astral body is what we call the Linga Sharira. When this body dies, how can it come to take another body? Force cannot remain without matter. So a little part of the fine matter remains, through which the internal organs make another body — for each one is making his own body; it is the mind that makes the body. If I become a sage, my brain gets changed into a sage’s brain; and the Yogis say that even in this life a Yogi can change his body into a god-body. The Yogis show many wonderful things. One ounce of practice is worth a thousand pounds of theory. So I have no right to say that because I have not seen this or that thing done, it is false. Their books say that with practice you can get all sorts of results that are most wonderful. Small results can be obtained in a short time by regular practice, so that one may know that there is no humbug about it, no charlatanism. 
And these Yogis explain the very wonderful things mentioned in all scriptures in a scientific way. The question is, how these records of miracles entered into every nation. The man, who says that they are all false and need no explanation, is not rational. You have no right to deny them until you can prove them false. You must prove that they are without any foundation, and only then have you the right to stand up and deny them. But you have not done that. On the other hand, the Yogis say they are not miracles, and they claim that they can do them even today. Many wonderful things are done in India today. But none of them are done by miracles. There are many books on the subject. Again, if nothing else has been done in that line except a scientific approach towards psychology, that credit must be given to the Yogis. Q. — Can you say in the concrete what the manifestations are which the Yogi can show? A. — The Yogi wants no faith or belief in his science but that which is given to any other science, just enough gentlemanly faith to come and make the experiment. The ideal of the Yogi is tremendous. I have seen the lower things that can be done by the power of the mind, and therefore, I have no right to disbelieve that the highest things can be done. The ideal of the Yogi is eternal peace and love through omniscience and omnipotence. I know a Yogi who was bitten by a cobra, and who fell down on the ground. In the evening he revived again, and when asked what happened, he said: “A messenger came from my Beloved.” All hatred and anger and jealousy have been burnt out of this man. Nothing can make him react; he is infinite love all the time, and he is omnipotent in his power of love. That is the real Yogi. And this manifesting different things is accidental on the way. That is not what he wants to attain. The Yogi says, every man is a slave except the Yogi. He is a slave of food, to air, to his wife, to his children, to a dollar, slave to a nation, slave to name and fame, and to a thousand things in this world. The man who is not controlled by any one of these bondages is alone a real man, a real Yogi. “They have conquered relative existence in this life who are firm-fixed in sameness. God is pure and the same to all. Therefore such are said to be living in God” (Gita, V. 19). Q. — Do the Yogis attach any importance to caste? A. — No; caste is only the training school for undeveloped minds. Q. — Is there no connection between this idea of super-consciousness and the heat of India? A. — I do not think so; because all this philosophy was thought out fifteen thousand feet above the level of the sea, among the Himalayas, in an almost Arctic temperature. Q. — Is it practicable to attain success in a cold climate? A. — It is practicable, and the only thing that is practicable in this world. We say you are a born Vedantist, each one of you. You are declaring your oneness with everything each moment you live. Every time that your heart goes out towards the world, you are a true Vedantist, only you do not know it. You are moral without knowing why; and the Vedanta is the philosophy which analysed and taught man to be moral consciously. It is the essence of all religions. Q. — Should you say that there is an unsocial principle in our Western people, which makes us so pluralistic, and that Eastern people are more sympathetic than we are? A. — I think the Western people are more cruel, and the Eastern people have more mercy towards all beings. 
But that is simply because your civilisation is very much more recent. It takes time to make a thing come under the influence of mercy. You have a great deal of power, and the power of control of the mind has especially been very little practiced. It will take time to make you gentle and good. This feeling tingles in every drop of blood in India. If I go to the villages to teach the people politics, they will not understand; but if I go to teach them Vedanta, they will say, “Now, Swami, you are all right”. That Vairâgya, non-attachment, is everywhere in India, even today. We are very much degenerated now; but kings will give up their thrones and go about the country without anything. In some places the common village-girl with her spinning-wheel says, “Do not talk to me of dualism; my spinning-wheel says ‘Soham, Soham’ — ‘I am He, I am He.'” Go and talk to these people, and ask them why it is that they speak so and yet kneel before that stone. They will say that with you religion means dogma, but with them realisation. “I will be a Vedantist”, one of them will say, “only when all this has vanished, and I have seen the reality. Until then there is no difference between me and the ignorant. So I am using these stones and am going to temples, and so on, to come to realisation. I have heard, but I want to see and realise.” “Different methods of speech, different manners of explaining the meaning of the scriptures — these are only for the enjoyment of the learned, not for freedom” (Shankara). It is realisation which leads us to that freedom. Q. — Is this spiritual freedom among the people consistent with attention to caste? A. — Certainly not. They say there should be no caste. Even those who are in caste say it is not a very perfect institution. But they say, when you find us another and a better one, we will give it up. They say, what will you give us instead? Where is there no caste? In your nation you are struggling all the time to make a caste. As soon as a man gets a bag of dollars, he says, “I am one of the Four Hundred.” We alone have succeeded in making a permanent caste. Other nations are struggling and do not succeed. We have superstitions and evils enough. Would taking the superstitions and evils from your country mend matters? It is owing to caste that three hundred millions of people can find a piece of bread to eat yet. It is an imperfect institution, no doubt. But if it had not been for caste, you would have had no Sanskrit books to study. This caste made walls, around which all sorts of invasions rolled and surged, but found it impossible to break through. That necessity has not gone yet; so caste remains. The caste we have now is not that of seven hundred years ago. Every blow has riveted it. Do you realise that India is the only country that never went outside of itself to conquer? The great emperor Asoka insisted that none of his descendants should go to conquer. If people want to send us teachers, let them help, but not injure. Why should all these people come to conquer the Hindus? Did they do any injury to any nation? What little good they could do, they did for the world. They taught it science, philosophy, religion, and civilised the savage hordes of the earth. And this is the return — only murder and tyranny, and calling them heathen rascals. Look at the books written on India by Western people and at the stories of many travellers who go there; in retaliation for what injuries are these hurled at them? Q. — What is the Vedantic idea of civilisation? A. 
— You are philosophers, and you do not think that a bag of gold makes the difference between man and man. What is the value of all these machines and sciences? They have only one result: they spread knowledge. You have not solved the problem of want, but only made it keener. Machines do not solve the poverty question; they simply make men struggle the more. Competition gets keener. What value has nature in itself? Why do you go and build a monument to a man who sends electricity through a wire? Does not nature do that millions of times over? Is not everything already existing in nature? What is the value of your getting it? It is already there. The only value is that it makes this development. This universe is simply a gymnasium in which the soul is taking exercise; and after these exercises we become gods. So the value of everything is to be decided by how far it is a manifestation of God. Civilisation is the manifestation of that divinity in man. Q. — Have the Buddhists any caste laws? A. — The Buddhists never had much caste, and there are very few Buddhists in India. Buddha was a social reformer. Yet in Buddhistic countries I find that there have been strong attempts to manufacture caste, only they have failed. The Buddhists’ caste is practically nothing, but they take pride in it in their own minds. Buddha was one of the Sannyâsins of the Vedanta. He started a new sect, just as others are started even today. The ideas which now are called Buddhism were not his. They were much more ancient. He was a great man who gave the ideas power. The unique element in Buddhism was its social element. Brahmins and Kshatriyas have always been our teachers, and most of the Upanishads were written by Kshatriyas, while the ritualistic portions of the Vedas came from the Brahmins. Most of our great teachers throughout India have been Kshatriyas, and were always universal in their teachings; whilst the Brahmana prophets with two exceptions were very exclusive. Râma, Krishna, and Buddha — worshipped as Incarnations of God — were Kshatriyas. Q. — Are sects, ceremonies, and scriptures helps to realisation? A. — When a man realises, he gives up everything. The various sects and ceremonies and books, so far as they are the means of arriving at that point, are all right. But when they fail in that, we must change them. “The knowing one must not despise the condition of those who are ignorant, nor should the knowing one destroy the faith of the ignorant in their own particular method, but by proper action lead them and show them the path to come to where he stands” (Gita, III. 26). Q. — How does the Vedanta explain individuality and ethics? A. — The real individual is the Absolute; this personalisation is through Maya. It is only apparent; in reality it is always the Absolute. In reality there is one, but in Maya it is appearing as many. In Maya there is this variation. Yet even in this Maya there is always the tendency to get back to the One, as expressed in all ethics and all morality of every nation, because it is the constitutional necessity of the soul. It is finding its oneness; and this struggle to find this oneness is what we call ethics and morality. Therefore we must always practice them. Q. — Is not the greater part of ethics taken up with the relation between individuals? A. — That is all it is. The Absolute does not come within Maya. Q. — You say the individual is the Absolute, and I was going to ask you whether the individual has knowledge. A.
— The state of manifestation is individuality, and the light in that state is what we call knowledge. To use, therefore, this term knowledge for the light of the Absolute is not precise, as the absolute state transcends relative knowledge. Q. — Does it include it? A. — Yes, in this sense. Just as a piece of gold can be changed into all sorts of coins, so with this. The state can be broken up into all sorts of knowledge. It is the state of superconsciousness, and includes both consciousness and unconsciousness. The man who attains that state has all that we call knowledge. When he wants to realise that consciousness of knowledge, he has to go a step lower. Knowledge is a lower state; it is only in Maya that we can have knowledge.
AVID (Advancement Via Individual Determination) A new UCLA-led study published last month in the peer-reviewed journal Pediatrics suggests that AVID reduces substance use by 33%, and AVID students are more likely to socialize with peers who are more involved with academics. The study compared students in AVID to similar students in “usual” school programs that commonly cluster—or track—lower-performing students together in less academically rigorous settings. AVID Elementary is offered at 28 elementary schools in Santa Ana Unified, impacting over 8,000 students in over 300 classrooms. By teaching and reinforcing academic behaviors and higher-level thinking at a young age, AVID Elementary teachers create a ripple effect in later grades. Elementary students develop the academic habits they will need to be successful in middle school, high school, and college, in an age-appropriate and challenging way. Children learn about organization, study skills, communication, and self-advocacy. AVID Elementary students take structured notes and answer and ask high-level questions that go beyond routine answers. The strong college-going culture on an AVID Elementary campus encourages students to think about their college and career plans. Schools cover their walls with college pennants and banners, and educators speak about their college experiences. College and careers are no longer foreign concepts, and teachers provide the academic foundation students need to be on a path to college and career success. AVID Elementary closes the opportunity gap before it begins. AVID Secondary is offered at 20 secondary schools in Santa Ana Unified, impacting over 4,000 students in the AVID Elective classes, as well as thousands more schoolwide. The power of AVID Secondary is the ability to impact students in the AVID Elective class and all students throughout the campus. AVID Secondary can have an effect on the entire school by providing classroom activities, teaching practices, and academic behaviors that can be incorporated into any classroom to improve engagement and success for all students. Teachers can take what they've learned at AVID training back to any classroom to help all students, not just those in AVID, to become more college- and career-ready. To address this need, AVID has developed the AVID Elective course. For one period a day, students receive the additional academic, social, and emotional support that will help them succeed in their school’s most rigorous courses. Additionally, the language and literacy needs of long-term English language learners can be addressed through the AVID Excel elective class. AVID (Advancement Via Individual Determination) is a nonprofit that changes lives by helping schools shift to a more equitable, student-centered approach. Over 85,000 educators annually are trained worldwide to close the opportunity gap, so they can prepare all students for college, careers, and life. Our nation’s schools are full of students who possess a desire to go to college and the willingness to work hard, but many of them do not truly have the opportunity to be college-ready. These are often the students who will be the first in their families to attend college and are from groups traditionally underrepresented in higher education. AVID equips teachers and schools with what they need to help these students succeed on a path to college and career success. AVID provides scaffolded support that educators and students need to encourage college and career readiness and success. 
The AVID College Readiness System has been in Santa Ana Unified since 2000. It currently is used at 28 elementary schools and 20 secondary schools.
Ex-Movère. Narrating Places with Images Keywords: architecture, drawing as a tool, editing, images, narration Through paintings, engravings, and short texts it is possible to know a place and at the same time discover the propensities, interests, and ideological inclinations of each author. Engravings, paintings, photographs, and clips are indispensable materials for the architect who intends to recompose places fragmented by the events of history, such as archaeological sites. The drawings of Jean-Pierre Houël and Karl Friedrich Schinkel, the photographs of Josef Koudelka, and the reworkings of Massimiliano Gatti, although distant in time, have in common the story of the natural harmony between architecture, archaeology, and nature. The Percorsi architettonici of Sergej Michajlovič Ėjzenštejn introduce the way of doing architecture of Dimitris Pikionis and Pierre-Louis Faloci: editing is used as a method to compose sequences of spaces. The conclusion refers to the role of drawing as a tool in architecture with reference to a research project underway at the PhD ‘Architecture, Arts and Planning’ of the University of Palermo. Copyright (c) 2022 Alessandra Palma. This work is licensed under a Creative Commons Attribution 4.0 International License.
The Hall effect sensor is a transducer that varies its output voltage in response to a magnetic field. Features of the Hall effect sensor:
- Such a switch costs less than a mechanical switch and is much more reliable.
- It can be operated at up to 100 kHz.
- It does not suffer from contact bounce because a solid-state switch with hysteresis is used rather than a mechanical contact.
- It will not be affected by environmental contaminants since the sensor is in a sealed package.
- It can measure a wide range of magnetic fields.
- Options are available that measure either north or south pole magnetic fields.
Applications of the Hall effect sensor:
- Proximity switching
- Speed detection
- Current sensing applications
- DIY projects
- Robotics projects
- Arduino projects
- Raspberry Pi projects
- Electrical/electronic projects
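To make the proximity-switching and Arduino applications above more concrete, here is a minimal sketch for reading a digital-output Hall effect switch module. The wiring (signal on pin D2, onboard LED as an indicator) and the active-low output behaviour are assumptions for illustration only, not specifications of this particular sensor; check your module's datasheet before wiring.

```cpp
// Minimal Arduino sketch (illustrative assumptions: Hall switch output on D2,
// output pulls low near a magnet, onboard LED on pin 13).

const int hallPin = 2;   // digital output of the Hall effect switch module
const int ledPin  = 13;  // onboard LED used as a simple proximity indicator

void setup() {
  pinMode(hallPin, INPUT_PULLUP);  // many breakout modules idle high and pull low near a magnet
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);              // report readings on the serial monitor
}

void loop() {
  int state = digitalRead(hallPin);
  bool magnetPresent = (state == LOW);   // assumed active-low output
  digitalWrite(ledPin, magnetPresent ? HIGH : LOW);
  Serial.println(magnetPresent ? "Magnet detected" : "No magnet");
  delay(100);                            // simple polling, roughly 10 readings per second
}
```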
Stem cell therapy has been used since the 1960s in bone marrow transplants, but today, stem cell therapies have found their way into many fields of regenerative medicine, including hair treatments using both human stem cells and plant stem cells. The innovative Hair Nice hair strengthening serum uses the power of plant stem cells to restore hair vitality. Stem cells can be found in small amounts in almost any tissue or organ, including the skin. The state and characteristics of the epidermis, the uppermost skin layer, determine the health and youthfulness of skin. The epidermis can regenerate quickly, and this is due to the stem cells within it. These stem cells contain all the genetic information necessary for the proper functioning of young skin. Although the number of adult stem cells doesn’t decrease over time, their activity can slow down significantly, mainly because of environmental factors and aging. When stem cells are not at their full regenerative potential, in most cases the blockage is caused by a distortion in the transmission of genetic information. While human stem cells are used in medicine, plant stem cells are used in cosmetics and hair treatment. Scientific studies suggest that we can take advantage of the beneficial effects of plant stem cells to strengthen hair and rejuvenate hair follicles. Plant stem cells possess genetic factors similar to those of human stem cells and can be used to influence the functioning and vitality of stem cells in our skin and hair follicles. For example, stem cells found in basil provide the essential materials needed for the division and regeneration of stem cells in hair, while the “informational signal” stimulates the functioning of “lazy” stem cells. This process is not an intrusive one, as the structure of the stem cell is not altered; we only encourage its functioning to stimulate hair growth. Hair Nice Hair Strengthening Serum The Hair Nice hair strengthening serum strengthens the vitality of the scalp, delays the natural aging process, and revitalizes the hair growth cycle. Besides basil, the Hair Nice serum also contains Panthenol (Vitamin B5), plant keratin, rosemary, MSM, caffeine, and grapefruit seed extract. Thanks to these valuable active ingredients, the serum has rejuvenating, toning, regenerative, epithelizing, disinfectant, and anti-inflammatory effects. It strengthens hair and stimulates hair growth.
Panthenol: has anti-inflammatory effects, calms irritated skin, strengthens hair, stimulates wound healing, and is an important element in the proper development of hair, nails, and skin.
Plant keratin: protects skin, has anti-inflammatory properties, and regulates moisture. It is a microprotein consisting of amino acids that are also found in hair. Its strengthening and protective effects are immediately noticeable.
Caffeine: studies suggest it stimulates hair growth; it penetrates and revitalizes hair follicles.
MSM: has anti-inflammatory and analgesic effects; it makes cell walls more permeable, allowing nutrients to enter and toxins to leave cells. It is essential for keratin production.
Rosemary: the best herb for hair growth and hair regeneration. It stimulates the scalp, activates microcirculation, stimulates hair follicles and hair growth, and helps prevent hair loss.
Menthol: one of the constituents of mint essential oil. It has refreshing, pain-relieving, calming, and cooling effects and reduces scalp itching.
Grapefruit seed extract: it has antibacterial, fungicidal, and antiviral effects, making it useful in treating dandruff and greasy hair. Why is the Hair Nice hair strengthening serum so effective? Apart from the plant-based active ingredients, special “cell coordinating signals” enhance the efficiency of these ingredients. Recommended for: hair loss, thinning hair, balding, patchy hair loss, dandruff, and a sensitive scalp prone to itching. Daily use maintains the health of the scalp and revitalizes hair. 100% NATURAL
Make your OER Textbooks more engaging with H5Ps Recent work with college instructors to enhance their online course Open Educational Resource, or OER, textbooks has me wondering why this is not a more common practice. OERs are an efficient and inexpensive way to provide course content to students. OER texts offer benefits that are not typically possible with purchased online textbooks. Adding H5P activities into OER texts opens a seemingly unlimited potential for media, interactivity and learner engagement. Our team has been receiving positive comments and requests to see more H5P elements in college materials, as some students feel it reduces the monotony of online learning. The basics of integrating H5Ps into OERs are outlined in this post. Please be aware that an OER textbook matching your course may not yet exist, but converting your own materials into an OER may benefit you, your students and your peers in the future. An Open Educational Resource online textbook is a resource that instructors can create or, in most cases, adopt, enhance and use without cost for their instructional purposes. OER textbooks are resources that are licensed as open copyright. An example of open copyright is the Creative Commons movement, which is endeavoring to share educational resources globally. There are many sources of OER texts. I used Pressbooks, although there may be a more suitable resource for your requirements or your geographical region. I used the Pressbooks authoring platform as a tool to customize OER textbooks. Pressbooks can be leveraged by educational institutions and instructors to support blended and online learning and to create and remix educational material. Pressbooks is not a free service, but the one-time fee is very reasonable. The text layout and design are the responsibility of the author. Following the provided templates and widgets makes creation and manipulation intuitive. HTML5 Package, better known as H5P, is a free tool that allows you to create custom learning objects. Pressbooks includes an H5P editor. Once in the editor, anyone familiar with H5P would feel at home. The authoring experience is the same. The only real difference is a short text code that is generated to link to the H5P learning object. This code is then pasted into the OER textbook editor. Students will see the H5P learning object at the position of the H5P text code. H5P offers dozens of possible enhancements to an OER textbook. In the section on H5P enhancements below, I list a few of the H5P features I used in a recent OER textbook. Video provides learners with a delivery mode beyond text and images. Contemporary students often prefer to learn through rich media such as video. H5P interactive video ensures that learners demonstrate comprehension of the video content through interactive prompts, questions and activities. Instructors can also embed relevant hyperlinks to additional resources during video playback. Interspersed throughout an OER textbook, quick check-in questions can be placed to ensure learners have comprehended the previous section or concept. Some of the question types possible are: true or false, multiple choice, drag-the-words, fill-in-the-blanks and mark-the-words. These check-in questions can display instant feedback with comments and the possibility of a repeat attempt. The H5P tool, question set, allows teachers to create short quizzes. These are ideal for unit or chapter check-ins for comprehension.
Multiple choice, fill-in-the-blanks, drag-the-words, mark-the-words and drag and drop are some of the question types supported. H5P interactive timelines can be included in an OER textbook to display information in a chronological sequence so that students can better understand trends, change, recurring events, cause and effect, and significant events. Interactive timelines allow students to manipulate the timeline to locate details important to their learning. Interactive H5P flashcards add a set of self-assessing cards that pair images with questions and answers. These are suitable for introducing new vocabulary or technical terminology, review of concepts, or informal formative assessments in OERs. There are more H5P features available to enhance an online OER. To see the possibilities, go to the H5P Examples and Downloads site at https://h5p.org/content-types-and-applications. Why would a teacher spend valuable preparation time searching for, remixing and customizing an OER? Several reasons are apparent after working with OER textbooks. They are:
- industry-standard materials
- repurposable through additions and deletions (words, sentences, paragraphs, units, chapters, media, interactive features, presentations)
- accessible online
- shareable through link distribution with peers or a global community
- intuitive (teacher, developer, student friendly)
- personal (integrate an institution or teacher’s personality into the resource)
If you are a college or university educator, consider searching OER repositories to potentially locate an online OER textbook for your course(s). Acquiring, altering and sharing the textbook will benefit your students, possibly your peers, and the quality of your courses. OER Commons Explained, https://www.oercommons.org OER Commons, Open texts, https://www.oercommons.org/hubs/open-textbooks Open Ed Explained, BC Campus OpenEd, https://open.bccampus.ca BC Campus Open Texts, https://open.bccampus.ca/use-open-textbooks
One thought on “Make your OER Textbooks more engaging with H5Ps”: Great post John! OER is such a fascinating area. I wonder if teachers in general just don’t know where to go to find appropriate content? Something else that I wonder is how quickly “content” becomes out of date, and faced with such content teachers might just say to themselves “It’s quicker for me to create my own resources”?
Sleep is our body’s natural coping mechanism against wear and tear; it is our body’s way to rest and recharge, and sleep disorders are generally understood to be disruptions to this natural mechanism. Sleep is a state of non-consciousness and inactivity; all our senses are generally suspended during sleep, making us less responsive to external stimuli. It is relatively easy to recover from and far more reversible than coma or the hibernation observed in animals, meaning that, under normal circumstances, we can easily awaken from sleep. A person should have at least 6 to 8 hours of sleep every day so that the body can function well. During sleep, our body rejuvenates its immune, skeletal, nervous, muscular, and digestive systems. Good skin and complexion have also been linked to getting enough sleep. The importance of sleep for the proper and efficient functioning of the human body cannot be stressed enough. Sleep deprivation and disorders will definitely take their toll on a person’s productivity and basic daily functions. A sleep disorder should therefore not be taken too lightly. While some sleep disturbances are temporary, other types of sleep disorders might be more serious than they appear. There are several medical conditions which can lead to a sleep disorder. Aside from sleep deprivation, which in itself is already a problem, sleep disorders can also be signs of more serious physical and emotional conditions. There are many types of sleep disorders, occurring in both adults and children. Types of sleep disorders are generally classified into three categories: lack of sleep, disturbed sleep, and excessive sleep. The best-known type of sleep disorder in the lack-of-sleep category is insomnia. Insomnia is a type of sleep disorder where there is difficulty in falling or remaining asleep. Patients usually complain of the inability to fall asleep, and when they do get to sleep, they find it hard to maintain sleep, often waking up in the middle of the night. A lot of insomnia cases are linked to patients’ personal and environmental stressful conditions, and the condition is observed more in adults than in children. The next category, disturbed sleep, has more varied sub-types. One of the more prevalent and potentially life-threatening is sleep apnea. Sleep apnea is a sleeping disorder characterized by paused or interrupted breathing during sleep. The patient may go on for years without being aware he or she has the condition. Sleep apnea has been associated with fatigue and sleepiness while awake, even when the patient seemingly had a full night’s sleep. The third category, excessive sleep, is medically known as narcolepsy. It is a neurological condition where the patient sleeps for abnormally long hours. Narcolepsy is also characterized by an uncontrollable urge to go to sleep at inappropriate times during the day, regardless of whether the patient had enough sleep the previous night. People with narcolepsy can also experience hallucinations at the onset of sleep called hypnagogic hallucinations, sleep paralysis for a brief time after waking up, and muscle weakness or paralysis. Other types of sleep disorders not discussed here but also commonly experienced include snoring, restless legs syndrome, periodic limb movements of sleep, and bed wetting (enuresis), among others.
After reading and commenting on my friend kennymotown's article about first lady Michelle Obama and the educational grade level of the speech she gave at the convention, I felt compelled to read and understand a little bit more about the true definition of the word racism and what it can do to human behavior. According to the Oxford English Dictionary, the term describes ‘the theory that distinctive human characteristics and abilities are determined by race’. The word itself is rather recent, probably going back only to the 1930s. There are two attitudes towards the concept of racism: one says that ‘racism’ is usefully applied only where it is derived from a perception of race and the ensuing fixation on ‘typical’ racial traits. In this sense ‘racism’ describes the racialist attitudes of the nineteenth and twentieth centuries, deriving from the merger of physical anthropology and ethnography against the background of the idea of evolution. Another school has argued that racism consists in intentional practices and unintended processes or consequences of attitudes towards the ethnic ‘other’. According to this line of thought, it is not necessary to possess a concept of ‘race’ to entertain prejudices towards other peoples. As the term was coined in reaction to the rise of German Fascism and its antisemitic theory of race, ‘racism’ carries in itself the condemnation of what it means — it is true indeed that self-professed racists are very rare. Basically, racism lives in practice, not in theory; sociologists such as Michael Banton, therefore, have denied that the phenomenon of racism might be accessible to theory. Some theoreticians of imperialism have argued that only whites could be racists. Marxist thinking has tended to consider it as a corollary of the development of capitalist society. The sociologist Robert Miles, by contrast, has pointed out that pre-capitalist societies, too, afford manifold opportunities to observe racism. Concentrating on racism under the conditions of colonialism and in societies with a large contingent of foreign immigrants, Miles has put forward the suggestion that it must be regarded as an ‘ideology’. To rescue the concept of ‘racism’ from indiscriminate conflation with exclusionary practices, on the one hand, and from being tied up too closely with the nineteenth-century understanding of ‘race’, on the other hand, he has suggested that racism refers ‘to a particular form of (evaluative) representation which is a specific instance of a wider (descriptive) process of racialisation’. The psychological precondition of racism is anxiety. On a sociological level it may be said that mobile societies and those experiencing great social changes are especially prone to develop some or other sort of racism: contempt of the ‘other’ provides a reassuring feeling of identity. Philosophically speaking, racism is the result of a world view that does not leave any conceptual room for the strange, the unknown. The anthropologist Claude Lévi-Strauss has surprised his audience with his discovery that the Indians of Southern America possessed the very rare ability to accept the ‘other’. According to Lévi-Strauss, the cosmogony of these Indians included the idea that the world was complete thanks only to the existence of other beings different from themselves. When the conquistadores arrived they were initially taken for this complement to Indian identity.
Racism has many faces; its particular expressions are dependent on the socio-economic, religious, and cultural situation of any given society. This versatility notwithstanding, the moral overdetermination of skin colour is one of its most conspicuous, ever-recurring elements. The Christian world has excelled at consigning dark complexion to the realms of the mysterious and the bad. In pagan antiquity, however, this was quite different: the stereotypes associated with black Africans were rather of a positive nature: blackness signified qualities such as wisdom, or the love of freedom and justice. One of the earliest examples of what, in modern parlance, amounts to state-organized racism in European history was the persecution of the Jews in fifteenth-century Spain. In 1492 King Ferdinand succeeded in defeating the Arabs at Granada. Eight hundred years of Muslim rule in Southern Spain came to an end. In the wake of the victory, the Jews were expelled. Though converts to Christianity were allowed to remain, the enforced Jewish exodus signalled that the times were over when political rulers could tolerate the existence of the ‘other’ on their territory. This had been possible in the Roman Empire as well as in Greek city-states. Post-medieval, centrally governed countries, by contrast, had lost the will and the philosophical preconditions for putting up with foreign ethnic groups. Since the fifteenth century instances of organized racism have accumulated. The Holocaust happened in a cultural climate of which it has been said that it bore many resemblances to the atmosphere in Spain at the time of the expulsion of the Jews. — H. F. Augstein
The Benefits of Worm Castings, Compost and “Tea”
A nice cup of good, hot tea has for years been enjoyed as a restorative to the mind and body. Centuries ago humankind learned that the flavor and beneficial essence of certain plants could be drawn from their leaves, bark and roots by steeping them in water, sometimes fortifying the brew with a bit of milk and honey. How well we understand that a nip of soothing mint tea will settle the stomach, a cup of fragrant chamomile tea will soothe frayed nerves, and a heavy mug of vitamin-rich alfalfa tea can stimulate a weak appetite. By steeping these plant materials in water we can partake of what is best about them when eating the plant is not an option. This concept of using water to draw beneficial extracts from solid materials for the purpose of making a liquid solution has applications beyond making us humans feel better, however. Our plants and even our soils can benefit greatly from a nice cup of tea when that tea is derived from a plant nutrition source like compost or worm castings.
Understanding the value of castings and compost
Good compost, worm castings or vermicompost added to the soil carry to the root zone a rich complement of soluble plant nutrients and growth-enhancing compounds, a diverse and populous consortium of microbial life and a substrate of organic matter harboring a storehouse of nutrients that are not lost to rain and irrigation. The plant is delivered an ongoing, reliable food source when bacteria and microscopic fungi feed on the organic matter, releasing some of the nutrients to the soil and storing others for their own energy and reproduction. When nematodes and protozoa in turn feed upon them, the nutrients stored in the bacterial and fungal bodies are released to the soil in a plant-available form. According to Dr. Elaine Ingham, when soil, compost or castings support protozoa numbers on the order of 20,000 per gram of solid matter, 400 pounds of nitrogen per acre are released through their predation of bacteria. When we feed organic matter to the soil, the soil life feeds nutrients to the plant. Further, unlike soluble plant fertilizers, the nutrients stored in organic matter and the bodies of the microbial life are not lost through irrigation to contaminate ground water. Hair-thin fungal tentacles, called hyphae, wrap about soil and organic matter particles in their search for food, forming aggregates that are the basis for good soil structure. Thus, both the fungi and the organic matter are held in the soil. Bacteria exude sticky glues that enable them to cling to solid particles of mineral and organic matter, ensuring they too remain in the soil and, like the fungi, aid in the formation of aggregates. Nutrient retention and cycling are not the only benefits of castings and compost use, however. By inoculating the soil with the rich, diverse microbial life present in these good materials, the plant root is protected from disease and attack by root-feeding organisms. Because the diversity of organisms aids in ensuring everyone present has a predator, no one organism in the root zone is easily able to reach populations sufficient to cause significant damage. Plant roots exude foods that encourage colonization by microbial life beneficial to the plant, reducing the number of possible infection points.
Many microorganisms exude compounds inhibitory to pathogenic organisms, further reducing the chance for pathogen blooms sufficient to cause plant damage. When we add castings, vermicompost or compost and the rich consortium of microbial life they support to the soil, we aid in increasing the complexity and diversity of organisms in the root zone, thus aiding in disease and pest suppression. It may not be in the root zone alone where worm castings demonstrate the ability to suppress pest attack, however. There is a growing body of research suggesting that castings derived from a feedstock of plant materials are rich in a compound called chitinase. Chitin, a component of the exoskeleton of many insects, is damaged by chitinase, leading some researchers to believe its presence in the castings may be inhibitory to some insects. Research being conducted in California is demonstrating suppression of white fly and ambrosia beetle in some tree species when castings containing chitinase are applied at the root zone.
From castings to tea
So, “why tea?” one may wonder. With compost and worm products demonstrating such tremendous benefit to soil and plant life, why take the extra steps to generate a liquid from this already understood and easily applied solid material? Leaf surfaces, like plant roots, harbor a rich microbial population that protects the leaf, and thus the plant, from infection and attack by pathogenic organisms. When the microbial consortium present on the leaf surface is reduced by pesticide use or environmental damage, leaf surface is exposed, opening infection points. We can reinoculate the leaf with the diverse communities of microbial life found in compost and worm castings by applying a tea made from these materials. Further, teas can be applied as soil drenches and root washes after pesticide use, to reintroduce to the soil microbial communities that may have been damaged by the pesticide. The microbes can then continue to provide protection from pathogens to the plant as well as aiding in breakdown of any pesticide residues in the soil, thereby preventing ground water contamination. Teas also carry the soluble nutrients and beneficial growth regulators contained in the solid matter used to make the tea. Many of these compounds can be absorbed through the leaf surface, feeding and enriching the plant.
Tea or leachate?
The microorganisms present in an aerobic compost or vermiprocessing system require significant amounts of moisture in order to break down the organic materials present. They use the water in both their life processes and as avenues for moving through the material. These organisms are swimmers. Thus, when we build a system for the remediation of organic wastes, whether or not worms are involved, we moisten the organic materials to ensure efficient breakdown. As the bacteria and fungi reduce the organic material, the water held within the feedstock is released to the system. Further, as organic materials are broken down by microbial decay, moisture is generated as a by-product of aerobic activity. What this means is that these systems often generate fluids generally referred to as leachates. Leachate from an actively decomposing pile of organic debris will often carry many of the soluble nutrients that had been present in the solid matter, producing a beneficial growth response when used to water plants. It will also carry small numbers of the microorganisms present on that solid matter, as well as small bits of undecomposed organic material.
This becomes a matter of some concern when materials like manure or post consumer food residuals make up even a portion of the feedstock in the system. There is the possibility that fecal coliforms and other pathogenic organisms can be present in the leachate, potentially contaminating plant and fruit or vegetable surfaces with which it comes into contact. Further, the bits of undecomposed organic debris in the leachate will continue to be broken down in the liquid where oxygen levels are very low, through the action of anaerobic microorganisms. As they slowly decompose these bits of material anaerobes produce alcohol and phenols toxic to plant roots. It is not always possible to tell when leachate will produce a beneficial growth response and when it will cause damage. Without a lab test it is not possible to tell when leachate will harbor potentially pathogenic organisms. As such, it is generally recommended that leachate from compost or worm bins not be used on plants, but rather used to moisten the system if it dries out or to moisten new feed stocks before they are included in the system. Steeping the finished, stable end product of a composting or vermicomposting system in agitated, aerated water, then adding a nutrient mix for microbial growth makes a true tea. The water is agitated to extract as many of the organisms clinging to the solid matter as possible and the nutrient mix provides those microbes dislodged into the liquid with a food source on which to grow and reproduce. Aerating the water ensures that it is aerobic organisms being supported in the liquid. This blend of food and oxygen in the tea enables the microorganisms to grow to numbers rivaling those found in the solid matter from which the tea is derived. Teas must then be used within a few hours of being generated in order to ensure aerobicity and high microbial populations. Once the oxygen and food are consumed, anaerobic organisms will begin to populate the system, producing alcohols and phenols toxic to plants. Good tea begins with good, quality compost, worm castings or vermicompost, or a blend of these materials. Provided the solid material is stable and supports sufficient beneficial microbial life there is nothing in these liquids to cause plant damage. Using the tea Compost and castings teas are a relatively new product in today's agriculture and gardening industries. Researchers are still identifying uses, though there is considerable research demonstrating that teas can suppress fungal disease in a variety of plant species and aid in disease prevention on plants where disease pressure is great. Application rates for tea will vary considerably with the type of plant being treated, climate, and whether or not the plant is already battling a pest or infection. Dr. Elaine Ingham suggests that in agricultural fields the application rate begin at 5 gallons of undiluted tea per acre per week and adjusted as needed based on performance. For home owner use, teas can be applied to flowers, perennials, turf, roses, shrubs, trees and vegetables from a hand sprayer at a dilution ratio of one part fresh, undiluted tea to five parts water, applied once per week. The tea can be applied more or less frequently or at a lower dilution ratio, as needed based on performance. What we do not know about teas still far outweighs what we do know, though research demonstrates an exciting future for tea use. 
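To make the home-owner dilution guideline above concrete, here is a minimal Python sketch of the arithmetic (the function and variable names are ours, and the 1:5 tea-to-water ratio is simply the starting point quoted above, to be adjusted based on performance):

def tea_and_water(total_gallons, parts_water=5):
    # One part undiluted tea to parts_water parts water.
    parts_total = 1 + parts_water
    tea = total_gallons / parts_total
    water = total_gallons * parts_water / parts_total
    return tea, water

# Filling a hypothetical 2-gallon hand sprayer at the 1:5 starting ratio:
tea, water = tea_and_water(2.0)
print(f"{tea:.2f} gal tea + {water:.2f} gal water")  # 0.33 gal tea + 1.67 gal water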
The possibility of finding a means of controlling certain plant diseases with a truly effective yet benign material that simply capitalizes on nature's own means of control is a basic precept of sustainability. And while we may not know everything there is to know about tea, we know that using it harms nothing. All rights reserved, Kelly Slocum, 2001
The Food and Drug Administration issued a warning letter this week to a Petaluma beef producer, saying the amount of penicillin in a slaughtered cow’s liver was well beyond government-set limits. The agency warned a dairy farming operation that the level of penicillin found in one cow’s liver was ten times the acceptable limit, and urged the operator to take steps to fix the problem. The letter is one more action the FDA has taken in line with its campaign, announced in June, to cut down on excessive use of antibiotics in animals that people tend to eat – cows, pigs and chickens. The FDA made the move to limit overdrugging of animals, a practice often meant to make them gain weight faster. Such use, though, can encourage the proliferation of drug-resistant bacteria that make human diseases much harder to treat. The Vancouver Sun reported on the problem, talking to a doctor who described the consequences of excessive antibiotic use in animals: A food chain contaminated by drug-resistant bacteria bodes ill for both public health and the cost of health care, and as drug resistance in microbes increases, the number of effective antibiotics in the doctors' arsenal has dropped. "As doctors we are seeing that people have infections that were easily treated years ago, when all the basic antibiotics took care of most of the infections that people had," said Vancouver physician Dr. Bill Mackie, chairman of the environmental health committee of the B.C. Medical Association. "Of late there has been increasing [drug] resistance; when you put someone on an antibiotic that you expect to do its job, it doesn't work." When a course of antibiotic treatment fails, people stay sick longer and doctors must resort to more exotic and often more expensive drugs, Mackie said. The FDA response to the problem has been to issue “voluntary guidelines” that it hopes food producers will follow. But when it comes to other substances that shouldn’t be in our hamburgers, it turns out the government and public are often in the dark. A March audit by the U.S. Department of Agriculture found that the agency examining beef samples for toxins has no minimum accepted levels [PDF] for toxins such as copper, arsenic and many pesticides. That means authorities have no basis to cry foul when they become aware of beef that's tainted with an unregulated substance. In one instance, according to the audit [PDF], authorities in Mexico turned back a shipment of U.S. beef because copper levels were too high. Here’s another example from the audit: Unlike other countries, FDA has not set a tolerance for arsenic. In 2008, a producer self-reported that arsenic had been mistakenly ingested by his cattle, and voluntarily withheld contaminated animals from the food supply after they were slaughtered and tested positive for arsenic poisoning. If the producer had not acted voluntarily, FSIS would not have had a basis to stop distribution of this meat once it was in commerce. USA Today covered the audit when it was issued and quoted one food safety advocate who found the result troubling: "It's unacceptable. These are substances that can have a real impact on public health," says Tony Corbo, a lobbyist for Food and Water Watch, a public interest group. "This administration is making a big deal about promoting exports, and you have Mexico rejecting our beef because of excessive residue levels. It's pretty embarrassing." Some contamination is inadvertent, such as pesticide residues in cows that drink water fouled by crop runoff.
Other contaminants, such as antibiotics, often are linked to the use of those chemicals in farming. For example, the audit says, veal calves often have higher levels of antibiotic residue because ranchers feed them milk from cows treated with the drugs. Overuse of the antibiotics helps create antibiotic-resistant strains of disease. The auditors recommended that the USDA identify additional substances for the agency to test for, a task the agency said it would complete in one year. In the meantime, the agency has begun publicly posting lists [PDF] of beef producers whose cows are tested and determined to have unacceptable levels of monitored substances – mostly antibiotics.
SCIENCE linx: a private page for the personal use of the author, providing shortcuts for the author's personal use. Quantum mechanics deviates from classical physics in that, instead of predicting with certainty the outcome of an observation, it predicts all possible outcomes and the probability of each. This, the authors claim, explains CTCs (closed timelike curves) in a consistent way. Everett's many universes interpretation of this "randomness", which is very controversial though it prevails in some areas of study, says that if an event can physically happen with a certain probability, then it does, in some universe. Physical reality consists of a collection of universes, a multiverse, so to speak, each of which contains its own copy of the observation and its outcome. According to Everett, quantum theory predicts the subjective probability of the outcome of the observation by prescribing the proportions of universes in which that outcome occurs. Stanford Encyclopedia of Philosophy http://plato.stanford.edu/entries/time-travel-phys/
|www.witiger.com/linx2images.htm|for graphics and images|
|www.witiger.com/linx3travel.htm|for travel and vacations|
|www.witiger.com/linx4outdoors.htm|for hiking, camping, outdoors|
|www.witiger.com/hobbies/history.htm|for ancient history & archeology|
|www.witiger.com/hobbies/science.htm|for quantum physics etc.|
Problem: Advanced Mathematics: Spiral Curves
Cool Link: CAD Panacea
Joke of the Week: Camping
Two guys, Joe & Bill, went camping in the desert. After they got their tent set up, they fell sound asleep. Some hours later, Joe wakes his friend and says, "Bill, look up at the sky and tell me what you see." Bill replies, "I see millions of stars." "What does that tell you?" asked Joe. Bill ponders for a minute, and then says, "Astronomically speaking, it tells me that there are millions of galaxies, and potentially billions of planets. Astrologically, it tells me that Saturn is in Leo. Time wise, it appears to be approximately a quarter past three in the morning. Meteorologically, it seems we will have a beautiful day tomorrow. What does it tell you, Joe?" Joe is silent for a moment, then says, "Bill, you idiot! Someone has stolen our tent!"
Problem: Advanced Mathematics: Spiral Curves
What is a spiral curve, as used in surveying?
(A) A spiral curve is also known as a compound circular curve.
(B) A spiral curve is a hyperbolic curve in the horizontal plane.
(C) A spiral curve is a loxodrome.
(D) A spiral curve is a curve with a uniformly varying radius.
Come back next week for the solution! This is problem 43(2-46) from the new second edition of 1001 Solved Surveying Problems by Jan Van Sickle. Reprinted with permission from 1001 Solved Surveying Problems by Jan Van Sickle (1997, 728 pp., Professional Publications Inc.). For details on this and other FLS exam-prep books, call 800/426-1178 or visit www.ppi2pass.com.
Cool Link of the Week: CAD Panacea
An informative Web site with CAD tips, tricks, news and more.
I Know Why The Caged Bird Sings
Topic Tracking: Ignorance
Ignorance 1: Though Momma doesn't know who Shakespeare is, Marguerite and Bailey anticipate that she would not approve of him, simply because he was white. To her, race is the most important issue, but it clearly keeps her ignorant about many subjects.
Ignorance 2: Marguerite, like most black people in Stamps, knows almost nothing about white people. She does not even consider them human: they are too different from her. She sees them as creatures with see-through skin, who are unpredictable, incomprehensible and very strange.
Ignorance 3: Since Marguerite and Bailey don't know who their parents are, or why they abandoned them, the children live in painful ignorance. What did they do to deserve such treatment? It seems unfair, but there is nothing they can do except wait, half-loving, half-hating their parents.
Ignorance 4: Maya doesn't know what Mr. Freeman is doing to her. She likes him, though, and she wishes she could know. She doesn't understand why adults have to be so secretive and mysterious, why they can't take the time to explain anything to her, when she tries so hard to understand.
Ignorance 5: Maya does not understand that what happened was not her fault: in fact, she does not really understand what happened at all. She is not sure when to tell the truth and when to lie, and no one helps her. It is especially ridiculous that women in the courtroom think that she is on their level, just because she has "had sex," when she doesn't even really know what sex is.
Ignorance 6: Momma believes what she wants to believe about her religion (and most other things), and she will not listen to any other interpretation, especially not from her grandchildren. She does not often change, or accept new information into her life.
Ignorance 7: Completely ignorant of who his audience is and what they want out of life, Mr. Donleavy speaks to the graduating class as if they were fools. He assumes they all want to be sports heroes, because he can't imagine a black person using his or her mind. Though he probably has never spoken at length with a black person, he thinks he knows who they are and what they are capable of.
Ignorance 8: Bailey, and even Willie, live in terrible ignorance, never knowing why white men hate them so much, simply knowing that they are hated and must protect themselves at all costs. They are never given an explanation, though they desperately search for some understanding of why they must live in fear and poverty.
Ignorance 9: Daddy Clidell's friends are able to run a profitable business by cheating white people. Their schemes are easy to pull off, because the people they cheat never believe that a black person could be smart enough to cheat them. Although they are completely ignorant as to what black people know and are capable of, these rich white men think they have complete control over every deal.
Ignorance 10: Maya is upset when Dolores calls her mother a whore, partly because she is insulted, but also because she is afraid Dolores might be right. Maya has no idea whether the accusation is true or not, so it terrifies her to hear it spoken. Maya lashes out at Dolores because there is nothing else she can do: she is so ignorant of her mother's true nature that she cannot say for certain whether Dolores is right or wrong. It is this uncertainty that is most upsetting to Maya. She wants her mother to be perfect, but she is aware of how little she knows about her.
Ignorance 11: Maya worries that she might be a lesbian, though if she had any idea what a lesbian was, she would not be worried. Equipped with an overactive imagination, self-consciousness, and a lot of rumors, Maya decides she must be abnormal, and decides she needs to have sex with a man to make herself normal.
As we discussed before, the three angles of a triangle always add up to 180°. In each case m∠A + m∠B + m∠C = 180°. By the way, m∠A means "the measurement of angle A". To find the total number of degrees in any polygon, all we have to do is divide the shape into triangles. To do this, start from any vertex and draw diagonals to all non-adjacent vertices.
|Here is a quadrilateral.|
|If we draw all the diagonals from a vertex we get two triangles.|
|Each triangle has 180°, so 2 × 180° = 360° in a quadrilateral.|
|Pentagon – 5 sides|3 triangles × 180° = 540°|
|Hexagon – 6 sides|4 triangles × 180° = 720°|
|Septagon – 7 sides|5 triangles × 180° = 900°|
|Octagon – 8 sides|6 triangles × 180° = 1080°|
Are you noticing a pattern? Turns out, the number of triangles formed by drawing the diagonals is two less than the number of sides. If we use the variable n to equal the number of sides, then we can find a formula to calculate the number of degrees in any polygon: sum of interior angles = (n − 2) × 180°.
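As a quick check of the pattern, here is a short Python sketch (illustrative only; the function name is ours) that computes the interior-angle sum for the polygons listed above:

def interior_angle_sum(n):
    # A polygon with n sides splits into (n - 2) triangles of 180 degrees each.
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return (n - 2) * 180

for name, sides in [("quadrilateral", 4), ("pentagon", 5), ("hexagon", 6),
                    ("septagon", 7), ("octagon", 8)]:
    print(f"{name}: {interior_angle_sum(sides)} degrees")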
05.15.13 - Nanosatellites now have their own mass transit to catch rides to space and perform experiments in microgravity.
05.09.13 - Hubble found the building blocks for Earth-sized planets in an unlikely place – the atmospheres of a pair of burned-out stars called white dwarfs.
05.09.13 - The team operating NASA's Curiosity Mars rover has selected a second target rock for drilling and sampling. The rover will set course to the drilling location in coming days.
05.09.13 - Sierra Nevada Corp. (SNC) Space Systems of Louisville, Colo., completed its first major, comprehensive safety review of its Dream Chaser Space System.
05.10.13 - On May 10, 2013, Earth, the sun and the moon lined up to create a solar eclipse visible from the South Pacific.
05.09.13 - Researchers have begun taking infrared pictures of planets posing near their stars in family portraits.
05.09.13 - Astronaut Karen Nyberg says she will be savoring this Mother’s Day weekend before departing for Kazakhstan, and ultimately space.
05.09.13 - The launch of a NASA Terrier-Improved Orion sounding rocket on May 9 brought to an end a very successful campaign studying ionospheric activity and its impact on radio, communication and navigation signals.
05.03.13 - From its orbit around the Earth, the NASA-NOAA Suomi National Polar-orbiting Partnership satellite, or Suomi NPP satellite, captured a night-time image of California’s Springs Fire.
05.08.13 - NASA engineer Acey Herrera checks copper test wires inside the thermal shield of an instrument for the James Webb Space Telescope.
05.08.13 - With three crew members set to return home in less than a week, the station’s Expedition 35 crew tackled a full agenda Wednesday.
05.07.13 - The roar of a 5,000 pound rocket engine has returned to the Johnson Space Center.
05.07.13 - The Equatorial Vortex Experiment was successfully conducted on May 7 from the Marshall Islands when a NASA Terrier-Oriole sounding rocket was launched followed by the launch of a Terrier-Improved Malemute sounding rocket 90 seconds later.
05.08.13 - Workers added a nosecone to the top of the second solid rocket booster at the Space Shuttle Atlantis exhibit as construction speeds ahead to a June 29 opening.
05.07.13 - The supermassive black hole at the core of our Milky Way galaxy is gobbling up hot gas, according to a new study from the Herschel space observatory.
05.07.13 - The station's Expedition 35 crew worked with Robonaut Tuesday while continuing preparations for homecoming of three crew members.
05.06.13 - As LDCM flew over Indonesia's Flores Sea April 29, it captured an image of Paluweh volcano spewing ash into the air.
05.06.13 - A video of smoke from California’s Springs Fire was created by animating satellite imagery from NOAA’s GOES-15 satellite.
05.06.13 - Expedition 35 began its final week in space Monday while a ground-commanded robotics demonstration continued on the exterior of the station.
05.06.13 - The Orion crew module is being put through a series of tests that simulate the loads the spacecraft would experience during its mission.
MLA Position Statements and FAQs
Technology-based Distance Education: While technology is having a profound impact on colleges and universities across the nation and around the globe, the higher education community still has much to learn regarding how and in what ways technology can enhance the teaching and learning process. Educating students separated from a campus by many miles is not new to the field of education. However, distance education, which evolved from yesterday's correspondence courses, differs dramatically from the past, because technology-based, interactive study can unite not only the separated teacher and student but also the students with other students. This interaction generally occurs in one of two ways:
1. Asynchronous interaction (e.g., email that is read or delivered at the student's or the instructor's own pace or via group discussion technology integrated into computer programs used for distance education), sometimes referred to as "store and forward."
2. Synchronous interaction in real time (e.g., chat rooms, video conferencing, phone conferences).
In some networks, all learners in a specific system receive instruction through a combination of all these modalities. Today, an array of corporate and virtual universities, existing only in cyberspace, is competing with traditional educational institutions for a piece of the growing market for anytime-anyplace, just-in-time education services. At least thirty-three states have developed statewide virtual universities or are part of a consortium. In 1998, 44% of all higher education institutions were offering distance education courses. Recent studies estimate that by the year 2002, 84% of all four-year colleges and universities, public and private, will offer distance education courses. More than 200 Websites containing more than 4,000 courses offer some form of continuing medical education to medical professionals, students, and caregivers. The International Data Corporation forecasts that by the end of 2002 over 2.2 million students will be enrolled in distance learning courses. The U.S. government is setting up national infrastructures such as the Next Generation Internet (NGI) to develop advanced networking capabilities and improve Web applications for distance education. Other federal initiatives including the Distance Education Demonstration Project (DEDP), the Learning Anytime Anywhere Partnerships (LAAP), and efforts of the Web-based Education Commission may significantly increase acceptance of technology-based distance education. While contributing to traditional missions of higher education, Web-based technology is causing a re-examination of traditional policies and rules, and of higher education institutions' role in society. Although services for distance education participants may differ from services available on the traditional campus, they must be equivalent to them. Thus, the provision of library resources and services is essential, and participation of library leadership in planning the future of distance education is critical in several areas. The Association of College and Research Libraries Guidelines for Distance Learning acknowledge that "access to adequate library services and resources is essential for the attainment of superior academic skills in post secondary education, regardless of where students, faculty, and programs are located." Further, information literacy instruction is critical for life-long learning and is a primary outcome of higher education.
Issues such as accreditation, faculty and student support, technology training, online library services, confidentiality and privacy, and ownership of materials that surround these changes must be reviewed, and involvement of as many internal and external stakeholders as possible is necessary. Accreditation standards require that colleges and universities provide students with access to library resources. The basis for most current accreditation guidelines is derived from:
Training and support needs are different for early adopters of distance education than for most faculty. Most faculty, however, need carefully planned training programs, specific goals for each learning session, and ongoing training presented in a low-risk environment. Most health sciences libraries have knowledgeable staff skilled at providing the necessary hands-on experiences and specific assistance in locating online resources. Librarians can also help faculty gain insight into the challenges that students face in locating online resources when taking online courses. Additionally, librarians are advocates of broad, incremental training using hardware and software that draws on the principles of adult education and collaborative teaching essential to technology-based education. Librarians typically serve a broad spectrum of learners at many educational levels with diverse learning needs and skills and with multicultural backgrounds. In this respect, librarians can provide insight into the development of technology-based distance education courses that are sufficiently flexible to meet the needs of the diversity of students found in the digital classroom. It is also true that students who take technology-based distance education must change their thinking about how they learn. They learn how to learn while collaborating with their peers, their faculty, and those who facilitate the process. Librarians have skills derived from managing collaborative efforts such as consortium building, and they are familiar with professional information-seeking jargon that may have to be explained, especially to students whose experience with language, culture, or field of study is limited. Technology-based education also creates increasing pressures for institutions to become more customer-oriented than before. They must be student-centered in outreach, engagement, and services. They must extend the long-standing practice emulated by libraries to reach out to students, providing them with personalized responses and attention. Tools that make for self-reliance, such as online book purchases and twenty-four-hour access to electronic information resources, are also expected by students in distance learning courses. Technology can be a knowledge-construction tool that enables learners to use higher-level cognitive skills, combine new information with prior knowledge, and form new sets of understandings. However, online information tutorials are essential to provide the continued support needed by users of technology-based courses. Fairly specific instructions regarding what action to take in certain online situations are required. It is particularly helpful if users can locate people who have computer and Internet skills, such as librarians, to serve as local resources. Distance students need online library support, and online students view the lack of library materials as a significant problem. One study found that nearly 80% of the 1,014 respondents stated that they needed to use the library regardless of specific course requirements.
In most states, colleges and universities provide library services to distance education students via interlibrary loans, access to statewide networks, and statewide licensing agreements for electronic library databases. While distance students rely heavily on these services, they do not often have the luxury of going to the stacks to personally search for what they need. Online library services are gradually becoming more available, but only a few major educational institutions have comprehensive digital libraries. In addition to support for comprehensive digital collection, educational institutions offering distance education should provide: Technology-based education frequently requires security measures for confidential transmittals such as conversations between faculty, sharing information about students, and electronically tracked faculty activity and interactions. Librarians and the health information associations to which they belong can provide assistance and guidance in developing policies that balance the confidentiality and privacy rights of individual users with those of the producers of online resources. The 1999 Report on Copyright and Digital Distance Education recommends that (1) the fair use doctrine should apply to activities in the digital instructional environment; (2) any expanded exemptions should be premised on the use of technologies that protect against unauthorized use to minimize the risk of piracy to copyright owners, and (3) the expanded fair-use exemption should continue to be available only to nonprofit educational institutions. The Technology Education and Copyright Harmonization Act (TEACH) updates copyright for distance education. If passed by Congress, the TEACH Act would significantly benefit online distance education. The following provisions of the bill are particularly important: (1) exempting digital transmissions from Section 106 rights to the extent necessary to permit such transmissions in the ordinary operation of the Internet; (2) eliminating the physical classroom requirement for remote reception of educational material; (3) enabling the asynchronous use of material by permitting material to be stored on a server for subsequent use by students; and (4) expanding the categories of work exempted from the performance right to include reasonable and limited portions of audiovisual and dramatic literary and musical works, as well as sound recording of the musical works that are already within the scope of the exemption. The Medical Library Association (MLA) and the Association of Academic Health Sciences Libraries (AAHSL) support recommendations included in The Power of the Internet for Learning: Moving from Promise to Practice, a report of the Web-based Education Commission chaired by Senator Bob Kerrey of Nebraska and Vice Chair Representative Johnny Isakson of Georgia, especially recommendations to: Healthy People 2010 identifies opportunities for health communication to contribute to the improvement of personal and community health during the first decade of the 21st century and notes that often people with the greatest health burdens have the least access to information, communication technologies, health care, and supporting social services. MLA and AAHSL encourage key stakeholders-including health professionals, researchers, public officials, and the lay public-to collaborate on a range of activities to reduce disparities in underserved communities that often lack access to crucial health professionals, services, and communication channels. 
(See also MLA statement, Essential Library Support for Distance Education). Health sciences librarians need to meet the challenge of becoming more informed about distance education technology and its applications, including its use as a methodology for education and training of our profession. The following list of resources is intended as an aid to identify additional ways that health sciences libraries can participate in this emerging service area. Prepared May 2002 by Logan Ludwig, Ph.D., Chair, MLA Governmental Relations Committee, and Associate Dean, Library and Telehealth Services, Loyola University Stritch School of Medicine, Maywood, IL. For more information, contact Mary Langman, 312.419.9094 x27.
A medical emergency is an injury or illness that is acute and poses an immediate risk to a person's life or long-term health. These emergencies may require assistance from another person, who should ideally be suitably qualified to do so, although some of these emergencies can be dealt with by the victim themselves. Depending on the severity of the emergency, and the quality of any treatment given, it may require the involvement of multiple levels of care, from first aiders to Emergency Medical Technicians and emergency physicians. Any response to an emergency medical situation will depend strongly on the situation, the patient involved and availability of resources to help them. It will also vary depending on whether the emergency occurs whilst in hospital under medical care, or outside of medical care (for instance, in the street or alone at home). For emergencies starting outside of medical care, a key component of providing proper care is to summon the emergency medical services (usually an ambulance), by calling for help using the appropriate local emergency telephone number, such as 999, 911, 111, 112 or 000. After determining that the incident is a medical emergency (as opposed to, for example, a police call), the emergency dispatchers will generally run through a questioning system such as AMPDS in order to assess the priority level of the call, along with the caller's name and location. Those trained to perform first aid can act within the bounds of the knowledge they have, whilst awaiting the next level of definitive care. Those who are not able to perform first aid can also assist by remaining calm and staying with the injured or ill person. A common complaint of emergency service personnel is the propensity of people to crowd around the scene of a victim, as it is generally unhelpful, making the patient more stressed, and obstructing the smooth working of the emergency services. If possible, first responders should designate a specific person to ensure that the emergency services are called. Another bystander should be sent to wait for their arrival and direct them to the proper location. Additional bystanders can be helpful in ensuring that crowds are moved away from the ill or injured patient, allowing the responder adequate space to work. Responders acting within the scope of their knowledge and training, as a "reasonable person" in the same situation would act, are often immune from liability in emergency situations. Usually, once care has begun, a first responder or first aid provider may not leave the patient or terminate care until a responder of equal or higher training (e.g., fire department or emergency medical technicians) assumes care. Leaving the patient before such a handover can constitute abandonment of the patient, and may subject the responder to legal liability. Care must be continued until the patient is transferred to a higher level of care, the situation becomes too unsafe to continue, or the responder is physically unable to continue due to exhaustion or hazards. The principles of the chain of survival apply to medical emergencies where the patient has an absence of breathing and heartbeat.
This involves the four stages of Early access, Early CPR, Early defibrillation and Early advanced life support. Unless the situation is particularly hazardous, and is likely to further endanger the patient, evacuating an injured victim requires special skills, and should be left to the professionals of the emergency medical and fire service.
Clinical response
Within hospital settings, an adequate staff is generally present to deal with the average emergency situation. Emergency medicine physicians have training to deal with most medical emergencies, and maintain CPR and ACLS certifications. In disasters or complex emergencies, most hospitals have protocols to summon on-site and off-site staff rapidly. Both emergency room and inpatient medical emergencies follow the basic protocol of Advanced Cardiac Life Support. Irrespective of the nature of the emergency, adequate blood pressure and oxygenation are required before the cause of the emergency can be eliminated. Possible exceptions include the clamping of arteries in severe hemorrhage.
Non-trauma emergencies
While the golden hour is a trauma treatment concept, two emergency medical conditions have well-documented time-critical treatment considerations: stroke and myocardial infarction (heart attack). In the case of stroke, there is a window of three hours within which the benefit of thrombolytic drugs outweighs the risk of major bleeding. In the case of a heart attack, rapid stabilization of fatal arrhythmias can prevent sudden cardiac arrest. In addition, there is a direct relationship between time-to-treatment and the success of reperfusion (restoration of blood flow to the heart), including a time-dependent reduction in mortality and morbidity.
5 Free Speech and the Internet
5.1 INTRODUCTION
The preceding chapters have sought to provide a framework for understanding how global networks influence local values, political institutions, and ways of doing business, as well as how those networks might be governed. This chapter and the next look more closely at some particular issues, namely, those related to free speech and to the tensions between privacy and freedom of information. To a certain extent, the selection of these topics is arbitrary. In other chapters, the report has touched on other topics that might reasonably be examined more closely: consumer protection and copyright; the social changes inherent in a networked world; and the shifting boundaries between public and private spaces and the blurring of the line between consumer and producer. Transnational issues could have been added as well: tax policy, customs and tariffs for Internet traffic, and technical standardization are obvious examples. But free speech and privacy stand out in two respects: they have attracted considerable public interest, and they are characterized by conflict between the two nations that are the focus of this report. Therefore, this chapter and the next will address these issues. The intention is to discuss them as examples of the tensions and challenges that global networks introduce in a society's values, but these are issues with such strong legal overtones that it is impractical to approach them without incorporating legal considerations into the discussion as well.
5.2 THE VALUES INVOLVED IN FREE SPEECH
For both the United States and Germany, freedom of speech is such an important formal value that it is explicitly protected by the First Amendment to the U.S. Constitution and by Article 5 of the German Basic Law. Because of this constitutional protection, legislatures have very little latitude to pass laws that restrict speech. If the legislature, or any other governmental body, moves too far in that direction, individuals in each country can seek relief in the highest court. This constitutional protection of free speech obligates both government and private parties to tolerate many kinds of expression, regardless of how much it may clash with individual values or with the traditions of the country. Yet, restrictions on speech are common around the world, with many instances of censorship and criminal prosecution for the criticism of government policy. Even in the United States and Germany, policymakers have sought legislation from time to time that would place restrictions on various kinds of speech.
Such legislation has usually been struck down as unconstitutional, but the continual efforts made, and the restrictions sometimes allowed, suggest that the right of free speech is not absolute and that some substantive value is being explicitly or implicitly applied to distinguish protected from unprotected speech. This substantive value (or these values) may well be in tension with the formal value of free speech. Some of those competing values may also be formal ones. For example, the exercise of free speech might directly or almost directly cause physical harm, such as injuries and death resulting from the publication of bomb-building instructions or the psychic trauma of children that might occur as the result of exposure to certain kinds of sexually explicit material. Similarly, one cannot (falsely) shout "Fire!" in a crowded theatre; as Oliver Wendell Holmes noted, "Your freedom ends where my nose begins." Where the connections among formal values are relatively clear and unambiguous (they are not always so), it is relatively easy to make judgments about which one should take precedence. The situation is not so straightforward when substantive values are involved. Generally speaking, formal values such as free speech establish rights and procedures that enable a society to function effectively and, it is hoped, fairly. But it takes substantive values to provide the glue, the shared outlook that makes a society more than a collection of individuals. If the values under which a society operated were composed exclusively of formal values, normative views of the world, such as the hierarchical, the egalitarian, or the fatalistic,1 which hold societies together and distinguish them from one another, would be denied any status whatsoever. In fact, substantive values do come into play. For example, restrictions on free speech may be the result of seeking balance: the formal value of free speech weighed against the competing claims of certain substantive values. Of course, the notion that a balance is involved suggests that the mere existence of a conflicting substantive value is not a sufficient reason to restrict free speech. The critical question is whether the exercise of free speech violates a substantive value to an unacceptable degree; answering this question entails a value judgment that is not only contentious but often rendered differently in different societies, even those as similar as the United States and Germany. The treatment of two such issues, hate speech and protection of children and adolescents, is discussed in the following sections.
5.3 COMMON AND DIFFERENT TRADITIONS AND THE INTERNET
Free speech was an important right long before the advent of the Internet, but there were practical limitations on how well individuals could exercise it to influence their societies. People could find a soapbox in Hyde Park or Union Square, send a letter to the editor, or distribute leaflets.2 But if they wanted to have an impact on public policy or on society at large, they had to go through intermediaries. The Internet brings society much closer to the ideal of a free market of ideas, in that surfacing a wide range of ideas in a public forum, including those disparaged as fringe, is easier than it has ever been before. Nevertheless, limitations clearly remain, and the availability of ideas on a Web site does not assure that everyone will find them or require that everyone access them.
1 Michael Thompson, Richard Ellis, and Aaron Wildavsky, 1990, Cultural Theory, Boulder, Colo.: Westview Press; for an application to the topic of this report see Michael Thompson, 2000, "Global Networks and Local Cultures: What Are the Mismatches and What Can be Done About Them?," in Christoph Engel and Kenneth H. Keller, eds., Understanding the Impact of Global Networks on Local Social, Political and Cultural Values, Baden-Baden: Nomos, 113-130.
2 Computer Science and Telecommunications Board, National Research Council. 1994. Rights and Responsibilities of Participants in Networked Communities. Washington, D.C.: National Academy Press.
5.3.1 Hate Speech
Hate speech can be defined as the willful public expression of hatred toward any segment of society distinguished by a characteristic such as color, race, religion, ethnic origin, or sexual orientation. Hate speech can be particularly debilitating to a society because it attacks an entire group. Thus it threatens the peaceful coexistence of different groups within the population and, ultimately, the stability of the community. Hate speech is more than merely hurtful; it creates a climate that can lead to depriving certain groups of their civil rights. The danger need not be concrete and immediate; sad experience has shown that the verbal stigmatization of particular groups in a community can build up negative attitudes in the population at large, which can lead to discrimination and may even erupt into violence against the group. Despite the near-universal revulsion to hate speech among civilized peoples, there are significant differences between the United States and Germany in how it is handled. Two cases, widely reported in the media and described here in Chapter 3, demonstrate the problems created by these differences: the online sale of Mein Kampf (August 1999) and the CompuServe case (May 1998). The first arose from differences in the laws of the two countries concerning what can be distributed, and the second concerned the responsibility of a service provider for the messages transferred through its network. The CompuServe case attracted particular attention in the American press, with headlines like "Germany's Internet Angst," "A 'cyber-coup' for Germany's cyber-cops," "German Net future questioned," and "Efforts to control the Net abuse liberty."
The United States
In terms of value balance, the United States gives the formal value of free speech more weight than essentially any substantive value and almost all other formal values. Therefore, attempts to proscribe hate speech using legal remedies such as the criminal code or municipal regulations have invariably been struck down by the Supreme Court, based on the idea that such remedies violate the constitutional right to freedom of expression contained in the First Amendment. Indeed, because Article 20 of the International Covenant on Civil and Political Rights3 required signatory states to agree that "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law," the United States refused to ratify that part of the Covenant.
3 The International Covenant on Civil and Political Rights was adopted and opened for signature, ratification, and accession through U.N. General Assembly resolution 2200A (XXI) of 16 December 1966. It entered into force on 23 March 1976.
Furthermore, in ratifying the Genocide Convention, the
United States made specific reservations to prevent any impact of the Convention on First Amendment rights in the United States. A measure of the primacy given to the right to freedom of expression is that the First Amendment does not specify any exceptions, and the Supreme Court has been very cautious in allowing any. Over the years, it has developed a strict set of criteria defining circumstances in which some state abridgement of free speech might reasonably be allowed in order to serve other constitutional goals, but the exceptions have been very few. Proposed government restrictions that are based on the content of an expression have to be capable of standing up to an intense examination called "strict scrutiny." Under this test, restrictions can be justified only if the state is able to show a compelling public interest in doing so. Even then, it has to choose the least restrictive means for achieving the desired aim. Furthermore, if the proposed measures are too vague or too broad, in all likelihood they will be rejected as unconstitutional. In fact and in practice, the strict scrutiny test is equivalent to the initial assumption that any restriction on free speech is unconstitutional. Government measures aimed at preventing the purely abstract dangers of hate speech, which would certainly encompass most substantive-value concerns, have always been struck down by the Supreme Court because they have not passed the strict scrutiny test. In 1952, the Court did hold, in Beauharnais v. Illinois,4 that the defamation of a group should not fall within the protection of the First Amendment. But even that decision, though never officially reversed and overruled, has not guided subsequent Court action, particularly following Collin v. Smith5 and R.A.V. v. City of St. Paul6 (Box 5.1). In Brandenburg v. Ohio, 395 U.S. 444 (1969) (per curiam), the Supreme Court held that the First Amendment even protects speech that encourages others to commit violence, unless the speech is capable of "producing imminent lawless action." Thus, arguing that "if the First Amendment protects speech advocating violence, then it must also protect speech that does not advocate violence but still makes it more likely," a three-judge panel of the 9th Circuit Court of Appeals held that a Web site and posters calling abortion doctors "baby butchers" and criminals were protected by the First Amendment. The court stated that "political speech may not be punished just because it makes it more likely that someone will be harmed at some unknown time in the future by an unrelated third party."7
4 343 U.S. 250 (1952).
5 578 F.2d 1197 (7th Cir. 1978); cert. denied 439 U.S. 916 (1978).
6 505 U.S. 377 (1992).
7 244 F.3d 1007 (9th Cir. 2001); reh'g en banc granted, 268 F.3d 908 (9th Cir., October 3, 2001). The latter citation refers to an order from the court that the case be reheard by the en banc court, with the three-judge panel opinion not being cited as precedent by or to this court or any district court of the Ninth Circuit, except to the extent adopted by the en banc court.
On the other hand, the Supreme Court has allowed exceptions to First Amendment protection when the expression could likely lead to a hate-engendered crime. In such cases, the Court has applied the "Clear and Present Danger Test."
Expressions that give rise to a clear and present danger of criminal action, and thus infringe on the rights of some segment of the population, can be forbidden. This exception is called "communications tending to incite lawlessness" or "advocacy of unlawful action."

Germany

The German legal system, in contrast to the American system, generally penalizes hate speech. Given the experience under National Socialism and the former German Democratic Republic, the Federal Republic takes the position that a democracy has to be able to defend itself as a political system. There is a particularly strong feeling that it must be able to stop any attempt to reestablish a National Socialist authority. Interestingly, in addition to the resolve of the post-war German generation to resist National Socialism, other countries that fought the Nazi regime and certain ethnic groups (such as those of Jewish descent, who were victimized by the regime) expect this vigilance of Germany. In addition to measures targeted against hate speech, there are also German laws that prohibit the defamation of victims of National Socialist crimes, denial of the Holocaust, wearing of the swastika, and distribution of National Socialist propaganda.

The compatibility of these laws with the constitution has never seriously been questioned, even though in Germany, as in the United States, freedom of expression is an important value. The Bundesverfassungsgericht (the German equivalent of the U.S. Supreme Court) says that freedom of expression is simply an inherent aspect of democracy. However, the constitutional right of freedom of expression, as granted in Article 5 Abs. 1 GG, is worded as follows:

Anybody has the right to freely express his opinion in words, written materials, and pictures and to distribute it and to draw information from generally accessible sources without any interference. The freedom of the press and the freedom of broadcasting and film are guaranteed. There is no censorship. These rights will find their barriers in the provisions of the general laws, the legal provisions for the protection of the youth, and the right to personal honor.

The wording of this article is similar to guarantees in other Western European constitutions (for example, Article 10 of the European Convention on Human Rights). There is a good deal of room for interpretation in the words and, particularly in view of the last sentence, a number of circumstances in which this constitutional right can be restricted. Thus the prohibition against hate speech would fall under the category of a general law. Its provisions are viewed as "not directed against the expression of an opinion as such, but that rather serve the protection of a worthy legal value, without consideration of any special opinion (italics added)."8 Forbidding Holocaust denial has been justified by the Bundesverfassungsgericht as necessary to protect the personal honor of the Holocaust's victims, who might otherwise be viewed as threatened and compromised.9

There are efforts, sometimes driven by actions and interpretations of the European Court of Human Rights, to limit the extent to which the right of free speech can be abridged. For example, the Bundesverfassungsgericht requires that the conflicting interests be balanced and that there be a consideration of whether there are any less restrictive means available in order to achieve the intended goal.
But, in the face of Germany's recent history, it is not surprising that the prohibition of hate speech is regarded as legitimate and appropriate.

The contrasts between Germany and the United States in regard to free speech are relatively easy to understand. The high tolerance in the United States for free speech is generally regarded as critical, in a highly heterogeneous society with a long history of absorbing wave after wave of immigrant groups, to avoiding pressures that might otherwise arise to conform ideologically and culturally. Indeed, guaranteed individual and political liberties have always been one of the attractions of the United States to those forced to leave their homeland for reasons of political repression. Recent history in Germany, on the other hand, has provided a sad lesson in how fast political propaganda and incitement in a relatively homogeneous society can lead to the separation and murder of whole segments of the population. It has led to a broad consensus on the need to place limits on freedom of expression in order to preserve freedom generally.

This practical explanation raises the question of whether it is fair to characterize the American situation as one in which the formal value of free speech dominates any consideration of substantive values, or whether the commitment to diversity, which free speech facilitates, is itself a substantive value. In the latter case, societal cohesion and individual liberty both support the idea of free speech, giving added weight to its protection. In the German situation, there is warranted concern that the shared substantive values, protection of the rights of minorities and the dignity of individuals, may be threatened by an unequivocal commitment to free speech; so the balance between the two plays out differently.

8. BVerfGE 7, 198, 209 f.
9. BVerfGE 90, 241, 252.

5.3.2 The Protection of Children and Adolescents

Both the United States and the Federal Republic are deeply concerned with protecting children and adolescents, and both have established laws in that spirit.10 Those that deal with material in print, film, or electronic media are of two basic kinds. First, there are laws aimed at preventing abuse and maltreatment, which make it illegal to distribute, purchase, or possess written materials, videos, and other items that depict child pornography. The argument is that such material is a stimulus to carrying out the acts depicted, and that it leads producers to abuse children in the course of its production.

It is no surprise, then, that on both sides of the Atlantic, legislatures have proscribed child pornography in every format and venue. The distribution of child pornography through the Internet, as well as its possession, is a criminal offense. Even images that have been created by computer or drawn, where children are obviously not involved in production, may be illegal.

In neither country have constitutional concerns been seriously raised about these laws. In the United States, they meet the strict scrutiny test. In Germany, although the contents of child pornography are, in principle, protected by the Constitution, child and adolescent protection has been recognized as a legitimate basis for outlawing it.

10. In Germany it is even at the constitutional level; see Art. 6 Abs. 2 GG or Art. 5 Abs. 2 GG. But in the United States as well, the Supreme Court found, in the decision of Ginsberg v. New York (390 U.S. 629 (1968)), that the state had a legitimate interest in protecting the physical and psychological well-being of minors.
11. In the United States, the Child Pornography Prevention Act of 1996 (CPPA) expanded the definition of child pornography to include any visual depictions of individuals that appear to be minors, or visual depictions presented in a manner to convey the impression of a minor, engaging in sexually explicit conduct. (As of this writing [November 2001], this provision of the CPPA is pending before the Supreme Court. It was held unconstitutional by the U.S. Court of Appeals for the Ninth Circuit (Free Speech Coalition v. Reno, 222 F.3d 1113 (9th Cir. 2001)), but was upheld by the First, Fourth, Fifth, and Eleventh Circuits (United States v. Fox, 248 F.3d 394 (5th Cir. 2001); United States v. Mento, 231 F.3d 912 (4th Cir. 2000); United States v. Acheson, 195 F.3d 645 (11th Cir. 1999); United States v. Hilton, 167 F.3d 61 (1st Cir. 1999), cert. denied, 528 U.S. 844, 120 S. Ct. 115, 145 L. Ed. 2d 98 (2000)).) Under the U.S. criminal code, possession, distribution, and transportation of child pornography so defined is a felony. In Germany, Section 184 of the German Criminal Code prohibits the distribution of both "real" and "fictive" child pornography (real with real persons involved; fictive with drawings, computer-produced images, and even written or acoustic material). However, the German Criminal Code does not prohibit the possession of fictive child pornography.

The second area of law related to protecting minors aims at preventing them from being exposed to material that might be psychologically traumatic or might adversely affect their development. This is the more difficult area of the two. Much of the material is itself not considered innately harmful and, therefore, is not proscribed; the practical question is how to control only the inappropriate material, and how to accomplish that without interfering with those who have a right to receive it. Here the balancing of rights comes into play more directly, as does the determination of the appropriate roles of government, the private sector, and parents. How, then, have the United States and Germany dealt with this set of issues?

The United States

In February 1996, the Congress adopted the Communications Decency Act (CDA), a sweeping law that held content providers criminally liable if a person under 18 years of age obtained "obscene," "indecent," or "patently offensive" material through any "telecommunications device." There was a so-called "safe harbor" provision, which protected a provider who makes good-faith efforts to deny access to individuals under 18; such efforts would include the use of a credit card, a debit account, an adult access code, or an adult personal-identification number. The Act triggered immediate challenges and was quickly reviewed by the Supreme Court (Reno v. American Civil Liberties Union13).

The Court found (as had the lower courts) that the so-called Section 223 (47 USC 223) provisions of the CDA were too broad and too vaguely formulated. The vagueness of the expressions "indecent" and "patently offensive" allowed for such a wide range of interpretations that they could not be reconciled with the Court's strict criteria for allowing freedom of speech to be abridged.
The chilling effect of the ambiguities in the law would lead producers to be so cautious that it would inhibit legitimate freedom of expression and restrict the availability of content that adults might quite legally want to obtain.

12. In order to be able to consider technological innovations in this area without a statutory change, every method that is feasible will be treated in the same way. The Federal Communications Commission would have had the task of choosing suitable systems and qualifying them as such. The safe-harbor clause has as its aim, similar to the age restriction on youth-endangering publications or visits to establishments in red-light districts, the denial of access to online offers to adolescents only, and not to adults. The complete criminalization of the contents is not intended with this so-called zoning approach.
13. 521 U.S. 844 (1997).

Even the safe-harbor clause was regarded as inadequate. It was not clear that the access control systems available would be judged sufficient to trigger the protections of the safe-harbor clause. And even if effective, installing the controls would entail substantial costs beyond the capacity of most noncommercial providers. Therefore the law would discriminate against them. Finally, much of the objectionable content came from abroad, where American law could not easily be enforced.

In response to the Court's action, Congress took a different approach, passing the Child Online Protection Act (COPA) at the end of 1998. COPA had a narrower scope of application than the CDA, but its intention was similar and it has often been referred to as "CDA II." The intention of its sponsors was to deal with the Supreme Court's objections by dropping unacceptable terms like "obscene" and "indecent" and substituting a narrower "harmful to minors" standard. Furthermore, COPA dealt only with the commercial distribution of material and only on the World Wide Web. It did not try to regulate other Internet services such as newsgroups. COPA also included a safe-harbor provision that exempted from prosecution parties that take good-faith measures through any reasonable means feasible under available technology (e.g., the use of a credit card) to restrict access by minors to material that is harmful to them.

Still, many of the groups that objected to the CDA also found the new statute to be objectionable, and the American Civil Liberties Union (ACLU) and other groups challenged it in court. The United States District Court for the Eastern District of Pennsylvania issued a preliminary injunction against COPA, holding that the law was likely to be found incompatible with the First Amendment for many of the same reasons that the CDA had been rejected.14 Content providers would be inhibited, by fear of liability as well as by the costs associated with installing access-control software, in what they produced, with the net effect of adults being less able to receive legal material that they might want.
The District Court acknowledged that youth protection was a legitimate reason for restricting freedom of expression, but it argued not only that less restrictive means were available but that the prescribed access-control systems would be of limited effectiveness anyway; they would not apply to foreign Web sites, noncommercial providers, or newsgroups.

14. American Civil Liberties Union v. Reno, 31 F. Supp. 2d 473 (E.D. Pa. 1999).

5.4.2 Germany

The laws of the Federal Republic place much greater responsibility on host providers, although they do not regulate other intermediaries such as search-engine operators or providers of hyperlinks. In Germany, host providers are "responsible for foreign contents that they provide for use if they had knowledge of these contents and it is technically possible, and also reasonable, to prevent their use." This is called "notice liability"; that is, if one knows about the material, one is liable if no action is taken to remove it. Furthermore, under German law, a provider cannot defend itself by arguing that it didn't consider the questionable contents to be illegal. Article 14 of the EU Commission's Directive on Electronic Commerce takes the same approach.

There have been no explicit constitutional objections to this law raised in Germany. It obviously goes in a very different direction from U.S. law. However, many argue that the host provider's liability is actually more limited than it may appear, because the provider need only act if it is "technically possible . . . and . . . reasonable" to prevent the distribution of the objectionable material. This allows for some judgment and balancing by the prosecutors and courts in deciding, for example, whether a small provider could "reasonably" be expected to install blocking software so expensive that it might put the company out of business. Moreover, the law does not require that the host provider make an active effort to root out illegal material.

With these factors softening the impact of the liability provisions, there appears to be a broad consensus throughout Europe that the German law and the E-Commerce Directive of the EU Commission represent an appropriate middle path. In the view of most Europeans, these regulations balance the protection of minors with the right to freedom of expression and the economic interests of host providers.

With the laws in the United States and Germany as different as they are in this case, and with the strong consensus and deep, principled conviction that exists in each country for its own law, it is difficult to see how a practical compromise can be achieved and easy to see how the differences will inevitably lead to conflicts. The Bavaria v. CompuServe case, mentioned earlier in Chapter 3, certainly demonstrates the problem. American criticism of the German action in the CompuServe case was based on the strong objection in the United States to any action that would (1) have a chilling effect on freedom of speech and (2) unreasonably or unnecessarily burden a private company with economically debilitating regulations. Germans, for their part, are generally much less concerned than Americans that government regulations might burden industry, if those regulations appear otherwise warranted.
Furthermore, most Germans would attach more importance to the protection of minors than to the protection of free speech and would have no compunction about forever blocking a transgressing newsgroup, or even 282 of them, if it were necessary to prevent the distribution of child pornography.

But another source of the tension that arose in this case was the frustration of the German prosecutors, who had very little leverage to take action against CompuServe USA. Because the company is headquartered in the United States and its executives live there, German law could not reach them. The United States would not cooperate in extradition proceedings because the company's actions were not violations of U.S. law.

The Munich prosecutor, anxious to enforce the German law on child pornography, instead charged the executive director of CompuServe Germany, the local affiliate, with violation of the law. The problem, of course, was that the local affiliate had no way of blocking the offending newsgroups. Thus the prosecutor's actions were criticized in Germany as well as in the United States; but the German criticism arose not because of any objection to host-provider liability but because the person charged was not the person responsible. In fact, though the executive director was initially found guilty, the conviction was overturned in November 1999 precisely because the court recognized that he was neither responsible for sponsoring the newsgroups nor able to remove them from the network.

5.5 INTERNET CONTENT REGULATION AS A CHALLENGE TO GOVERNANCE

The difficulties in regulating Internet content epitomize the challenges that global networks present for governance. It therefore does not come as a surprise that almost all the elements discussed in Chapter 9 (on governance) in abstracto have a bearing on content regulation.

5.5.1 The Limited Power of Traditional National Regulation

It is useful to keep in mind that the Internet contributes to globalization in two ways. First, it is a global entity that brings together cultural and political influences from many countries and gives rise to a burgeoning new field of commerce. Second, the Internet makes it possible for established businesses to coordinate activities across the globe through various commercial arrangements, freeing them to a certain extent from the constraints of geography and national boundaries.

Globalized business activities are much more difficult for governments to regulate and control, both because they may not be physically located within a country's boundaries and because nations compete to attract businesses.24 This reduces the feasibility of strong, unilateral command-and-control as well as the reach of penal law. The change is one of degree, and national governments certainly do not lose all their options.25 For example, a person residing in a country can be held liable for violation of its national law or regulation even if he or she is part of an international business or the illegal action involves transmission of inappropriate material from another country. Similarly, a nation could enforce its laws extraterritorially by attaching a foreign company's assets that happened to be located within its boundaries or even arresting a visiting company official.26 Under German law, prosecutors not only would be allowed to take these actions, but are actually required to do so.
24. On the governance of the Internet in greater detail, see Christoph Engel, 2000, "The Internet and the Nation State," in Christoph Engel and Kenneth H. Keller, eds., Understanding the Impact of Global Networks on Local Social, Political and Cultural Values (Law and Economics of International Telecommunications 42), Baden-Baden: Nomos, 201-260.
25. This point has been stressed repeatedly by Jack Goldsmith. In the context of this report see in particular Jack Goldsmith, 2000, "The Internet, Conflicts of Regulation, and International Harmonization," in Christoph Engel and Kenneth H. Keller, eds., 2000, Governance of Global Networks in the Light of Differing Local Values, Baden-Baden: Nomos, 197-207.
26. For greater detail, see Werner Meng, 1994, Extraterritoriale Jurisdiktion im öffentlichen Wirtschaftsrecht, Berlin.

With respect to Internet sites, some have suggested that nation-states could actually go further. They might attack foreign Web sites that contravene their laws, using such technical means as denial-of-service attacks similar to those mounted by hackers against Yahoo! and amazon.com.27 There seems little question that such tactics would violate public international law,28 but, perhaps more to the point, they illustrate how the initial value balance involved in a decision to restrict transmission of certain content can be distorted by the means employed to implement the decision.

27. Cable News Network, "Cyber-attacks Batter Web Heavyweights," February 9, 2000.
28. Cf. Jamie Frederic, 1997, "Rwandan Genocide and the International Law of Radio Jamming," American Journal of International Law 91:628.

The ideal situation, of course, would be one in which national laws pertaining to the Internet and other global activities were harmonized. That does not seem to be a realistic expectation for the foreseeable future, however. So the most reasonable hope is for cooperation among governments to help providers and hosts understand the laws and regulations in each jurisdiction. Over time, this kind of transparency might lead toward creative harmonization and compromise.

The practical question is how far one nation can go in imposing laws and regulations in a global economy in which firms have the ability to withdraw their activities from the nation's territory. Some observers believe that this threat is overstated: firms are unlikely to abandon a large national market that would be difficult to maintain without some presence in the country. There may also be other reasons for keeping a presence in a country, including the preference of investors or the availability of research-and-development capacity. However, although these considerations may make it impractical for a firm to avoid a nation's laws on illegal Web content or its intellectual-property regulations, it is certainly possible for the firm to move large parts of its operation offshore, to the detriment of the nation's economy.

5.5.2 International Legal Harmonization

International treaties provide one way of creating global order in a world where there is no supranational government. They work reasonably well when there is a common view on the values to be protected, general agreement about what needs to be done, and an obvious advantage in dealing with the issues on a global basis. A number of treaties are in existence today that appear, at least nominally, to deal with matters closely related to some of the content issues that have arisen with regard to the Internet.
For example, the Convention on the Prevention and Punishment of Genocide, dating from 1948,29 requires the parties to make criminal the "direct and public incitement to commit genocide." The 1966 International Convention on the Elimination of All Forms of Racial Discrimination30 proscribes words and acts of racial discrimination. The United Nations' Human Rights Pact of the same year31 not only deals with human rights, but also bans war propaganda and "every encouragement of nationalistic, racial, or religious hatred [that] incites discrimination, animosity, or violence." In addition, there is a UN International Convention on the International Right of Correction from the year 195332 (although neither the Federal Republic nor the United States has adopted it).

29. Convention of 09.12.1948, BGBl. 1954 II 729.
30. Convention of 07.04.1966, BGBl. 1969 II 961; compare also BT-Drs. 13/1883.
31. International Pact on Civil and Political Rights of 09.12.1966, BGBl. 1973 II 1533.
32. Convention on the International Right of Correction from 31.03.1953, UNTS 435, 192. The "right of correction" refers to the right of a nation "directly affected" by a private or public report that it considers "false or distorted" to secure "commensurate publicity" for the "corrections" that the nation wishes to publicize.

One promising approach to internationalizing some aspects of Internet regulation would be to extend existing treaties to the new context. That would require a willingness on the part of each signatory country to interpret or extrapolate the treaty's provisions to the new environment of the Internet and to amend its own national laws to reflect the new interpretations. Thus far, that has not happened.

For these and other reasons, there continues to be a push for new treaties to achieve international harmonization. Of course, they are easier to negotiate when nations largely agree on the issues. That requires either finding issues on which there is essential unanimity to begin with or defining a set of countries or a region with largely shared values. What should be evident from the discussion in this chapter is that the value agreement must pertain not only to the problem giving rise to the challenge but also to the appropriateness of government roles and regulatory tools for implementing a solution.

At the moment, the one area in which it appears likely that some international harmonization will be achieved, at least in Europe, is the regulation of child pornography. In June 2001, the European Committee on Crime Problems (CDPC) of the Council of Europe approved the Draft Convention on Cybercrime, which was submitted to the full Committee of Ministers for adoption in September 2001. Article 9 of the Draft Convention commits signatories "to adopt such legislative and other measures as may be necessary to establish as criminal offenses under its domestic law, when committed intentionally and without right," acts that relate to child pornography.33 In addition, a supplement to the Europol agreement is being prepared that gives the European police authorities wider jurisdiction to deal with the production, sale, and distribution of child pornography.

However, content-related offenses other than those related to child pornography (e.g., the "distribution of racist propaganda through computer systems") proved too controversial to include in the Draft Convention.
The European Committee on Crime Problems may consider an additional protocol relating to these offenses, but it faces opposition from a number of civil liberties organizations.

33. These acts include producing child pornography for the purpose of its distribution through a computer system; offering or making available child pornography through a computer system; distributing or transmitting child pornography through a computer system; procuring child pornography through a computer system for oneself or for another; and possessing child pornography in a computer system or on a computer-data storage medium.

The problem with harmonization is that if consensus requires drawing a too-small circle of cooperating nations, violators can find a regulatory haven fairly easily in a nation-state not party to the convention. There are, of course, political and economic pressures that can be brought to bear on nonsignatory states to bring them into compliance. And for that matter there are carrots as well as sticks, as has been shown in certain aspects of global environmental protection.35

35. See Rüdiger Wolfrum, ed., 1996, Enforcing Environmental Standards: Economic Mechanisms as Viable Means, Berlin.

There are dangers in this approach, however, where global networks are concerned. The uneven penetration of the Internet (and its benefits) has already created a global sense of "haves" and "have-nots" that might well be exacerbated by unidirectional pressure from the United States or Europe on other nations, regardless of the merit of their position. Beyond that, there is the danger that harmonizing with a particular set of values, or adopting a universal approach to the structure of legal institutions, will reduce the very diversity that the Internet has the useful potential to promote.

5.5.3 Commercial Law

As pointed out elsewhere in this report, there are a number of circumstances in which commercial law, rules that have been developed for resolving business conflicts by coordinating the laws of different nations, could be used to deal with harmful contents accessible through the Internet. Consumer fraud, for example, does not change its legal character just because it is carried out with the aid of a Web page.

Nevertheless, commercial law is a weak foundation for matters such as child pornography and politically tainted hate speech. The major problem in such cases is that the potential harm is to people who are not likely to bring a private legal action for redress, may well not have standing to sue, and might have a difficult time proving damage. Who would sue, and how would the case be made, if easy access to child pornography increased the risk that more children might be abused? Who would sue, and what would be the proof, if easy access to Nazi propaganda increased the risk that extreme right-wing political forces might gain on the next Election Day? Even if the law gave standing to the public at large, would enough people have the incentive and the wherewithal to bring such actions?

5.5.4 Self-regulation Without State Intervention

A number of groups, certainly among them the Netizen and e-commerce communities, argue that in most instances the best approach to controlling the diffusion of offensive Internet-based material is self-regulation.
The great attraction of this approach is the flexibility it provides; individuals can make their own judgments about what material they want to avoid (or to access), and the need to force value consensus within a particular country or across the globe is removed. When one nation's nudity is another's pornography, broad consensus is next to impossible. On the other hand, access-control systems, age-verification systems, and various kinds of filtering software can facilitate customized nonstate regulation.

To understand filtering systems, it is important to distinguish between a site's content and the judgment one makes about it. For example, though a site might have an image of a naked woman or a swastika, there may be many judgments about whether or not such content is offensive: one person might think so; another might not. Many filtering systems are designed by vendors who act both as labeler and judge: they describe the content and also make a judgment about its appropriateness (though they may or may not provide the user with an option to override their judgment).

A second approach is to separate the functions of labeler and judge. To facilitate content labeling, the World Wide Web Consortium designed the Platform for Internet Content Selection (Box 5.2), which provides a standardized vocabulary and format for labeling content. Once labels have been associated with specific content, the user can deploy a filter that examines the labels associated with incoming content and, based on those labels, makes judgments about whether content with certain labels should or should not be displayed.

Note that different filters can behave differently with regard to the same content. That is, Filter A may allow content that is labeled as containing "nudity" and reject content that is labeled as containing "swastikas," while Filter B may do exactly the opposite. A second issue is that the scope and granularity of the labeling are critical. If the labeling vocabulary does not include a category for "swastikas," a filter based on this approach cannot block content containing swastikas. At least one particular vocabulary, that of the Internet Content Rating Association, allows labeling of sites that contain certain kinds of language, nudity or sexual content, violence, and information related to gambling, drugs, and alcohol. However, there is no reason in principle that a party concerned about other categories of possible offensiveness cannot create vocabularies that cover them (though in practice, obtaining a broad scope of coverage for such alternatives is difficult). A minimal sketch of this separation of labeling from filtering policy follows below.

Though filtering systems can be created by anyone, the required effort may be large. In principle, the organizations responsible for filtering systems must stand behind the judgments they make about offensiveness (and perhaps about content labeling as well), and users of filtering systems may make their own judgments about the attractiveness of products from different vendors based on how well their own values about offensiveness are reflected in the vendors' judgments. Thus, users not wishing to see pro-racist material might use filters developed by civil-rights organizations, or users not wishing to see anti-religious material might use filters developed by their church.36
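To make the separation of labeler and judge concrete, here is a minimal sketch in Python. The two-label vocabulary and both policies are invented for illustration; they are not the actual PICS or ICRA vocabularies mentioned above, and the sketch shows only how identical labels can support opposite filtering decisions.

```python
# Minimal sketch of the labeler/judge split described above.
# The label vocabulary ("nudity", "swastika") and both policies are invented
# for illustration; real systems such as PICS/ICRA define their own formats.

def make_filter(blocked_labels):
    """Build a filter that rejects any content carrying a blocked label."""
    blocked = set(blocked_labels)

    def allow(content_labels):
        # The labeler's description of the content is separate from the
        # judgment this particular filter makes about it.
        return blocked.isdisjoint(content_labels)

    return allow

# Two filters applying opposite judgments to the same labels.
filter_a = make_filter({"swastika"})   # allows "nudity", rejects "swastika"
filter_b = make_filter({"nudity"})     # allows "swastika", rejects "nudity"

page_labels = {"nudity"}               # labels attached to some page
print(filter_a(page_labels))           # True  -> Filter A would display it
print(filter_b(page_labels))           # False -> Filter B would block it
```

The granularity problem described above shows up directly in such a scheme: a category that is missing from the labeling vocabulary can never be blocked, no matter which policy a filter applies.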
One of the attractive features of a labeling system is that it is inherently self-policing. The value of the label depends on the reputation it develops for reliability. Each site that receives the label's endorsement has a stake in giving it meaning. The user community itself has an interest in the quality of the label and can also be part of the enforcement process.

As movie- and video-rating organizations in the United States have learned, making judgments about offensiveness is fraught with difficulties. Such groups must tread a fine line between being overly rigid and prescriptive in their classifications and being so ambiguous that no real information is conveyed to the user. Generally speaking, categories or rules that have some flexibility are more likely to be suitable for a rapidly changing world like the Internet.

An important technical issue is the extent to which computer-executable rules for distinguishing between appropriate and inappropriate content can be formulated. Some of the filtering software with which people have experimented thus far has shown how difficult this can be, sometimes leading to absurd results, as when some particular words are coded as unacceptable. Moreover, filtering systems are usually designed with some particular point of view: to take advantage of a market, pursue an ideological agenda, or avoid liability on the part of the software provider. This means that, at least until now, there has been little incentive for transparency in how the filters are created37 and little attempt to take opposing interests or values into account, as one might hope would be the case in a legislative approach to regulation.38 In that sense, filter systems can work against certain free speech values of a community and, indeed, help to de-integrate the community.

36. A fuller discussion of the advantages, disadvantages, and other realities of filters is contained in CSTB, National Research Council, Youth, Pornography, and the Internet: Can We Provide Sound Choices in a Safe Environment?, Washington, D.C.: National Academy Press, forthcoming.
37. This is not to say that it is impossible or even difficult to increase transparency of filters by making available the lists of Web sites that are blocked or the lists of keywords that might be objectionable. However, vendors of filter products often argue that the creation of their blocked lists or "bad words" is their intellectual property, and that publication of such lists would deprive them of the benefits of their work if others took their work as a starting point to develop other lists.
38. This has been recently pointed out by Lawrence Lessig, 1999, Code and Other Laws of Cyberspace, New York: Basic Books.

Host providers have a different problem in undertaking self-regulation; the control systems available to content providers or content users are not applicable to them. First, the material that host providers carry is an aggregate from a huge spectrum of content providers; and second, they are not end users, so filtering software would be inappropriate. Many host providers have adopted their own codes of ethics. They may commit themselves, for example, to checking complaints about sites that come from users or to working cooperatively with the legal authorities of particular nation-states to take action against sites involved in illegal activity.

Critics of self-regulation point out that because such codes of ethics are unenforceable, they are primarily symbolic. However, it may be possible to develop a legal framework that would make codes enforceable, even if the host providers themselves determined the details of the code.
A more serious criticism is the possible curtailment of free speech; the codes may deprive content providers who are sanctioned or excluded by a host provider of the due process they would have under a more formal legal structure. Such points have not been thoroughly discussed at this early stage in the development of these self-regulatory instruments.

The role of hosts as intermediary between user and content provider suggests that it may be inappropriate to think of them as engaged in regulation per se. Their role in a nongovernmental regulation scheme is to provide a service to users who would like to be shielded from harmful or otherwise unwanted contents. Users could do this for themselves by simply not accessing certain sites or by installing filters on their computers (or using other technologies that may be available in the future), or they could access the Internet via a service provider with a declared access policy. Whether users want to pay for the host's service is something to be determined by the market. In fact, it would appear that, in the future, host providers will compete with each other and with companies producing self-help tools like filters, and users may choose on the basis of convenience, comprehensiveness, and selectivity.

5.5.5 Hybrid Regulation

Self-regulation and intermediation have many attractive features, but if governments do not intervene, the market alone will shape the array of mechanisms actually used to control the distribution of harmful content. These mechanisms, in turn, will largely determine what material is electronically available to whom. Obviously, the outcome may not always conform to the values of the society. It might therefore be useful to consider hybrid forms of regulation, combining public and private controls.

Governments can use both sticks and carrots to influence the operation of self-regulatory schemes.39 As pointed out earlier, command-and-control regulation of content providers doesn't work very well in the networked world. The CompuServe case indicates that an alternative for governments is to threaten action against host providers. But there are softer options.

Governments can insist on an organizational framework for self-regulation that gives outside interests a voice and ensures that the process of developing and applying a rating system or excluding a provider from a host network is transparent. They can give industry limited antitrust or liability protection to encourage joint rulemaking and vigorous joint action. Or they can set up an authority to check on how well self-regulation is working (a role played by the U.S. Federal Trade Commission with respect to certain privacy issues and other aspects of consumer protection). It is even possible to envision governments supporting or encouraging education and training programs to improve the media competence of users so that they are better able to use the self-help tools that become more and more available as technological advances occur.

It does seem likely that a hybrid regulatory approach will finally emerge, but it is difficult to predict what particular balance of mechanisms will actually obtain in each country. The experimentation now going on appears to be healthy, and if there is a bottleneck, it is the legal system's difficulty in understanding the technical possibilities and reacting quickly and flexibly to them.
It may well be that in an area as technologically dynamic as this one, and as capable of bringing about major social changes, expert panels similar to those developed under the aegis of the Intergovernmental Panel on Climate Change could play an important role. They might be especially useful in advising governments on the state of the technology and the feasibility of various regulatory approaches.

39. For the theoretical framework, see Fritz W. Scharpf, 1997, Games Real Actors Play: Actor-Centered Institutionalism in Policy Research, Boulder, Colo.: Westview Press.
Could Ivory Coast turmoil make chocolate more expensive?

Chocolate lovers everywhere have reasons to be nervous about the political turmoil in Ivory Coast. The West African nation produces nearly 40% of the world's raw cocoa. And without cocoa, of course, there would be no chocolate. Already the wholesale price of this crucial raw ingredient in one of the planet's favourite foods has doubled in the last four years. And that was before the single largest producer of the commodity began its recent slide towards conflict. So will Ivory Coast's problems push up the price of a bar of chocolate in the shops? In some respects they already have. The current stand-off between incumbent President Laurent Gbagbo and Alassane Ouattara, the man held by the United Nations to have won recent elections, follows years of tensions.

'Sapped confidence'

"The tensions have starved Ivory Coast of investment and sapped the confidence of cocoa growers," Laurent Pipitone, an expert in economic issues at the London-based International Cocoa Organisation, told the BBC.

Cocoa highs and lows:
- Dec 2009: $3,510 a tonne
- Nov 2010: $2,666
- Dec 2010: $3,000
- Cocoa accounts for 6-8% of the cost of a chocolate bar

"It takes three years for a cocoa bush to become productive after it's been planted," he says. With the political outlook uncertain, farmers in Ivory Coast have been less willing to take the financial risk and put in the effort required to grow more cocoa, which means the country's productive capacity has gone into gradual decline. This has been one reason why world cocoa prices have risen in recent years. But intriguingly, the general view among analysts seems to be that the latest escalation of political tension will not make matters much worse than they already are. That is partly because of the nature of cocoa production. Ivory Coast's crop is produced by thousands of independent small farmers. The chances are that in the short term they will carry on working, whatever the political environment. "The farmers need the income," explains Mr Pipitone. "They may stop planting new cocoa plants but they won't stop producing with what they've already got," he says. He also believes the growing political crisis will not stop the farmers getting their products to market.

Disease impact

If the normal channels for selling their products get closed off by unrest in the main city, Abidjan, Ivory Coast's farmers will simply move more of their cocoa in small quantities across borders into neighbouring countries where they can sell it, he believes. However, the international price of cocoa has risen about 12.5% since early November as a direct consequence of the problems in Ivory Coast. Cocoa is traded in two places: London and New York. The price, currently around $3,000 (£1,900) a tonne in New York, is still actually a lot lower than it was in the early part of 2010. In New York, the price hit a 30-year high of around $3,510 (£2,350) a tonne in December 2009. In London, the peak came a few months later, in July. At those times the world was facing a real prospect of a cocoa shortage, which made the price shoot up. The key issue then was not so much political uncertainty in Ivory Coast, though that was a factor, but the impact of disease. Ivory Coast is the world's largest cocoa producer, but Ghana and Indonesia are also important players. This time last year, Ghana's cocoa industry was battling against "black pod" and "swollen shoot", while Indonesian farmers were up against "VSD" (Vascular Streak Dieback).
Chocolate lovers will be relieved to know that all these forms of disease appear to be on the wane. Indeed, this year, after a run of poor harvests, Ghana's cocoa farmers have enjoyed a bumper crop. Higher exports from Ghana are expected to partly offset any shortfall from Ivory Coast. Ivory Coast has suffered similar disease issues to neighbouring Ghana, but not to the same extent. Its problems have been more of a political nature. The net effect is that cocoa prices are higher than they were several years ago, partly due to the ongoing impact of tensions in Ivory Coast. But prices are not as high as they were a few months ago, when the main issue was disease in Ghana and Indonesia.

'Changed recipes'

So what does all this mean for the cost of a bar of chocolate? It is hard to know exactly. Cocoa is the ingredient that makes chocolate special, but industry experts say the raw ingredient only accounts for 6-8% of what the consumer pays for the final product. The rest is partly the cost of other ingredients such as sugar and milk, but more importantly it includes manufacturing, distribution, advertising and the chocolate makers' profit. Nonetheless, analysts say high cocoa prices over the last few years have had an impact on the way chocolate is made and sold. It is reported that some manufacturers have changed their recipes, reducing the amount of raw cocoa they use. Others have reduced the size of the products they sell while keeping the price the same. The pricing strategies used by the world's major chocolate makers are, it seems, every bit as complicated and hard to unravel as the political intrigues in Ivory Coast.
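As a rough illustration of the figures quoted in the article (cocoa at 6-8% of a bar's retail price, and a roughly 12.5% rise in the cocoa price since early November), the sketch below estimates how much of that rise could feed through to the shelf price if nothing else changed. The full pass-through assumption is purely illustrative.

```python
# Back-of-the-envelope pass-through estimate using the article's figures.
# Assumes the cocoa share of the retail price stays at 6-8% and that the
# raw-material increase is passed through in full; both are simplifications.

cocoa_share_low, cocoa_share_high = 0.06, 0.08   # 6-8% of the bar's cost
cocoa_price_rise = 0.125                          # ~12.5% rise since early November

for share in (cocoa_share_low, cocoa_share_high):
    bar_price_rise = share * cocoa_price_rise
    print(f"cocoa share {share:.0%}: bar price up roughly {bar_price_rise:.2%}")

# Prints roughly 0.75% and 1.00% -- small next to the move in raw cocoa itself.
```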
This week we did a number of experiments with oil, water, food coloring and various props to explore the properties of surfaces. Physical properties like surface tension and solubility are related to the strength of intermolecular forces -- the attractive forces between molecules.

Surface Tension Experiments

These came from the website of the Chicago Section of the American Chemical Society. You will need 3 bowls or containers with water, a piece of string, and a paper clip.

Pepper and soap:
1. Sprinkle pepper on the surface of cold clean water in a shallow dish. Allow the particles to spread out and cover the surface.
2. Put your finger in the bowl.
3. Put a drop of liquid soap on your finger. Put your finger in the bowl again.
What should happen: Pepper should rush away from your finger in a star pattern.
What did happen: Pepper rushed away from the finger in a circle -- still impressive.

String loop:
1. Float a small loop of string in the middle of the surface of the water.
2. Put a drop of liquid soap inside the loop.
What should happen: The surface tension inside the loop should be weakened by the soap, while the surface tension outside the string should pull the string outward.
What did happen: The string sank before we could try step 2.

Floating a paper clip:
1. Gently try to float small objects, such as a paper clip, on the surface of the water.
2. If they don't float, place a paper towel on the surface of the water, place the objects on the paper, and then remove the paper.
3. Now put a drop of liquid soap on the water surface.
What should happen: As soon as the tension is broken by the soap, these items should sink to the bottom. This one worked as planned!

Liquid layers (2 clear glasses or plastic cups):
1. Pour about an inch of water into the cup.
2. Add food coloring to the water.
3. Pour about an inch of glycerin into the second cup.
4. Gently add the colored water.
5. Add oil until you get three layers.
6. Stir. Allow to settle. The water will mix with the glycerin, but the oil will separate back out.
7. Add a layer of liquid soap.
8. Stir gently. The oil will mix with the glycerin.
What's happening: Different liquids have different densities, and according to their densities, the liquids will settle in a certain order when mixed. Oil is less dense than water and therefore will settle on top of the water.

Salt and oil (tall narrow jar or graduated cylinder):
1. Fill the cylinder with water.
2. Add the food coloring. Do not let the water become too dark.
3. Slowly pour oil into the cylinder. It should make a thick layer on top of the water.
4. Slowly sprinkle salt into the cylinder on top of the oil.
The salt coats the oil and causes it to fall to the bottom of the cylinder in globs. The oil then gradually returns to the top. Vegetable oil is less dense than water; when the salt is added, it sticks to the oil and drags it down. Once at the bottom, the water dissolves the salt and the oil floats back up. The oil doesn't dissolve in the water because of the difference in polarity: water and salt are both polar, while oil is non-polar. Only polar substances will dissolve polar substances; a non-polar substance will not dissolve in a polar substance. This is the rule of "like dissolves like."
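Because the layering in the glycerin, water and oil experiment is set by density alone, the order of the layers can be predicted before pouring. Here is a minimal sketch, assuming typical textbook densities rather than anything measured in this experiment.

```python
# Predict the layer order in the glycerin/water/oil experiment from density.
# The densities (g/mL) are typical approximate values, not measurements
# from this experiment.

liquids = {
    "vegetable oil": 0.92,
    "water": 1.00,
    "glycerin": 1.26,
}

# Denser liquids settle lower, so sort from most to least dense.
ordered = sorted(liquids.items(), key=lambda kv: kv[1], reverse=True)
for layer, (name, density) in enumerate(ordered, start=1):
    print(f"layer {layer} from the bottom: {name} ({density} g/mL)")

# layer 1 from the bottom: glycerin (1.26 g/mL)
# layer 2 from the bottom: water (1.0 g/mL)
# layer 3 from the bottom: vegetable oil (0.92 g/mL)
```

This matches what the experiment shows: oil floats on the colored water, which in turn sits on (and then mixes with) the glycerin.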
While this looks impressive, I do wonder about the climate and energy impacts of all this development. With the construction sector growing at over 11%, is India devising ways to ensure its growth is as friendly to our climate as possible? I took my query to Mili Majumdar, Director of the Sustainable Habitats division at one of India’s most influential environmental organizations, TERI. Mili, who has been a leader in the green building movement in India, told me that there are significant efforts underway to promote clean energy solutions and sustainability in the buildings sector. Not only is there a national energy efficiency code (PDF), and an Indian version of the popular U.S. LEED rating system, there is an entirely home-grown, comprehensive building rating system called GRIHA. I think it is awesome that the word “GRIHA” (which stands for Green Rating for Integrated Habitat Assessment) actually means “abode” in Sanskrit … what an appropriate name!GRIHA is customized for building types that are common in India and for Indian climatic zones (climate often affects how much energy or water a building uses). TERI’s Director-General, Dr. R. K. Pachauri – who also chairs the Intergovernmental Panel on Climate Change – initially suggested the idea of creating a building rating program that was tailored for India’s local conditions. Mili and her team developed the concept into a full-fledged evaluation system that incorporates everything from site planning to waste management, from building systems (air conditioning, lighting etc.) to water use, and from building materials to indoor air quality. What sets GRIHA apart from many other building rating systems is that it also gives special emphasis to the use of renewable energy. Developers get extra points for relying – either in whole or in part – on renewable sources like solar power. Last year, the government of India decided that all new government buildings across India would have to comply with GRIHA, and various Ministries also announced incentives to support GRIHA. Mili said that GRIHA is quickly challenging some myths, such as the notion that a green building costs more. In fact, GRIHA buildings have had little or no cost differential, and some have even had lower construction costs. For the few projects that have had a small initial cost increase of up to 5%, the additional money is recovered almost immediately from energy and water savings that continue over the building’s lifetime. And the coolest part? Mili’s team estimates that GRIHA may be able to avoid nearly 38% of the carbon pollution that would otherwise have been created by building construction, operation and energy use between now and 2030. Saving the climate and saving people money? In Bollywood parlance, that’s a super-hit “double role”! I asked Mili what we should expect from GRIHA as it expands over the next few years. She told me she hopes to draw in a wider range of stakeholders to adopt the rating system (especially the private sector), to live up to expectations that have already been set, to increase awareness about GRIHA, and – most importantly – keep GRIHA credible by maintaining high standards and avoiding “greenwashing.” That sounds, to me, like a recipe for success. More (green) power to her and all the others who are working hard to build a climate-safe, energy-secure future for India!
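To see the arithmetic behind the payback claim above, here is a simple-payback sketch. Only the up-to-5% premium comes from the text; the construction cost and annual savings figures are hypothetical placeholders chosen just to show the calculation.

```python
# Simple-payback sketch for a green-building cost premium.
# Only the 5% premium comes from the text; the construction cost and the
# annual savings are hypothetical placeholders chosen to show the arithmetic.

base_cost = 100_000_000       # hypothetical construction cost (INR)
premium_rate = 0.05           # up to 5% extra for green features (from the text)
annual_savings = 2_500_000    # hypothetical yearly energy + water savings (INR)

premium = base_cost * premium_rate
payback_years = premium / annual_savings
print(f"extra cost: {premium:,.0f} INR, simple payback: {payback_years:.1f} years")

# With these placeholder numbers the premium is recovered in about 2 years,
# after which the savings keep accruing over the building's lifetime.
```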
One entry found for vaudeville.

Main Entry: vaude·ville
Pronunciation: vd(-)-vl, väd-, vd-, -vil
Etymology: from French vaudeville "a humorous song or skit," derived from early French vaudevire "a song that makes fun of something," from chansons de vau de Vire "songs of the valley of Vire"
: theatrical entertainment made up of a variety of songs, dances, and comic acts

Word History: In the 15th century, a number of humorous songs became popular in France. The songs were said to have been written by a man who lived in the valley of the River Vire, which is located in northwest France. The songs became known as chansons de vau de Vire, meaning "songs of the valley of Vire." Other people were soon writing and performing similar songs. Before long, people no longer connected such songs with the valley of Vire. The name chansons de vau de Vire was shortened to one word, vaudevire. Further changes in spelling and pronunciation have given us the modern word vaudeville, which refers not only to humorous songs but also to other forms of popular entertainment.
Research on OCD

$59.95

The paper begins with an annotated bibliography and a review of the research on obsessive-compulsive disorder (OCD). The paper examines these studies' focus on a therapy or pharmaceutical approach and looks at whether the approach was found to be helpful in the treatment of OCD, and what recommendations the author makes in terms of his or her study results. The paper offers the thesis that although a number of studies have been conducted regarding OCD treatment and conclusions are being drawn based on these results, more research needs to be conducted to better refine the data and to consider alternative approaches.

From the Paper: "Obsessive-compulsive disorder (OCD) is a potentially disabling syndrome that can last throughout an individual's lifetime. Those suffering from OCD become enmeshed in a pattern of repetitive thoughts and behaviors that are senseless and distressing, but extremely difficult to overcome. Disagreement exists about the number of people afflicted with OCD. In the recent past, mental health professionals considered it a rare disease, because only a small minority of their patients had the condition. That was because many people with the illness did not seek treatment. However, a survey conducted in the early 1980s by the National Institute of Mental Health (NIMH) showed that OCD affects more than 2% of the population, making it more common than such severe mental illnesses as schizophrenia, bipolar disorder, or panic disorder. The social and economic costs of OCD were estimated to be $8.4 billion in 1990."

Sample of Sources Used:
- Abramowitz, Jonathan S. "Effectiveness of Psychological and Pharmacological Treatments for Obsessive-Compulsive Disorder: A Quantitative Review." Journal of Consulting and Clinical Psychology 65.1 (1997): 44-52.
- Beck, A. T. Cognitive Therapy and Emotional Disorders. New York: International Universities Press, 1976.
- Ellis, Albert. Overcoming Destructive Beliefs, Feelings, and Behaviors: New Directions for Rational Emotive Behavior Therapy. New York: Prometheus Books, 2001.
- Gannon, Walter. "Altering the Brain and Mind." American Journal of Psychiatry 161.6 (2004): 1038-1048.
- Hackman, A., and C. McLean. "A Comparison of Flooding and Thought Stopping in the Treatment of Obsessional Neurosis." Behavior Research and Therapy 13 (1975): 263-269.

Cite this Research Paper:
Research on OCD (2010, August 16). Retrieved June 19, 2013, from http://www.academon.com/research-paper/research-on-ocd-128895/
"Research on OCD" 16 August 2010. Web. 19 June 2013. <http://www.academon.com/research-paper/research-on-ocd-128895/>
>> Stay informed about: Earthing on boats
> A friend that's just bought a canal boat is 'doing' it up and is worried about the electrics, and has heard that boats use positive earth, so he's asked me for advice.
> I know about electronics & house wiring (well, a bit anyway) but have never considered this on a boat, where he'd like to have a computer and TV.
> Now, considering I was going to help him sort the electrics out, what else should I know that's different from 'land' electrics? I tried searching for a helpful UK website but found none.
> Any pointers?
Usually, a boat has no "earth return", so the engine and electrics need a separate connection to the battery negative. The usual system is to have a distribution panel on which the positive is on the switch, indicator-light and fuse (or contact-breaker) side, and all the negatives are fed back to a negative bus bar which is connected to battery negative. The main switch is usually positioned close to the battery or batteries, and if a 2-battery system is used, a blocking diode is used so that the domestic battery can run down to almost zero whilst you still have a full battery for starting. If you can get hold of a copy of "The 12-volt Doctor's Handbook", this is a great starting point. Also, Reading University's website used to have some great stuff on marine electrics, but I don't know if this is still the case.
Intellectual Property Rights and the Right to Participate in Cultural Life
New York Law School
November 1, 2008
Although many contend that human rights law is a justification for intellectual property rights, precisely the opposite is true. Human rights law is far more a limit on intellectual property rights than a rationale for such regimes. In a variety of ways, human rights law requires states to take specific, concrete steps to limit the effects of intellectual property rights in order to protect international human rights. This powerful and emancipatory dimension of human rights law has unfortunately been overshadowed by those who claim human rights as a basis for granting exclusive rights. The U.N. Committee on Economic, Social, and Cultural Rights – the body created to monitor state compliance with the terms of an international treaty called the International Covenant on Economic, Social, and Cultural Rights – is in the process of drafting a General Comment that will interpret the “right to take part in cultural life,” a right protected under Article 15(1)(a) of the treaty. This submission was designed to provide the Committee with an overview of some of the ways in which intellectual property rights can affect this right and what states may be required to do to protect the ability of individuals to participate in cultural life.
Number of Pages in PDF File: 8
Keywords: international human rights, human rights, international law, intellectual property, international intellectual property, right to participate in cultural life, cultural participation
Date posted: September 22, 2009
Assembly: mscorlib (in mscorlib.dll)
Parameter: arrayIndex is the zero-based index in array at which copying begins.
Exceptions are thrown when:
- array is a null reference (Nothing in Visual Basic).
- arrayIndex is less than 0.
- array is multidimensional.
- array does not have zero-based indexing.
- arrayIndex is equal to or greater than the length of array.
- The number of elements in the source ICollection is greater than the available space from arrayIndex to the end of the destination array.
- The type of the source ICollection cannot be cast automatically to the type of the destination array.
Remarks: If the type of the source ICollection cannot be cast automatically to the type of the destination array, the nongeneric implementations of ICollection.CopyTo throw InvalidCastException, whereas the generic implementations throw ArgumentException. This method is an O(n) operation, where n is Count.
Platforms: Windows 98, Windows Server 2000 SP4, Windows CE, Windows Millennium Edition, Windows Mobile for Pocket PC, Windows Mobile for Smartphone, Windows Server 2003, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP SP2, Windows XP Starter Edition. The Microsoft .NET Framework 3.0 is supported on Windows Vista, Microsoft Windows XP SP2, and Windows Server 2003 SP1.
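For readers who want to see the CopyTo contract in action, here is a minimal Python sketch of the argument checks and the O(n) copy described above. It is not the .NET implementation; the function name copy_to and the example data are invented for illustration, and the multidimensional and non-zero-based-indexing checks have no direct equivalent for Python lists, so they are omitted.

def copy_to(source, array, array_index):
    # Rough sketch of the documented checks; "source" plays the role of the
    # ICollection, "array" the destination, "array_index" the zero-based
    # index at which copying begins.
    if array is None:
        raise ValueError("array is a null reference")
    if array_index < 0:
        raise IndexError("arrayIndex is less than 0")
    if array_index >= len(array):
        raise IndexError("arrayIndex is equal to or greater than the length of array")
    if len(source) > len(array) - array_index:
        raise ValueError("not enough space from arrayIndex to the end of the destination array")
    # The copy itself is an O(n) operation, where n is the number of elements in source.
    for offset, item in enumerate(source):
        array[array_index + offset] = item

# Example: copy three items into a pre-sized destination list starting at index 2.
destination = [None] * 6
copy_to(["a", "b", "c"], destination, 2)
print(destination)   # [None, None, 'a', 'b', 'c', None]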
Move over, Moai. Easter Island may now boast another odd claim to fame: a midlife longevity drug. In a new study, researchers report that an antibiotic called rapamycin--after the island's Polynesian name, Rapa Nui--enabled middle-aged mice to live up to 16% longer than their rapa-free counterparts. The discovery marks the first time a drug has been shown to lengthen life span in mammals, even when administered late in life. Scientists first stumbled on rapamycin in soil samples taken from Easter Island in 1965. A bacterium found in the soil, Streptomyces hygroscopicus, secreted the stuff to fend off its bacterial and fungal rivals. Rapamycin has since been used to prevent organ rejection in transplant patients and, most recently, as an antitumor drug. The compound works by inhibiting mTOR, a protein that regulates cell growth and survival. When researchers realized that calorie restriction, which is known to lengthen life spans in mice, also suppresses mTOR activity, they began to wonder if rapamycin might boost longevity as well. Encouraged by earlier studies showing that insects and worms live longer on rapamycin, a trio of labs--the University of Texas Health Science Center at San Antonio; the University of Michigan, Ann Arbor; and the Jackson Laboratory in Bar Harbor, Maine--decided to test the compound on mice. The labs had access to hundreds of mice genetically diverse enough to model human diversity, thanks to the U.S. National Institute on Aging's Interventions Testing Program, which investigates treatments with life-extending potential. Pharmacologist Randy Strong and molecular biologist Z. David Sharp, who headed the study's Texas arm, planned to feed young mice rapamycin and observe the drug's effects as they aged. But by the time the researchers formulated a feed that made the rapamycin stable and easily digestible, the mice had grown old--20 months old, or about 60 human years. Because calorie restriction and other life-lengthening measures work best when started young, Strong and his colleagues didn't expect the experiment to work in midlife. Yet the mice lived 28% to 38% longer than the controls from that point on, the researchers report in Nature, the equivalent of 6 to 9 extra years in humans. Their overall life expectancy rose 5% to 16%. "We were really excited because this appears to be the first drug that slows aging even if it's started later in life," says Strong. Although he and his colleagues aren't yet sure how rapamycin lengthens life, it's thought that suppressing mTOR, whatever the method, prompts the body to hunker down and wait for better times, slowing its growth processes and strengthening its defenses against cell-damaging stressors. The study comes as "a pleasant surprise," says University of Washington, Seattle, molecular biologist Matthew Kaeberlein, who was among the first to propose the mTOR-longevity link. "This tells us the [mTOR] pathway affects aging in mammals ... and probably affects people as well." Don't expect antiaging drugs to hit the market anytime soon, though. Rapamycin is known to raise cholesterol levels and, as a potent immune system suppressant, the compound could make its consumers more susceptible to infections. Kaeberlein hopes future studies will measure the health of rapa-enhanced mice and the effects of varying rapamycin doses, in hopes of divorcing the drug's benefits from its dangers.
Gordon Parks was one of the seminal figures of twentieth-century photography. A humanitarian with a deep commitment to social justice, he left behind a body of work that documents many of the most important aspects of American culture from the early 1940s up until his death in 2006, with a focus on race relations, poverty, Civil Rights, and urban life. Parks was also a celebrated composer, author, and filmmaker who interacted with many of the most prominent people of his era—from politicians and artists to celebrities and athletes.
Posts tagged with Gordon Parks
Familiar as we are with the history of the Civil Rights Movement, the day-to-day realities of segregation and racism sometimes escape us. Gordon Parks photographs the separate world of ’50s African-American society.
Posted on May 25th, 2012
An article on Backpacker Magazine’s website lists “America’s 10 Most Dangerous Hikes.” The one closest to the Adirondacks is Mount Washington in New Hampshire. The mountain is infamous for its fickle and sometimes extreme weather. “Known as the most dangerous small mountain in the world,” Backpacker says, “6,288-foot Mt. Washington boasts some scary stats: The highest wind velocity ever recorded at any surface weather station (231 mph) was logged here on April 12, 1934. And 137 fatalities have occurred since 1849. No surprise: Most are due to hypothermia—and not only in winter. ‘They call them the White Mountains for a reason,’ says Lieutenant Todd Bogardus, SAR team leader for New Hampshire’s Fish & Game Department. ‘We see snow right on through the year.’” Other hikes that made the list include the Bright Angel Trail in the Grand Canyon, the Barr Trail on Pikes Peak in Colorado, the Mist Trail on Half Dome in California, and the Muir Snowfield on Mount Rainier in Washington. Click here for the complete list. So if you were to choose the most dangerous hike in the Adirondack Park, what would it be? Topping my list would be the Trap Dike and the adjacent slides on Mount Colden. A hiker was killed in the dike last year, and several others have been injured on this route over the years. Another candidate would be the Eagle Slide on Giant Mountain. A fall in the wrong place could be disastrous. Both of these are off-trail excursions. Any thoughts on the most dangerous trail in the Adirondacks?
Research Methods Homework Help | Research Methods Assignment Help
The various research methods used for collecting primary data are:
(a) Survey method: The survey is the most widely used method of marketing research. It involves asking respondents to answer questions aimed at eliciting the information needed by the interviewer. A mail questionnaire survey is a very useful survey method when personal interviews are not possible.
(b) Experimental method: This method helps to highlight areas in which problems may arise, where plans are inadequate, and areas which require immediate change. This method simulates a real market situation.
(c) Observational method: This method involves the collection of data through observation, by examining the records and watching the situation and behavior of the subjects under consideration for research. This method yields accurate results.
Tools used in data analysis
The most frequently used tools in data analysis are (a short illustration of the first tool appears at the end of this entry):
(a) Correlation and regression: Correlation analysis is used to measure the strength of the relationship between two variables, and regression is used to model how one variable depends on the other.
(b) Time series analysis: This relates past and present data in order to forecast the future.
(c) Factor analysis: Elements that lead to the problem, or which are under study, are grouped into common factors and analyzed.
For more help in Research Methods click the button below to submit your homework assignment
Our Research Methods Assignment Help Services Include:
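As a concrete illustration of item (a) above, here is a short Python sketch (not part of the original page; the numbers are invented) that computes a correlation coefficient and fits a simple least-squares regression line with NumPy.

import numpy as np

# Hypothetical data: advertising spend (x) and sales (y), purely illustrative.
x = np.array([10.0, 12.0, 15.0, 18.0, 22.0, 25.0])
y = np.array([40.0, 44.0, 52.0, 58.0, 69.0, 75.0])

# Correlation: strength and direction of the linear relationship (-1 to +1).
r = np.corrcoef(x, y)[0, 1]

# Simple linear regression: fit y = slope * x + intercept by least squares.
slope, intercept = np.polyfit(x, y, 1)

print(f"correlation r = {r:.3f}")
print(f"regression line: y = {slope:.2f} * x + {intercept:.2f}")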
Mukti Bahini the later appellation of the forces of the War of Liberation. The immediate precursor of the Mukti Bahini was the Mukti Fauj, which was preceded denominationally by the Sangram Parishads formed in the cities and villages by the student and youth leaderships in early March 1971. When and how the Mukti Fauj was created is not clear, nor is the later adoption of the name Mukti Bahini. It is, however, certain that the names originated from the people who joined the liberation struggle. The Mukti Bahini obtained strength from the two main streams of fighting elements: members of the armed forces of erstwhile East Pakistan and members of the urban and rural sangram parishads.
The earliest move towards forming the liberation army came from the declaration of independence made by Major Ziaur Rahman of the East Bengal Regiment. In the declaration made from Kalurghat Betar Kendra (Chittagong) on 27 March 1971, Zia assumed the title of "provisional commander in chief of the Bangladesh Liberation Army", though his area of operation remained confined to the Chittagong and Noakhali areas. Major Ziaur Rahman's declaration marked a break with Pakistan by the army. On 12 April 1971 Colonel (later General) MAG Osmany assumed the command of armed forces at the Teliapara (Sylhet) headquarters. Osmany was made the commander-in-chief of the Bangladesh Armed Forces on 17 April 1971.
Serious initiative for organising the Bangladesh liberation army was taken between 11 and 17 July. In a meeting of the sector commanders in Calcutta, four important resolutions were taken in consideration of strategic aspects of the war, existing problems and the future course of resistance. These were: (1) The composition and tactics of the combatants would be as follows: (i) guerrilla teams comprising 5 to 10 trained members would be sent to specific areas of Bangladesh with specific assignments; (ii) combat soldiers would carry out frontal attacks against the enemy, and between 50 and 100 percent would carry arms; intelligence volunteers would be engaged to collect information about the enemy, among whom 30 percent would be equipped with weapons. (2) The regular forces would be organised into battalions and sectors. (3) The following strategies would be adopted while carrying out military operations against the enemy: (i) a large number of guerrillas would be sent inside Bangladesh to carry out raids and ambushes; (ii) industries would be brought to a standstill and electricity supply would be disrupted; (iii) Pakistanis would be obstructed in exporting manufactured goods and raw materials; (iv) communication networks would be destroyed in order to obstruct enemy movements; (v) enemy forces would be forced to disperse and scatter for strategic gains; (vi) attacks would be launched on scattered enemy soldiers in order to annihilate them. (4) The whole area of Bangladesh would be divided into 11 sectors.
Regular and irregular forces
The regular forces consisted of three forces: Z-Force under the command of Major Ziaur Rahman, K-Force under Khaled Mosharraf and S-Force under KM Shafiullah. Most of the soldiers came from the East Pakistan Rifles and the East Bengal Regiment. Those members of the EPR, Police and Army who could not be accommodated in these battalions were divided into units and sub-units to fight in different sectors. The irregular forces were those who were trained for guerrilla warfare. In addition, there were also some independent forces that fought in various regions of Bangladesh and liberated many areas.
These included the Mujib Bahini, Kaderia Bahini, Afsar Battalion and Hemayet Bahini. The Bangladesh Navy was constituted in August 1971. Initially, there were two ships and 45 navy personnel. These ships carried out many successful raids on the Pakistani fleet, but both were mistakenly hit and destroyed by Indian fighter planes on 10 December 1971, when they were about to launch a major attack on Mongla seaport. The Bangladesh Air Force started functioning on 28 September at Dimapur in Nagaland, under the command of Air Commodore AK Khondakar. Initially, it comprised 17 officers, 50 technicians, 2 planes and 1 helicopter. The Air Force carried out more than twelve sorties against Pakistani targets and was quite successful during the initial stages of the Indian attack in early December.
Mukti Bahini in the final phase
The liberation forces started carrying out massive raids on enemy fronts from October 1971. After the signing of the Indo-Soviet Treaty in August 1971, India began to demonstrate more interest in the Bangladesh war. And finally, India entered the war on 3 December 1971. In fact, Indian soldiers had already been participating in the war in different guises since November, when the freedom fighters launched the Belonia battle. Because of the geo-morphology of Bangladesh, the war could not be won too swiftly. Even then, Dhaka was liberated in a matter of two weeks, the previous successes of the freedom fighters during the preceding few months having been a major contributing factor. On 16 December 1971, the commander of the 14th Division of the Pakistan Army, Major General Jamshed, surrendered to Indian General Nagra near Mirpur bridge in Dhaka. At 10.40 am, the Indian allied force and Kader Siddiqui entered Dhaka city. That signaled the end of the 9-month-long War of Liberation of Bangladesh. Scattered battles were still waged at various places in the country. The Commander of the Eastern Command of the Pakistan Army, Lt. General Ameer Abdullah Khan Niazi, surrendered to the commander of the joint Indo-Bangladesh force and chief of the Indian Eastern Command, Lt. General Jagjit Singh Aurora. The Bangladesh Forces were represented at the ceremony by Group Captain AK Khondakar. [Helal Uddin Ahmed]
Now in Construction & Building
Two Danish architects have developed the printable home. They design a building, have the 3-D design converted into 2-D cutting files, print the parts out on a CNC milling machine, and assemble the home themselves.
Scott Ward of Stevens Engineers, which has worked on an NHL practice facility and five Division I college facilities, describes how engineering helps create the rinks we all watch our favorite sports on.
Building exteriors soak in the sun all day. The folks at SolarWall have found a simple, inexpensive way to turn those rays into heat for the interior. In essence, a perforated metal skin is built outside the existing wall, creating a cavity. A fan draws the heat from the surface of the metal through the holes and into the building's ventilation system. The system can be put to use in the summer too, cooling interiors by five degrees at night.
"Hispanic Heritage Month"—What's To Celebrate?
(See also by Linda Thom: “Hispanic Heritage Month”—What’s to Celebrate? Part II: Unmarried Mothers And The Coming Underclass)
In the Civil-Rights Sixties, Congress established various ethnic heritage celebrations, including Hispanic Heritage Week, which later lengthened to a month and is now in full swing (September 15 to October 15). To celebrate, each year the Census Bureau releases Hispanic Heritage Fast Facts. Here are the latest figures on the growth of Hispanic residents:
U.S. Population Change
In words: between 1990 and 2010, the population of the United States increased by 60 million people. Driven by lopsided immigration, Hispanics made up 28 million of those additional residents—i.e., almost half (46.8%) the country’s population increase. The Census Bureau projects that, on our present immigration-driven course, Hispanics will comprise 30 percent of our population in 2050, at 133 million.
According to Webster’s New World Dictionary of the American Language, one of the definitions of heritage is “something handed down from one’s ancestors or the past, as a characteristic, a culture, tradition, etc.” So, to determine what Hispanics will be handing down to their posterity, let us look at current Hispanic culture.
According to a recent Census Bureau Press Release (CB11-153), “Education impacts work-life earnings five times more than other demographic factors.” So how well are Hispanics educated? Answer: Sadly, Hispanics on average are very poorly educated. The table below shows numbers from the Census 2010 Annual Social and Economic Supplement of the Current Population Survey. (Calculations by author.) It shows the highest level of education for U.S. residents who are 18 years of age or older, and breaks them out by ethnicity. (Table columns: all residents 18 years and older; 1st–8th grades; 9th–11th grades; total with less than high school.)
This lack of education results in substantially lower earnings. The Census Bureau recently published an analysis of education and estimated earnings. The whole study can be read here: Education and Synthetic Work-Life Earnings Estimates, American Community Survey Reports, Issued September 2011 [PDF]. Table One of the study shows that for those 25 to 64 years of age,
- Hispanics’ median earnings are $19,934 annually;
- Non-Hispanic Whites’ earnings are $31,461;
- Blacks’ earnings are $21,239;
- Asians’ earnings are $30,265;
- other non-Hispanics’ estimated earnings are $21,699.
(These, of course, are computer models of projected earnings).
In September each year, the Census Bureau releases income and poverty statistics. The table below shows 2010 poverty rates by race and ethnicity. In words: The poverty rate for Hispanics is more than double that of non-Hispanic Whites (and Asians). Significantly, the poverty rate for non-citizens is 26.7% and there are just 5.4 million of them according to the report. But the number of Hispanics living in poverty is 13.2 million. That means that most poor Hispanics are citizens, many of them native-born.
Note that Blacks, relatively few of whom are immigrants, have a high poverty rate. In 2010, 17 percent of Blacks over the age of 18 had less than 12 years of education. Over time, educational attainment has improved but despite long residence in this country, they still lag far behind Whites and Asians.
Question: Will Hispanics be different? Answer: no, they won’t.
- Maternal Education
Maternal education is one of the key variables in a child’s physical, social and economic well-being. This is true for all races and ethnicities. Here is a link to a study entitled Maternal Education, Home Environments and the Development of Children and Adolescents, by three British economists. [PDF] According to the abstract, the authors studied “the intergenerational effects of maternal education on children’s cognitive achievement, behavioral problems, grade repetition and obesity.” Their bottom line: A good maternal education improves cognitive ability, reduces behavioral problems, reduces grade repetition, and reduces the number of teen pregnancies and the number of criminal convictions of their children. (It does not appear to affect obesity—for what that’s worth.) The study makes for very interesting reading because the authors do not have a political agenda. The subjects of the study are British and the longitudinal data included the children, their mothers and their grandparents. (The Heritage Foundation has an abundance of literature for the U.S.).
So how well educated are Hispanic mothers? The Centers for Disease Control and Prevention (CDC) collects data from birth certificates from all 50 states. In Expanded Data From the New Birth Certificate, 2008 (Volume 59, Number 7), the CDC states, “Differences among racial and Hispanic-origin groups in educational attainment are substantial. . . . When levels of secondary education are compared, 88.7 percent of white mothers had a high school diploma or higher compared with 77.3 percent of black and 56.3 percent of Hispanic mothers.
“Differences in the levels of advanced education are more pronounced. 33.9 percent of white mothers compared to 12.2 percent of black and 8.3 percent of Hispanic mothers reported having a bachelor’s degree or higher.
“Asian mothers had the highest level of advanced education with 55.2 percent reporting at least a bachelor’s degree. . . .”
In the 2008 final data published by the CDC, Table 14 shows that 41 percent of Hispanic mothers are native-born. In other words, many Hispanic mothers with dreadful educations were born and raised in the USA. Their relative educational failure is home-grown.
And how are their children performing in school? Not well at all. The “Hispanic Heritage” release by the Census Bureau shows the ten places in the country with the highest percentage of Hispanics in 2010. Number 7 on the list is Salinas, California, with 75 percent Hispanic residents. The California Department of Education website shows that Salinas High School’s total enrollment is 2,563 students—of which 82% (1,547) are Hispanic. Of the non-English-speaking students, just 525 speak Spanish. So most of the Hispanic students speak English, meaning they are almost certainly U.S.-born. The “No Child Left Behind” tests in California are called STAR. The California state website shows that in English/Language Arts, only 32% of Hispanic students at Salinas High School scored proficient or above. Only 16% scored proficient in math, 37% in science, and 30% in history and social sciences. Every other racial and ethnic group scored considerably higher than Hispanics.
Salinas is not an aberration. Thus Oxnard, California, is number 8 on the Census list of the ten highest percentages of Hispanic residents. And the Oxnard school numbers are equally appalling. Oh, by the way, Oxnard has a gang problem—just like that in Salinas.
Conclusion: Hispanics, both foreign and native-born, are not faring well.
And their children are not faring well. And their grandchildren are not faring well. That is the “Hispanic Heritage”. On our present course, Hispanic immigrants and their descendants will be a huge underclass in 2050. America does not need a second underclass. America needs an immigration moratorium—now. Linda Thom is a retiree and refugee from California. She formerly worked as an officer for a major bank and as a budget analyst for the County Administrator of Santa Barbara.
The camps are a great way for your child to have an immersion style Spanish experience. Campers use music, stories, crafts, games, contests, skits and plays to learn Spanish. Space is limited to 14 children per camp. Camp one introduces and reinforces basic Spanish: greetings, numbers, colors, food, and clothes. Children will be exposed to key verbs. Children perform several songs and contests for parents at the end of the week. Camp two expands beyond the basic Spanish of camp one, and the children will be preparing for and acting in a play at the end of the week. A great follow-up for camp one. Camp three is taught primarily in Spanish and is focused on increasing oral proficiency and beginning literacy. Children will write and illustrate a storybook in Spanish. July 7-11: Beginners with little or no comprehension. Age 5-10 July 14-18: Beginners with some comprehension - (cont. of camp 1). Age 5-10 July 21-25: Intermediate with good comprehension of basics. Age 6-11
SAN FRANCISCO -- Four authors and an illustrator of children's science books won the 2007 AAAS/Subaru SB&F Prize for Excellence in Science Books for recently published works that promote scientific literacy, are scientifically sound, and foster an understanding and appreciation of science in readers of all ages. The prizes are being awarded for individual books in four categories: Children's Science Picture Books, Middle Grades Nonfiction Science Books, Young Adult Science Book and Hands-on Science/Activity Book. The winners were selected by a judging panel, and will receive a cash prize of $1,500 and a plaque. "These prizes recognize the importance of science books for children and young people that engage these readers by being clever and entertaining, while teaching them about science in the process," said Alan I. Leshner, chief executive officer at the American Association for the Advancement of Science (AAAS) and executive publisher of its journal, Science. The AAAS/Subaru SB&F Prize for Excellence in Science Books celebrates outstanding science writing and illustration for children and young adults. AAAS and Subaru co-sponsor these prizes to promote science literacy by drawing attention to the importance of good science writing and illustration. "Subaru would like to congratulate the award winners for their outstanding contribution to science writing and illustration," said Tim Mahoney of Subaru of America, Inc. "This type of contribution is one that is recognized today, but can be appreciated for generations to come."
The 2007 recipients are:
Children's Science Picture Book
An Egg Is Quiet
Author: Dianna Aston
Illustrator: Sylvia Long
Chronicle Books, 2006
Striking and accurate drawings of all types of eggs, from the very tiny blue crab egg to the hefty ostrich egg, bring this book to life. The beautiful illustrations and simple
U.S. Army airmen destroyed 11,042 enemy aircraft on the world's battlefronts during 1943, while 2,885 U.S. planes were lost, a basic ratio of 3.8-to-1. The War Department, which used "extreme caution" in compiling these official figures, counted an additional 6,942 enemy planes as probably destroyed or damaged. The ratio in various theaters (U.S. Navy figures are not included):
- In Asia and the Southwest Pacific: 6.5-to-1.
- In Alaska and the South and Central Pacific: 2.8-to-1.
- In Western Europe: 4.3-to-1.
- In the Mediterranean: 2.7-to-1.
The real meaning of these satisfactory figures was not always clear from the figures alone. The differences between the ratios in different theaters show variations in the combat...
Meet Glacier, Kootz, Denali, and Sitka, the newest residents of the Wildlife Conservation Society’s Bronx Zoo. The furry foursome—one young grizzly and three brown bear cubs—are orphans, rescued in separate incidents in Montana and Alaska. The bears share a common history—their mothers were killed after habitually wandering too close to humans. The three brown bear cubs are siblings, born in early 2009 on Baranof Island in southeastern Alaska. The Alaska Department of Fish and Game rescued the orphaned trio and temporarily transferred them to Fortress of the Bear, an education and rescue center in Sitka, Alaska. The young grizzly bear, a male from Glacier National Park in Montana, was originally rescued by park rangers and kept for a few days at Washington State University’s Bear Center. The playful bears are busy exploring their new home at the zoo’s Big Bears exhibit, and clearly enjoy each other’s company (see video). “All four bear cubs are healthy and adjusting well to their new surroundings,” said Jim Breheny, Bronx Zoo Director and WCS Senior Vice President of Living Institutions. “We are happy to provide a home for these four animals that would not have been able to survive in the wild without their mothers.” The bears are named for their origins. Of the three cubs, the largest male is named Kootz, which means brown bear in the Tlingit language; the smaller male, Denali, is named for the national park in Alaska; and the female, Sitka, is named after the fishing town where she and her siblings lived for a month after the rescue. Glacier, the young male grizzly, is a year older than the Alaskan bears and named for the national park in Montana where he was born.
WCS Conservation Efforts
The bears remind us of the challenges we face in finding solutions for coexisting with wildlife. As people continue to develop land and build homes in areas that are prime wildlife habitat, encounters between people and bears have become more frequent. WCS conservationists are working in the Adirondacks and the American West to educate the public on how to reduce human/bear conflicts. By raising awareness of the importance of keeping human food away from bears, as well as guiding land use decisions that will minimize rural sprawl, WCS is helping to improve relations between bears and people.
Press Release August 22, 2007 Gene Triggers Obsessive Compulsive Disorder-Like Syndrome in Mice Study Suggests New Treatment Targets Using genetic engineering, researchers have created an obsessive-compulsive disorder (OCD) - like set of behaviors in mice and reversed them with antidepressants and genetic targeting of a key brain circuit. The study, by National Institutes of Health (NIH) -funded researchers, suggests new strategies for treating the disorder. Researchers bred mice without a specific gene, and found defects in a brain circuit previously implicated in OCD. Much like people with a form of OCD, the mice engaged in compulsive grooming, which led to bald patches with open sores on their heads. They also exhibited anxiety-like behaviors. When the missing gene was reinserted into the circuit, both the behaviors and the defects were largely prevented. The gene, SAPAP3, makes a protein that helps brain cells communicate via the glutamate chemical messenger system. "Since this is the first study to directly link OCD-like behaviors to abnormalities in the glutamate system in a specific brain circuit, it may lead to new targets for drug development," explained Guoping Feng, Ph.D., Duke University, whose study was funded in part by the National Institute of Neurological Disorders and Stroke (NINDS), the National Institute of Mental Health (NIMH) and the National Institute of Environmental Health Sciences (NIEHS). "An imbalance in SAPAP3 gene-related circuitry could help explain OCD." Feng, Jeffrey Welch, Ph.D., Jing Lu, Ph.D., William Wetsel, Ph.D., Nicole Calakos, M.D., Ph.D., and colleagues report on their discovery in the August 23, 2007, issue of Nature. "This serendipitous discovery illustrates how pursuit of basic science questions can provide important insights with promising clinical implications into poorly understood diseases," said NINDS director Story C. Landis, Ph.D. "Ultimately, the challenge will be to translate what we learn from this stunning new genetic animal model into help for the 2.2 million American adults haunted by unwanted thoughts and repetitive behaviors," added NIMH director Thomas R. Insel, M.D., who conducted clinical studies on OCD earlier in his career. Previous studies of OCD had implicated a circuit in which the striatum, which straddles the middle of the brain, processes decisions by the cortex, the executive hub at the front of the brain. But exactly how circuit communications might go awry remained a mystery, and glutamate was not a prime suspect. Nor were Feng and colleagues initially interested in OCD. Rather, they sought to understand the function of the protein made by the SAPAP3 gene, which is involved in glutamate-mediated communications in the cortex-striatum circuit. To find out how it worked, they used genetic engineering to generate SAPAP3 knockout mice. The mice seemed normal at first, but after four to six months, all developed telltale bald patches of raw flesh on their faces, caused by compulsive scratching. Videotapes confirmed that the sores were self-inflicted — grooming behavior gone amok. "We were surprised by the magnitude of this phenomenon," recalled Feng. "The parallels with OCD were pretty striking." In a series of behavioral tests, his team determined that the SAPAP3 knockout mice also showed anxiety-like behaviors, often associated with OCD. They were slower to venture into — and quicker to exit — risky environments. 
And like their human counterparts, the animals responded to treatment with a selective serotonin reuptake inhibitor (fluoxetine), which reduced both the excessive grooming and anxiety-like behaviors. SAPAP3 is the only member of a glutamate-regulating family of proteins that is present in large amounts in the striatum. It is part of the machinery at the receiving end of the connections between brain cells, where the neurotransmitter binds to receptors, triggering increased activity among the cells. The researchers found that lack of SAPAP3 genes dampened the increased activity usually caused by glutamate and stunted the development and functioning of circuit connections. When the researchers injected the striatum of seven-day-old knockout mice with a probe containing the SAPAP3 gene, it protected them from developing the OCD and anxiety-like behaviors 4 to 6 months later and corrected the circuit dysfunction. This confirmed that the absence of the SAPAP3 gene in the striatum was indeed responsible for the OCD-like effects. The findings suggest that anxiety-related behavior may stem from the striatum, which serves as a pivotal link between the cortex and emotion hubs. The researchers note that recent genetic studies of OCD have hinted at involvement of glutamate-related mechanisms. Feng's team is also looking beyond the SAPAP3 gene to other related genes in the circuit that could lead to similar behavioral problems. They are exploring how the SAPAP3 gene affects neural communications and how it works at the molecular level — with an eye to possible applications in drug development. Collaborating clinical investigators are exploring whether specific variants of the SAPAP3 gene in humans may be related to OCD spectrum disorders, such as trichotillomania, or obsessive hair pulling — a human syndrome also characterized by bald patches on the head. Also participating in the research were: Nicholas Trotta, Joao Peca, Catia Feliciano, Ramona Rodriguiz, Meng Chen, Duke University; Jin-Dong Ding, Richard Weinberg, University of North Carolina; J. Paige Adams, Serena Dudek, NIEHS; Jianhong Luo, Zhejiang University, China. The research was also funded, in part, by McKnight and Harwell Foundation awards.
Reference: Welch JM, Lu J, Rodriguiz RM, Trotta NC, Peca J, Ding JD, Feliciano C, Chen M, Adams JP, Luo J, Dudek SM, Weinberg RJ, Calakos N, Wetsel WC, Feng G. Cortico-striatal synaptic defects and OCD-like behaviours in Sapap3-mutant mice. Nature. 2007 Aug 23;448(7156):894-900. PMID: 17713528
The mission of the NIMH is to transform the understanding and treatment of mental illnesses through basic and clinical research, paving the way for prevention, recovery and cure. For more information, visit the NIMH website. About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit the NIH website.
I was 4 years old when my mom first tried telling me about Santa Claus. I went directly to the chimney, looked inside, and told my mom there was no way a fat man in a red suit could fit in there. I admit that I was precocious. When my daughter was in 3rd grade, she told me the kids in her class had a big debate over the true existence of Santa Claus. But by 4th grade most kids have it figured out. It’s too bad more adults don’t employ the critical and logical thinking I demonstrated when I was a pre-schooler, and that most kids can manage by the time they are 9. The skeptical Santa. For Christmas I present to my readers the truth and facts that seem to be absent in today’s cable television news and entertainment shows.
Bigfoot is a man in an ape suit.
Cryptozoologists think bigfoot may be Gigantopithecus blacki, a species thought to be extinct. Cryptozoologists believe it is still surviving in North America. Gigantopithecus was an ape closely related to the extant orangutan. In other words it was a giant orangutan. It was the largest known primate to ever live on earth. The genus originally evolved 9 million years ago, but no fossils younger than 300,000 years have ever been found. The only fossil evidence we have consists of 3 lower jaws and 1300 teeth. Based on the size of the jaws and teeth, scientists estimate it was 10 feet tall and weighed 1200 pounds. All of the fossil evidence has been excavated from sites in China and Vietnam. None come from North America. The dentition suggests it was dependent upon bamboo for sustenance. A reconstruction of the extinct Gigantopithecus blacki. It was quite a beast. The reconstruction of this species is also quite a stretch–no limb bones or skull of this species have ever been found. It co-existed with Homo erectus for at least 500,000 years. A viable breeding population of a species this large in North America could not exist unknown to modern science, especially in this day and age of trailcams and satellite and aerial photography. The impressions of the feet are easily faked. Moreover, a big ape like Gigantopithecus was a knuckle-walker and wouldn’t leave footprints such as those that are faked. It is obvious that Bigfoot is a hoax. Bigfoot is a man in an ape suit. This famous photo is actually a Hollywood stuntman in an ape suit. And here’s the man, Bob Heironimus, along with the suit that fooled so many gullible people. Bob Heironimus admitted that he was the one in the ape suit in the famous film frequently shown on television. He passed a lie detector test. The latest hoax about Bigfoot comes from a veterinarian who claims she has DNA evidence of a man-ape hybrid. Her results haven’t been published in a peer-reviewed journal. Don’t hold your breath.
There will never be a zombie apocalypse–it is not physiologically possible.
Zombies have become popular fodder for fiction recently. I like to watch those tv shows and movies and read those books too, but the Discovery Network sank to a new low last week when they aired a special supposing what would happen if there were a real zombie apocalypse. They had scientists proposing that a virus could actually cause zombie-like symptoms. No way. There is no virus that can re-animate a person. In fact, there is no viable scientific explanation for reanimation. Zombies are a subject of supernatural horror, not realistic science fiction. Supposedly, zombies don’t breathe, nor do they have circulating blood.
Yet somehow, they can walk, use their senses to detect where potential prey is located, and attack and eat. This is not biologically possible. Without oxygen, human muscles will not work and no brain function whatsoever is possible. Just watch an MMA fight. On occasion an MMA fighter will get caught in a sleeper hold. Usually it takes less than a minute of asphyxiation to cause them to lose consciousness. Zombies never breathe, yet they can move their muscles indefinitely. There is a general lack of logic in zombie fiction as well. Supposedly, there are endless masses of people who have become zombies. This is simply an excuse to show violence, as survivors kill this infinite supply of zombies. An endless swarm of zombies is not logical. Look outside. How many people do you see walking in the street right now? I see no one. The vast majority of people would turn into zombies while they were inside houses, buildings, hospitals, and cars. There would be no mass swarmings of zombies. They would be trapped inside, unable to get out because, according to zombie literature, they can’t turn door handles (though they can detect prey). The survivors in zombie apocalypse fiction always choose to do the exact opposite of what a logical person would do. I’d head for Cumberland Island, where there are no people and no bridges that would enable zombies to cross the intracoastal waterway. But instead, these fictional survivors always head toward prisons, hospitals, army bases, and malls–where there would be lots of zombies. The whole premise of a zombie apocalypse is totally illogical.
Religion is a business scam that should be taxed.
There is an organization that has gone to court to try to end the IRS rule prohibiting political speeches in churches. Endorsing a political candidate can endanger a church’s tax exempt status. I agree with this organization–they should be allowed to endorse candidates. But they should be taxed anyway because all religions are nothing more than a business scam. Religions make money by promoting lies invented by a bunch of old men beginning about 2,700 years ago. Preachers, priests, and rabbis all profit from brainwashing people into believing archaic myths. I don’t know if there is a God or not but I know he’s not an invisible Jewish rabbi who let himself be tortured to death, so he could die for everybody’s sins, then reanimate like a zombie for a few days before ascending to heaven to sit next to himself. If that sounds mixed up and illogical, realize I’m only restating in different words what the bible claims. Anybody who takes the bible seriously is more gullible than people who believe in the existence of bigfoot.
Patches on foot soles cannot detoxify heavy metals from the human body.
Not long ago, I saw a commercial on some late-night cable channel for a product that supposedly could detoxify heavy metals from the human body. It was a patch, not unlike gauze, placed on foot soles. I rolled on the floor laughing at this absurdity. Less than a month after they began showing this commercial, I read the FDA had ordered this company to stop making this unproved claim. I wonder how many gullible souls wasted money on that joke.
The NRA is a terrorist organization.
The National Rifle Association is a terrorist organization. Every year, guns kill more people in the U.S. over a 4-month timespan than the terrorists did on 9-11. All of the NRA’s arguments against gun control are illogical and lack critical thought. Here I debunk each one. 1. “Guns don’t kill people. People kill people.
If a killer doesn’t have a gun, they’ll use a baseball bat or a knife.” Yeah, but people using guns kill an average of 10,000 people a year in the U.S. It’s a lot easier to pull a trigger than to manually have to stab someone with a knife or hit someone with a baseball bat. Guns are more effective too. There is no way a man with a knife could have killed 20 schoolchildren and 6 teachers at the Newtown, CT, school. Someone would have been able to physically stop the wimp, and probably before he killed anyone.
2. “If guns are outlawed, only outlaws will have guns.” This argument is simply a lie. The police will still have guns. Are the police outlaws?
3. “Cars kill more people than guns.” This is the stupidest argument of all. The purpose of a car is transportation. The purpose of a gun is killing. As the lyric to the Lynyrd Skynyrd song “Saturday Night Special” goes, “Handguns are made for killing. They ain’t no good for nothing else.” We need cars to survive in the modern world we live in. We don’t need guns to survive.
4. “Guns keep me safer.” Gun owners are over 4 times more likely than non-gun owners to die from gunshot wounds.
5. “If the principal had a gun, he could have prevented the school shooting in Newtown.” Two armed policemen were present at the Columbine school shooting. They could not stop it.
6. “Owning a gun is protected by the 2nd amendment.” Before talking about gun control, every politician, even those favoring tougher regulations, always has to make the statement that they support the 2nd amendment. That shows how much the NRA terrorizes our politicians. Stating support for the 2nd amendment is nonsensical because the 2nd amendment is poorly written and ambiguous. From the way the 2nd amendment is written, it is impossible to determine whether the founding fathers meant gun ownership was a collective or an individual right. The founding fathers used plural words such as “militia” and “people” when referring to gun rights. One could make the argument that gun ownership is merely a collective right, not an individual right. The 2nd amendment does not use the word “both,” debunking the claim that it protects both the collective and individual right to own guns. Moreover, all they had then were single-shot muskets and pistols. If these gun rights nuts want to take a strict interpretation of the constitution, the only guns that should be legal are single-shot muskets and pistols.
7. “The assault weapons ban didn’t work before.” That ban was too weak and not strictly enforced, but that’s beside the point. We have laws against murder, but that doesn’t stop murder. According to the NRA’s logic, we should legalize murder because laws against it don’t work.
8. “The majority of gun owners are responsible.” The majority of people in this country believe God is an invisible rabbi who sees all, and many also believe in the existence of bigfoot and angels, the possibility of a zombie apocalypse, and that patches on foot soles can detoxify heavy metals. Most are in huge credit card debt. Americans are not a responsible, rational people.
In response to the Newtown shootings, Senator Jay Rockefeller wants the National Academy of Sciences to study violent video games. Rockefeller is too cowardly to attack the real cause of the violence in America–the availability of handguns and assault rifles. Instead, he wants to scapegoat video games because it’s much easier than taking on the terrorist organization known as the National Rifle Association.
Hey Jay–it wasn’t video games that killed 20 schoolchildren in Newtown…it was assault rifles. Gun control proponents are wrong about one thing. They claim there is no hunting use for rifles with high capacity magazines. Rednecks in Texas like to shoot into herds of feral hogs with these weapons. Here’s Ted Nugent using a machine gun to shoot into a herd of feral hogs from a helicopter. Not exactly sporting either. Ted Nugent is a sadistic howling maniac. In my opinion all assault rifles and handguns should be outlawed. Gun deaths in the U.S. are 20 times higher than the average rate in the rest of the developed world. The only legal firearms should be single shot rifles and shotguns for hunting. A single shot firearm still gives an unfair advantage to man over beast, but at least it’s not the glorified slaughter as depicted in the above photo.
The Biodiversity Research and Teaching Collections (BRTC) (formerly Texas Cooperative Wildlife Collection) was established in 1938 by the late Dr. William B. Davis, founder of the Department of Wildlife Management (later Wildlife and Fisheries Sciences) at Texas A&M University. The collections within the BRTC serve as historical and modern evidence of the distribution of wildlife in Texas, and provide valuable ecological and life history information for an array of vertebrate species. The collections are used in the research of Texas A&M faculty, graduate students, and scientists worldwide, as well as for the teaching of natural history, conservation and wildlife management, both within the university and in public schools. The Biodiversity Research and Teaching Collections primarily documents the faunal history of Texas, the United States, Central and South America, and the Gulf of Mexico. The BRTC preserves over one million specimens and their associated historical documents, so as to assure their accessibility to current and future generations. Historically the BRTC has been an invaluable source of data for researchers in the fields of biodiversity, vertebrate evolution, endangered species, wildlife and fisheries conservation, and even forensic biology. This information is also made available to the public, to increase awareness of the natural history of Texas, thus enabling the citizens of Texas to make better-informed decisions affecting their natural environment.
Operated by the department for the Texas A&M AgriLife Experiment Station, this facility is dedicated to research and teaching that promotes development of a sound biological basis for warmwater aquaculture and aquatic ecology. Amenities include laboratories, hatcheries for red drum and other species, and a 36-pond complex, all within 10 miles of the central campus. Laboratories are equipped with extensive flow-through and recirculating tank systems, comprising more than 200 units, and with a variety of modern research equipment for work in areas of nutrition, bioenergetics, environmental physiology and developmental biology.
NSF Biosystematics and Biodiversity Center
Established by a grant from the National Science Foundation, with matching funds from the university and the Texas Agricultural Experiment Station, the center provides cutting-edge technical capabilities for research in systematic biology and fosters increased scholarly interaction among faculty and students. Although the Department of Wildlife and Fisheries Sciences operates the center, principal investigators include faculty members from the Biology and Entomology departments. Occupying almost 1,500 square feet in the old Herman Heep Building, the center houses three flow cytometers, including a Coulter Elite flow cytometer and cell sorter; a digital imaging system for computer-enhanced karyotyping; and an image analysis system for morphometry of structures ranging from subcellular to exomorphological.
Ecological Systems Laboratory
The Ecological Systems Laboratory promotes formal exposure to systems analysis and simulation as an integral part of the training of professionals and academicians involved in ecological research or natural resource management. Systems analysis refers both to a general problem-solving philosophy and to a collection of quantitative techniques, including simulation, developed specifically to address problems related to the functioning of complex systems.
Workshops on the use of systems analysis and simulation in ecology and natural resource management, cosponsored by WFSC, the International Society for Ecological Modeling, and a host institution, are offered in either English or Spanish.
In 1925, Alain Locke published his famous anthology entitled The New Negro. In the essay he wrote to frame the moment as he saw it, also entitled "The New Negro," Locke described the landscape of Harlem as filled by different notions of what it meant to be a black American. In many renderings of this moment, historians and intellectuals remember the Renaissance as typified by disagreements and antagonism. However, this project subscribes to Locke's rendering of this movement. Rather than understanding these seemingly disjointed expressions of life as distractions from a unified black American agenda, this project understands this diversity as the catalyst of the movement and a source of wealth for black American history. The reason why Locke is a reliable figure to evoke for this study has much to do with his upbringing and professional success. Alain Leroy Locke was born on September 13, 1886, to Pliny Ishmael Locke and Mary Hawkins Locke in Philadelphia, PA. As a child, young Locke attended Central High School in Philadelphia. As a young man, Locke seemed set apart for greatness, excelling as both a leader and a scholar. After high school, Locke enrolled in Harvard College in 1904 as a student of philosophy, where he was spurred on by prolific minds such as Josiah Royce, Hugo Munsterberg, George Santayana, and William James (www.alainlocke.com). By the time he finished schooling, he had been elected to Phi Beta Kappa, been named a Rhodes Scholar from Pennsylvania, studied at Oxford University, and earned a Ph.D. in philosophy from Harvard University. (Locke, New Negro: 415) As time would show, Locke would carry these successes into helping define an emerging shift in the way black Americans would define themselves. The summation of such a storied career won Locke a professorship at Howard University from 1912 to 1925. Locke's major contribution to the world of black art and letters came primarily in his work as a writer and anthologist. As a writer, he wrote many influential articles such as "The New Negro", "Negro Youth Speaks", "The Negro Spirituals" and "The Legacy of Ancestral Arts," all of which were published in his groundbreaking anthology The New Negro. These works shaped the manner in which black American artists and academics viewed themselves, emphasizing both the humanness inherent in black people, through reference to the diversity of voices and talents in black America, and their essential connection through "legacy" to the African continent. He also authored a book entitled Race Contacts and Interracial Relations (1916). Locke's deftness at articulating a dominant feeling among black Americans manifests itself most notably in his essay "The New Negro". As this was also the title of his famous anthology, he used this essay to draw out themes in the varied works he anthologized. This tactic helped him to articulate his notion of black America as a symbiotic continuum, which he saw as most evident in the development of black Harlem. Locke's primary goal in the essay "The New Negro" is to migrate from monolithic notions of an "Old Negro", as well as from the exhausted frameworks of bourgeois intellectual black leadership, toward an idea that gives creative agency and credibility to the "rank and file" of Negro life (Locke, New Negro: 6). His motive here is to posit the idea of a "New Negro" as a means of rediscovering individuality of voice in the context of community.
He employs metaphors of movement to represent that this New Negro "transformation" is an essentially American phenomenon of reinvention through transplanting. Locke's essential project is one that seeks to expand the parameters of what Negro leadership is. Locke essentially debunks the way Americans remember the Negro past in order to redefine and relocate what leadership is as well as who is eligible to lead. In order to unpack these propositions, one must ask both what role the Old Negro plays in the production and presentation of the New Negro, as well as what makes this movement distinctly American. We can find answers to these questions by viewing how Locke sets out to represent the nature of this novelty. Interestingly, Locke posits monolithic notions of the "Old Negro" as "more myth than a man" and the blind acceptance of this "formula" against ideas of "the thinking Negro" and the true diversity of actual human beings (Locke, 3). This move is significant because Locke uses this idea to create space for a more accurate representation of the Negro community in light of the antecedent ideological poles of moral leadership and imaged blackness. He works from the established leadership of the "Sociologist, the Philanthropist, the Race Leader" and states that such individuals have in their laps a "changeling" (Locke, 3). This language denotes a shift in the focus of the African-American uplift project from a trickle-down notion to a more inclusive, more corporeal effort. A useful moment that evidences what Locke is doing in this essay is the light in which he casts Garveyism. To the likes of Du Bois and other black intellectuals, Garvey was nearly embarrassing to their tradition of respectability and moral suasion. Strikingly, Locke states that though fleeting and ostentatious, Garveyism's focus on "the possible role of the American Negro" in "the future development of Africa" is both "one of the most constructive" and "universally helpful missions that any modern people can lay claim to." (Locke, 15) He continues from this point to talk about "constructive participation" and "group incentives" as means to incorporate this controversial yet undeniably relevant voice into the greater fabric of black American leadership. Given that Garvey was viewed so poorly by bourgeois leadership because he favored loyalty over skill, for Locke to cast him in a favorable, if not outright positive, light, while himself a part of the bourgeois intellectual class, worked to expand the notions of what was credible black leadership. A useful interpretive metaphor to describe this phenomenon is that of the Biblical sense of a "body". In this respect, cabarets, churches, black intellectuals, artists, the working class and Harlem's assorted professionals were all a part of one continuous corpus. The significance of this metaphor is in how it suggests that the Negro race is only "fitted and held together" by "what every joint supplies". Consequently, the "peasant" and the "professional" were equally valuable and necessary to the success of Harlem. Locke contends that Harlem's ability to help the diverse portions of Negro life to find one another was indeed its greatest wealth.
Loneliness May Be Catching TUESDAY Dec. 1, 2009 -- A new study suggests that lonely people attract fellow "lonelies" and influence others to feel lonely, too. "Loneliness can spread from person to person to person -- up to three degrees of separation," said James H. Fowler, co-author of the study published in the December issue of the Journal of Personality and Social Psychology, and professor of political science at the University of California, San Diego. "What this means is that if I don't know anything about you, but I know your friend's friend is lonely, then I can do better than chance at predicting whether or not you will be lonely," he said. Indeed, the study suggests that not only is loneliness contagious, but lonely people tend to isolate themselves in small groups that somehow compound or increase those feelings of solitude. According to Fowler, the data suggests that the average person feels lonely about 48 days a year, but for the lonely, that feeling can be ever-present. In addition, the study indicated that people who felt lonely were more likely to be friendless, or constantly shedding friends, a few years later: Compared with those who are never lonely, lonely people can lose about 8 percent of their friends over a four-year period, for instance. Fowler co-authored the findings, funded by the U.S. National Institute on Aging, with John T. Cacioppo, professor at the University of Chicago, and Nicholas A. Christakis, professor at Harvard University. The researchers worked with more than 5,100 participants who were the offspring of the original subjects of the landmark Framingham Heart Study. The team constructed graphs tracking the participants' ongoing friendship patterns over two to four years. They found that, among neighbors, an increase of loneliness of just one day per week triggered a rise in loneliness among neighbor-friends, as well. And that loneliness actually spread throughout the community as affected neighbors saw each other less, the researchers said. Women appeared more vulnerable than men to "catching" loneliness, the researchers found. Mark R. Leary, professor and director of the social psychology program at Duke University, whose work zeroes in on the need for social acceptance, called the study impressive in its sample, analysis and conclusion. He added that the contagion of loneliness could be, to some degree, a situation of people mimicking the styles of those around them. "Non-lonely people who are exposed to lonely people may make others in their network a little more lonely by behaving in these less-affirming ways. Perhaps this is why the effect of loneliness can be seen at three degrees of separation. My friend has a lonely friend, so my friend starts acting less affirming overall, which makes me act a little less positively, which then affects my other friends." So what can be done to help the lonely, to integrate them better with others? Leary suggested that those who interact with lonely people recognize that their tendency to pull inward emotionally and be less outgoing is a trait of loneliness, not of something else. "It reflects loneliness and a need for connection, rather than indifference, dislike or rejection. People can reach out to their lonely loved one rather than withdraw themselves," he said. Fowler agreed. "For the mental health provider, this means treating not just the patient, but potentially also the patient's friends," he said. 
"For the employer, this means emphasizing activities that help their employees to connect to one another socially. For the family member, this means you should tend to your own networks, too, while you help your kin feel more connected." For more on emotional health, head to the American Academy of Family Physicians. Posted: December 2009
Patrick Lenahan joined the 22nd Regiment of Foot on 15 March 1775, when the regiment was recruiting to full strength in preparation for embarkation for America. The Irishman was also a tailor, and it wasn't long before he was working with Watkins. Normally, regiments in America received new regimental clothing (coats, and cloth for waistcoats and breeches, as well as buttons, buttonhole lace and other finishing materials) in October or November. The tailors then had the winter to make waistcoats and breeches, and fit the coats to the soldiers. The effort required to tailor-fit each garment was well spent, because the clothing was expected to last for a full year and then still be usable for off-duty and fatigue use. Clothing that fit properly would wear properly, provide the best comfort when on duty, and offer the best defense against inclement weather. Well-fitted clothing was not a matter of form but of function. The tailors of the 22nd Regiment may not have been so busy in the winter of 1775 because the regiment's new clothing, along with that of the 40th Regiment, had been captured when the ship carrying it sailed into Philadelphia in August 1775. This was due to a poor understanding of the political and military situation early in the war; the 22nd and 40th had originally been ordered to New York and were diverted to Boston when they arrived off the American coast in June. The ship with the new clothing left Great Britain several weeks after the regiments and passed, literally in the night, a British warship stationed to divert shipping from ports that were not under British control. Work was nonetheless available for the tailors. At the court martial of another soldier in Boston, John Watkins testified that he cut out a suit of brown clothing for an officer of the 22nd as well as making a greatcoat for the officer. He also cut out a surtout (a type of overcoat) for the officer's servant. Patrick Lenahan testified that he assembled the surtout in early December. Presumably they were paid for this extra work, which fell outside their work on regimental clothing. The fact that Watkins cut out the garments indicates that he was the more experienced tailor, able to measure and pattern the garments, while Lenahan's being tasked only with assembly suggests that he was newer to the trade. Two years later, Lenahan was sent from Rhode Island to Philadelphia to join the 22nd Regiment's light infantry company, which had sustained a number of losses in the 1777 campaign. That he was chosen for this active, campaigning company shows that his work as a tailor did not detract from his fitness as a soldier. Unfortunately he would not remain long in this new role. He died on 18 September 1778 of unknown causes. John Watkins enjoyed a much longer career. He served for the remainder of the war in America and returned to Great Britain with the 22nd Regiment, finally taking his discharge on 6 June 1785 after over 19 years of service. He received an out-pension because he was 'worn out & rheumatic' and signed his own name on his discharge. Like many campaigners, though, Watkins was not done with the army. On 4 January 1788, at the age of 48, he joined an invalid corps on the island of Jersey, a unit that garrisoned and maintained military installations. He continued in this corps through 22 August 1800, when he was once again discharged and pensioned, this time at the age of 60.
This vocabulary forms a language variously identified as sequential art, graphic storytelling, pictorial stories, visual language or comics. Whilst scholars have yet to unite on a term to define the language, the communicative tools of that language have been formalised in works by authors such as Will Eisner, Scott McCloud and Mort Walker. Creative team A comic book's creative team (or sometimes creators) generally refers to the same individuals: those responsible for the specific creation of a particular book or story. However, "creators" can also refer to the individuals who first wrote/drew a particular character or title. For example, the character of Superman was created by Jerry Siegel and Joe Shuster, but while they are that character's "creators," they are not per se the creators/creative team of every title featuring him. The "creative team" usually refers to two main roles, with around four subsidiary ones. Primarily, the term refers to the writer and artist. The latter term is usually used to refer to the penciler, but also includes the roles of inker and colorist. There is usually also a letterer involved in the hands-on "creation" of a comic book, and then an editor behind the scenes. Any combination of these people (that includes the key roles of writer and artist) can reasonably be said to refer to a "creative team": - The term "describes the individual(s) who created the comic book in question. A writer, artist, letterer, and editor will usually be credited in the comic book. Note that these functions can be performed by one or more people, acting collectively or individually. A comic book may have one writer and multiple artists, for example, or may be the creation of a single person." The complete creative team on a small press, independent or self-published comic will likely be smaller than that on a more mainstream title. At its most basic, the creative team can see just one person filling every necessary role; at its most complex it includes a considerably larger group. - "The writer, naturally, writes the comic book. This is not, however, limited to "writing the words in the balloons," as many newcomers often think, but rather requires developing and putting down on paper the entire story in such a way that the artist can then interpret it into visuals for the reader. It is possible to have multiple writers on a single comic book. Sometimes one writer will plot the comic, and a second will write the dialogue after the fact. In other cases, many writers may plot a comic book together, with one of them (or another writer) supplying the dialogue." Moreover, the writer can be two (or more) individuals working as a team. The writer (or writers) often produces a full script of the comic, frequently in a panel-by-panel, page-by-page form to guide the artist, detailing dialogue, actions, thoughts, motives and expressions, similar to a screenplay. Some writers such as Alan Moore are famed for describing precisely how the artist should draw each individual part of every scene. However, some writers create extremely sparse scripts, giving the artist considerable creative latitude. The "Marvel Method" (below), instigated by Stan Lee, saw the writer's role reduced solely to writing the words, leaving the burden of storytelling with the artist. Comics are usually a collaboration between different individuals, typically a writer and one or more artists.
Broadly, the term "artist" is used interchangeably with penciler, since it is almost always the penciler who produces the initial artwork and provides the bulk of it. Hence the writer and artist (penciler) are usually said to be the exclusive authors of a particular comic, even though there are usually other individuals involved in the creative process. A general breakdown of a comic's "artists" includes three roles, often - but not always - provided by three separate individuals/teams. Allyson Lyga puts it thus: "Historically, the three functions were performed by three individuals separate from the writer and were distinct job titles." The three titles/roles/jobs are: The role of the penciler is, for most comics (excepting, for example, fully painted comics), the primary artistic chore, and hence the penciler is usually referred to as the "Artist". The term refers to the fact that initial artwork is done in pencil, so that mistakes can be corrected and so that the layout is not set in stone immediately. Some artists choose to work in ink immediately; some do their own inking; many nowadays draw on a computer via a Wacom tablet, sidestepping the need for actual pencils in favour of digital pencils: - "A penciler does the initial work of laying out the page based on the script. He or she creates each panel, places the figures and settings in the panels, etc." The role of the inker is to supplement and enhance the pencil artwork. The purpose of inking was initially to define the artwork both for the colorist and for the printing process. Many artists work with a single inker, or small group of inkers, who best accentuate their art style. Some artists also provide their own inks, but the separation of the roles both speeds up production (particularly vital on a monthly schedule) and allows a buffer stage for editorial or other art changes. With the advent of computer art, some modern "pencil" computer artwork is "digitally darkened" (i.e. the gray pencil lines are darkened to imitate the inking stage), bypassing the role of an inker. This ignores the role of the separate inking stage (and often the separate individual who carries out the inking), where artwork can be added to or refined: - "The penciled pages are then passed to an inker, who uses black ink to render the pencils into fuller, rounder tones. The inker usually adds depth and shadow to the images - a good inker will bring out and enhance the strengths of a penciler's artwork." The role of the colorist is to add color to the artwork, either by hand or on computer. Historically, the colorist (and the inker) would work directly on the original artwork, but modern advances mean that the coloring (and sometimes inking) is now done digitally on a computer, and hence can be refined and changed with comparative ease. The colorist will often make the ultimate decision over palette (color scheme), adding to the tone of the book. "Muted", "pastel" and "Technicolor" color schemes can change the whole tone and feel of a comic, and coloring is a key part of comics production, despite being arguably the most overlooked artistic role. The artistic roles on comics can, of course, be filled by any number of individuals, from one to an almost indefinite number. The writer-artist (e.g. Will Eisner, Frank Miller, Darwyn Cooke) typically combines the two roles of writer and penciler, but can incorporate the role of inker as well. More commonly, the pencils and inks may be produced by the same individual on some, or all, of their projects.
Some comics feature more than one writer, penciler, inker or colorist, and some feature individuals (often uncredited) who "ghost" work in the style of a particular artist, or work as a background artist, working with (or for) an artist who might only draw certain figures, panels or scenes, leaving the rest to their collaborator. The role of colorist is more likely than the other artistic roles to be filled by a separate individual, although some artists do provide their own colors, most typically those who work digitally. (In addition: Some "[c]omics are produced in black and white, with gray tones instead of colors, etc. Some comics are painted in full color, rendering further artists moot.") In recent years, studios and companies such as Digital Chameleon and Comicraft have been credited with, for example, colors and lettering, with the credit given to the company/studio and not a specific individual within that company. The role of the letterer is usually separate from the role of writer and (all individuals under the catch-all term of) artist, and refers to "[t]he individual who places word balloons and captions on the finished artwork and fills them with words based on the script." Typically this is the last stage in a comic book's production, although the letterer may liaise with the artist initially to make sure there will be space to fit the speech bubbles into the artwork without obscuring too much/any of it. - "Letterers also often provide the sound effects prevalent in comics though sometimes the artist will render them." The lettering in a comic is usually designed to be unobtrusive, and in some cases (e.g. The Sandman) is used cleverly to differentiate between characters. Richard Starkings' company Comicraft provides lettering services (and also supplies digital fonts for letterers), performed by a number of individuals under a collective credit. The editor of a particular comic (and there are usually, for larger companies at least, many tiers of individual editors, group editors and head editors) is the "individual charged with the editorial functions," typically implying that the editor has "broad control over the content and direction of the story... shepherds the creative process, or may function as an "extra set of eyes" to catch errors and glitches in the process" or any combination thereof. The editor is often the individual who shoulders the responsibility, be that for continuity errors or story glitches, late comics or (perceived or actual) mismanagement of a creative team. Some editors work by dictating the broad or specific direction of a title or story arc, while others give their writers/artists free rein. Typically the editor on a work-for-hire project (i.e. one where the writer/artist does not own the character whose story they are telling) will have more direct influence over and input to the story than would the editor of a creator-owned title. Many comics have a specific editor overseeing a specific story, who is answerable to a "Group Editor" who may be responsible for a number of titles, perhaps linked in theme. This individual will ultimately be answerable to an Executive Editor/Editor-in-Chief. "Cartoonist" is a very broad term, sometimes "used by many artists who perform multiple tasks in the creation of a comic, including the writing... with the implication that the work is predominantly the creation of a single vision."
In mainstream comics, however, "cartoon art" is seen as less realistic, so the term "cartoonist" is usually best applied to representative artists and artwork (i.e. newspaper strips such as Garfield and Peanuts, and many alternative comics, where the characters cannot be said to have an overly realistic look to them), and so applies to relatively few modern, mainstream comics. Comics, comix and graphic novels Comic book The generic and most common term for the individual issues of a particular series, and the format in which they are presented: - "Traditionally, a comic book was a stapled, magazinelike product that told a serialized story or anthologized many stories over a period of months and years. The term has evolved to describe any format that uses the combination of words and pictures to convey a story, and thus is accurate when applied to both the medium itself and the periodical form. As a result, all graphic novels are comic books, but not all comic books are graphic novels." The term is sometimes seen as awkward, not least because of the cultural baggage tied up both in comics' origins in childish, humorous stories (hence "comic") and in the negative associations forced upon them in the 1950s by Senate hearings and Fredric Wertham. Comics are thus often seen, alternately and in contradictory fashion, as solely for children and as not for children at all. Comic strip Comic strip generally refers to daily and Sunday newspaper features widely syndicated during the past century. Many comic strips have been collected into comic books, paperbacks and hardcover books. - "The plural of comic is typically used as a singular (as "politics" is) to refer to the entire medium or industry. Hence "comics industry" or "comics creators." This is usually employed to avoid the unintentional consequences of using the adjective "comic," which implies comedic content. (For example, "the comic industry" might be misinterpreted as meaning "an industry that is funny," while "the comics industry" can mean only "the industry that creates comic books.")" - See: Alternative Comics, Independent Comics Underground (and Alternative) Comics or Comix were born both out of the general 1960s counter-culture and in part as a reaction to the Comics Code-induced censorship of all mainstream comics, inspired by the hysteria surrounding Dr Wertham's seminal book Seduction of the Innocent (1954), which was interpreted as labelling comics partially or wholly responsible for juvenile delinquency: - "A word coined in the 1960s to describe titles that nowadays would be considered alternative. The comix were titles created as a reaction to the juvenilization of comics compelled by Congress in the 1950s." As a result of the sanitisation of mainstream comics, most underground comix dealt with themes of sex, drugs and politics, and were propagated through so-called head shops. Pamphlet, Periodical, Monthly, Floppy, Single, Issue These terms are all used interchangeably by fans to describe individual comics. They: - "describe the original comic book form, that of a slim, magazinelike periodical, not designed for long-term use or wear and tear. The term [Pamphlet] is considered derisive by some." Periodical evokes magazines, and is less widely used, but arguably most accurately describes the serial nature of on-going comics.
Monthly likewise is tied to the historically common release time of most comics, while floppy simply describes the spineless nature of comics (as opposed to collections), and single denotes their place as parts of continuing narratives. "Comic" is still a simpler term. Graphic novel ("GN"), OGN The term "Graphic novel" (simply put, a novel conveyed in pictures) is: - "Used to describe the specific format of a comic book that has greater production values and longer narrative." The term became popularised when titles such as Dark Knight Returns, Maus and Watchmen began to break into the (non-comics) "mainstream," and from that point forwards has been more-or-less conflated and confused (erroneously) with trade paperback. While comic books are extremely similar to graphic novels, some cartoonists, such as Ed Koren, argue that compiled comic books do not qualify as graphic novels. For that reason, the qualifier "Original" (hence "OGN") is often added to the front of the term when describing a story told through the medium of comics which debuts in the higher-production-values format (increasingly as a hardback, almost always with a spine): - "The graphic novel is more like a traditional novel, in that it is published on an independent schedule. It is longer in format than a periodical and typically contains a complete story unto itself. Graphic novels usually have higher production values than the typical stapled comic book; they may be squarebound, for example, with cardstock covers. Some may be hardcover volumes. Although a graphic novel usually stands on its own as a complete story, it is possible to have an ongoing series or limited series of graphic novels telling a single story or series of related stories. A typical abbreviation in the industry for graphic novel is "GN," usually used as part of a title to indicate to a reader or browser that the title in question is not a periodical." - The term OGN is "[a]n abbreviation for original graphic novel, often used to differentiate a graphic novel that contains a wholly new story from a trade paperback." Motion comic A motion comic is "a hybrid of comic books and animation" based on the actual panels of a comic. Examples include DC Comics' Watchmen: Motion Comics, Peanuts Motion Comics and Marvel Comics' Spider-Woman. Trade paperback, collection, collected edition - Not to be confused (although it usually is) with "Graphic Novel" Trade paperback collections are compendiums of individual issues of particular comics: - "A comic book trade paperback is a squarebound edition that collects and reprints a mini-series, maxi-series, or story arc in this sturdier format, giving readers a complete story at one time rather than over a period of months. Sometimes a trade paperback may collect stories that are not interconnected but rather are related by some theme. Many trade paperbacks also contain additional material, such as an introduction or foreword, interview, or character sketches." Comics series such as Dark Knight Returns and Watchmen are thus not "Graphic novels," but "trade paperbacks" - as they debuted in single-issue format, and were only subsequently collected. The trade paperback is rapidly becoming the format of choice for many readers, and is, by its nature, more readily available: in bookshops as well as specialist comic book shops. Its benefits include (typically) containing a complete story, rather than a single part of one, as most issues of ongoing monthly series are.
Another benefit is that typically a trade paperback is considerably less expensive than collecting the issues separately. Prestige format | Bookshelf format Prestige format | Bookshelf format describes the manner in which some (usually one-shot) titles are printed and bound. Most are 48 or 64 pages in length, and tend to have a (thin) spine. They are broadly synonymous with (mini-)Original Graphic Novels, and are almost always longer than a normal comic book (which tends to be either 22 or 32 pages in length, depending in most cases on whether advertisements are included in the page count). Back Issue A back issue is an unsold earlier comic issue that has been kept for sale but taken off the shelf. Usually a comic store will have them in boxes in a section of the store for customers to browse. Example: if the new comic is Wolverine issue 161, a back issue would be any issue that preceded the new one and is no longer on the shelf. Some comic stores will keep the last 2-4 issues on the shelf so their customers can follow a story arc. Comic Digest Size Format Digest Size Format is a magazine size, smaller than a conventional or "journal size" magazine but larger than a standard paperback book, approximately 5½ x 8¼ inches, but can also be 5⅜ x 8⅜ inches and 5½ x 7½ inches. These sizes evolved from the practicalities of printing press operation. The main publications remaining in digest size now are Reader's Digest and some Archie comics digests. From the late 1960s on, several comic book publishers put out "comics digests," usually about 6¾ x 4 inches. Gold Key Comics produced three digest titles that lasted until the mid-1970s: Golden Comics Digest, Mystery Comics Digest, and Walt Disney Comics Digest. DC Comics produced several in the early 1980s (including DC Special Blue Ribbon Digest and The Best of DC), and Harvey Comics also published a few during the same time (including Richie Rich Digest Magazine). Archie Comics has published numerous comics digests since the 1970s, and in the 2000s Marvel Comics has produced a number of digests, primarily for reprint editions. The manga graphic novel format is similar to digest size, although slightly narrower and generally thicker. Outside the comic "Alternative comics" Describes a certain type, or genre, of comics (although alternative comics can also be cross-genre), and tends to merely imply "non-superhero". The term "alternative" is: - "used as an adjective, usually to describe anything that is not mainstream. Its use connotes a qualitative difference in storytelling styles, subject matter, and form. It comes from the dominance of the large, corporate publishers, causing smaller publishers to label themselves as the "alternative" reading choice." Fanboy or Fangirl The term "fanboy," usually used pejoratively (although many described individuals wear it as a badge of honour), describes the anal-retentive nature of extreme comics fans. It signifies a complete immersion in the world of comics, comics trivia and so forth. "The Marvel Method" The "Marvel method" is a manner of writing comics popularised in the 1960s by Stan Lee (with his artistic collaborators, in particular Jack Kirby) in large part simply to speed up the process. Rather than producing a full script, the writer and artist would (typically) talk over a rough plot outline, and then the artist would produce a full comic's worth of pages. The writer would then add dialogue to the artwork after it was done, rather than the other way around.
This method of working is still used occasionally, particularly by artist-plotters Keith Giffen and Alex Ross. Its use in the creation of the vast majority of Marvel's key 1960s output has drawn considerable criticism and created large amounts of confusion, since it clouds the issue of who did what. Artists Jack Kirby and Steve Ditko, for example, have alleged that the actual input from "writer" Lee was minimal, and that it was regularly/completely left to the artist to produce the plot and story, even as the writer was given most of the credit. Comics continuity almost always refers to the existence and use of a shared universe, although any comic can have internal continuity independent of this. Simply, the term describes a consistency of internal plot, and usually of characterisation and external references also. Initially, many comics were stand-alone, "done in one" stories with a beginning and end taking place within the confines of a single comic issue, often structured in chapters as are most novels. Over time, the comics companies realised the lucrative potential of the crossover comic, whereby other characters from a company's shared universe appeared in issues of each other's comics. (This ultimately led to the formation of "team" books such as the Justice Society of America, Justice League of America and Avengers.) During these crossover character interactions, editorial footnotes would often reference previous adventures and comics issues, but an actual editorially enforced "continuity" was not strictly adhered to, leading to some characters' actions appearing "out of character," or outright contradicting earlier plot points. As comics were deemed largely ephemeral items, this was not considered that much of a problem, until the full advent of comics fandom. As a result of fan/reader scrutiny, the continuity both of individual characters and of the wider universes in which comics companies' characters interacted began to become more important. The Marvel "No Prize" became a humorous method by which readers could write letters to authors and editors pointing out mistakes or "continuity errors" in various comics, and were then named in print and awarded a "No Prize" (in reality a coveted sheet of paper declaring itself a non-prize). In 1985, cross-universe continuity took on new levels of depth and (intended) consistency at the two main comics companies: DC and Marvel. Marvel launched its cross-line, toy-driven event Secret Wars, which required all characters to undergo specific changes at specific times, and required considerable editorial dictates and conformity. DC launched the Crisis on Infinite Earths, one of the earliest maxi-series, to address universe-wide continuity and attempt to explain away, remove or revise all previous errors in continuity. The reader was reminded that the DC Multiverse consisted not merely of the core DC Universe, but of a number of different iterations of various heroes on a multitude of different planets. Companies and characters purchased by DC (such as the Charlton Comics characters and Captain Marvel) as well as older characters like the JSA were (re-)assigned their own Earths, which were then destroyed and folded into one core Earth. This naturally resulted in a number of contradictions and discrepancies in individual characters' histories, so a new, uniform continuity was created and the revised origins of the resulting heroes were retold in the hopes of maintaining consistent continuity.
Naturally, with hundreds of characters and dozens of writers, uniform and consistent continuity is difficult to maintain over the years, and most comics companies periodically address the erosion of internal consistency with big "events" designed to explain and simplify discrepancies (although at times they do neither) and maintain continuity. Similar to internal continuity, the "canon" of comics characters/universes is often subject to change, but refers to the stories which are, at any one point, part of the "official", "accepted" history and story of particular characters/universes. Alternate versions of characters (such as DC's Elseworlds and Marvel's speculative What if...? titles) are necessarily not canon. However, stories can change from being non-canonical to being accepted as canon - and vice versa. In particular, line-wide continuity-changing events (such as DC's Crises and Marvel's controversial recent Spider-Man: One More Day storyline) retroactively affect which stories are part of a character/universe's core canon, as they may revise or ignore previous events and happenings. For example, DC's Crisis on Infinite Earths addressed continuity and consistency errors over almost 50 years of comics publication, and retrofitted events and characters into the history of the DCU as if they had always been there. (For example, the JSA went from being the JLA's contemporaries on a parallel world to being their earlier, historical counterparts, active some years previously.) The Post-Crisis DC Universe removed many stories from "official canon", explaining them as Imaginary Tales or ignoring them completely. Retcon or "ret-con" is shorthand for "retroactive continuity", and is the descriptive term used to describe continuity- and canon-affecting stories. A retcon affects the past history of characters and/or the whole shared universe, and says that the "new" changed events have always been that way. This can lead to intense confusion, as compounded events can cause even the most knowledgeable fanboy to falter over what is currently the accepted canon. Linked: retrofit, retroactively embedding something (usually a plot point or subsidiary character) into a past story, for the purposes of a current story. This can give added weight to a story, implying that the impetus for a current story had been around for some time. e.g. The X-Men: Deadly Genesis limited series from 2006 "retrofitted" the storyline from 1975's Giant-Size X-Men #1 to include new characters and plot points. It can also be used to update a character for more modern times. For instance, Iron Man #1 (Vol. 4) updated Iron Man's origin story so that he was wounded in Afghanistan instead of Vietnam. Pre- and post-Crisis Labels referring to DC Universe continuity and canon, with the separator being the 1985 ret-con event Crisis on Infinite Earths. Simply, Pre-Crisis stories were not as stringently policed or edited, and often contained errors and internal inaccuracies (in large part because of their frequent nature as one-shot stories, rather than linked tales designed to follow evolving and changing characters). Pre-Crisis stories are often seen as throwaway and frivolous, perceived to be dominated by imaginary tales and "camp" characterisation. Neither label is entirely accurate, nor is the broad-brush assumption that a lack of cohesive continuity denotes a complete disregard for it.
The Post-Crisis DCU is that which was formed in the pages of the CoIE maxi-series, and is (or was intended to be) far more internally consistent and interlinked. Characters' origins were revised and updated, conflating previous stories and origins into a single accepted, canonical version. Writer-artist John Byrne's Superman: The Man of Steel mini-series, for example, provided the post-Crisis origin of Kal-El, while Crisis-architects Marv Wolfman and George Pérez produced the two-issue History of the DC Universe to briefly detail a broad overview of the post-Crisis DCU, showing the sequence of events as well as the revised origins of many characters (later to be fleshed out in their own series). Even the post-Crisis DCU was not without its continuity problems, however, and several subsequent events have attempted to address them, making the "post-Crisis" label largely defunct. However, because of the 1985 maxi-series's landmark status, the label persists in one form or another. A comics "event" describes a large storyline which almost always involves a crossover between characters, titles, universes or companies, but usually denotes an internal company crossover. These then typically fall into two broad categories: character or universe events. i.e. a Batman "event" will likely only feature the Batman family of characters (an example would be the Batman: Knightfall storylines), while a multi-character crossover will usually be universe-wide and affect several different individuals (an example would be Marvel's Civil War event, which affected almost every character and title in their shared universe). Cross-Universe events and inter-company events are considerably rarer, but do happen. 1996's DC vs. Marvel event saw the DCU and MU brought together (and ultimately, briefly, merged), while the DC Universe has also featured in events/crossovers with, for example, the WildStorm and Milestone universes. DCU, MU, Earth-1, Earth-616 The concept of a shared universe, wherein a company's diverse cast of characters are able to interact and cross over between books and events, is usually labelled the " - Universe" (DC, Marvel, Image, CrossGen, Valiant, etc.). Comics fandom has produced various shorthand ways of referring to the various universes, however, and the comics themselves also refer to themselves in specific ways. These labels are usually reserved for the universes of "the Big Two" (Marvel and DC), in large part because they are the main American comics publishers and have the largest shared universes. A non-exhaustive list of terms includes: - The Marvel Universe, sometimes abbreviated to the MU. The shared universe in which the X-Men, Spider-Man and Avengers, etc. all exist and interact - Earth-616, The Six-One-Six, etc. denotes the numerical designation of the Earth which the Marvel Universe inhabits. The term was coined in the pages of Captain Britain, by either Alan Moore or Dave Thorpe, and may have been chosen for reasons of historical significance, wry commentary, or random choice. See also: Marvel Multiverse. - The DC Universe or DCU refers to the shared universe inhabited by Batman, Superman, the Justice League of America, etc. - Earth-1 was the Pre-Crisis designation of the "main" DCU, in contrast to Earth-2 (featuring the JSA), and latterly dozens of individual Earths which were home to a plethora of characters, and were destroyed in the Crisis on Infinite Earths maxi-series.
- New Earth is the designation of the "main" DCU after the events of 2005's mini-series event Infinite Crisis, in which a revised Multiverse of 52 worlds was created. See also: DC Multiverse, Multiverse world lists. In addition to the core shared universe, some companies have subsidiary universes/imprints, which may or may not be part of the main universe (or can be thoroughly confusing). DC Comics' mature readers' imprint Vertigo Comics, for example, mainly publishes stand-alone ongoing, mini- and maxi-series, but also variously includes characters who were once part of the DCU, or have interacted with it in such a way as to make them at least an honorary part of it. Characters and titles such as The Sandman family of titles, Doom Patrol and Swamp Thing all began publication as part of the DCU, but have gradually drifted to a corner of it quite far removed, if still nominally a part. The WildStorm Universe, which was initially published by Image Comics, is now largely accepted as part of the wider DC Multiverse, but not part of the DCU proper. Similarly, the Ultimate Marvel Universe is not part of the 616, while the MAX imprint is on the fringes in a similar way to the Vertigo/DC interaction. Crossovers can be both internal and between different universes and companies. At its most basic level, a crossover can refer simply to a character making a guest appearance in a different comic (e.g. Daredevil "crossing over" into an issue of a Spider-Man comic), but typically a "crossover" implies more than a simple appearance, and denotes a cohesive storyline spanning more than one title, often as part of an event. Thus when the JLA and JSA featured in a two-part story beginning in the pages of one comic and concluding in the pages of the other, this is referred to as a crossover. Typically, crossovers are more than two issues in length, and often span multiple titles, rather than just two. For example, the X-Men: Messiah Complex crossover event sees the storyline unfold over four X-Men titles, as well as two one-shot issues. Most crossovers occur within the confines of a shared universe, although crossovers between universes (and companies (see below)) also occur, for example in the upcoming DC/WildStorm: Dreamwar crossover event. - N.B. Crossover and Tie-in issues are often confused, conflated and used interchangeably. This is inaccurate. Cross-company or Intercompany crossovers occur when the characters of two different publishers' universes meet (and usually fight - see "comic book clichés"). Usually this is in a non-canonical event, although occasionally happenings can be referred to in mainstream continuity (e.g. the character of Access appeared in a couple of DC comics issues independent of the DC/Marvel intercompany crossover). The biggest and most famous examples of intercompany crossovers are the irregular meetings of the DC and Marvel Universes, most notably in 1996's crossover event DC Vs. Marvel/Marvel Vs. DC, which threw all the DCU and MU characters together in one big event, which ultimately spun out into a series of merged-character one-shot issues co-published between the companies as the Amalgam Comics line. Other crossovers include the irregular meetings between DCU characters and the Dark Horse-licenced properties Aliens and Predator; or the various Marvel and DC character-crossovers with Top Cow's Witchblade, Darkness, etc.
Tie-in, guest appearance, guest star A tie-in issue usually involves a guest appearance of one sort or another, and occurs on the fringes of an independent storyline or event. Though different from a full-fledged crossover issue, the two are often confused - and, indeed, if ill-plotted or ill-written, are difficult to tell apart. Whereas a crossover issue plays an integral part in furthering the plot, a tie-in simply expands upon a minor point, side-issue or tangential-but-somehow-linked story, which the reader does not need in order to fully comprehend the plot of the main storyline/event, but which nevertheless enhances it or creates greater depth. A guest appearance is when a character not normally associated with a specific title/book appears there briefly (or sometimes for several issues). Often (somewhat cynically, if accurately) seen as a money-making move by publishers to boost flagging sales by inserting a popular character into a lesser-selling book (and in particular, it seems, Wolverine), a guest appearance may be a throwaway occurrence, may further the plot, or may be part of an event, crossover or tie-in. The character who puts in a guest appearance is, reasonably enough, also known as a guest star. The concept of a shared universe is one in which a multitude of different characters co-exist and/or interact. Typically this concept confines itself to one publishing company's output (although concepts such as the Wold Newton family extend the boundaries considerably), and it is most common in the main superhero universes of DC and Marvel. The benefit of having a shared universe is that characters can make (sales-boosting) guest appearances and team up with one another, as well as allowing the "team" concept (JLA, Avengers, etc.) to exist at all. Stan Lee's initial Marvel Universe creations in the 1960s best exemplify the "shared universe" concept, whereby characters (and villains) would feature across multiple titles, sometimes in the foreground of the story, sometimes as cameos in passing, but always underlining the interlinkedness of the universe. Cover date, publication date Most comics include a "cover date" on their covers, but this is rarely the actual date of publication, even though it can easily be referred to as such, not least for ease of reference. Much like magazines (which are typically cover-dated a month ahead), most comics are cover-dated a couple of months ahead of their being published. The reason behind this dates back to comics being available on a newsstand, rather than through the direct market in a comic shop: - "This month was not the month the book went on sale, it was the month the issue was to be removed from the newsstand in the event the book did not sell." A character's "origin" is the fictional story which describes (almost always solely for superheroes) how they came to be; gained their powers; arrived on Earth; were bitten by a radioactive spider, etc. Origins need not be established immediately; they can be told in flashback, or slowly over the course of several issues or, indeed, years. Origins are often subject to revision and ret-cons, and may find themselves having additional information retrofitted in at a later time. They are also frequently updated to better reflect their times. For example, the origin of Iron Man has gradually been revised and updated, so that instead of serving in the Vietnam War, he serves in Korea or the (first) Gulf War.
"One shot", stand-alone issue, "done in one" Although "one shots" and "stand-alone issues" (sometimes referred to as "done in one" stories) refer to subtly different things, the two are similar in their design and intent.[clarification needed]]] Imaginary tales, Elseworlds, Alternates, Possible futures, What If...? All these terms refer to specific and general "non-canonical stories", often - but not exclusively - featuring alternate versions of established heroes and/or events. For many years some DC comics would feature stories labelled as "Imaginary Tales," signifying that the events which occurred therein did not have an active effect on continuity, and therefore that anything could happen, even the bizarre and contradictory. (This also meant that some seemingly-bizarre or outrageous stories were deliberately labelled and described with the tagline: "NOT a Hoax! Not an Imaginary Tale!" to separate them from those which were non-canonical.) With 1989's Gotham by Gaslight prestige format one shot, in which the story of Batman was re-cast and set in the Victorian era, DC produced occasional titles which they labelled Elseworlds, to set them apart from the main DC Universe. These stories (and characters) can also be referred to by other names, but are most likely to be talked of Alternate World/Universe stories/counterparts. Some Imaginary Tales and Elseworlds have been assigned their own alternate (numbered) Earths in both the DC and Marvel Universe; others (like Frank Miller's dystopian Dark Knight Returns or Mark Waid & Alex Ross' Kingdom Come have variously been considered pseudo-canonical, as potentially in-universe futures for their respective casts.) Marvel Comics' main rival to the mainly-DC preserve of alternate tales are their series' and one-shots under the What If...? banner. These tend to shy away from DC's Elseworlds stated method whereby "heroes are taken from their usual settings and put into strange times and places - some that have existed, and others that can't, couldn't or shouldn't exist," (i.e. in typically in self-contained continuities) and are largely based on specific events or happenings. Most What If...? stories changed one minor (or key) detail in a major Marvel event, and posited how events might have played in a mostly same-universe situation - "What if Spider-Man joined the Fantastic Four?", while many Elseworlds suggested what would happen if known heroes had existed in completely different situations (e.g. Superman: Kal which grafts the Superman mythos onto the story of King Arthur). Some Imaginary Stories can be adapted into the accepted canon of a Universe (particularly possible futures, but also as taking place on parallel worlds which can then be interacted with), but it is most common for them to either stay completely separate, or even for some formerly-canonical stories to be retconned into Imaginary Stories as after a major event. Thus some stories which may have once been "real" for the characters to whom the occurred can be retroactively said not to have happened, and thus that any memory of them is of an "Imaginary" tale. (For a knowing take on retroactive continuity and Imaginary Tales, see Alan Moore's Supreme: The Story of the Year) Direct market When comics were first launched, they could be purchased in many places, most particularly at the newsstand, alongside newspapers and magazines. 
In the 1970s, with comics sales on the wane, attempts were made (notably by convention-organizer Phil Seuling) to buy comics direct from the publisher, rather than through a traditional magazine distributor. In addition, in exchange for a higher initial discount, buyers would keep unsold copies as back issue stock rather than returning them after a certain date (see: cover date). This led to the formation of specialist shops ("comic shops"), with wide-ranging stock of older issues, as well as the creation of a number of tailored comic book distributors. Specific comics terminology The images that are usually laid out within borders are known as panels. The layout of the panels can be in a grid. Watchmen was notable for utilizing a nine-panel grid of three rows and three columns. Occasionally, Alan Moore and Dave Gibbons would use larger panels that broke the format of the grid to emphasize specific acts or points in the narrative. Panel frames The border or edges of a panel, when drawn, are called frames. These are normally rectangular in shape, but this shape can be altered to convey information to the reader. A cloud-shaped panel can indicate a flashback or a dream sequence, whilst one with a jagged edge can be used to convey anger or shock. A panel without a frame is used to convey space. The frame itself can be formed by the image. For example, a scene can be framed by a door frame or by binoculars. Full bleed is usually used on a comic book cover, and is when the art is allowed to run to the edge of each page, rather than having a white border around it. Bleeds are sometimes used on internal panels to create the illusion of space or emphasize action. This is more common in manga and modern comics. Splash page (and splash panel) A splash page, sometimes referred to simply as a "splash," is a full-page drawing in a comic book. A splash page is often used as the first page of a story, and includes the title and credits. Splashes that are not on the first page of a story are sometimes called interior splash pages. Interior splashes may or may not include titles and/or credits. A panel that is larger than others on the page is called a splash panel. A splash that appears across two pages of a comic book is called a "double splash" or a two-page spread. Rarely, splash pages will stretch over more than two pages; such multi-page spreads often take the form of fold-out posters. 300 and Holy Terror, both by Frank Miller, are told entirely in two-page spreads. Occasionally, a two-page spread is drawn vertically, so that the comic has to be turned 90 degrees to read it. This is widely disapproved of because it breaks the continuity of the medium, and is rarely used anymore. When used early in the issue, the splash provides a means of establishing characters or setting as well as drawing the reader's attention. If used later in the issue, it is commonly employed to dramatically portray the climax of a story. Rarely does an issue include more than two splash pages; however, Superman #75, Vol. 2 is notable for consisting entirely of splashes, as was The Mighty Thor #380 (Vol. 1). Speech balloon, word balloon, speech bubble The speech or word balloon (also known as a speech bubble) is a graphic used to assign dialogue to a particular character. Bubbles which represent an internal dialogue are referred to as "thought balloons". The shape of the balloon will indicate the type of dialogue contained, with thought balloons being more cloud-like and connected to the owner by a series of small bubbles.
Speech bubbles are more elliptical, although those used to represent screaming or anger tend to be spiky, and square boxes have been used to represent dialogue spoken by robots or computers. Whispers are usually represented by balloons made up of broken lines. Surprised thoughts in Japanese manga are usually round and tend to spike outward. Dialogue from a radio or TV may be represented by a spiked balloon. Certain creators are particularly renowned for their inventiveness with the format of the balloon; writer and artist Dave Sim (who also letters his own work) is particularly innovative with this aspect of the comic book - using particular balloons for drunkenness, echoes, etc. Comic book captions are a narrative device, often used to convey information that cannot be communicated by the art or speech. Captions can be used in place of thought bubbles, can be in the first-, second- or third-person, and can be assigned either to an independent narrator or to one of the comic's characters. Simply put, they are: - "Boxes on a comic book page that contains text... While sometimes used to convey dialogue, they are more often used to impart a character's thoughts or as a narrative device." Like word balloons, they need not be of uniform shape, size, design or color (indeed, some modern comics use different colors to assign different textual captions to different characters). Motion lines Motion lines, also known as "speed lines", are lines that are used to represent motion; if a person or some other mobile object is moving, such indicators of movement will follow in straight lines behind it. Line length may be said to vary in proportion to the speed of the moving object. Symbolia or Emanata In his book The Lexicon of Comicana, Mort Walker defined the iconic representations used within comics and cartooning as "symbolia". Examples include the lightbulb above a character's head to indicate an idea, the indication of sleep by a saw cutting a log or a line of "zzzz", Kirby dots, and the use of dotted lines to indicate a line of sight, with daggers used instead of dotted lines to indicate an evil look. Sound effects Sound effects and environmental sounds are presented without balloons, in bold or "3D" text in all upper case. Percussive sounds usually have exclamation points. Usually, they are written/drawn in such a way as to emphasize their nature, such as the sound effect from a fast race car almost leaning with the car's speed, or a shrill noise depicted in a jagged, scratchy form. - BAM! (pistol shot) - SPANG! (bullet hitting metal) - SPLAT! (bullet hitting masonry or concrete) - WANG! KAWUNNGG! (bullet hit with ricochet) - POW! (fist hitting chin) - SOK! (fist hitting chin) - CRAK! (nightstick hitting skull) - CRACK! (wrench hitting skull) - CREAK! (squeaky door opening) - EEEEEEEEEE! (scream) - CRASH! (furnishings being destroyed in a fight) Marvel Comics' characters have several specialized sound effects that are under Marvel's copyright: Spider-Man's web deploys with a "thwip", Wolverine's claws emerge with a "snikt", and Nightcrawler teleports with a "bamf". See also - Lyga and Lyga (2004) p. 162 - Lyga and Lyga (2004) p. 165 - Lyga and Lyga (2004) p. 161 - Lyga and Lyga (2004) p. 163 - "I'd rather the average reader not notice my work...that's my job, after all... Our job is to NOT be noticed..." post at Digital Webbing by Jason Arthur, February 26, 2008.
Accessed March 24, 2008 - Lyga and Lyga (2004) p. 164 - Lyga and Lyga (2004) p. 162-163 - Web Draws on Comics, Wall Street Journal, July 18, 2008 - Indeed DC Comics published a 6-issue mini-series entitled "Fanboy" celebrating the term. In recent years, Marvel Comics would likewise reclaim the insult "Marvel Zombie" in a series of that name. - Some maintain that the numbers refer to 61/6, or June 1961, the alleged "actual" publication date of the first Marvel comic Fantastic Four #1 (cover-dated November 1961). Marvel UK Earth-616 comments. Accessed March 25, 2008 - It has been noted that "616" is thought by some scholars to be the actual Number of the Beast, rather than the more-widely believed "666". "More 616", May 19, 2007 on Tom Brevoort's Marvel blog, quoting Alan Davis. Accessed March 25, 2007 - Arguably the most likely theory is that "616" is a random number chosen to signify the vast array of parallel worlds that exist, in contrast to the DCU designations which tended towards the low-end of the number scale. The "616" Universe message board post, May 11, 2005 by Alan Moore's son-in-law John Reppion. Accessed March 25, 2008 - Mike's Amazing World of DC Comics: Musing of a Fanboy: "Secrets to DC Cover Dates", July 30, 2004. Accessed March 26, 2008 - Edkin, Joe Chapter Four - The Page Breakdown Writing for Comic Books (2006). Retrieved on 2-10-10. - McCloud, Scott (1993). Understanding Comics: The Invisible Art. Kitchen Sink Press. ISBN 0-87816-243-7. - Walker, Mort (2000). The Lexicon of Comicana. uPublish.com. ISBN 0-595-08902-X. - Lyga, Allyson A. W.; Lyga, Barry (2004). Graphic Novels in your Media Center: A Definitive Guide (1st ed.). Libraries Unlimited. ISBN 1-59158-142-7.
Quebec is home to a large portion of Canada's francophones (French speakers), many of whom regard Quebec's language and culture as so different from those of the rest of Canada that Quebec should be a separate country. The rest of Canada keeps agonizing about whether to change the Canadian constitution to deal with this. The Quebec debate has a passionate and even bloody history. The Front de Libération du Québec (FLQ) bombed Montreal in 1963, kidnapped the British trade commissioner in 1970, and kidnapped and murdered the Québec labour and immigration minister Pierre Laporte in October 1970. This last incident led to the October Crisis, when then-Prime Minister Pierre Trudeau invoked the War Measures Act, under which 465 people were arrested.

- Capital: Quebec City
- Other major cities: Montreal, Trois-Rivières, Hull
- Date Entered the Federation: July 1, 1867
- Provincial Flower: Iris versicolor
- Provincial Bird: Snowy Owl
- Motto: Je me souviens ("I remember")
- Population: 7 987 191 (2011)
- Area: 1 542 056 sq km
- Lieutenant Governor: His Honour The Honourable Pierre Duchesne
- Premier: The Honourable Pauline Marois (Parti Québécois)

Since then, there have been at least two major constitutional wranglings whose purpose was, in part, to try to make Quebeckers happy: the Meech Lake Accord, in the mid-1980s, and the Charlottetown Accord, in the early 1990s. Neither was accepted by the Canadian people. In October 1995, Quebeckers voted to stay in Canada 50.6% to 49.4%. In April 2003, the separatist Parti Québécois was voted out of power, the Quebec people instead installing the federalist Liberal leader Jean Charest as their new premier.

Chateau Frontenac is a French-style hotel on the St. Lawrence River in Quebec City, the capital of the province of Quebec. Architects designed the hotel to resemble a European castle, taking its name from one of New France's early governors, Comte de Frontenac. Construction finished on the main entrance in 1893, and the tower was added in 1925.

Famous sons and daughters of Quebec include:
One of the biggest buzzwords ever, it seems, is "Big Data". But what EXACTLY is big data? It might seem a little condescending, but for those who aren't in the industry, explaining big data requires some "dumbing down." If you're in the position to explain this to someone with zero experience in it, who isn't tech savvy, or who comes from an entirely different field, leave the jargon out and instead focus on explanations and similes they'll understand. A perfect example: the CEO asks for a quick rundown, and her background is in corporate leadership, she uses an old smartphone, and she simply isn't on the same page as you. Don't get frustrated. This is your time to shine.

Big Data is Exactly What it Sounds Like
Here's an idea of how the explanation might go for a five-year-old, but feel free to pepper in your own ideas, too. Just make sure it stays simple and to the point, and that you don't go off track. "Big data is exactly what it sounds like: a collection of data that's so big it's tough to process. It's like the US Census Bureau information. That's way too much information (since millions of people are surveyed) to look at all at once. The biggest problems with this much information are storing it, sharing it with other people, and figuring out just what the heck those numbers mean. Usually with data, there are technology tools that can do all this work easily, but too much data is too big a job for them. Big data is really about better, smarter use of the data."

Why We Use Big Data
"What do we want with all this information, anyway?" It's a great way to spot trends, such as figuring out how many people in your town prefer chocolate ice cream over vanilla. This information can be very useful to an ice cream company that can't decide whether to advertise its chocolate ice cream cake or its vanilla one. After all, why spend one hundred dollars on a vanilla ice cream ad and just $50 on the chocolate one when the data says most people prefer chocolate? I actually prefer the vanilla/chocolate twist… why wasn't that an option? Oh no! Our data is already skewed!

Data Tells Us All Kinds of Things
Data can tell us all kinds of things, like what types of people are doing what, where the most puppy adoptions take place in each state, and what types of clothes, food or toys people prefer. This is really important for businesses, which can use that information to make more money. Otherwise, they might be trying to sell a tricycle for little kids to a bunch of middle schoolers who want dirt bikes or mountain bikes (no training wheels, please).

How do You Collect Big Data?

Big Data Sounds Easy
Using big data might sound like a piece of cake, but whenever you have a lot of something it can be really hard to manage. Think of it this way: It's pretty easy to clean your room when there are just your crayons to put back in the box and a couple of shirts to toss in the hamper. But if you just had a huge party and there are dirty cups, leftover pizza and balloons everywhere, suddenly it seems like it will never get done, even though "cleaning up" is basically the same chore no matter how big it is.

You Need to Manage Big Data Correctly
Size makes a huge difference. Just look at Godzilla: things can easily get out of control. That's why big data can be such a tough problem to solve. You need to manage it right or it can make things much worse. After all, you wouldn't clean your room by trying to vacuum up those dirty clothes on the floor, would you?
Vacuums do a great job for some things, but when not used right they just don’t make sense. The same goes for managing big data.
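For readers on the technical side of that conversation, here is a purely illustrative toy sketch of the ice-cream trend-spotting idea from the explanation above: tally the survey preferences, then split an advertising budget in proportion to them. The survey responses, the $150 total and the use of plain Python are my own assumptions for illustration; at real "big data" scale, this same trivial counting job is exactly what distributed storage and processing tools exist to handle.

```python
from collections import Counter

# Made-up survey responses; in a real big-data setting there would be
# millions of these, spread across many machines.
responses = ["chocolate", "vanilla", "chocolate", "twist", "chocolate", "vanilla"]

counts = Counter(responses)   # spot the trend: tally each flavor
total = sum(counts.values())
ad_budget = 150               # e.g. the $100 + $50 from the example above

for flavor, n in counts.most_common():
    share = n / total
    print(f"{flavor}: {n} votes ({share:.0%}) -> ${ad_budget * share:.2f} of the ad budget")
```

The logic itself is as simple as putting a few crayons back in the box; the hard part of big data is that the pile of responses no longer fits in one room.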
Day 16 – A – Z Challenge “Change writers hope that readers will join them in what Charles Johnson calls “an invitation to struggle.” Whereas writers of propaganda encourage readers to accept certain answers, writers who want to transform their readers encourage the asking of questions. Propaganda invites passive agreement; change writing invites original thought, openheartedness, and engagement. Change writers trust that readers can handle multiple points of view, contradictions, unresolved questions, and nuance. If, as André Gide wrote, “Tyranny is the absence of complexity,” then change writers are the founders of democracies. […] Socially conscious writers want authenticity and transparency to saturate every page of their work. They strive to teach readers how to think, not what to think. They connect readers to ideas and experiences that readers would not have on their own. Always, this kind of writing coaxes readers to expand their frames of reference, or, as the Buddhists say, to put things in bigger containers.” —Mary Pipher, Ph.D., Writing to Change the World Write. Have Your Say. Don’t Stop Until You Get Your Story Out. As you read this, I am heading back from New Mexico where I spent the day on Friday teaching creative writing to maximum security inmates there. In our workshop (I went with two undergraduates and my colleague) we encouraged our students to tell us a story. Whether through memoir, fiction, or poetry, the story we write about ourselves, our lives, is absolutely crucial. Our stories and the way we tell them are arbiters of ‘life’s coming attractions’ as Einstein once said. I believe in the healing power of words. I believe that they truly can ‘change the world’. We simply need to pick up our pens, and start writing. “And by the way, everything in life is writable about if you have the outgoing guts to do it, and the imagination to improvise. The worst enemy to creativity is self-doubt.” ― Sylvia Plath, The Unabridged Journals of Sylvia Plath How do we get out of our own way? By getting started. By putting the pen on the page and writing without stopping for twenty minutes a day. By shutting down the monkey mind chatter and going deeper into the rivers of our unconscious mind. By opening a vein. So, as I head home, across hundreds of miles of open desert and the White Mountains, think about what’s been bothering you, what’s been making you ache. Consider taking a break and dawdling over your writing desk, pen in hand. See what emerges. I will be thinking of you as I marvel over the particular brand of light that always astonishes me when I visit New Mexico. It’s as though the clouds are enlightened beings, floating on their backs, sure of their place in the whole universe of time and space. It’s as though the flora and fauna bow under them, and everything — absolutely everything — glows with luminosity. In other words, it is a place of dreams and dreaming while waking. “Why not just tell the truth?” —Raymond Carver Why not find out what you know? Why not kick the ground beneath you and see what’s there. © 2015 Shavawn M. Berry All rights reserved Feel free to share this post with others, as long as you include the copyright information and keep the whole posting intact. If you like this piece please share it with others. You can like me on Facebook or Twitter to see more of my writing and my spiritual journey on my website at www.shavawnmberry.com.
Okay, here we are with the rules of writing again. I’ve been reading The Dream Lover, a recently published biographical novel about the writer George Sand. Once again, here’s a high-profile, critically acclaimed book that brings to mind one important fact writers should remember: readers, in general, don’t give a damn about the rules of writing. George Sand is a pretty impressive historical figure. Born Aurore Dupin, she endured a dramatic childhood and a miserable marriage before moving to Paris, writing numerous bestselling books, and adopting a scandal-worthy lifestyle that included cross-dressing and numerous lovers. All this in the early 1800s. It would be kind of hard to cover this much historical material without doing something that all the writing advice people say is a sin: breaking the “show don’t tell” rule. Now, to give the rules-of-writing folks their due, the “rules” do help identify danger spots that writers face. Too many adverbs can lead to awkward and hard-to-read sentences. Prologues can sometimes just delay getting into the action of the story. And telling readers what happens can be less engaging than showing them what happens. But codifying these suggestions into Rules that Must Not Be Broken overlooks a couple of facts. First, a writer needs to be able to choose whatever technique best suits what they’re writing. A highly educated novelist from the early nineteenth century is going to speak in a far more formal voice than other kinds of narrators, and “tell” writing is going to be more appropriate for this than an action-driven, in media res style. Secondly – and, I think, more importantly – readers aren’t usually as concerned about the rules of writing as writers are. For all its breaking of the “show don’t tell” commandment, The Dream Lover doesn’t seem to have suffered. Critics love it. Librarians love it. Any individual reader may or may not love it; I actually found the early chapters hard to get into, and it took a while for Sand’s story to really get interesting. But again and again, I see books being published, praised, and even becoming bestsellers, blissfully unaware or uncaring of the “rules” they’re breaking. So I think the lesson is this: your job as a writer is to tell the story the way it needs to be told. Fix things that need to be fixed in editing, and if you run into a particularly clunky spot, keep an eye out for too many adverbs. But the rules are there to serve the story, not the other way around.
The conflict between the United States and Fidel Castro's Cuba, now more than forty years old, is still a source of great political and moral contention. The collapse of the Soviet Union and, with it, the end of the Cold War signaled a change in the implications of the type of socialism governing Cuba. The alleged threats that had hovered so close to the continental U.S. throughout those paranoid and dangerous days of ideological impasse were neutralized by the dismantling of the infrastructure that had brandished them. Cuba, once a unique and remote ally of the U.S.S.R., serving as an outpost for anti-American hostilities and a potential vessel through which to deliver the devastating blows that might have turned the Cold War hot, is now an isolated bastion for ideals abandoned by most of the world. In the Western Hemisphere, Cubans stand alone, paying for what most American citizens will tell you is their philosophical transgression.

Today, Cuban citizens live in a state of constant depression. The American State Department will assert that this is the result of an inherently flawed form of governance, that communism is naturally inclined to fail because it transgresses the inborn human desire for self-determination. This is a half-accurate tenet. It is true that communism cannot survive in practice in this world. But rather than ascribe this incapacity to some fundamental error in the design of communism (a topic too broad for discussion in this forum), it may be more accurate to suggest that communism's unpopularity with the rest of the Western world excludes it from an increasingly global economic structure. The universality of resources and freedom that globalism claims to offer is not available to those unwilling to play by a set of very stringently delineated rules that has no place for socialism.

The U.S. State Department's official policy on Cuba has been a two-part one, unchanged throughout the course of the relationship. The first and primary part is an open stance of opposition toward the Castro government. The United States has declared itself repeatedly, from one presidential administration to the next, in advocacy of a democratically elected head of state whose approach to governing is in line with the generally accepted Western definition of democracy. Until the establishment of such a government, however, the United States is unrepentant in its criticism of and animosity toward Castro. As an unrecognized government, Castro's regime is subject to all of the exclusions afforded by the global system. The United States restricts itself from trade with the nation, finds itself in constant friction over issues of immigration, and takes time to diplomatically attack Cuba in the U.N. and most other global forums. The second part of American policy is to supply hungry and unemployed Cuban citizens with many of the goods and services to which global excommunication has denied them access.

This is the surface level of Cuban/American relations. The going theory is that this position was arrived at by way of the Cold War. Capitalism certainly did play its part in what would become hostility. But the truth has very little to do with communism. Castro, an excelling law student at the time his revolutionary fervor began mounting, did not call himself a communist, a socialist or a Marxist. He was merely an enemy of a state that had been very amenable to the U.S.
In 1952, Castro was poised to enter the Cuban government in a most modest and evenhanded fashion, campaigning for a parliamentary seat in an election. But an American-supported coup overthrew President Prío Socarrás, replacing him with General Fulgencio Batista. Batista made the U.S. investment in Cuban domestic affairs well worth its while and, as a side note, canceled the election in which Castro was slated to compete. Castro sought legal recourse by declaring such an act unconstitutional, but his plea was rejected in court. When he turned to military means to make the point, his rebellion was put down. He was imprisoned until 1955.

In that space of time, Batista's Cuba became a playground for commercial imperialism. Cuba became home to a multitude of American businesses and oil companies. Batista sold his citizens and his nation's resources out to the highest bidder in exchange for support against revolt. The highest bidder may have been anybody from the American government to private American industry to Mafioso enterprise. Batista's government became a corrupt tool of totalitarianism. His many human rights violations and democratic transgressions were disregarded because he was a most convenient economic partner.

When Castro was released from prison, he took refuge in the mountains of Cuba, where he and fellow revolutionaries garnered strength. By 1956, their location had become a rebel base, designed to serve as an asylum for them and fellow political idealists. Communism was not a subject discussed significantly amongst them. Over only a few years, Cubans who were discontented with Batista's lopsided sell-off of their country began to surround Castro. The rebel base evolved into a community whose influence reached far into the depths of Cuba's impoverished. Tensions erupted daily on the streets in guerrilla strikes and armed conflagration. When the friction bubbled over, Batista fled and left the country's future in the hands of Fidel Castro. For his troubles, Castro would become a romantically regarded radical and an iconic revolutionary. His was a revolution designed to place power in the hands of the Cuban people, and the intention received much admiration from like-minded discontents around the world.

This was not a perspective held within the confines of the American government. Castro's sweeping revolution in 1959 formally ended the cooperation between the two countries that had been so overwhelmingly beneficial to the U.S. One of Castro's first acts was to lay claim to all foreign interests operating inside Cuba's borders. He expropriated U.S. concerns and was instantly met with hostility by the Eisenhower administration. The warm regard in which it had held Batista was replaced by an icy disagreement over who deserved direct access to Cuba's resources: the United States or Cuba. Castro felt that Cuba's people did, so his reforms were centered on making Cuba an agrarian state. The United States pressed its own claim by cutting diplomatic ties with Cuba and supporting any insurgency directed toward the subversion of Fidel Castro. Between 1959 and 1961, Cuba was, for all intents and purposes, a new state looking to find its place in the world. Castro, for his part, was new at the game too and not strictly committed to any one governmental faith. But the United States made democracy unavailable to him unless he was willing to compromise the good of his citizens. He had become quite popular in Cuba, however.
His was the face of the revolution and the bright promise of a plentiful Cuban future. His personage alone did much to inspire hope in his people. The Soviet Union recognized the asset in his popularity and, in their mutual isolation from the American empire, the two became natural bedfellows. It was at this point, a year into the Kennedy administration, that Castro's government became a totalitarian socialist regime. From this point on, there was no turning back. The United States and Fidel Castro would be eternal enemies.

The early U.S. policy was characterized by a series of aggressive efforts to turn the tides of revolution back in Cuba. This included methods as varied and extreme as furnishing dissident Cuban fighters with arms and inundating Cuban citizens with pro-American and anti-Castro propaganda. America's opposition to Castro was extreme. His unseating was the only condition by which Cuba could return to normalized relations with the U.S. and its democratic allies. Essentially, if Cuba could not be utilized to the direct benefit of the United States and its economic reach, the U.S. was determined to punish it diplomatically.

While today the situation between Cuba and America is regarded as one spawned of those same ideological distinctions that kept America and Russia at odds for fifty years, the crux of our mutual bitterness is far more the progeny of American imperialism and the inevitable roadblocks therein. The economic disagreements caused by Castro's reforms, which drove Cuba into the arms of Mother Russia, helped turn a government of socialistically inclined intellectuals into a hardline agent of the Iron Curtain. If this outcome had not already been ensured by the circumstances surrounding Castro's ascent to power and the concurrent American campaign to prevent and, subsequently, oppose it, it was sealed by the fact that President Kennedy and his brother Robert shared a personal obsession with the Castro situation that would not allow them to deprioritize the conflict. Eisenhower's plan for dealing with Castro and Cuba, though stemming from the conventional hostile U.S. stance, placed as much emphasis on political action, propaganda and intelligence gathering as on actual military action, if not more. But the Kennedys were determined to step up the intensity of the Cuba situation, both for personal and political reasons. They considered it to be a great and perpetuating defeat that Castro was…
If you want to pursue a career in the animal industry, then this is an excellent place to start. This course will provide you with the underpinning skills and knowledge in animal care and give you the experience required to further develop your understanding. Work experience is an important part of this course, as it prepares you for employment within the industry. As part of your course, you will be expected to undertake work experience at the centre where you are based; this may include some weekend work. Successful completion of this programme could allow you to progress into further animal-related study or into employment within the sector. During this course you will explore both theoretical and practical subjects, and you will study the following topics: Animal Health and Well-being; Feeding and Watering Animals; Animal Science; Animal Housing; Grooming; Behaviour; Contribute to the Nursing of Animals; and Caring for Farm Livestock. You will need 4 GCSEs, including two from English, Mathematics and Science (minimum grade D). You will be required to attend an interview and taster day on 'the yard' and provide a reference (from an employer or a recent school/college report), along with evidence of a specific and sustained interest in, and motivation for working with, animals and the animal industry, and an ability to cope with a programme of physical activity. Assessment is a combination of coursework and practical assessments.
Also called La Serenissima, literally meaning 'the most serene', Venice was one of the four maritime republics in Italy, the other three being Amalfi, Genoa, and Pisa. The term 'Serene Republic', however, more successfully suggests the enormous power and majesty of this city, which was for centuries the unrivalled mistress of trade between Europe and the Orient. It suggests too the extraordinary beauty of the city, its lavishness and fantasy, which is the result not just of its remarkable buildings but of the fact that Venice is a city built on water, a city created more than 1,000 years ago by men who dared defy the sea, implanting their splendid palaces and churches on mud banks in a swampy and treacherous lagoon.

Venice is built over ruins dating back to the Roman Empire. According to myth, Venice was established in 811 AD when a flock of pigeons carrying a white cross flew here. In 823 AD, St. Mark, the patron saint of Venice, symbolised by a winged lion, came to these islands. The city used to be headed by the Doge, from a Latin word for 'leader', a figure unique to Venetian politics. In the 13th century, the Fourth Crusade called by Pope Innocent III, consisting of approximately 20,000 soldiers from France, Spain and Germany, arrived at Venice on its way to the Holy Land to reclaim it from the Muslims. Venice demanded a very heavy price for passage, which the soldiers could not afford, and they subsequently headed for Constantinople instead, exploring the Near East on Venetian orders. The most famous of these travellers was Marco Polo. During the 14th and 15th centuries, Venice established schools which lasted for over 600 years and were the envy of Europe. These schools were attached to the churches and taught a trade, bound by common nationality, charity and religion. They fell into decline when Constantinople was captured by the Turks. The Republic eventually fell to Napoleon Bonaparte in 1797 and came under Austria, remaining part of the latter until it was brought back to Italy in 1866.

One hundred and fifty small side canals, the Grand Canal and the city's one and only main square, the Piazza San Marco, dominate Venice. The Piazza is both the religious and political centre of the city. The Ducal Palace, dedicated to the Doges, and the Church of St. Mark, with relics of the saint, are both situated here. The Grand Canal, the main thoroughfare, is dotted with numerous Venetian palaces of remarkable beauty. The canal narrows and boat traffic increases as one approaches the famous Ponte Rialto (Rialto Bridge), arched high over the waters. The windows in the arch belong to the shops inside. This is the commercial hub of the city, with open-air vegetable, fruit, and fish markets on the left, and on the right, an upscale shopping district.

When Napoleon entered Venice with his troops in 1797, he called Piazza San Marco "the world's most beautiful drawing room" and promptly gave orders to redecorate it. His architects demolished an old church that stood at the end of the square farthest from the Basilica, and put up the Ala Napoleonica [Napoleonic Wing] to unite the two 16th-century buildings on either side. Today the arcades of these three grand buildings shelter shops and cafés. The Tetrarchs, a sculpture group dating to Roman times, stands in a corner of the square. The Basilica di San Marco is one of Europe's most beautiful churches. An opulent synthesis of Byzantine and Romanesque styles, it is laid out in a Greek cross topped off with five plump domes.
Begun in 1063 and inaugurated in 1094, it was built to house the remains of St. Mark the Evangelist, which had been stolen from Alexandria two centuries earlier by two agents of the Doge. St. Mark, the founder of the Coptic Church in Egypt, was the missionary from Libya who went north into Europe along the Adriatic and wrote his gospel. The story goes that the Doge's agents stole the Saint's remains and hid them in a barrel under layers of pickled pork to get them past the Muslim guards. These remains were finally returned to Alexandria by Pope John Paul II as a symbol of peace. Four bronze horses, transported from the hippodrome of Constantinople by the Venetians in 1204 after the Venetian conquest during the Fourth Crusade, stand atop the Basilica, symbolising the power and force of the Republic.

The three exterior facades of the church are richly decorated with marble inlays. The various statues, of diverse origin and importance, and the classical and mythological subject matter of the reliefs stressed the idea of the state as sacred, noble and traditional. In Venice, the Doge elected the Patriarch and bishops without any interference from Rome. Gold, used everywhere, including on the exterior in the four large lunettes, assured the people that Venice feared nothing. The interior confirmed the concept of magnificence and uniqueness. The flooring is in marble mosaic, the columns are of rare marbles, and the walls are lined with marble in various colours below and mosaics in coloured glass and 24-carat gold above, as are the cupolas. One has the impression of walking over a rich oriental carpet, surrounded by a half light which accentuates the gleaming richness of the golden mosaics. The golden altarpiece inside was the last work of art left intact at the fall of the Republic and is a testimony to the vast wealth of the state treasury. It is a unique example of Gothic art, measuring 3.5 by 1.4 meters and inset with oriental enamels of various eras and precious and semi-precious stones.

The Campanile, the tall brick bell tower in the square, is a reconstruction of the original, which stood for 1,000 years before it collapsed one morning in 1902 without warning. The view from the tower on a clear day, spanning the city, the Lido, the lagoon, and the mainland as far as the distant Alps, is breathtaking. The Basilica and Campanile, two essential elements of the Piazza, seem almost abstract symbols of the Oriental and Christian worlds, the former decorative and curvilinear, the latter severely austere. For centuries divided, these worlds were brought together in Venice and reborn through the centuries of her history.

Rising above the Piazzetta [a small extension to the Piazza] San Marco, the Ducal Palace is a Gothic-Renaissance fantasia of pink-and-white marble, a majestic expression of the prosperity and power attained by Venice during its most glorious period. It has a delicate, weightless quality, rather like lace, due to its softly coloured design motif, pinnacles and aedicules. It was begun in the 9th century, and the Republic spared no expense in embellishing both interior and exterior. At first it resembled a castle and was built primarily for defensive purposes. The present building retains the Gothic forms of the 14th century, with wide open loggias and windows. The main entrance is through the Porta della Carta, or Paper Gate, leading to the Giants' Staircase.
The palace had well-defined functions, with rooms dedicated to the Doge's private apartments, various magistracies such as the College and the Senate, the Palace of Justice with its courts and the prisons, the Armoury, the Great Council Hall, the Hall of Scrutiny, the Hall of the Four Doors and the Hall of the Antecollege, to name but a few. The Golden Staircase was the official route along which one passed inside, on one's way to the audience halls. All rooms were richly decorated with stuccoed ceiling panels and framed frescoes by Tintoretto, Veronese, J. Palma the Younger and other leading artists of the time. In the Great Council Hall, the largest hall in the palace, measuring 54 meters long, 25 meters wide and 12 meters high, hangs Tintoretto's masterpiece Paradise (Il Paradiso), a kaleidoscope of figures in movement and the artist's largest piece of work. Overlooking a side canal is the Bridge of Sighs, connecting the prisons and the courtrooms. Marble tracery windows in the bridge gave prisoners their last glimpse of the outside world. Casanova was the only prisoner ever to escape, through the roof of the palace.

Across the waters of St. Mark's Basin, facing the Piazzetta, is San Giorgio, an island with its Palladian church, along with the Church of Santa Maria della Salute and the Dogana [customs house]. Palladian is a classical Roman style of architecture developed by Andrea Palladio in the 16th century, in which columns are built into the walls of the facade. The Piazza today still throbs with vitality, colour and gentle beauty as pigeons, people and classical music played by live bands in the outdoor cafes create a harmonious rhapsody. Serenissima. There is only one so serene in this world. 🙂

The craft of Murano glass started in Venice in the 11th century. A highly ornamental and creative form of hand-blown glass, it requires glass-blowers to have a minimum of 15 years' experience before they are qualified for independent work. The most expensive glass is the red-coloured variety, as it contains gold.

Painted black, 1,000 pounds in weight, roughly 11 meters long and with the left side longer than the right by 20 centimetres, the gondola epitomises all that is romantic and historical in Venice. In earlier times, Venetians used logs of wood to sail through the canals. The rich decorated their boats in gold and semi-precious stones. Venice was then hit by the plague in the 15th and 16th centuries. The state thereafter passed a law requiring all boats to be painted black as a sign of mourning. And they have stayed black ever since.

Note: My camera got damaged whilst travelling through Greece and Italy. I have therefore used photos from various guides and museum books for my Italy web pages, as per the credits.
Reduce, Reuse, Recycle. These are the three R’s. Our school wants to be greener and this week we are celebrating activities with the objectives of making teachers and students: - reduce their water and electricity consumption and the rubbish we produce, - reuse objects if that is possible, and - recycle as much rubbish as possible. These objectives are never easy to fulfill but there is much at stake: our Earth. There are thousands of things which can be done to reduce our consumption and waste. In this fantastic American website you can explore and read on how to help save our planet. But maybe you like simulation games too. If so, there are two funny and interesting interactive websites you might like to explore:
The Brazilian Keratin Treatment is a hair-straightening process that promises straight, smoother, and shinier hair that can last anywhere from six weeks to a few months. This process can take up to four hours and includes multiple steps. According to salontoday.com, researchers in Brazil discovered this treatment more than 10 years ago and because of its success in maintaining straight hair, many stylists in the United States have adopted it. How It Works Salontoday.com explains that original Brazilian researchers discovered this technique when they learned that keratin (a natural protein) and cosmetic-grade formaldehyde (which cross-links proteins in hair) could be applied to the open hair cuticle. After this formula is applied thoroughly to the hair, a 450 degree flatiron is used to seal it. This step may be repeated several times, until the keratin is thoroughly infused. During this process, the heat from the flatiron can release fumes. These fumes can possibly contain formaldehyde, a well-known chemical that is considered a carcinogen by many health and safety organizations, including the International Agency for Research on Cancer. It is considered a probable human carcinogen by the Environmental Protection Agency (EPA): “Formaldehyde, a colorless, pungent-smelling gas, can cause watery eyes, burning sensations in the eyes and throat, nausea, and difficulty in breathing in some humans exposed at elevated levels (above 0.1 parts per million). High concentrations may trigger attacks in people with asthma. There is evidence that some people can develop a sensitivity to formaldehyde. It has also been shown to cause cancer in animals and may cause cancer in humans. Health effects include eye, nose, and throat irritation; wheezing and coughing; fatigue; skin rash; severe allergic reactions. May cause cancer.” (http://www.epa.gov/iedweb00/formalde.html) Even if the product you’re using claims to be formaldehyde-free, it is important to check the ingredients or MSDS (Material Safety Data Sheet) of the product as it might contain chemicals that are just as potent and dangerous as formaldehyde. As reported on CBSnews.com, there are no existing studies that conclude the total effect of the inhalation of fumes via Brazilian Keratin Treatment [UPDATE 11/1/10– Check out the latest research from Oregon OSHA], but The Cosmetic Ingredient Review Panel, a group of scientists and doctors dedicated to evaluating and establishing recommended standards for cosmetic ingredients, suggested that a safe level of formaldehyde is .2%. Allure magazine tested Brazilian Keratin Treatment samples from United States salons that contained at least ten times more than that number. Formaldehyde fumes have the potential to harm the client, the applicator, and people in fairly close proximity to the source. Formaldehyde Exposure to Employees & The Law The Occupational Safety and Health Administration (OSHA) enforces formaldehyde overexposure regulations: According to OSHA standard 1910.1048 , the current permissible exposure limit (PEL) for formaldehyde is 0.75 part of formaldehyde per million parts of air (ppm) as a time-weighted average over an 8-hour period. The short-term exposure limit (STEL) is 2 ppm for any 15-minute sampling period. Engineering controls should be taken if your action level is 0.5 ppm. To determine this you will need to conduct sampling. It is recommended to use a Certified Industrial Hygienist to conduct this testing. 
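To make the arithmetic behind those limits concrete, here is a minimal sketch, under assumed sample values, of how an 8-hour time-weighted average (TWA) is calculated from air-sampling results and compared against the PEL and action level quoted above. The concentrations and durations are invented for illustration only; as noted, actual monitoring and its interpretation should be handled by a Certified Industrial Hygienist.

```python
# Hypothetical air-sampling results: (concentration in ppm, duration in hours).
# The durations should add up to the full 8-hour shift being assessed.
samples = [(0.9, 1.5), (0.3, 4.0), (0.6, 2.5)]

PEL_8HR_TWA = 0.75   # ppm, OSHA permissible exposure limit (8-hour TWA)
ACTION_LEVEL = 0.50  # ppm, level at which engineering controls are recommended
# The 2 ppm STEL applies to separate 15-minute samples and is not checked here.

# Time-weighted average: sum of (concentration x time), divided by 8 hours.
twa = sum(conc * hours for conc, hours in samples) / 8.0

print(f"8-hour TWA: {twa:.2f} ppm")
if twa > PEL_8HR_TWA:
    print("Exceeds the PEL; exposure must be reduced.")
elif twa > ACTION_LEVEL:
    print("Above the action level; engineering controls and monitoring are indicated.")
else:
    print("Below the action level for this shift.")
```

With the sample numbers above, this works out to about 0.51 ppm, which is under the PEL but over the 0.5 ppm action level, the point at which engineering controls such as the ventilation discussed next come into play.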
As said on Salontoday.com: “Vent, Vent, Vent.” Source capture ventilation is an optimal technique for assisting in the removal of formaldehyde fumes. To do this, a fume extractor would be placed at the source of the fumes (in fairly close proximity to the client’s hair) to quickly pull them into an Activated Carbon filtration system and then released back into the room. Sentry Air Systems recommends the Portable Floor Sentry [Model SS-300-BKT] for assistance in the removal of Formaldehyde fumes during Brazilian Keratin Treatments.** This fume extractor is portable, lightweight, quiet, and equipped with a self-supportive flex hose so the operator can position the source capture area in the most effective position. The video below is a demonstration of a Sentry Air System fume extractor being utilized at a salon during a Brazilian Keratin Treatment. **Please note that it is the individual’s responsibility to use multiple safety controls while performing Brazilian Keratin Treatment. Sentry Air Systems and its equipment is not deemed a determining source of whether or not your facility or operation is “safe” or meets any federal, state, or any additional authority’s guidelines concerning formaldehyde control in correlation with Brazilian Keratin Treatment. According to the National Cancer Institute, Formaldehyde is a known human carcinogen and is associated with lung disease and other serious illnesses. It is the individual’s sole responsibility to confirm the suitability of the equipment for their particular application. Proper maintenance, which entails changing the carbon filter on a frequent basis, is essential to ensure the equipment is functioning correctly. There is no guarantee that containment with this system will be 100% effective. If you are allergic or sensitive to formaldehyde and/or other chemicals used in the Brazilian Keratin Treatment, please consult a health professional. ** For further information on this unit and how it can decrease potential health threats to yourself and your clients, please call us at 800.799.4609, or email us at [email protected]. SalonToday.com, “Salon Today Investigates Brazilian Keratin Services”: http://www.salontoday.com/ArticleLanding/tabid/130/Default.aspx?tid=1&ContentID=237839 U.S. Environmental Protection Agency, “Formaldehyde”: http://www.epa.gov/iedweb00/formalde.html CBSNews.com, “Health Alarm Over New Hair Straightener”: http://www.cbsnews.com/stories/2007/10/26/earlyshow/health/main3414868.shtml Allure Magazine, “Scared Straight”: http://www.allure.com/magazine/2007/10/scared_straight?currentPage=1 National Cancer Institute, “Formaldehyde and Cancer Risk”: http://www.cancer.gov/cancertopics/factsheet/Risk/formaldehyde#a4 Livestrong.com, “Brazilian Keratin Treatment”: http://www.livestrong.com/article/69316-brazilian-keratin-treatment/ United States Occupational Safety and Health Administration, “Formaldehyde”: http://www.osha.gov/SLTC/formaldehyde/
To say that Marcel Duchamp was a strange fellow is just scratching the surface. The man behind the Dada movement, he once signed a urinal and proclaimed it art. That piece, "The Fountain", is now considered a major landmark. So maybe Duchamp knew what he was doing. So, sometime in the early 1920s, Duchamp decided that he needed a new identity. Apparently, being known as the guy who signs urinals wasn't enough. So what did he do? Well, let him explain the process: "I wanted to change my identity, and the first idea that came to me was to take a Jewish name. I was Catholic, and it was a change to go from one religion to another!" Well, isn't that terrific! We would welcome Duchamp with open arms, of course. Who but us Jews to appreciate such an eccentric genius! Sadly, there was an obstacle: "I didn't find a Jewish name that I especially liked, or that tempted me, and suddenly I had an idea: why not change sex?" See, we can understand Duchamp a bit here. We've spoken numerous times of how hideous Jewish names tend to be... Wait, wait, wait, wait. Wait! Change sex? Huh? Oh, apparently Duchamp started dressing up as a woman, under the pseudonym Rrose Selavy. That's as far as the sex change went. Huh. Rrose Selavy? He couldn't find a Jewish name, yet he was happy with that?
- Hives Treatment for Babies
- Symptoms of the Disease
- Types of Hives
- Forms of the Disease
- Diagnosis of Hives
- Treatment of Hives in Children

Hives Treatment for Babies
Among the most common allergic diseases in children, affecting primarily the skin, an important place belongs to hives. They can quite easily be recognised by parents themselves. Most often the disease troubles children up to 3 years of age, but hives can also appear in adolescents and adults.

Symptoms of the Disease
In a case of hives, the baby's body (often in skin folds, on the lips, and in areas of close contact with clothes) is covered with light pink or deep red blisters of widely varying size and shape. These rashes itch severely; as a result, a child starts to scratch them, which makes them grow significantly larger and merge with each other. If you gently press down on such spots, you will notice white bulging points in their centre. Hives are the skin's response to an allergen, which triggers the production of a large amount of histamine, thinning the walls of the blood vessels and making them more permeable. As a result, a significant amount of fluid passes into the skin, which leads to swelling and blisters filled with water. A peculiarity of hives is that they typically fade quickly. The skin rash usually lasts no more than a few hours (in rare cases up to 2 days), after which it disappears from one area and emerges in a totally different area of skin. The most important thing when you detect hives in your child is to find the cause of the disease, which is often an allergen. If this is not done and contact with the causative agent continues, the child's condition will worsen, with hypersensitivity and swelling of the skin. Quincke's edema can occur in severe cases. The most common irritants that cause hives are:
- foodstuffs: milk, eggs, sausage, seafood, nuts, honey, carrot, strawberry, melon, citrus, apples, chocolate or various kinds of food additives;
- infections caused by viruses, bacteria or parasites;
- medicines: antibiotics, analgesics, nonsteroidal anti-inflammatory drugs, ACE inhibitors, B-group vitamins, etc.;
- impurities in the air: dust, pollen, fluff and wool, etc.;
- physical stimuli: heat, cold, sunlight, water, vibration, sweat, friction, etc.;
- bites and stings: venom of bees, wasps, jellyfish, etc.;
- some perfumery products and dyes;
- nickel or resins.

Types of Hives
Acute and chronic forms of the disease are distinguished depending on the cause of the hives, their nature and their duration. The symptoms of acute hives usually appear abruptly (within 1-2 hours of exposure to the allergen), so it is often quite simple to determine the cause. In these cases, the symptoms will reappear each time after direct contact with the allergen. Most acute cases last 24-48 hours, and the disease is easily treatable. If the rash lasts for more than a few weeks, it is worth talking about the chronic form of the disease. It is successfully treated in children, and in approximately 50% of cases the disease disappears on its own within 6 months. However, there are times when the disease is resistant to ongoing treatment, and it then persists for many years. Fortunately, chronic hives occur quite rarely. In such cases, consulting a pediatrician or a dermatologist is strongly recommended.
Forms of the Disease
Hives may take a mild, moderate or severe form. In the mild form, the sick child's general condition is satisfactory. The symptoms are typical but weakly expressed: for example, the itch is not strong, there are no signs of intoxication, and there is no swelling. Typically, in mild hives, the rash covers one area for no more than 24 hours and then disappears.

The moderate form of the disease is characterised by a deterioration in the child's condition. There are marked symptoms of intoxication, a fever may be present, and Quincke's edema (giant hives) appears on some areas of the body. This is a rapid, abrupt allergic swelling of the skin, which spreads throughout the body within a short space of time. The eyelids and lips swell first; then the baby's hands and face change drastically, and swelling of the larynx leads to breathing difficulties. In the moderate form, the disease can also affect the gastrointestinal tract. The combination of hives with swelling of the larynx is quite dangerous.

In the severe form, all of the above-mentioned symptoms are pronounced and quite serious.

Diagnosis of Hives
Diagnosing hives in children and revealing their cause is usually fairly easy. To help, you should tell the doctor the time and place of the appearance of the first symptoms. To confirm the diagnosis, the doctor will take skin tests, conduct a thorough examination and send the child for a blood test.

Treatment of Hives in Children
Let's figure out what to do if a child has hives and how to treat the disease. The treatment of hives proceeds in 4 phases:
- Elimination of exposure to the allergen.
- Cleansing of the body.
- Intake of drugs (drug therapy).
- Dieting.

First, you need to remove the effect of any factors that might trigger the disease. If the hives appeared as a result of a food allergy, the causative agent must be removed from the child's body as soon as possible (give him a laxative and have him drink plenty of liquid).

Cleansing enema. This is usually applied in the case of food allergies. It is most effective in the early hours of hives.

Drug therapy. The most effective drugs for treating hives are antihistamines. They relieve the acute form of the disease, reduce itching and can even eliminate the rash. They can be given by injection or in pill form. Taking such medications at night is helpful, especially if a child has trouble sleeping because of the disease, as they cause drowsiness. In addition, compresses can help to relieve itching (mix a tablespoon of vinegar with a glass of cool water and apply it as a compress to the affected area of the body).

Dieting. Dieting will help to maintain the effect of treatment and keep the causative agent from getting back into your child's body. If the hives are caused by a food allergen, you should first eliminate the possible irritant from the child's daily diet and try to adhere strictly to a hypoallergenic diet for 2-4 weeks. This diet can include kefir, cottage cheese, steamed or boiled vegetables, as well as some kinds of non-allergenic fruit. At the same time, exclude pastry and baked goods, strong tea, salt in large quantities and canned products. During the rehabilitation period, do not give your child eggs, milk, carrots, beets, red peppers, tomatoes, mushrooms, citrus fruits, apples, cocoa, honey, nuts, or any smoked and fried foods and condiments.
After this period, you can gradually add new products in small quantities to your child's menu. If hives do not reappear within a week of adding new products, you can include boiled fish and meat in your child's diet. However, the return to a standard diet should not happen earlier than a month later. While a child has hives, hot baths and sunbathing should be avoided, because ultraviolet light has a negative impact even on healthy skin, let alone irritated skin. Therefore, if your child breaks out in hives in the summer, dab his skin with a quality sunscreen before going outdoors. Pay special attention to the treatment of this disease, because hives rank second after asthma among dangerous allergic conditions.
| Court | City | State, ZIP | Distance |
|---|---|---|---|
| Forest County Circuit Court | Crandon | Wisconsin 54520 | 12 miles |
| Oneida County Municipal Court | Rhinelander | Wisconsin 54501 | 13 miles |
| Vilas County Circuit Court | Eagle River | Wisconsin 54521 | 24 miles |
| Langlade County Circuit Court | Antigo | Wisconsin 54409 | 30 miles |
| Lincoln County Circuit Court | Merrill | Wisconsin 54452 | 37 miles |

Courts are government institutions that administer justice in criminal and civil matters. They're mediated by one or more judges, magistrates or justices who have been appointed or elected. Courts rely on an adversarial system of justice, which is intended to resolve disputes with fair and impartial outcomes. Their primary purpose is to decide questions of law and determine facts. Attorneys for each side present arguments and introduce evidence to support their arguments. In some court cases, the final judgment is handed down by a jury of citizens chosen through a selection process. In other cases, a judge makes the final ruling. Cases may be heard in criminal court, civil court, or appellate court.
In most countries around the world, well-educated individuals are the ones that utilize social media, however, in Germany this is not the case. According to The Local, over 50% of social media users in Germany aren’t well-educated. On the other hand, 70% of social media users in the UK are well-educated. Image Courtesy of OECD Statistik’s Twitter Account Communism and Eastern Europe: Eastern Germany (what used to be the GDR) is post-communist, meaning previously it consisted of a classless society devoid of private ownership and capitalism. It also entailed resisting globalization and being closed from other countries. Hence, Eastern German citizens might still have that conservative communist mindset lingering in the back of their heads. They could be trying to protect their local businesses and people from Western firms and ideologies. Dr.Klemens Skibicki from Cologne’s Business School supports this theory by stating that in Germany, “protectionism and distaste for communication through the market economy [makes people] see the power of social media more as a threat than an opportunity”. One reason why it can be seen as a threat to Germans is due to the fact that with social media comes persistence, replicability, scalability, and searchability of all things posted on it. Any person from anywhere around the world can easily search, find, and exponentially share anything posted by a German on social media. In my Advertising class, I learned of a phenomenon in Eastern Europe called ostalgia, or nostalgia for communism. It’s the idea that after the abolishment of communism in a formerly communist country, citizens of the country eventually miss their older, “trusted” brands and products that they grew up with during the communist era. The Local states that Germany is the second oldest nation in the world. Hence, a lot of its citizens in the Eastern part of Germany are old and have spent more of their lives in the communist era. One reason why educated Eastern Germans tend to refrain from using social media could be due to the many older Eastern German citizens having ostalgia; they could want to stick to their older news and communication mediums that they used during the communist era as they might trust them more because they have been around them for longer. East Germany vs West Germany: Professor Van Hook from Jones International University has stated that Germany is having problems trying to unify Western and Eastern ideologies even decades after the fall of communism in the East. According to The Washington Post, Eastern Germany has less foreigners because it is a less accepting environment. This is partially because there is still the existence of right-wing neo-Nazi sympathizers in Eastern Germany. Being right-wing means being more conservative, hence, the right-wing party supporters in the East would probably be reluctant to open up to the world through social media. Possible Effects on Society and Social Capital: Robert Putnam defines social capital as “the connections among individuals and the social networks and the norms of reciprocity and trustworthiness that arise from them”. This lack of social media network usage by educated people in Germany could result in a higher offline social capital relative to other countries, as people aren’t staring at their phones on social media all the time, instead they speak to each other in person more. 
However, it could also result in lower offline social capital, as people are less aware of what others are doing or where they are, and it might be harder to communicate, schedule, and stay updated on meetings or get-togethers. It has negative consequences for Germany's online social capital, as educated people communicate online through social media less than in other countries. This means less bridging (weak relationships) being turned into bonding (strong relationships), and less maintenance of strong bonds through social media.

The reason why other post-communist countries such as the Czech Republic (where only roughly 30% of social media users have low or no education) aren't generating social media statistics similar to Germany's could be Germany's more aged population and the many older, conservative, right-wing Eastern Germans sticking to the traditional forms of media and communication used before the emergence of social media and during the communist era. As mentioned by Van Hook, the post-communist ideologies of Eastern Germany have been in conflict with the more liberal, capitalist ideologies of the West. I find this fascinating because it exemplifies the immense impact of communism on a society. Today, 27 years after the fall of the Berlin Wall, the socialist mindset instilled by the GDR's communist rule is still present and affects many people in Eastern Germany. Also, social media is a lot more popular among younger audiences and generations in general. Hence Germany, with the second-oldest population in the world, has many older people who aren't as attracted to social media as people in younger societies, who are more eager to sign up for and engage with it.

Featured Image Courtesy of Shivang Bajaj
Ann Stewart Anderson's exhibition Women and War: From Troy to Terrorism presents images created to emphasize the roles women have played during war. History is filled with depictions of battles: wars fought with swords and bombs, on horseback and with drones. Troops are killed and injured. Some are decorated heroes; others remain the unknowns. Generals plot; front-liners lead surges with bravery and valor. Military events form national histories of territorial expansion, of ideologies defended and defeated. The combatants leave wives and sisters, mothers and aunts, grandmothers and lovers, children and mothers-in-law: thousands of women whose lives are clouded by war-induced grief. They mourn the deaths of sons, raise fatherless children, and live with husbands and fathers who bear permanent physical and psychological wounds. There are no Arlington Cemeteries for mothers, no Arcs de Triomphe for orphans, no Wall of Remembrance for widows, no Eternal Flames for sisters. Anderson presents these survivors through painted images of the iconic women from antiquity who have become symbols for the effects of war on women, through images of modern female combatants and widows, and through mixed-media depictions of the men lost in battle. Women and War: From Troy to Terrorism acknowledges and honors the women for whom war is about destruction and who inevitably live out lives permeated with the unfathomable sorrow of wartime loss. Visit the Women and War: From Troy to Terrorism website:
TODAY roughly 10 per cent of the population – around 500,000 people in Scotland alone – live with knee osteoarthritis, which at one extreme can be a mild irritant and at the other can ruin lives. Osteoarthritis is the most common chronic musculoskeletal disorder and there is currently no cure available. It is worsened by some of the major health issues of our time, including an ageing population, obesity and a sedentary lifestyle. The reaction from some medical professionals, however, is still that not much can be done about it and that it is simply part of getting older for some people. But only last year, a report from patient-led charity Arthritis Care looked at the consequences and found out that many people had to retire earlier than they would like because of their knee condition. There were stories of plumbers and electricians and other self-employed people who simply had to give up their job. The condition is also the leading cause of disability allowances, so the social and economic impact is huge. This is one of the reasons Glasgow Caledonian University (GCU) is co-ordinating a €4.2 million (£3.6m) European research project into knee osteoarthritis – KNEEMO – which begins this month and will run for the next four years. Working with partners from six other universities and three private companies, we will oversee the development of new methods to diagnose, treat or even prevent the disease from developing among those who are particularly at risk. We will also set up a network to share best practice and to train researchers who are working on treating the condition. KNEEMO will look at how knee osteoarthritis is diagnosed, developing new techniques which will allow doctors and other allied health professionals such as physiotherapists and podiatrists to spot patients who are showing signs of the illness and also those who are at risk of the disease developing. Knee osteoarthritis is currently treated with pain or anti-inflammatory medication, exercise therapy or the use of braces, with knee replacement surgery as the final option. Research has shown the effectiveness of treatments varies greatly and the project will seek to improve this by more closely matching treatments to the various types of people who live with the condition. It also unites the most renowned osteoarthritis researchers in Europe. This brings in the expertise needed to make progress in treating the condition. The scale of it is another advantage. We have a large number of organisations carrying out 15 parts of the study, and this lets us look at all aspects of knee osteoarthritis and join them up within the study. GCU’s own strength is in biomechanics. We have a gait lab on campus, and with osteoarthritis there is a large research focus on biomechanics – basically the forces working on the joint exceeding what the joint can handle. In a healthy human being, most of the forces go through parts of the knee which are able to handle them, but it is thought that with osteoarthritis too much of the forces are going through parts which can’t handle those forces. So most of our research at GCU is looking at those joints and forces and considering the biomechanical issues. Other partners bring their own strengths. Aalborg University in Denmark, for example, are experts in computer modelling, and will build a more accurate 3D representation of the knee joint. They will create personalised computer models of patients’ knees for the first time. 
These will make it easier to understand how various factors, such as the way a patient walks or runs, are linked to the disease. Another part of the project, run by the University of Southern Denmark in Odense, will develop tools that will help to identify people at a high risk of developing osteoarthritis early, and to prevent them from becoming severely disabled. My PhD in the 1990s was about osteoarthritis. My professor at that time in Utrecht said to me: “The first thing you need to know is that nobody is interested, but it is a huge problem and someday that will change.” We can no longer accept that knee osteoarthritis is something that people should be expected to live with – science has moved on. We now have a good idea of what’s going wrong, but if the default attitude is that there is nothing we can do then it will be hard to get our message across. That’s why it’s so important that we have this network, and the research and public attention which goes with it. Together they will change perceptions. • Martijn Steultjens is professor of musculoskeletal health at GCU. The focus of his research is on degenerative and inflammatory joint diseases and other chronic musculoskeletal pain syndromes www.gcu.ac.uk
To augment the story I posted yesterday, here is an article from Discovery Magazine followed by Dr. Mercola's interpretation. These articles explain, in some detail, the way that these chemicals can affect our health, down to the RNA in our cells. I know this is a rather lengthy post, but I think it's worth it to read, or at least skim, it. I also think it will give you a bit better perspective on this 100 Days of Fun with Real Food challenge.
What You Eat Affects Your Genes: RNA From Rice Can Survive Digestion and Alter Gene Expression
|Image courtesy of AMagill / flickr|
What's the News: It's no secret that having lunch messes with your biochemistry. Once that sandwich hits your stomach, genes related to digestion have been activated and are causing the production of the many molecules that help break food down. But a new study suggests that the connection between your food's biochemistry and your own may be more intimate than we thought. Tiny RNAs usually found in plants have been discovered circulating in blood, and animal studies indicate that they are directly manipulating the expression of genes.
What's the Context:
- MicroRNAs, or miRNAs, are molecules involved in the regulation of gene expression, the process by which genes give rise to proteins. miRNAs bind to the messenger RNAs that ferry genetic information from DNA to the ribosomes, which translate messenger RNAs into proteins.
- When a miRNA binds a messenger RNA, it keeps it from being translated, thus preventing that gene from being expressed.
How the Heck:
- This team of researchers at Nanjing University had been studying the miRNAs that circulate in human blood and were surprised to find that some of the miRNAs weren't homegrown but instead came from plants. One of the most common plant miRNAs was from rice, a staple of their Chinese subjects' diets. Intrigued, they confirmed with a variety of tests in mice that the miRNA, which, in its native environs, usually regulates plant development, was definitely coming from food.
- When they put the rice miRNA in cells, they found that levels of a receptor that filters out LDL, aka "bad" cholesterol, in the liver went down. As it turned out, the miRNA was binding to the receptor's messenger RNA and preventing it from being expressed, sending receptor levels down and bad-cholesterol levels up. They saw the same effect when they tried it in mice.
- Going further, when they fed rice to mice but also gave them a molecule that would turn off the miRNA, the liver receptor bounced back and bad cholesterol levels went down.
- The team concludes that miRNAs may be a new class of functional components in food, like vitamins or minerals: even in an animal that's pretty far removed from their home organism, they can manipulate gene expression and have an effect on nutrition.
The Future Holds:
- It's only logical that what we eat has an effect on the expression of our genes, in the general sense that nutrients from food are involved in cellular processes that control and are controlled by gene expression. But this is an unusually direct route, and surprising from an organism that's so different from mammals.
- Since miRNAs from plants haven't been on scientists' radar before, this should be a field ripe for further exploration. Do corn miRNAs circulate in the blood of people in societies that eat gigantic quantities of corn, like the US? What receptors might those miRNAs control?
Reference: Zhang, et al.
Exogenous plant MIR168a specifically targets mammalian LDLRAP1: evidence of cross-kingdom regulation by microRNA. Cell Research, (20 September 2011) | doi:10.1038/cr.2011.158
Dr. Mercola's Interpretation
"You are what you eat" is one of the most profound and instructive sayings ever to be passed down to us through the ages, and thanks to an explosion of exciting new research into the way that food directly affects your genes, it can no longer be written off as merely a metaphorical expression. In fact, food provides far more than just the material "building blocks" and "fuel" for the 'body-machine'; it is also a source of genetic information, which is capable of informing the cells and processes within your body, for better or for worse. What is quite amazing is the difference in biological response when comparing the right and wrong types of foods. In fact, new research has revealed that eating the wrong plants can actually directly alter your genetic expression, which can lead to a myriad of diseases.
Micro-RNA Molecules from Your Food May Control Up to 30 Percent of Your Genes
Groundbreaking new research shows that microscopic RNA in the plants you consume enters your body and is actually capable of affecting the expression of up to 30% of your genes! Never before could it have been imagined that your "genes" could be so profoundly affected by things you eat. There is also the field of lectinology, which has opened our eyes to how plants – particularly grains and legumes – have a set of defenses, not unlike "invisible thorns," which can cause direct, non-immune mediated harm to a wide range of tissues and organs within your body. Medical science is beginning to awaken to how profoundly food is intertwined with health and disease, how nutrients affect genes, and how our genes respond to nutrients. This, in fact, is the field of study known as Nutrigenomics – something, I believe, you will be hearing far more about as the science begins to gain wider appreciation. It is a burgeoning new field, launched soon after the completion of a working draft of the Human Genome Project (2003), which failed to provide the long sought after "holy grail" of modern biology. In a nutshell, the project failed to identify one gene for every one protein in the human body, forcing researchers to look to epigenetic factors – namely, "factors beyond the control of the gene" – to explain how the body is formed, and how it works. What is the most important factor beyond the control of the gene? Diet.
Eating the Wrong Plants Can Mess With Your DNA Expression
Chances are you've never heard of micro RNA (miRNA) … but that doesn't mean it hasn't already been impacting your health … RNA is one of three major macromolecules, like DNA. Micro RNA are basically small pieces of RNA that interact with your genes, essentially stopping certain genes from being expressed. MiRNA exists in human body fluid naturally; for instance, researchers have detected high expression levels of immune-related miRNAs in breast milk, particularly during the first 6 months of lactation. It's thought that this genetic material is transferred from mother to baby to help modulate the development of the infant's immune system.
Cow's milk also contains miRNA, which is currently being explored as a possible new standard for the quality control of raw milk. However, micro RNA also exists in plants, and for the first time research has shown that eating the wrong plants may transfer this plant miRNA to humans -- with potentially devastating implications. The study, published in the September 2011 edition of the journal Cell Research, determined that microRNA from cooked plant foods like rice, wheat and potatoes can in fact collect in your blood and tissue, leading to a number of potential health problems. The study further revealed that microRNA remains completely stable after not only cooking, but through the digestion process as well. Most importantly, the researchers found a significant quantity of microRNA in the human body, concluding that "… plant miRNAs are primarily acquired orally, through food intake." So whenever you eat rice and certain other plant foods, including potatoes and wheat, you are ingesting genetic material that may turn certain genes "off." To date, microRNA has been implicated in a number of diseases ranging from cancer and diabetes to Alzheimer's disease. But what exactly is microRNA, and why is it so important?
"Gene Regulators" in Your Rice, Wheat and Potatoes
MicroRNA has been widely shown to alter many critical biological processes, including apoptosis – the process of programmed cell death and DNA fragmentation. As a result, the dysregulation of microRNAs has been linked to cancer and various other diseases. However, microRNA are also responsible for regulating your genes on a very large scale. As mentioned, it has been estimated that miRNAs account for less than 1% of genes in mammals, but that up to 30% of genes are regulated by them. Amazingly, microRNAs are known to regulate the flow of genetic information by controlling the translation or stability of something known as messenger RNAs, which is a molecule of RNA that carries valuable genetic coding information within your body. What's more, this plant miRNA has been shown to interfere with human microRNA by mimicking it and binding to the receptors. In the study, researchers examined the two highest levels of these microRNAs in human participants, and found that it is shockingly prevalent among many dietary plant staples. As results of the study show, three microRNAs were detected in rice and other foods including Chinese cabbage, wheat, and potato. Of course these are all highly common food staples for many families not only in the United States, but around the world. This means that you may be unknowingly consuming plant microRNAs that could be increasing your risk of cancer and other disease. Even more concerning is the fact that the study authors observed this effect in both healthy men and women, reporting: "Upon investigation of the global miRNA expression profile in human serum, we found that exogenous plant miRNAs were consistently present in the serum of healthy… men and women." What you eat, therefore, is who you are in the most literal sense possible. This fact, while often overlooked, is fundamental in understanding how to optimize your health. If you eat the right foods, you thrive; eat the wrong foods, and you suffer.
The problem is that the field of nutrition is infused with the same intensity of impassioned debate and confusion as religion and politics – and rarely, only rarely, do you get a clear picture of what is good for you, as an individual. It can take a lifetime to figure out how to perfect a diet, particularly one suitable for you as an individual. The good news is that modern research is beginning to make headway in figuring out what is good for virtually all humans, at least in most cases. Certain foods appear to be problematic for many … and most grains continue to be at the top of this list.
Lectins: "Invisible Thorns" of the Plant Kingdom
MicroRNAs are only one component of plant foods that stretches beyond the scope of vitamins and minerals … Did you know, for instance, that many of the plants we consume for food, particularly grains and legumes, contain chemical and physical defenses that protect against being eaten? These include anti-nutrients that interfere with the digestion of starches (anti-amylase), proteins (protease inhibitors), minerals (phytate), and many other similar molecules. Sprouting, fermentation, cooking and processing can sometimes reduce and/or eliminate these substances, but not in all cases. There is one category of particular interest, known as lectins. Lectins get their name from the Latin word legere, from which the word "select" derives – and that is exactly what they do: they select (attach to) a very specific number of biological structures. Lectins are capable of disrupting the health of the creatures that consume them, often piercing through the protective coating of their digestive tracts and gaining entry into systemic circulation. Wheat, for instance, contains an exceptionally small lectin known as wheat germ agglutinin, or WGA, which is capable of attaching to the surface proteins of nearly all of its natural predators, from bacteria to fungi, worms to insects, mice to men. Because all of these creatures are composed, in part, of the biopolymer n-acetyl-glucosamine, and because WGA is designed to attach – exactly and exclusively – to this glycoprotein (part sugar, part protein), it is Nature's ingenious way of saying: "Hey, back off!" – at least when it comes to eating excessive amounts of the seed storage form of the mature grass plant, e.g. cereal grains. In an article published on GreenMedInfo.com, Sayer Ji describes lectins as "invisible thorns," explaining: "Nature engineers, within all species, a set of defenses against predation, though not all are as obvious as the thorns on a rose or the horns on a rhinoceros. Plants do not have the cell-mediated immunity of higher life forms, like ants, nor do they have the antibody driven, secondary immune systems of vertebrates with jaws. They must rely on a much simpler, innate immunity. It is for this reason that seeds of the grass family, e.g. rice, wheat, spelt, rye, have exceptionally high levels of defensive glycoproteins known as lectins. These 'invisible thorns' are an ingenious means of survival." Lectins were first discovered in castor bean casings, which contain the lectin ricin. Ricin is so toxic that only a dose the size of a few grains of salt can kill an adult if injected or inhaled. In fact, the US military investigated it for potential military use in the First World War. Like micro RNA, lectins are capable of directly affecting gene expression within cells.
The Very Real Danger of Genetically Engineered Foods
Given the fact that research now shows microRNA appearing in humans who eat rice, it brings up many questions about the way the food we eat interacts with our physiology. While the Cell Research study had nothing specifically to do with genetically modified foods, the implications have everything to do with them. MicroRNA appears to have dangerous implications for human health, so it stands to reason that genetic modification, which by definition involves organisms in which the genetic material (DNA) has been altered, may too. Further, it brings up a whole new way by which GM foods might harm human health, considering researchers have been using genes very similar to micro RNA to "turn off" certain plant genes. As reported in The Atlantic:
"Researchers have been using this phenomena to their advantage in the form of small, engineered RNA strands that are virtually identical to miRNA. In a technique called RNA interference, or RNA knockdown, these small bits of RNA are used to turn off, or 'knock down,' certain genes. RNA knockdown was first used commercially in 1994 to create the Flavr Savr, a tomato with increased shelf life. In 2007, several research teams began reporting success at engineering plant RNA to kill insect predators, by knocking down certain genes. As reported in MIT's Technology Review on November 5, 2007, researchers in China used RNA knockdown to make:
'... cotton plants that silence a gene that allows cotton bollworms to process the toxin gossypol, which occurs naturally in cotton. Bollworms that eat the genetically engineered cotton can't make their toxin-processing proteins, and they die.'
And:
'Researchers at Monsanto and Devgen, a Belgian company, made corn plants that silence a gene essential for energy production in corn rootworms; ingestion wipes out the worms within 12 days.'
Humans and insects have a lot in common, genetically. If miRNA can in fact survive the gut then it's entirely possible that miRNA intended to influence insect gene regulation could also affect humans."
The research on micro RNA also has implications for the very doctrine by which biotech companies make claims about GM food safety: substantial equivalence (the idea that there is no difference between GM and non-GM crops). There is obviously much left to be discovered about how DNA and RNA interact with human beings … and it is becoming increasingly clear that plants with altered DNA cannot be "substantially equivalent" to their natural counterparts. The Atlantic continues:
"… if companies like Monsanto want to use processes like RNA interference to make plants that can kill insects via genetic pathways that might resemble our own, some kind of testing has to happen. A good place to start would be the testing of introduced DNA for other effects -- miRNA-mediated or otherwise -- beyond the specific proteins they code for. But the status quo, according to Monsanto's website, is:
'There is no need to test the safety of DNA introduced into GM crops. DNA (and resulting RNA) is present in almost all foods. DNA is non-toxic and the presence of DNA, in and of itself, presents no hazard.'
Given what we know, that stance is arrogant. Time will tell if it's reckless. There are computational methods of investigating whether unintended RNAs are likely to be knocking down any human genes. But thanks to this position, the best we can do is hope they're using them.
Given its opposition to the labeling of GM foods as well, it seems clear that Monsanto wants you to close your eyes, open your mouth, and swallow."
How Can You Eat to Optimize Your Genetic Expression?
Given the knowledge that the food you consume ultimately becomes the life source of your entire body, it is important that you eat well, not only to take in vital nutrients but also to optimize your genetic expression. This is cutting-edge information, but it is becoming very clear that there is far more to "food" than vitamins and minerals. Research has only scratched the surface of micro RNAs and their impacts on human health, but the preliminary research suggests they may provide one more method by which grains may harm your health. For most, it appears healthy eating entails limiting carbohydrates from grains and potatoes, and instead focusing on carbs from vegetable sources. This is in line with the "Paleo" way of eating, which involves focusing on foods that are in line with your genetic ancestry, such as vegetables, nuts and grass-fed meats, while limiting sugars and grains. Cereals, potatoes and bread were non-existent prior to the dawn of agriculture, and there's reason to believe these foods are discordant with our ancient genome. We need to relearn what foods are ideal for our bodies not just to live on, but to thrive on. You can find more information about how to eat to support positive genetic expression in my nutrition plan. Also keep in mind that your diet is but one way to influence your genetic expression. Your emotions, pharmaceutical drugs, exposure to pollutants, and even exposure to sunlight (vitamin D) and supplements like curcumin play a role in how your genes are expressed.
Now that you've read the articles, I do want to be totally upfront about the fact that I do not totally agree with Dr. Mercola's dietary suggestions for a strict Paleo diet plan. I do not believe that everyone needs to restrict themselves from entire food groups (grains, legumes, sugars and starches). While this may be necessary for some people, it can be equally unnecessary or even harmful to others. I believe that as unique individuals there is never a "one size fits all" dietary plan that works for everyone.
Additionally, I do want to ask you to take this idea a bit further . . . if plants are grown with chemicals, they take up those chemicals into their structure and the chemicals become a part of the plant. Now their seeds - which would be their offspring - are altered. When we consume these foods, our genetic codes (the RNA shifts Dr. Mercola spoke of) are also changed. This leads to a breakdown in the way our bodies function on a cellular level. Add to that the hybridization of foods, which changes their genetic codes as well. Now the animals that eat the plants and the humans that eat the animals are all subjected to a radical change in the RNA signature of a food that was originally digestible. This can lead to some very real problems in the ability to break down foods into particles that can be used by individual cells, creating fractures in the ways that our bodies were designed to function. In the coming days I will post what I believe to be the most radical ways to shift this quickly and efficiently. It's my desire to help you by arming you with facts, guiding you with suggestions and supporting you with ideas and recipes to inspire you to reach your own optimal health!
Here's to Your Best and Healthiest Life,
Legendary And Deservedly So
The history of Gibson Guitars is not only a fascinating story but stretches back farther than most people realize. Gibson's beginnings trace back to 1894 in Kalamazoo, Michigan, with a luthier named Orville Gibson. Orville was a top-notch craftsman and had strong opinions about instrument design and quality. He started with mandolins and an "F-hole" carved-top guitar design. His reputation as a master craftsman quickly spread, and in 1902 Orville incorporated the Gibson Mandolin and Guitar Company, Ltd. One of the greatest guitar brands in history was born.
A Change of Hands...
Orville's only patent was for an innovative mandolin design. In 1900, he met a group of investors who wanted to manufacture guitars and violins of his design under the protection of his patent. In 1904 Orville sold the rights to his patent to the group. After his sale of the patent, Orville's contribution to the company is unclear. Orville's health began to fail and he was eventually diagnosed with chronic endocarditis. Orville succumbed to the disease in 1918.
A Period of Serious Innovation...
A year after Orville's death, a virtuoso classical mandolinist and acoustical engineer named Lloyd Loar joined Gibson. Loar cultivated Orville's original carving concepts and brought about the creation of the L-5 guitar, which sported the first "F-holes" seen on a guitar. The L-5 became the first guitar to take a serious role in the orchestra scene, replacing the tenor banjo as a rhythm instrument, and became the basis for Gibson's dominance and superiority in the new field of archtop guitars. The 1920s saw a flurry of innovations. Along came elevated fretboards, height-adjustable bridges and one of the most important guitar components, the adjustable truss rod. Not only does the truss rod balance the tension of the strings and neck, keeping the neck properly aligned, it also prevents the neck from eventually warping under the tension of the strings. In 1924, Loar came up with an instrument that was about 30 years ahead of its time - the electric bass. Loar's bass was totally radical for the time and neither Gibson Guitars management nor the public accepted it. The rejection of his bass led to Loar leaving the company.
Gibson Goes Electric...
Gibson ES 150
The Big Band era of the 1930s was in large part responsible for the development of the electric guitar. Gibson upsized the L-5 to give it the oomph to cut through the horn sections of the orchestras. It was renamed the Super 400, the name being inspired by its price - $400. In that era that was a staggering sum of money for a guitar (now you'll pay thousands if you can find one). Not only was the price huge, the guitar was huge as well. It was a wonderful guitar but unwieldy as all get out. Gibson's solution was the ES 150. The ES 150 was a Spanish-style guitar designed to be electrified and fitted with a hexagonal pickup. The ES 150 was not only the first commercially successful electric guitar but also played a part in shaping the future of guitar playing. A young swing and jazz player named Charlie Christian discovered the ES 150 and used it to develop a single-note style of playing inspired by the solo lines of horn players. Charlie's forceful style and techniques evolved into lead guitar playing as we know it today.
Another Change of Hands...
During World War II materials were very scarce and Gibson's instrument production ground to a halt. In 1944 Gibson Guitars was bought by Chicago Musical Instruments, a noted music wholesaler.
Thanks to the war's effect on business there was an enormous demand for musical instruments. This resulted in production resuming in 1946. In 1948 Gibson Guitars made what was probably one of its best moves ever: they hired an industry veteran named Ted McCarty. In 1950, McCarty assumed the presidency of the company and remained there until 1966. Under McCarty's direction Gibson absolutely boomed. His tenure saw the development of some of the most iconic Gibson guitars, such as the Les Paul, the SG, the Explorer, the Firebird and the Flying V. In addition, his tenure also saw the development of the Tune-O-Matic bridge, the stopbar tailpiece and the humbucking pickup. As a result of these developments and innovations Gibson's workforce increased by a factor of 10, profits by a factor of 15 and sales ballooned by over 1000%. McCarty's time at Gibson was definitely a boom era for the company.
Next: Gibson Gets Solid...
Your body requires nutrition from the food you eat to carry out every body function and process. Vitamins and minerals must first be absorbed from the digestive tract by complex mechanisms. Sometimes, you may have a deficiency of a certain nutrient because your body is unable to absorb and use it properly. Vitamin and mineral deficiencies can cause several types of symptoms.
Check your skin, scalp and eyes for dryness and itchiness. Also check to see if your lips are dry and cracked. A lack of moisture in your skin may indicate low vitamin A or vitamin E levels. These vitamins also aid skin healing. Check your skin for slow-healing wounds. Look for flakiness and dandruff in your scalp; this may indicate a deficiency of vitamin A, vitamin B6 or the mineral zinc.
Inspect your tongue and gums. Canker sores or a sore tongue can indicate low levels of vitamin B2, vitamin B3 or vitamin B12. Tender, bruised or bleeding gums may be due to a deficiency of vitamin C.
Observe the inner lining of your eye by gently pulling your lower lid downwards. If the lining appears pale and white rather than pink, you may have an iron deficiency. This mineral is important for red blood cell production, and a deficiency can lead to low energy, tiredness and dizziness.
Examine your neck for a bump or bulge at the front center, close to where your larynx lies. Also check for bloating, weight gain, fatigue and low energy levels. These may indicate a deficiency of the mineral iodine, which is found in seafood and added to salt. Iodine is needed for thyroid hormone production, which is important for normal metabolism.
Examine your fingernails closely. There are several signs of mineral and vitamin deficiencies that affect the fingernails. If your nails are thin, brittle and easy to split, you may have low levels of magnesium or copper. If you have white spots, you may have a zinc or calcium deficiency. Spooned nails indicate low iron or vitamin B12 levels.
- Keeping a food diary can help you assess your daily diet to see if you are getting the right nutrition for your lifestyle.
- Low levels of iron can cause iron-deficiency anemia, while low levels of vitamin B12 cause pernicious anemia.
- Some vitamin and mineral deficiencies may be due to an illness or an adverse reaction to prescription medication; your doctor can determine the cause and treatment.
- If you have any symptoms or signs of a nutrient deficiency or illness, see your doctor for a check-up. Your doctor can recommend a blood test and other examinations to correctly diagnose whether you have a vitamin or mineral deficiency.
- Do not self-diagnose or treat yourself with supplements.
Climate Change/Science/Sun-Earth System
In Sun's Influence on Earth, a simplified view of the Sun-Earth system has been used. The results are correct, and show how simple physical principles directly inform us about some of the most important aspects of the climate system. In some respects, however, this simplified view is inadequate; this will become especially clear in the discussion of paleoclimate (ancient climate) and ice ages. Here let us review a few important aspects of the system to set the stage for later discussions.
Perhaps the most profound influence on the Earth-Sun system is the geometry involved. The most basic part of this geometry is the orbit of Earth around the sun, which is governed by the gravitational attraction between the two bodies. Kepler showed that orbits are ellipses, rather than perfect circles. Earth's orbit is slightly elliptical, with an eccentricity of just 0.01671; even though this value is small, it has important consequences. The perihelion, or smallest distance from the sun, occurs during the northern hemisphere's winter and is about 147 million kilometers; aphelion, the most distant point from the sun, occurs during the northern hemisphere's summer and is about 152 million kilometers. This difference does not cause Earth's seasons, but it can influence the severity of seasons (discussed in the paleoclimate section) and does introduce small variations to the annual incoming solar radiation ("insolation"), as there are very slow variations in the eccentricity.
A second important effect to consider is the tilt of Earth's spin axis with respect to the ecliptic plane, which is basically the average plane of Earth's orbit around the sun. The angle between the spin axis and the perpendicular to the ecliptic plane is called Earth's obliquity, and is currently about 23.4 degrees. This angle is the primary reason for seasons on Earth: as the planet traverses its orbit, the amount of insolation at points on the surface slowly changes, with winter occurring when the pole faces away from the sun and summer when the pole faces toward the sun. Seasons are more extreme with larger obliquity, and high latitudes (e.g. Antarctica) experience more extreme changes in insolation than the tropics, leading to more pronounced seasons. Earth's obliquity slowly changes in time, which has important consequences for very long-term climate change.
A third important part of the Earth-sun geometry is called precession, and is actually a combination of effects. Precession is the slow variation of the direction of the spin axis, and is affected by both a turning of the spin axis and a slow change in the orientation of Earth's orbit. For the contemporary climate, precession only matters because it determines the relative position of the poles to the sun during Earth's orbit. There are important consequences for long-term climate change, though, which will be discussed later.
The geometry of the Earth-sun system is a large part of the astronomical basis for Earth's climate. Other astronomical factors that are important include the evolution of the solar system and the sun itself, as well as electromagnetic phenomena (e.g. the solar wind). These topics are well worth studying, even in the context of climate, but they are beyond the scope of this book, as they bear little relevance to contemporary climate change.
- ^ Eccentricity is defined for all conic sections, and is a relationship between the semimajor (a) and semiminor (b) axes.
It can be determined by $e = \sqrt{1 - b^2/a^2}$. For a perfect circle a = b, so the eccentricity is zero; for an ellipse a > b, and the eccentricity is bounded between 0 and 1. See also [Wolfram MathWorld].
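To make the geometry above concrete, here is a minimal Python sketch (not part of the original text) that evaluates the footnote's eccentricity formula and estimates how much stronger the incoming sunlight is at perihelion than at aphelion using the inverse-square law. The semimajor axis value of roughly 149.6 million km is an assumption added only for illustration; the eccentricity 0.01671 is the value quoted in the text.

```python
import math

def eccentricity(a: float, b: float) -> float:
    """Eccentricity of an ellipse from its semimajor (a) and semiminor (b) axes."""
    return math.sqrt(1.0 - (b / a) ** 2)

# Sanity check of the footnote's formula: a perfect circle (a == b) has zero eccentricity.
assert eccentricity(1.0, 1.0) == 0.0

# Illustrative values: semimajor axis of Earth's orbit in km (assumed), eccentricity from the text.
a_km = 149.6e6
e = 0.01671

r_perihelion = a_km * (1.0 - e)   # closest distance to the sun
r_aphelion = a_km * (1.0 + e)     # farthest distance from the sun

# Insolation falls off as 1/r^2, so the perihelion-to-aphelion ratio is ((1 + e) / (1 - e))^2.
insolation_ratio = (r_aphelion / r_perihelion) ** 2

print(f"perihelion ~ {r_perihelion / 1e6:.1f} million km")
print(f"aphelion   ~ {r_aphelion / 1e6:.1f} million km")
print(f"insolation at perihelion exceeds aphelion by ~{(insolation_ratio - 1) * 100:.1f}%")
```

Run as a script, this prints distances of roughly 147 and 152 million km and an insolation difference of about 7%, which is why the small eccentricity still has the noticeable consequences described above.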
Treatment of seedlings 1) Once your seeds have emerged, ensure that they get adequate water every day. Most seedlings die from lack of water or overheating. 2) If your seedlings have been grown indoors or under protection, they will need to be ‘hardened’ before being transplanted outside. Hardening is best achieved by exposing them to direct sunlight for increased amounts of time over the period of about 7-10 days. An example would be. Day 1-3 expose for 3 hrs, Days 4-7 expose for 5 hrs, Day 8-10 the whole day, then transplant outside. 3) Light plays a very important role in the growth of your seedlings. Too little light and the seedlings will ‘stretch’ and grow towards the light giving you tall, leggy seedlings with a pale colour. Adequate light will give you strong robust seedlings with a good colour. 4) To help the plants build up cellular structure and encourage them to ‘fill-out’ run your hands over the tops of the seedlings, this stimulates the plants into thinking that there is mechanical stress in their environment, like animals moving around and over them. This causes the plants to increase the strength and growth of their cellular structure and makes for stronger more robust plants. 5) Your plants are ready to transplant when there are 2 or more true leaves on the plant. When transplanting try and get the soil and seedling plug moisture at a similar level. If the plug is dry then a transfer barrier can develop where the soil is wet and the plug remains dry. This is one of the biggest causes of transplant stress. 6) Water the bed thoroughly after transplanting to assist with re-hydration of the soil and seedling. 7) Transplanting is best done in the cooler parts of the day. Evening is better than morning transplanting, as this gives the seedling a chance to send out some roots. An even better scenario is to transplant when you are expecting a few cloudy/cooler days in a row. Growing on seedlings. One option is to grow your seedlings on in larger pots or jiffy bags. The idea behind this is to start seedlings in the middle of winter and keep them growing until the weather warms up enough to transplant outside. One can use progressively larger pots or jiffy bags as you transplant the older seedlings up. What this will do is enable you to get a crop of veggies off your plants very early in the season. This is one way that market gardeners beat their competition and manage to secure higher prices with plant ripened veggies a full month or more before anyone else.
A new history of Texas for schools : also for general reading and for teachers preparing themselves for examination
Page: 79 of 412
This book is part of the collection entitled: From Republic to State: Debates and Documents Relating to the Annexation of Texas, 1836-1856, and was provided to The Portal to Texas History by the UNT Libraries. The following text was automatically extracted from the image on this page using optical character recognition software:
ERA OF COLONIZATION. more land. Merchants and mechanics were given town lots on which they might erect their stores or shops. All immigrants were to be free from taxation for six years; Austin, as empresario, or leader of the colony, was, on the fulfilment of his contract to settle three hundred families, to receive immense grants of land. All colonists were required to become Roman Catholics, to swear to uphold the government of the Spanish king, and to furnish evidence of good moral character.* With the promise of so much good fortune, many immigrants were willing to follow Austin.
The First Colonists.--Austin, being poor, was not able to fit out a vessel for carrying to Texas the needed tools and provisions. J. L. Hawkins, of New Orleans, his warm personal friend, came to his assistance by fitting out the schooner "Lively" with all necessary stores and placing her at his disposal. The schooner, loaded with supplies, made a safe trip to the mouth of the Brazos, where the tools and provisions were concealed to
*The following is an extract from the oath colonists were compelled to take: "In the town of Nacogdoches before me, Don Jose Maria Guadiana, came Don Samuel Davenport and Don William Barr, residing in this place, and took a solemn oath of fidelity to our sovereign, and to reside permanently in his royal dominions; and more fully to manifest it, put their right hands upon the Cross of our Lord Jesus Christ, swore each of them, before God and the holy cross of Jesus Christ, to be faithful vassals of his most catholic majesty, to act in obedience to all laws of Spain and the Indies, henceforth abjuring all other allegiance to any other prince or potentate whatever, and to hold no correspondence with any foreign power without permission from a lawful magistrate and to inform against such as may do so, or use seditious language unbecoming a good subject.
"Signed: JOSE MARIA GUADIANA.
Reference: Pennybacker, Anna J. Hardwicke. A new history of Texas for schools : also for general reading and for teachers preparing themselves for examination, book, 1895; Palestine, Tex. (texashistory.unt.edu/ark:/67531/metapth2388/m1/79/: accessed June 27, 2017), University of North Texas Libraries, The Portal to Texas History, texashistory.unt.edu.
Preparing for a bone marrow (stem cell) transplant A bone marrow (stem cell) transplant has been recommended as the best treatment for your child. The transplant process is physically demanding. There are steps however that you can take to help your child maintain strength, flexibility, and endurance throughout treatment. Regular exercise in the weeks before a transplant will help your child go into the transplant in the best physical shape possible. Exercises should include ones that your child enjoys. Younger children will benefit from playground activities and tricycle riding. Activities for older children and teens may include running, biking, or a vigorous walk. Check with your clinic and physical therapist to learn which activities are allowed for your child. While in the hospital The level of physical therapy that each patient receives will vary, depending on the child’s disease, the type of transplant being performed, and the child’s response to the transplant process. Before transplant, a physical therapist will assess your child for strength, flexibility, and endurance. This staff member knows what is “normal” for different age groups. With this understanding, the physical therapist can test your child’s abilities before transplant and measure changes during your child’s time in the hospital. Exercise will help: - Prevent dangerous health problems caused by prolonged bed rest, such as pneumonia and blood clots; - Keep your child’s muscles strong and flexible; - Improve blood movement in the body and improve lung function (a respiratory therapist will help teach proper lung exercises); - Improve appetite; - Decrease stress; and - Help your child maintain a level of independence in everyday tasks. Because activity is so important to maintain health, exercise will be a mandatory part of your child’s daily routine on the transplant unit. A physical therapist will meet with your child 3–5 times per week as part of the transplant protocol. Your child will be required to: - Walk at least 5 laps around the floor during the course of the day; - Wear supportive shoes while walking laps in the halls and during physical therapy; - Bring supportive leg braces if already provided; and - Sit up in a chair and take part in room activity at least 4 hours per day. Sometimes your child might feel too ill to get out of bed. At these times, the physical therapist will focus on bed exercises and helping your child move out of bed to the chair as able. Tips for family and friends Family and friends are a primary part of your child’s life and will play an important role in your child’s recovery process. These are ways that family members and friends can help: - Respect your child’s desire to be involved in goal and decision making. - Help explain medical processes in a simple way. - Support and reassure your child during times of feeling afraid. - Encourage your child to keep taking part in activities. - Develop a daily schedule. - This can serve as a reminder of what is expected each day and provide clear direction for activities. - This allows your child to feel control over his days and may increase his willingness to take part in activity. - This may include opening the blinds each morning, getting out of bed at the same time each day, watching a favorite TV show, and walking at a certain time daily. If you have questions about the need for rehab services before, during, or after a transplant, please call Rehabilitation Services. If you are inside the hospital, dial 3621. 
In the local area, dial 901-595-3621. If you are outside the Memphis area, call toll-free 1-866-2ST-JUDE (1-866-278-5833), extension 3621. This document is not intended to take the place of the care and attention of your personal physician or other professional medical services. Our aim is to promote active participation in your care and treatment by providing information and education. Questions about individual health concerns or specific treatment options should be discussed with your physician. St. Jude complies with health care-related federal civil rights laws and does not discriminate on the basis of race, color, national origin, age, disability, or sex. ATTENTION: If you speak another language, assistance services, free of charge, are available to you. Call 1-866-278-5833 (TTY: 1-901-595-1040).
- Training Course
- United Nations Educational, Scientific and Cultural Organization - IHE Institute for Water Education (UNESCO-IHE)
- 10-27 Jun 2014
This course introduces the participants to the state-of-the-art concepts and practices of flood risk management. It covers the European experience in managing floods and stresses the use of the latest tools in flood risk management. The course will introduce the basic concepts of flood risk management and the latest tools and techniques available in managing flood risk. Specific contents are:
- Introduction to flood risk management
- Flood risk management in practice – the different models of FRM
- The role of uncertainty in evaluating flood risks
- Sources of risk and their quantification (including flash floods, flood hazard mapping and climate change impacts)
- Risk pathways (including 2D flood inundation modelling and reliability analysis of flood defence structures)
- Vulnerability (consequences on receptors): risk perception, community behaviour and social resilience
- Pre-flood measures in FRM: sustainability issues, long-term planning, flood forecasting and warning, flood risk maps
- During-flood measures in FRM (flood emergency response and evacuation planning)
- Post-flood measures (flood recovery)
- EU framework directive on floods; other national (e.g. UK) flood directives; European experience in managing floods.
On completion of this module the participants are able to:
- Understand and explain the main principles of flood risk management;
- Understand the Hydroinformatics tools available for flood risk management;
- Conceptualise the main principles of the EU flood directive and have knowledge about European experience in flood risk management;
- Understand and explain the main principles of flood forecasting and warning and the uncertainty issues associated with flood forecasts;
- Familiarise themselves with the different flood forecasting models;
- Utilise their hands-on experience in the step-by-step modelling procedure to build flood inundation models.
Event fee: € 2700
The course is designed for current and future water professionals (engineers and scientists), decision-makers and others involved in flood modelling and flood management, particularly those who would like to familiarise themselves with the latest tools and techniques in flood risk management. Pre-requisites are knowledge of hydrology and hydraulics; some experience with flood modelling/management is desirable but not a must.
How to register
Please register online.
- Climate Change, Early Warning, Recovery, GIS & Mapping, Disaster Risk Management
- Netherlands, the
by Natalie Gibson After completing a unit on William Shakespeare’s Romeo and Juliet, the freshman class at The June Buchanan School was challenged to create models of Shakespeare’s Globe Theatre. By constructing these models, students gained a greater understanding of how the theater of Shakespeare’s day differs from our own. “While Romeo and Juliet is the first Shakespearean play that some of these students have read, they were able to understand it beautifully,” said Natalie Gibson, the high school English instructor at JBS. “Shakespeare’s works are timeless, and incorporating hands-on activities, such as these models, gives students a greater understanding of the life and times of Shakespeare.”
A Martian candy store
Astronomers and geologists are now in the equivalent of a Martian candy store of scientific objectives: the lowest point of Gale crater, called Yellowknife Bay, is literally teeming with minerals that could only be formed in the presence of water – most notably, gypsum. Project leader John Grotzinger explained during a press conference that the area is a "[..] jackpot unit. Every place we drive exposes fractures and vein fills." The scientists initially decided to visit this area just as a small detour on Curiosity's way to Mount Sharp, but when they observed the richness of objectives, they decided the rover should definitely stay a while, and even start drilling for the first time. The drilling will consist of five holes about 5 cm deep into the bedrock; the rover will then collect rock powder from the site and analyze it.
"Drilling into a rock to collect a sample will be this mission's most challenging activity since the landing. It has never been done on Mars," said Mars Science Laboratory project manager Richard Cook of NASA's Jet Propulsion Laboratory in Pasadena, Calif. "The drill hardware interacts energetically with Martian material we don't control. We won't be surprised if some steps in the process don't go exactly as planned the first time through."
It is generally accepted that the now barren Martian surface was once covered with vast quantities of water. Furthermore, it is believed that the water occurred in many forms – still lakes, rivers, and even seas; all these lead to the conclusion that Mars was once habitable. The discovery of the mineral-filled veins within Yellowknife Bay rock fractures adds to the picture because this type of mineral can only form in the presence of water.
"These veins are likely composed of hydrated calcium sulfate, such as bassanite or gypsum," said ChemCam team member Nicolas Mangold of the Laboratoire de Planétologie et Géodynamique de Nantes in France. "On Earth, forming veins like these requires water circulating in fractures."
They didn't find any sites like this close to the landing site, so finding one now seems a fluke. "The orbital signal drew us here, but what we found when we arrived has been a great surprise," said Mars Science Laboratory project scientist John Grotzinger, of the California Institute of Technology in Pasadena. "This area had a different type of wet environment than the streambed where we landed, maybe a few different types of wet environments."
The drilling is expected to reveal even more information; the rock chosen for this innovative mission has been named "John Klein" in tribute to former Mars Science Laboratory deputy project manager John W. Klein. "John's leadership skill played a crucial role in making Curiosity a reality," explained Cook.
We often tend to compartmentalize subjects when teaching. However, real life doesn't serve up problems or issues in neat, subject-labeled situations! The Math Lessons for a Living Education curriculum teaches math through a blend of stories, copy work, oral narration and hands-on activities, showing how math is used in "real life"---just like a living math book!
Thirty-six weeks of instruction guide students through the content, story, and hands-on activities using inexpensive manipulatives provided/made by the parent. This book is written to be used by teachers and students together. It includes a suggested weekly schedule (30 minutes per lesson, 5 days per week, 36 weeks) with easy-to-manage lessons that include reading, worksheets, and assessments. Pages are perforated and three-hole-punched so that parents can easily tear out, hand out, and store pages. Students will read the pages in the book and complete the corresponding section provided by the teacher. Assessments are given at regular intervals. Answer keys are available online.
This fourth-grade resource features the continuing story of twins Charlie and Charlotte, who are learning that life is full of learning opportunities! As students read, they'll be drawn into an adventure that teaches them about fractions and geometry, among other skills. Covering one year of 4th grade math, by the end of the course students will have reviewed what they've learned in previous grades and learned about new fraction concepts, metric units of measurement, basic geometry, and averaging. Grade 4. 350 pages. Perfect bound with perforated 3-hole-punched pages.
Number of Pages: 350
Vendor: Master Books
Publication Date: 2016
Dimensions: 11.00 x 8.50 x 0.75 inches
Series: Lessons for a Living Education
Here at Christianbook.com, we offer thousands of quality curriculums, workbooks, and references to meet your homeschooling needs. If you have any questions about specific products, our knowledgeable Homeschool Specialists will be glad to help you. Just call 1-800-788-1221.
Larry Lynd is lead investigator of the CIHR New Emerging Team for Rare Diseases at the University of British Columbia. Peter Klein is a member of the research team and led a group of UBC journalism students who produced Million Dollar Meds, a Web portal about orphan drugs. Feb. 29 is Rare Disease Day, which started two leap years ago to raise awareness about conditions that affect very few people. Many of these conditions are genetic and are often extremely debilitating or fatal. Fortunately, there are effective medications to treat some of these diseases. Unfortunately, the cost of some of these medications is exorbitant, since the market for them is so small. That leaves our society with some tough choices. The premise of a socialized health-care system is that there is a fixed pool of resources, and those funds should be distributed as fairly and effectively as possible. When these expensive so-called orphan drugs clearly save lives or improve quality of life, provincial governments have usually been willing to pay, despite the cost. However, as orphan drugs are becoming a larger part of the pharmaceutical market, provinces and the federal government face an emerging dilemma. Pharmaceutical companies used to rely on blockbuster drugs such as Lipitor and Viagra for their profits, but as the patents for these bestsellers expire, many companies are looking to the rare-disease market for future growth. Because there are so few patients for these treatments, governments have set up incentives to encourage the development of these drugs. Technological advances are allowing even more effective drugs to be developed for rare diseases and, as a result, orphan drugs have turned into a global growth market, worth $100-billion (U.S.) this year and projected to account for close to 20 per cent of branded drug sales by 2020. As with any company with a responsibility to its shareholders, drug makers charge as much as they can, and they market their products as best they can. It is not uncommon for companies to offer these expensive drugs for free for a few months, and once a patient is starting to improve, they leave it to provincial health authorities or private insurance companies to take over. Companies have been known to hire patients and their families to engage in advocacy for drug coverage, lobbying health ministries to cover the cost of these expensive medications. Each province makes its own coverage decisions, so a patient with a rare disease in one part of Canada may receive medication, while in a neighbouring province the patient might not. This leads to a situation that is contrary to the spirit of universal health care. Because each province has to negotiate the price of medications individually with drug companies, they have little leverage to push the price down; as a result, Canadians pay more than many other countries for these drugs. Last month, provincial health ministers met about a proposed national drug plan that would allow them to negotiate lower prices, and they have pledged to make access to these treatments more equitable. In 2010, they established the Pan-Canadian Pharmaceutical Alliance in which one province negotiates the price of a specific drug for all provinces. The federal Patented Medicines Price Review Board is responsible for regulating drug prices, but it has struggled with addressing the skyrocketing price of orphan drugs. 
Most recently, it is battling Alexion, the maker of Soliris, which, at an annual cost of up to $700,000 (Canadian) for a single patient, is one of the most expensive drugs in Canada. The board wants the company to lower the price of the drug and to pay back price overages, but the company has so far refused. Many patients with rare conditions see access to these orphan drugs as a right, no matter what the cost. Based on studies our team conducted, however, the public disagrees. Rarity of disease in itself does not justify coverage for most Canadians, nor does it for government payers. If treatments are effective, and they are priced reasonably, we can and should cover the cost of these drugs. But with orphan drugs becoming an increasing part of health-care spending, we have to come up with strategies to keep costs in check while allowing access for patients – or risk eroding the health care that is promised to all Canadians.
Banana workers in Ecuador are the victims of serious human rights abuses, Human Rights Watch charged in a new report released today. In its investigation, Human Rights Watch found that Ecuadorian children as young as eight work on banana plantations in hazardous conditions, while adult workers fear firing if they try to exercise their right to organize.
Ecuador is the world’s largest banana exporter and the source of roughly one quarter of all bananas on the tables of U.S. and European consumers. Banana-exporting corporations such as Ecuadorian-owned Noboa and Favorita, as well as Chiquita, Del Monte, and Dole, fail to use their financial influence to insist that their supplier plantations respect workers’ rights, the report found. Dole leads the pack of foreign multinationals in sourcing from Ecuador, obtaining nearly one third of all its bananas from the country.
“The Ecuadorian bananas on your table may have been produced under appalling conditions,” said José Miguel Vivanco, executive director of the Americas Division of Human Rights Watch. “Banana companies have a duty to uphold workers’ rights. Ecuador is obligated under international law to do so.”
The use of harmful child labor is widespread in Ecuador’s banana sector. Researchers for the Human Rights Watch report, Tainted Harvest: Child Labor and Obstacles to Organizing on Ecuador’s Banana Plantations, spoke with forty-five child laborers during their three-week-long fact-finding mission in Ecuador. Forty-one of the children began working between the ages of eight and thirteen, most starting at ages ten or eleven. Their average workday lasted twelve hours, and fewer than 40 percent of the children were still in school by the time they turned fourteen. In the course of their work, they were exposed to toxic pesticides, used sharp knives and machetes, hauled heavy loads of bananas, and drank unsanitary water; some were also sexually harassed. Roughly 90 percent of the children told Human Rights Watch that they continued working while toxic fungicides were sprayed from airplanes flying overhead. For their efforts, the children earned an average of $3.50 per day, approximately 60 percent of the legal minimum wage for banana workers.
Chiquita, Del Monte, Dole, Favorita, and Noboa have all, at some time, been supplied by plantations on which children labored, with more than 70 percent of the children interviewed saying they had worked on plantations that almost exclusively supply Dole. When Human Rights Watch asked Dole to confirm or deny its business relationship with these suppliers, it refused, claiming this is “business proprietary information.” Dole’s web site states, “Dole does not knowingly purchase products from any commercial producers employing minors.”
“Banana-exporting companies may tell you they’re not responsible for labor abuses,” Vivanco said. “But they have financial power and could use it to ensure respect for workers’ rights. They just don’t.”
Adult workers face an environment in which they are often too scared to exercise their right to organize for better working conditions. Only approximately 1 percent of banana workers are affiliated with workers’ organizations—a rate far lower than in any Central American banana-exporting country. Ecuadorian law fails to effectively protect the right to freedom of association, and employers take advantage of the weak law and even weaker enforcement to impede worker organizing. Workers illegally fired for union activity have no right to reinstatement.
Instead, in the unlikely event that the offending employers are found responsible, they must pay only a negligible fine—often less than $400. And employers circumvent labor laws by relying on subcontractors to provide workers and by hiring “permanent temporary” workers with even fewer rights than permanent workers. The heavy use of subcontracted “perma-temps” has created a workforce with no right to bargain with employers who control working conditions. Instead, subcontracted “perma-temps” only enjoy the relatively useless right to organize and bargain with their virtually powerless subcontractors.
“Most workers on these plantations can’t organize to protest their working conditions,” Vivanco said. “Either they suffer in silence, or they risk being fired.”
Human Rights Watch urged banana-exporting corporations to demand that labor rights be respected on their supplier plantations and to monitor compliance with this requirement. Human Rights Watch also called on Ecuador to enforce its labor laws. The organization also urged Ecuador to guarantee children’s right to education by ensuring, as required by the country’s own law, that all children under fifteen have access to free schooling. In addition, Human Rights Watch called for the amendment of labor legislation to guarantee workers’ right to freedom of association by banning anti-union discrimination in hiring, requiring reinstatement of workers fired for union activity, strengthening laws governing the use of temporary workers, and adopting meaningful sanctions for anti-union conduct.
Vivanco said the report’s findings highlight the need for effective labor rights protections in any future trade agreement with Ecuador, including the Free Trade Area of the Americas.
Testimonies taken from Tainted Harvest: Child Labor and Obstacles to Organizing on Ecuador’s Banana Plantations
Below are testimonies from workers interviewed by Human Rights Watch for the report, Tainted Harvest: Child Labor and Obstacles to Organizing on Ecuador’s Banana Plantations. The workers’ names have been changed to protect them from potential employer reprisals.
Exposure to Toxic Substances
Fabiola Cardozo told Human Rights Watch that twice when she was twelve she became ill after aerial fumigation. She described that the first time, “I got a fever. . . . I told my boss that I felt sick. . . . He told me to go home. . . . [The second time,] I became covered with red things. They itched. I had a cough. My bones hurt. I told my boss. He sent me home.”
Similarly, Carolina Chamorro told Human Rights Watch that after aerial fumigation, “I felt sick twice. I was ten years old. . . . I began to shake.” She said that she thought she was going to faint and told her boss, who sent her home.
Cristóbal Alvarez, a twelve-year-old boy, also explained, “That poison - sometimes it makes one sick. Of course, I keep working. I don’t cover myself. Once I got sick. I vomited [and] had a headache . . . after the fumigation. I was eleven years old. . . . I told my bosses. They gave me two days to recover.”
The children told Human Rights Watch about the various methods that they used to protect themselves from the toxic liquid: hiding under banana leaves, bowing their heads, covering their faces with their shirts, covering their noses and mouths with their hands, and placing banana cartons on their heads. As one fourteen-year-old boy, Enrique Gallana, explained, “When the planes pass, we cover ourselves with our shirts. . . . We just continue working. . . .
We can smell the pesticides.”
Human Rights Watch interviewed three young girls, ages twelve, twelve, and eleven, who described being sexually harassed by the then “boss” of the packing plants on San Fernando and San Alejandro, plantations of the Las Fincas group. Human Rights Watch observed a roadside sign bearing the Dole logo above the name “Las Fincas,” strongly suggesting that the plantation group primarily supplies Dole.
Marta Mendoza, a twelve-year-old who began working on Las Fincas at age eleven, explained to Human Rights Watch, “There is a boss at the plant who’s very sick. . . . This man is rude. He goes around touching girls’ bottoms. . . . He is in charge there and is always there. He told me that he wants to make love to me. Once he touched me. I was taking off plastic banana coverings, and he touched my bottom. He keeps bothering me. He goes around throwing kisses at me. He calls me ‘my love.’”
Fabiola Cardozo, a twelve-year-old who began working on Las Fincas at age ten, similarly commented, “The boss of the packing plants . . . says, ‘Oh, my love.’ When we bend down to pick up plastic bags, he says, ‘Allí para meterle huevito.’ [‘There is a good place to stick my balls.’]”
Freedom of Association
After working as a “perma-temp” for a year and a half on the same two banana plantations, Gema Caranza was indefinitely “suspended” on May 7, 2001, allegedly for involvement in union activity. She explained that she was told by the boss of the packing plants that “[the administrator] has found out what you’re involved in and [is afraid] that you will want to speak with the people and organize.” According to Caranza, her boss, with whom she had a good working relationship, added, “I told you not to get involved in that, that you’d lose your job.”
Caranza said that in June 2000, she began to attend union-sponsored events and seminars. In most cases, she said, she invented excuses for her absence, afraid to disclose their true purpose. Before leaving for her first union-sponsored event outside Ecuador, however, she showed her boss the event invitation. She said, “He told me to be careful [and] that others might soon know [what I was doing].” Caranza said, “I knew that if they [the administrator, the plantation owner, or others in management] found out, they would fire me. . . . Because that’s the way it is. If they find out, they fire you. This is why most people are scared.”
There is, however, another by-product of the work of an energy machine, and that is heat. The combustion of food, that is, its chemical union with oxygen, goes on continuously in the body, and by a delicate regulatory mechanism keeps the body at the even temperature of about 98.6° F. All of the processes (both the building up of heat and its dissipation by the body, and the chemical interchanges of various kinds) which go to make up the machinery of the body are called “metabolism.” The sum of all of them is called “basal metabolism,” and this can be easily measured.
The first experiments in trying to measure basal metabolism took the amount of heat an animal gave off as a standard. The animal, a cat or guinea pig, was placed in a vessel surrounded by ice. After a certain length of time the amount of ice that had melted was measured; as the number of calories required to melt a given amount of ice is well known, it was easy to calculate the amount of heat given off by the animal. More modern instruments have improved on this old ice calorimeter and use the amount of oxygen consumed as the measuring rod of basal metabolism. The measurement of basal metabolism is very useful in diagnosis in certain cases: it is known that certain forms of goiter raise the basal metabolic rate very greatly.
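To make the ice-calorimeter arithmetic concrete, here is a minimal sketch in Python; it is not from the text above. It assumes the standard latent heat of fusion of ice, roughly 80 calories per gram (the "well known" figure the passage alludes to), and uses an invented melted-ice measurement purely for illustration.

```python
# Minimal sketch of the ice-calorimeter calculation described above.
# Assumption: melting 1 g of ice at 0 °C takes about 80 calories (latent heat
# of fusion). The melted-ice mass below is a hypothetical example measurement,
# not a value taken from the text.

LATENT_HEAT_OF_FUSION_CAL_PER_G = 80.0  # calories to melt 1 g of ice at 0 °C

def heat_released(melted_ice_grams: float) -> float:
    """Heat given off by the animal, in calories, inferred from melted ice."""
    return melted_ice_grams * LATENT_HEAT_OF_FUSION_CAL_PER_G

if __name__ == "__main__":
    melted = 150.0  # grams of ice melted during the observation period (hypothetical)
    print(f"{melted:.0f} g of melted ice implies about {heat_released(melted):.0f} cal released")
    # -> 150 g of melted ice implies about 12000 cal released (12 kcal)
```

Modern indirect calorimetry swaps the melted-ice measurement for oxygen consumption, but the bookkeeping is the same: a measurable physical change is converted into heat through a known constant.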
Freezing food is both economical and convenient. It enables you to take advantage of bargains and purchase large quantities of food that can be stored and defrosted whenever you need it. Almost any food can be frozen, says the United States Department of Agriculture, and dried fruit is certainly no exception. Because the fruit is already partially preserved through the drying process, it is especially receptive to freezing. It's very simple to prepare dried fruit for freezer storage, and if done properly, it retains its basic flavor, texture and nutrient content. Remove dried fruit from its original packaging. If you dried the fruit yourself, make sure that it is completely cooled before you freeze it. Separate the fruit into individual serving sizes and place each serving in a freezer bag. Be careful to squeeze out as much air as possible when sealing the bags. Label and date each bag. Place the bags in a vapor-proof, moisture-proof freezer container. Store the container in your freezer at no higher than 0 degrees F. Properly stored and frozen, dried fruit will keep well in your freezer for up to 12 months.