Antisocial Personality Disorder Antisocial personality disorder is a mental illness that involves a pattern of disregard for or violation of the rights of others. Deceit and manipulation to gain personal profit or pleasure are common behaviors by people with this disorder. A person with antisocial personality disorder fails to conform to social norms and may repeatedly participate in destructive, illegal activity, such as property damage, cruelty to animals, setting fires, or harassing others. Important decisions, such as ending a relationship or changing a job, may be made suddenly and without much consideration about the consequences. Those with antisocial personality disorder tend to be irritable and aggressive and may repeatedly get into physical fights or physically or verbally abuse another person such as a spouse or child. Irresponsible work behavior and financial habits are common. Little remorse is shown for harmful behaviors. Antisocial personality disorder usually develops during childhood or early adolescence and continues into adulthood. eMedicineHealth Medical Reference from Healthwise. To learn more, visit Healthwise.org.
Calumet Environmental Education Program (Photo: Kirk Anne Taylor) The Division of Environment, Culture and Conservation's (ECCo) Calumet Environmental Education Program (CEEP) translates science into action for students and educators in the Calumet region. Students participate in a consecutive ladder of conservation education programs, linking and building skills and knowledge grade level upon grade level. The three conservation education programs, Mighty Acorns (grades 4-6), Earth Force (grades 7-8), and Calumet Is My Back Yard (CIMBY, grades 9-12), engage young people in scientific, hands-on learning about biodiversity and conservation, resulting in action projects in their own community. Mighty Acorns helps students develop a personal connection to natural areas in their community. Mighty Acorns students visit a local natural area three times a year, participating in exploration of biodiversity, educational activities that illustrate basic ecological concepts, and stewardship activities, such as removing invasive species and spreading native seeds. Earth Force students develop the skills needed to create long-term solutions to environmental issues in their community. Using a six-step problem-solving curriculum, students choose a local environmental issue — such as toxic cleaning solutions in schools or air pollution — and implement a conservation-action project to address it. CIMBY builds scientific and leadership skills for high school students. CIMBY students participate in a variety of activities throughout the school year, from ecological restoration at an adopted natural area to classroom activities and leadership training workshops that help students take action to protect local natural areas. Visit the CEEP Newsletter Archive to learn more about ECCo's conservation work.
Understanding Baby Mammals By Elena Fox, Licensed Wildlife Rehabilitator Baby mammals should be regarded in much the same way as birds -- remember that mammal moms can leave their nests unattended for many hours. Unless you see obvious injuries or know for a fact that a nest of baby bunnies, squirrels, or other small mammals has been orphaned, leave them undisturbed. Their best chance of survival is to be cared for by their mother. If their nest has been destroyed, place the babies in a box in a safe place near where they were found and let their mother find them. You may never see her, but check on them in 24 hours; if they are warm and plump, she is doing her job. Remember, their mother's milk is vastly different from anything we can use as a substitute, and it changes composition each day as the babies get older. A mother hare can feed her young once in 12 hours, but we must feed orphans every 45 minutes around the clock to approximate her rich milk. Two very special cases cause a great deal of confusion in the spring -- fawns and seal pups. In nearly all cases both should be left alone. Both species of babies are routinely left unattended for 6, 10, even 15 hours at a time. A fawn may lie for hours beside a busy road and even fail to respond to your approach. This is completely normal. The mother is likely nearby, waiting for you to leave. The same applies to seal pups; a pup alone in a small tidal pool for a day could appear to be cause for concern. The pup may even seem lethargic or "sick," but, as with the fawn, this is normal behavior and the mom will likely return to care for her baby. With seals, an added incentive to help you resist interfering is the law. Marine mammals are protected, and individuals can be fined many thousands of dollars for "molesting" them, which can mean coming within 100 yards. Give them a wide berth for both your sakes. You can help most by educating other people.
You can also help keep dogs and children away from the area, and you can talk yourself into staying away too. Your attentions can place the baby in danger by frightening the mother and attracting predators. It is hard to do nothing when we suspect an animal needs help, but we should err on the side of caution when a misreading of the situation could put us between a wild mother and her baby. If you find a baby and you aren't sure whether you should do something, call us for assistance. We care about them too, and together we can give wildlife the greatest chance of living a free and vigorous life.
The cereal economy across most of Europe in the 1930s resembled that of Britain in the late 17th and early 18th centuries, in that wheat was still milled close to where it was grown, white bread was predominantly the food of the rich and the city dwellers, and wholemeal the food of the poor and the country households, and both rye and barley were still commonly eaten. In Britain, however, changes in agriculture and technology that began 200 years earlier had hustled British baking on to a path at odds with the rest of Europe. The British agrarian revolution of the 18th century stirred the most defining change in the way farmland was managed, and the way grains like wheat and barley were grown and harvested. During this time, the older style of subsistence farming practised for hundreds of years was replaced, through enclosures that took the strips of land away from the rural poor and united them in larger, more efficient estates under the direct control of increasingly wealthy landowners. This, in turn, made it possible to introduce the new farming equipment invented at that time such as Jethro Tull's horse-drawn hoe, and his drill that planted seeds low enough to stop the rain washing them away. Initially, these changes ensured that the first half of the 18th century was a period of relative prosperity. Wheat and barley harvests grew in both acreage and yield; increased yields and more animals to feed meant that more bran could be sold for animal feed, and a more refined, whiter flour could be marketed; and this became the dominant flour for bread-making. In 1700, rye flour accounted for about 40% of the bread of the common people; by 1800 it accounted for only 5%. The regulations for the sale price and quality of bread, known as the 'Assize of Bread', defined loaves according to the coarseness of the flour used, typically 'white' (the most expensive), 'wheaten' and 'household' (the least expensive). 
This, in effect, reduced the quality of household brown bread and increased the popularity of white. The prices set by the assize made it very difficult to sell brown bread at anything other than a loss, and some critics suspected that bakers made a poorer quality of household bread to promote higher-value white wheaten bread. The system of bread-making used in homes and bakeries at that time was a simple process that used a fermenting liquid, commonly called barm and usually made from the liquor extracted from soaked malted grains and boiled hops. The hops acted as an antibacterial addition and stopped the yeast liquid turning sour too quickly. A simple dough would be made, quite firm, with a very small quantity of barm, and left for many hours to rise, after which it would be divided, shaped and baked. In France, by comparison, during the same period, the dough-mixing process was more complex, as hops and malt weren't used to speed the fermentation and inhibit excess acidity: instead, the volume of the dough was increased in stages, as this kept the fermentation brisk and controlled the acidity. It wasn't until the early 1800s in both Britain and France that a liquid ferment, what British bakers called a 'sponge', became commonly used, a method believed to have been introduced to both countries by Viennese bakers. Sourness wasn't avoided by all bakers. In Scotland, Wales, Cumbria and Lancashire, the practice of sowens-making, in which the husks of oats were left in a wooden bowl to ferment and the liquid heated until it thickened slightly into a sour 'soup', was common; a flat oatmeal bread, known as sowens cakes, and later simply oatcakes, had left the locals with a taste for sour bread. If malted barley wasn't available to speed the fermentation, then cooked potatoes were added, and this became typical of bakers in the Midlands. For the southern English, however, any trace of acidity or potato was frowned upon.
A succession of poor wheat harvests after 1770, together with a rapidly rising population, led merchants to import grain from Europe. During the war with France (1793-1815), importation of grain from Europe became impossible, inflating grain prices in Britain and securing major landowners even more wealth. This situation unwound with the fall of Napoleon; cheap imported grain began to flood into the British market, almost halving the price within a matter of months, and Lord Liverpool and his government, the party of the landowners, sought ways to stop this. The 1815 Corn Laws were trade tariffs, which protected domestic grain from cheaper foreign imports; however, the British market was opened up again by their repeal in 1846, whereas the rest of Europe (apart from Belgium) retained tariffs on grain imports until at least the 1930s. At first, the introduction of imported, mainly European wheat worked in sympathy with the old style of British baking and complemented local wheat characteristics when used in small-scale bakeries with hand-mixing. But later on, a new kind of milling and dough mixing evolved that would enable the manufacture of bread in factories, and the whitest, softest and cheapest bread British workers had ever had access to. From the 1870s onwards, the importation into Britain of roller-milled flour from the US and Hungary changed the style of bread that could be made and slowly starved, and effectively closed, the traditional wind- and watermills of Britain. The flour was ultra-white and fine, due to the use of silk bolting cloths, and it was milled from new varieties of hard wheat, rich in gluten. 
This flour produced dough that was more resilient and extensible than that made with local grain, and though it lacked the rich sweet flavour of native British wheat, it performed better in high-powered dough-mixing machines, and became essential for the early plant baking industry that would dominate British bread-making during the 20th century. High tariffs protected the rest of Europe from imports of wheat and flour well into the 20th century, and this helped to protect local milling and baking traditions. From this point on, the characteristics traditionally found in British regional bread-making (the curious use of an ale-barm, the single long fermentation of a firm dough and the inevitable backnote of bitterness from the hops, the cream-grey stone-milled flour, the use of rye, barley and oatmeal together with wholemeal wheat flour to make a maslin mixture) began to vanish. Though the shapes of the loaves remained, the heart of the crumb and crust was lost. A small amount of home baking continued, but given the higher number of women in full-time employment, compared with the rest of Europe, there was limited time and resources to bake at home. Effectively, a quest for wealth and modernity destroyed the traditions of British baking. Though the 20th century brought innovations, such as the Chorleywood Bread Process, electric ovens and refrigeration, the traditional methods and techniques once used to make British breads were already a faint memory by then. Today, young bakers are gradually unearthing and restoring the older techniques used in the 1800s, restarting old barm-making processes and working with farmers to grow forgotten, but once local, varieties of wheat.
Perhaps the legacy that British baking leaves the world at this point is the sobering thought that no matter how common and ordinary local skills and old ways seem next to the alluring gloss and promise of modern discoveries, it is only with hindsight that we can ever appreciate the benefits of the knowledge and skills used by other generations.

This is an extract from Dan Lepard's chapter on British baking in the Dictionnaire Universel du Pain, edited by Jean-Philippe de Tonnac, published this month by Robert Laffont. The book is a country-by-country exploration of the history of bread-baking and is currently available only in French.

Maslin barm bread

The combination of a maslin flour mixture with an ale barm is a typical style used in Georgian and early Victorian bread-making. This would be made with the sievings from the milled wheat after most (or sometimes all) of the white flour had been removed, then mixed or ground again with rye, barley and oats. In medieval times, the labourer would have been given a mixture of these grains to eat, and these could be milled together into flour. The combination gave the loaves an earthy, strong flavour. The proportions of grains used varied according to the season and harvest. Prior to the early 1800s, much of the wheat grown in southern Britain resembled wheat grown in France. Seeds were exchanged and sold between both countries, so today's French flour is arguably closer in performance to old British flour than the modern imported hybrids used in the UK. The amount of barm varied according to the time available for bread-making, but I prefer a high level to accentuate the flavour of hops and malted barley.

1. The 24-48 hour beer barm
Optional: rye levain 3g
One or two days before baking, whisk the ale and flour in a saucepan and bring just to the boil, no more. Then remove from the heat, spoon into a bowl and leave until absolutely cold.
Stir in the yeast (and leaven, if using), cover the bowl, leave for 4 hours, then beat again and leave at 17C-23C for at least 24 hours.

2. For the dough
The barm from above 550g
Strong white flour or type T55 550g
Fine-ground oatmeal 75g
Fine salt 15g
Final temperature 20C-23C.

1. Mix the barm, flour and water to make a soft dough. Mix on 1st speed for 2 minutes, then leave 30 minutes.
2. Add salt and mix on 2nd speed for 8 minutes. Leave the dough until risen by 50%, approximately 2 hours at 21C, giving the dough one fold after an hour. A longer fermentation at a lower temperature, 16C, is preferable but not essential.
3. Shape the dough into a ball and leave to rise on a floured board until risen by 50-75%.
4. Cut a cross in the centre and bake with steam at 225C for 20 minutes, then remove the steam and bake for a further 15-20 minutes. Traditionally the loaves would be "batch baked" in wooden frames, so that each loaf firmly pushes up against the loaves around it as it bakes, and can be torn apart once cool.

Sometimes just known as 'ale' or 'yeast', barm could either be yeast skimmed off the top of a wooden vat of dark beer or it could be made in the bakehouse using a gelatinised mixture of wheat flour and water enriched with malt and hops, then seeded with a spoonful of old ale yeast. This latter method kept better and grew popular during the early 1800s. The earliest recipes for a white loaf, farmhouse or tin used locally milled white flour, typically high in natural sugars (maltose) and with modest amounts of gluten. A spoonful of ale barm would be mixed with water and all of the flour into a firm dough, and this would be left for 6-8 hours to rise before shaping, left to rise once more and then baked. The farmhouse baking tin, a deep-sided, oblong tin made in 1lb to 5lb lengths, appeared in the early 1800s and enabled more bread to be baked, as it reduced the floor space each loaf took.
As ovens were kept hot, the upper crust tended to burn while the protected sides stayed pale, and this became a characteristic.
What is a Cookie? Cookies are small files that may be placed by a web site on a user's PC. They are used to store information that may be needed while browsing the site, or on a future visit. This may include personal data, or anonymous information such as previously visited pages or items stored in the shopping cart. What Cookies do we use? Don't Want Cookies? Most web browsers allow some control of most cookies through the browser settings. To find out more about cookies, including how to see what cookies have been set and how to manage and delete them, visit www.allaboutcookies.org
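As an illustration of the mechanism described above, here is a minimal Python sketch using the standard-library http.cookies module. The cookie name cart_item and its values are invented for the example; a real site would choose its own names and attributes.

```python
from http.cookies import SimpleCookie

# A site "places" a cookie by sending a Set-Cookie header in its HTTP
# response; the browser stores the value and returns it on later visits.
cookie = SimpleCookie()
cookie["cart_item"] = "sku-1234"       # hypothetical shopping-cart entry
cookie["cart_item"]["path"] = "/"      # valid for the whole site
cookie["cart_item"]["max-age"] = 3600  # kept for one hour

header = cookie.output()
print(header)  # e.g. Set-Cookie: cart_item=sku-1234; Max-Age=3600; Path=/

# On a future visit the browser echoes the pair back in a Cookie header,
# which the server parses to recover the stored information.
returned = SimpleCookie()
returned.load("cart_item=sku-1234")
print(returned["cart_item"].value)
```

The Max-Age attribute here is what separates a persistent cookie (kept for a future visit) from a session cookie, which is discarded when the browser closes.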
MAHABHARATA. [Source: Dowson's Classical Dictionary of Hindu Mythology] 'The great (war of the) Bharatas.' The great epic poem of the Hindus, probably the longest in the world. It is divided into eighteen parvas or books, and contains about 220,000 lines. The poem has been subjected to much modification and has received numerous comparatively modern additions, but many of its legends and stories are of Vedic character and of great antiquity. They seem to have long existed in a scattered state, and to have been brought together at different times. Upon them have been founded many of the poems and dramas of later days, and among them is the story of Rama, upon which the Ramayana itself may have been based. According to Hindu authorities, they were finally arranged and reduced to writing by a Brahman or Brahmans. There is a good deal of mystery about this, for the poem is attributed to a divine source. The reputed author was Krishna Dwaipayana, the Vyasa, or arranger, of the Vedas. He is said to have taught the poem to his pupil Vaisampayana, who afterwards recited it at a festival to King Janamejaya. The leading subject of the poem is the great war between the Kauravas and Pandavas, who were descendants, through Bharata, from Puru, the great ancestor of one branch of the Lunar race. The object of the great struggle was the kingdom whose capital was Hastinapura (elephant city), the ruins of which are traceable fifty-seven miles north-east of Delhi, on an old bed of the Ganges. Krishna Dwaipayana Vyasa is not only the author of the poem, but the source from whom the chief actors sprang. He was the son of the Rishi Parasara by a nymph named Satyavati. King Santanu had a son called Santanava, better known as Bhishma. In his old age Santanu wished to marry again, but the hereditary rights of Bhishma were an obstacle to his obtaining a desirable match. To gratify his father's desire, Bhishma divested himself of all rights of succession, and Santanu then married Satyavati.
She bore him two sons, the elder of whom, Chitrangada, succeeded to the throne, but was soon killed in battle by a Gandharva king who bore the same name. Vichitravirya, the younger, succeeded, but died childless, leaving two widows, named Ambika and Ambalika, daughters of a king of Kasi. Satyavati then called on Krishna Dwaipayana Vyasa to fulfil the law, and raise up seed to his half-brother. Vyasa had lived the life of an anchorite in the woods, and his severe austerities had made him terrible in appearance. The two widows were so frightened at him that the elder one closed her eyes, and so gave birth to a blind son, who received the name of Dhritarashtra; and the younger turned so pale that her son was called Pandu, 'the pale.' Satyavati wished for a child without blemish, but the elder widow shrank from a second association with Vyasa, and made a slave girl take her place. From this girl was born a son who was named Vidura. These children were brought up by their uncle Bhishma, who acted as regent. When they came of age, Dhritarashtra was deemed incapable of reigning in consequence of his blindness, and Pandu came to the throne. The name Pandu has suggested a suspicion of leprosy, and either through that, or in consequence of a curse, as the poem states, he retired to the forest, and Dhritarashtra became king. Pandu had two wives, Kunti or Pritha, daughter of Sura, king of the Surasenas, and Madri, sister of the king of the Madras; but either through disease or the curse passed upon him, he did not consort with his wives. He retired into solitude in the Himalaya mountains, and there he died, his wives, who had accompanied him, having borne five sons. The paternity of these children is attributed to different gods, but Pandu acknowledged them, and they received their patronymic of Pandava. Kunti was the mother of the three elder sons, and Madri of the two younger.
Yudhishthira (firm in battle), the eldest, was son of Dharma, the judge of the dead, and is considered a pattern of manly firmness, justice, and integrity. Bhima or Bhimasena (the terrible), the second, was son of Vayu, the god of the wind. He was noted for his strength, daring, and brute courage; but he was coarse, choleric, and given to vaunting. He was such a great eater that he was called Vrikodara, 'wolf's belly.' Arjuna (the bright or silvery), the third, was son of Indra, the god of the sky. He is the most prominent character, if not the hero, of the poem. He was brave as the bravest, high-minded, generous, tender-hearted, and chivalric in his notions of honour. Nakula and Sahadeva, the fourth and fifth sons, were the twin children of Madri by the Aswini Kumaras, the twin sons of Surya, the sun. They were brave, spirited, and amiable, but they do not occupy such prominent positions as their elder brothers. Dhritarashtra, who reigned at Hastinapura, was blind. By his wife Gandhari he had a hundred sons, and one daughter named Duhsala. This numerous offspring was owing to a blessing from Vyasa, and was produced in a marvellous way. From their ancestor Kuru these princes were known as the Kauravas. The eldest of them, Duryodhana (hard to subdue), was their leader, and was a bold, crafty, malicious man, an embodiment of all that is bad in a prince. While the Pandu princes were yet children, they, on the death of their father, were brought to Dhritarashtra, and presented to him as his nephews. He took charge of them, showed them great kindness, and had them educated with his own sons. Differences and dislikes soon arose, and the juvenile emulation and rivalry of the princes ripened into bitter hatred on the part of the Kauravas. This broke into an open flame when Dhritarashtra nominated Yudhishthira as his Yuvaraja or heir-apparent.
The jealousy and the opposition of his sons to this act was so great that Dhritarashtra sent the Pandavas away to Varanavata, where they dwelt in retirement. While they were living there Duryodhana plotted to destroy his cousins by setting fire to their house, which he had caused to be made very combustible. All the five brothers were for a time supposed to have perished in the fire, but they had received timely warning from Vidura, and they escaped to the forest, where they dressed and lived in disguise as Brahmans upon alms. While the Pandavas were living in the forest they heard that Drupada, king of the Panchalas, had proclaimed a swayamvara, at which his daughter Draupadi was to select her husband from among the princely and warlike suitors. They went there, still disguised as Brahmans. Arjuna bent the mighty bow which had defied the strength of the Kauravas and all other competitors, and the Pandavas were victorious over every opponent. They threw off their disguise, and Draupadi was won by Arjuna. The brothers then conducted Draupadi to their home. On their arrival they told their mother Kunti that they had made a great acquisition, and she unwittingly directed them to share it among them. The mother's command could not be evaded, and Vyasa confirmed her direction; so Draupadi became the wife in common of the five brothers, and it was arranged that she should dwell for two days in the house of each of the five brothers in succession. This marriage has been justified by a piece of special pleading, which contends that the five princes were all portions of one deity, and therefore only one distinct person, to whom a woman might lawfully be married. This public appearance made known the existence of the Pandavas. Their uncle Dhritarashtra recalled them to his court and divided his kingdom between his own sons and them.
His sons received Hastinapura, and the chief city given to his nephews was Indraprastha on the river Yamuna, close to the modern Delhi, where the name still survives. The close proximity of Hastinapura and Indraprastha shows that the territory of Dhritarashtra must have been of very moderate extent. The reign of Yudhishthira was a pattern of justice and wisdom. Having conquered many countries, he announced his intention of performing the Rajasuya sacrifice, thus setting up a claim to universal dominion, or at least to be a king over kings. This excited still more the hatred and envy of the sons of Dhritarashtra, who induced their father to invite the Pandavas to Hastinapura. The Kauravas had laid their plot, and insidiously prevailed upon Yudhishthira to gamble. His opponent was Sakuni, uncle of the Kaurava princes, a great gambler and a cheat. Yudhishthira lost his all: his wealth, his palace, his kingdom, his brothers, himself, and last of all, their wife. Draupadi was brought into the assembly as a slave, and when she rushed out she was dragged back again by her hair by Duhsasana, an insult for which Bhima vowed to drink his blood. Duryodhana also insulted her by seating her upon his thigh, and Bhima vowed that he would smash that thigh. Both these vows he afterwards performed. Through the interference and commands of Dhritarashtra the possessions of Yudhishthira were restored to him. But he was once more tempted to play, upon the condition that if he lost he and his brothers should pass twelve years in the forest, and should remain incognito during the thirteenth year. He was again the loser, and retired with his brothers and wife into exile. In the thirteenth year they entered the service of the king of Virata in disguise - Yudhishthira as a Brahman skillful as a gamester; Bhima as a cook; Arjuna as a eunuch and teacher of music and dancing; Nakula as a horse-trainer; and Sahadeva as a herdsman.
Draupadi also took service as an attendant and needlewoman of the queen, Sudeshna. The five princes each assumed two names, one for use among themselves and one for public use. Yudhishthira was Jaya in private, Kanka in public; Bhima was Jayanta and Ballava; Arjuna was Vijaya and Brihannala; Nakula was Jayasena and Granthika; Sahadeva was Jayadbala and Arishtanemi, a Vaisya. The beauty of Draupadi attracted Kichaka, brother of the queen, and the chief man in the kingdom. He endeavoured to seduce her, and Bhima killed him. The relatives of Kichaka were about to burn Draupadi on his funeral pile, but Bhima appeared as a wild Gandharva to rescue her. The brothers grew in favour, and rendered great assistance to the king against the king of Trigartta and the Kauravas. The time of exile being expired, the princes made themselves known, and Abhimanyu, son of Arjuna, received Uttara, the king's daughter, in marriage. The Pandavas now determined to attempt the recovery of their kingdom. The king of Virata became their firm ally, and preparations for war began. Allies were sought on all sides. Krishna and Balarama, being relatives of both parties, were reluctant to fight. Krishna conceded to Arjuna and Duryodhana the choice of himself unarmed or of a large army. Arjuna chose Krishna and Duryodhana joyfully accepted the army. Krishna agreed to act as charioteer of his especial friend Arjuna. It was in this capacity that he is represented to have spoken the divine song Bhagavad-gita, when the rival armies were drawn up for battle at Kurukshetra, a plain north of Delhi. Many battles follow. The army of Duryodhana is commanded in succession by his great-uncle Bhishma, Drona his military preceptor, Karna, king of Anga, and Salya, king of Madra and brother of Madri. Bhishma was wounded by Arjuna, but survived for a time. All the others fell in succession, and at length only three of the Kuru warriors - Kripa, Aswatthaman, and Kritavarma - were left alive with Duryodhana.
Bhima and Duryodhana fought in single combat with maces, and Duryodhana had his thigh broken and was mortally wounded. The three surviving Kauravas fell by night upon the camp of the Pandavas and destroyed five children of the Pandavas, and all the army except the five brothers themselves. These five boys were sons of Draupadi, one by each of the five brothers. Yudhishthira's son was Prativindhya, Bhima's was Srutasoma, Arjuna's was Srutakirtti, Nakula's was Satanika, and Sahadeva's was Srutakarman. Yudhishthira and his brothers then went to Hastinapura, and after a reconciliation with Dhritarashtra, Yudhishthira was crowned there. But he was greatly depressed and troubled at the loss of kindred and friends. Soon after he was seated on the throne, the Aswamedha sacrifice was performed with great ceremony, and the Pandavas lived in peace and prosperity. The old blind king Dhritarashtra could not forget or forgive the loss of his sons, and mourned especially for Duryodhana. Bitter reproaches and taunts passed between him and Bhima; at length he, with his wife Gandhari, with Kunti, mother of the Pandavas, and with some of his ministers, retired to a hermitage in the woods, where, after two years' residence, they perished in a forest fire. Deep sorrow and remorse seized upon the Pandavas, and after a while Yudhishthira abdicated his throne and departed with his brothers to the Himalayas, in order to reach the heaven of Indra on Mount Meru. A dog followed them from Hastinapura. The story of this journey is full of grandeur and tenderness, and has been most effectively rendered into English by Professor Goldstucker. Sins and moral defects now prove fatal to the pilgrims. First fell Draupadi: "too great was her love for Arjuna." Next Sahadeva: "he esteemed none equal to himself." Then Nakula: "ever was the thought in his heart, There is none equal in beauty to me." Arjuna's turn came next: "In one day I could destroy all my enemies." 
"Such was Arjuna's boast, and he falls, for he fulfilled it not." When Bhima fell he inquired the reason of his fall, and he was told, "When thou gazedst on thy foe, thou hast cursed him with thy breath; therefore thou fallest today." Yudhishthira went on alone with the dog until he reached the gate of heaven. He was invited by Indra to enter, but he refused unless his brothers and Draupadi were also received. "Not even into thy heaven would I enter if they were not there." He is assured that they were already there, and is again told to enter "wearing his body of flesh." He again refuses unless, in the words of Pope, "admitted to that equal sky, his faithful dog should bear him company." Indra expostulates in vain. "Never, come weal or come woe, will I abandon yon faithful dog." He is at length admitted, but to his dismay he finds there Duryodhana and his enemies, but not his brothers or Draupadi. He refuses to remain in heaven without them, and is conducted to the jaws of hell, where he beholds terrific sights and hears wailings of grief and anguish. He recoils, but well-known voices implore him to remain and assuage their sufferings. He triumphs in this crowning trial, and resolves to share the fate of his friends in hell rather than abide with their foes in heaven. Having endured this supreme test, the whole is shown to be the effect of maya or illusion, and he and his brothers and friends dwell with Indra in full content of heart forever. List of books with contents: 1. Adiparva, 'Introductory book.' Describes the genealogy of the two families, the birth and nurture of Dhritarashtra and Pandu, their marriages, the births of the hundred sons of the former and the five of the latter, the enmity and rivalry between the young princes of the two branches, and the winning of Draupadi at the swayamvara. 2. Sabhaparva, 'Assembly book.' The assembly of the princes at Hastinapura when Yudhishthira lost his kingdom and the Pandavas had to retire into exile. 3.
Vanaparva, ' Forest chapter.' The life of the Pandavas in the Kamyaka forest. This book is one of the longest and contains many episodes: among them the story of Nala, and an outline of the story of the Ramayana. 4. Virataparva, 'Virata chapter.' Adventures of the Pandavas in the thirteenth year of their exile, while they were in the service of King Virata. 5. Udyogaparva, 'Effort book.' The preparations of both sides for war. 6. Bhishmaparva, 'Book of Bhishma.' The battles fought while Bhishma commanded the Kaurava army. 7. Dronaparva, 'The Book of Drona.' Drona's command of the Kaurava army. 8. Karnaparva, 'Book of Karna.' Karna's command and his death at the hand of Arjuna. 9. Salyaparva, 'Salya's command, in which Duryodhana is mortally wounded and only three Kauravas are left alive. 10. Sauptikaparva, 'Nocturnal book.' The night attack of the three surviving Kauravas on the Pandava camp. 11. Striparva, 'Book of the women.' The lamentations of Queen Gandhari and the women over the slain. 12. Santiparva, 'Book of consolation.' A long and diffuse didactic discourse by Bhisma on the morals and duties of kings, intended to assuage the grief of Yudhishthira. 13. Anusasanaparva, 'Book of precepts.' A continuation of Bhishma's discourses and his death. 14. Aswamedhikaparva, 'Book of the Aswamedha.' Yudhishthira's performance of the horse sacrifice. 15. Asramaparva, 'Book of the hermitage.' The retirement of Dhritarashtra, Gandhari, and Kunti to a hermitage in the woods, and their death in a forest fire. 16. Mausalparva, 'Book of the clubs.' The death of Krishna and Balarama, the submersion of Dwaraka by the sea, and the mutual destruction of the Yadavas in a fight with clubs (musala) of miraculous origin. 17. Mahaprasthanikaparva, 'Book of the great journey.' Yudishthira's abdication of the throne, and his departure with his brothers towards the Himalayas on their way to Indra's heaven on Mount Meru. 18. Swargarohanaparva, 'Book of the ascent to heaven.' 
Entrance into heaven of Yudhishthira and his brothers, and of their wife Draupadi.

Modern Languages MLLL-4993. Indian Epics. Laura Gibbs, Ph.D.
Source code: Lib/types.py

This module defines names for some object types that are used by the standard Python interpreter, but not exposed as builtins like int or str are. Also, it does not include some of the types that arise transparently during processing, such as the listiterator type.

The module defines the following names:

types.FunctionType
types.LambdaType
    The type of user-defined functions and functions created by lambda expressions.

types.GeneratorType
    The type of generator-iterator objects, produced by calling a generator function.

types.MethodType
    The type of methods of user-defined class instances.

types.ModuleType
    The type of modules.

types.TracebackType
    The type of traceback objects such as found in sys.exc_info().

types.FrameType
    The type of frame objects such as found in tb.tb_frame if tb is a traceback object.

types.GetSetDescriptorType
    The type of objects defined in extension modules with PyGetSetDef, such as FrameType.f_locals or array.array.typecode. This type is used as a descriptor for object attributes; it has the same purpose as the property type, but for classes defined in extension modules.

types.MemberDescriptorType
    The type of objects defined in extension modules with PyMemberDef, such as datetime.timedelta.days. This type is used as a descriptor for simple C data members which use standard conversion functions; it has the same purpose as the property type, but for classes defined in extension modules.

    CPython implementation detail: In other implementations of Python, this type may be identical to GetSetDescriptorType.
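As a quick illustration, the names above are mainly useful with isinstance() for objects whose types have no builtin name. A minimal sketch (the function and variable names here are just for illustration):

```python
import sys
import types

def greet(name):
    return "hello " + name

gen = (n * n for n in range(3))

# Functions, generator-iterators, and modules have no builtin type name,
# but the types module exposes their types:
print(isinstance(greet, types.FunctionType))   # True
print(isinstance(gen, types.GeneratorType))    # True
print(isinstance(sys, types.ModuleType))       # True

# Traceback and frame objects, as found via sys.exc_info():
try:
    raise ValueError("demo")
except ValueError:
    tb = sys.exc_info()[2]
    print(isinstance(tb, types.TracebackType))           # True
    print(isinstance(tb.tb_frame, types.FrameType))      # True

# In CPython, FrameType.f_locals is itself a getset descriptor:
print(type(types.FrameType.f_locals) is types.GetSetDescriptorType)  # True
```

The checks work because the types module simply captures `type(...)` of representative objects at import time, so identity comparisons against these names are reliable within one interpreter.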
Science Fair Project Encyclopedia

Intel's i960 (or 80960) was a RISC-based microprocessor design that became quite popular during the early 1990s as an embedded microcontroller, for some time likely the best-selling CPU in that field, pushing the AMD 29000 from that spot. In spite of its success, Intel formally dropped i960 marketing in the late 1990s as a side effect of a lawsuit with DEC, in which Intel received the rights to produce the StrongARM CPU. The i960 design was started as a response to the failure of Intel's i432 design of the early 1980s. The i432 was intended to directly support high-level languages that supported tagged, protected, garbage-collected memory -- such as Ada and Lisp -- in hardware. Because of its instruction-set complexity, its multi-chip implementation, and other design flaws, the i432 was very slow in comparison to other processors of its time. In 1984 Intel and Siemens started a joint project, ultimately called BiiN, to create a high-end fault-tolerant object-oriented computer system programmed entirely in Ada. Many of the original i432 team members joined this project, though a new lead architect, Glenford Myers, was brought in from IBM. The intended market for the BiiN systems was high-reliability computer users such as banks, industrial systems, and nuclear power plants, and the protected-memory concepts from the i432 influenced the design of the BiiN system. To avoid the performance issues that plagued the i432, the central i960 instruction-set architecture was a RISC design, and the memory subsystem was made 33 bits wide -- a 32-bit word plus a "tag" bit to indicate protected memory. In many other ways the i960 followed the original Berkeley RISC design, notably in its use of register windows, an implementation-specific number of on-chip caches for the per-subroutine registers, allowing for fast subroutine calls.
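The register-window idea mentioned above can be sketched in a few lines. This is a toy model, not i960-specific; the cache depth and window size are invented for illustration. Each call gets a fresh register frame from a small on-chip cache, and only when the cache overflows is the oldest frame spilled to memory:

```python
class RegisterWindows:
    """Toy model of register windows: per-call register frames kept in a
    small on-chip cache, spilled to memory only on overflow."""

    def __init__(self, cache_depth=4, regs_per_window=16):
        self.cache_depth = cache_depth        # windows held on chip
        self.regs_per_window = regs_per_window
        self.on_chip = []                     # register frames on chip
        self.spilled = []                     # frames pushed out to "memory"

    def call(self):
        # A subroutine call allocates a fresh window; no memory traffic
        # unless the on-chip cache is already full.
        if len(self.on_chip) == self.cache_depth:
            self.spilled.append(self.on_chip.pop(0))  # spill oldest window
        self.on_chip.append([0] * self.regs_per_window)

    def ret(self):
        # Returning frees the current window; refill from memory if the
        # caller's window was spilled earlier.
        self.on_chip.pop()
        if not self.on_chip and self.spilled:
            self.on_chip.append(self.spilled.pop())

windows = RegisterWindows(cache_depth=2)
windows.call(); windows.call()      # both calls fit on chip
windows.call()                      # overflow: oldest window spilled
print(len(windows.on_chip), len(windows.spilled))  # 2 1
```

The point of the design is visible in the model: as long as call depth stays within the cache, calls and returns touch no memory at all, which is what made subroutine calls fast. The MIPS approach described next avoids the hardware entirely and leaves the equivalent bookkeeping to the compiler.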
The competing Stanford University design, commercialized as MIPS, did not use this system, relying on the compiler to generate optimal subroutine call and return code instead. Unlike the i386, but in common with most 32-bit designs, the i960 has a flat 32-bit memory space, with no memory segmentation. The i960 architecture also anticipated a superscalar implementation, with instructions being simultaneously dispatched to more than one unit within the processor. The first 960 processors "taped out" in October 1985 and were sent to manufacturing that month, with the first working chips arriving in late 1985 and early 1986. The BiiN effort eventually failed, due to market forces, and the 960MC was left without a use. Myers attempted to save the design by outlining several subsets of the full capability architecture created for the BiiN system. Myers tried to convince Intel management to market the i960 (then still known as the "P7") as a general-purpose processor, both in place of the Intel 80286 and i386 (which "taped out" the same month as the first 960), as well as in the emerging RISC market for Unix systems, including a pitch to Steve Jobs for use in the NeXT system. Competition within and outside of Intel came not only from the i386 camp, but also from the i860 processor, yet another RISC processor design emerging within Intel at the time. Myers was unsuccessful at convincing Intel management to support the i960 as a general-purpose or Unix processor, but the chip found a ready market in early high-performance 32-bit embedded systems. The protected-memory architecture was considered proprietary to BiiN and wasn't mentioned in the product literature, leading many to wonder why the i960MC was so large and had so many pins labeled "no connect". A version of the RISC core without memory management or an FPU became the i960KA, and the RISC core with the FPU became the i960KB. The versions were, however, all identical internally -- only the labelling was different.
The "full" 960MC was never released for the non-military market, but the i960KA became successful as a low-cost 32-bit processor for the laser-printer market, as well as for early graphics terminals and other embedded applications. Its success paid for future generations, which removed the complex memory sub-system. The first pure RISC implementation was the i960CA, which used a newly designed superscalar RISC core and added an unusual addressable on-chip cache. The i960CA is widely considered to have been the first single-chip superscalar RISC implementation. The C-series only included one ALU, but could dispatch an arithmetic instruction, a memory reference, and a branch instruction at the same time. Later, the i960CF included a floating-point unit, but continued to omit an MMU. Intel attempted to bolster the i960 in the I/O device controller market with the I2O standard, but this had little success and the design work was eventually ended. By the mid-1990s its price/performance ratio had fallen behind competing chips of more recent design, and Intel never produced a reduced power-consumption version that could be used in battery-powered systems. In 1990 the i960 team was redirected to be the "second team" working in parallel on future i386 implementations -- specifically the P6 processor, which later became the Pentium Pro. The i960 project was sent to another, smaller development team, essentially ensuring its ultimate demise. The content of this article is licensed from www.wikipedia.org under the GNU Free Documentation License.
In Emmon Bach, Eloise Jelinek, Angelika Kratzer & Barbara Partee (eds.), Quantification in Natural Languages. Kluwer (1995)

Abstract: In this paper, we discuss some rather puzzling facts concerning the semantics of Warlpiri expressions of cardinality, i.e. the Warlpiri counterparts of English expressions like one, two, many, how many. The morphosyntactic evidence, discussed in section 1, suggests that the corresponding expressions in Warlpiri are nominal, just like the Warlpiri counterparts of prototypical nouns, e.g. child. We also argue that Warlpiri has no articles or any other items of the syntactic category D(eterminer). In section 2, we describe three types of readings, "definite", "indefinite" and "predicative", which are generally found with Warlpiri nouns, including those which correspond to English common nouns and cardinality expressions. A partial analysis of these readings is sketched in section 3. Since Warlpiri has no determiner system, we hypothesize that the source of (in)definiteness in this language is semantic. More specifically, we suggest that Warlpiri nominals are basically interpreted as individual terms or predicates of individuals and that their three readings arise as a consequence of the interaction of their basic meanings, which are specific to Warlpiri, with certain semantic operations, such as type shifting (Rooth and Partee 1982, Partee and Rooth 1983, Partee 1986, 1987), which universally can or must apply in the process of compositional semantic interpretation.
Spinal Decompression is a non-surgical therapy to treat back and neck pain associated with spinal disc problems. In the most basic terms, spinal decompression therapy is advanced-technology "traction" combined with a much better understanding of human physiology. Back pain is often caused by misalignment in the spine. The technology allows your practitioner to isolate and treat specific spinal discs. The treatment alternates between gentle stretching and relaxation to realign the spine. Spinal Decompression is used for many non-surgical conditions of the spine including: The physiology is quite complicated, but basically spinal decompression does a number of things. It distracts the spinal joints, which gives them more mobility, and stretches the spinal muscles and ligaments, which in turn allows them to relax. If you have a disc problem such as a bulge or herniation touching a nerve, decompression will create a sort of vacuum in the center of the disc which will tend to suck the involved portion of the disc back towards the center, thus reducing the size of the bulge/herniation. The bottom line is that spinal decompression makes the pain go away by correcting the problem that is causing it. Spinal Decompression therapy is not recommended for advanced pregnancy, severe osteoporosis or severe obesity. It is also not recommended if you have had spinal surgery with instrumentation (screws, metal plates or cages). This is why a thorough examination and pre-screening is always done before any treatment protocol would be recommended. Yes, to be effective it must be. This is not like the old mechanical weighted units of the past. The system we use is computerized and will go up in small increments until the maximum setting is achieved. The same is done during release. You will barely notice, since the computer control is so efficient.
This is variable from individual to individual, but there is a certain time frame that we expect to see results within, depending upon your condition. We will not keep bringing you back if we are not getting the results that we wish to see within this time frame. In some cases this may be recommended after the treatment schedule has been completed in order to keep things stable. It depends upon the individual, how much degeneration has occurred, etc. These can vary from a maintenance treatment once a month to every few months. Let’s be totally honest: nothing in life is ever fully guaranteed, especially in regard to medical therapies. However, research has indicated that spinal decompression therapy is very effective. Some clinical studies have demonstrated spinal decompression therapy to have a success rate of 92%. Consider that other studies have shown low back surgery to have a success rate of less than 25%. Unfortunately, it is not. However, we pride ourselves that our fees are less than those generally charged by other clinics outside the Quinte area, plus you don’t have to travel to other cities to receive the treatments. Some extended health care plans may partially cover the cost. Many people come for decompression therapy as a last resort. They may have tried chiropractic, physiotherapy, acupuncture, massage therapy, etc., but have not experienced relief and now they are facing the prospect of surgery. All these mentioned therapies can be excellent and will help many people, but sometimes they don’t. No single therapy works 100% of the time, but spinal decompression therapy is non-invasive and may be the solution for you where others have failed. We do not, and here is why: oxygen comprises about 21% of the air we breathe regularly.
Some clinics have their patients breathe concentrated oxygen as part of the treatment, but the reality is that there is absolutely no scientific evidence whatsoever to indicate that this would have any effect on the healing process with spinal decompression therapy. There is a claim that it may help with relaxation, but we feel the same thing can be accomplished by dimming the lights and playing soft music. In fact, breathing concentrated oxygen can be very detrimental for people with certain medical conditions. No, we certainly do not. This was the now infamous machine that appeared on the controversial CBC “Marketplace” program. The DRX-9000 was not a true spinal decompression machine. The company has gone out of business and the DRX-9000 is no longer recognized by Health Canada. We want spinal decompression therapy to be as affordable and available to as many people as possible in the Quinte area. Your specific treatment plan will be determined after your initial consultation and examination. Based upon clinical observation and current research, optimum results are usually achieved with 20 sessions ($75 each) over a period of about six weeks. Using this protocol the total would be only $1500. Bear in mind that some clinics in larger urban areas are charging three times as much for the exact same treatment protocol. (Yes, you did read this right: three times more.) We also offer a seniors’ discount. After the treatment protocol is completed, any further maintenance visits will be reduced to $50 per treatment if necessary. First, book an appointment. Our initial consultation and examination is complimentary. You must first receive a thorough assessment to determine if spinal decompression therapy is suitable for your condition. If you have any previous MRI or x-ray reports, please bring these with you. Copies can be requested from either your family physician’s office or from the institution where they were done.
“In a recent study of 219 patients with herniated discs and degenerative disc disease, 86 percent who completed the therapy showed immediate improvement and resolution of their symptoms; 92 percent improved overall.” (Gionis T, Groteke E. Spinal Decompression. Orth Tech Review 5(6):36-39, Nov-Dec 2003)

“We consider decompression therapy to be a primary treatment for low back pain associated with lumbar disc herniation, degenerative disc disease and decreased spine mobility. We believe that post-surgical patients with persistent pain or ‘failed back syndrome’ should not be considered candidates for further surgery until a reasonable trial of decompression has been tried.” (Gose E, Naguszewski W, Naguszewski R. Vertebral Axial Decompression Therapy for pain associated with herniated or degenerated discs or facet syndrome: an outcome study. Neurological Research 20(4):186-90, April 1998)

“Successful reduction of intradiscal pressure with decompression therapy represents a technological advance.” (Naguszewski R, Gose E. Dermatomal Somatosensory Evoked Potentials of Nerve Root Decompression After VAX-D Therapy. Neurological Research 23(7), Oct 2001)
Arthroscopy is a method of viewing a joint and, if needed, performing surgery on it. An arthroscope consists of a tiny tube, a lens, and a light source. This procedure is typically performed on the knee, shoulder, elbow, or wrist. The type of anesthesia depends on the particular joint and other factors. A regional anesthetic numbs the affected area, but the patient may remain awake, depending on whether other medications are used. For more extensive surgery, general anesthesia may be used. In this case the patient is asleep and pain-free. The area is cleaned and a pressure band (tourniquet) may be applied to restrict blood flow. The health care provider then makes a surgical cut into the joint. Sterile fluid is passed through the joint space to provide a better view. Next, the arthroscope is inserted into the area. It allows a surgeon to look for joint damage or disease. The device also allows the surgeon to perform reconstructive procedures on the joint, if needed. Images of the inside of the joint are displayed on a monitor. One or two small additional surgical cuts may be needed in order to use other instruments. These instruments can be used to remove bits of cartilage or bone, take a tissue biopsy, or perform other minor surgery. In addition, ligament reconstruction can be performed using the arthroscope in many cases. You should not eat or drink anything for 12 hours before the procedure. You may be told to shave your joint area. You may be given a sedative before leaving for the hospital. You will be asked to wear a hospital gown during the procedure so the body part for surgery is accessible. You must sign a consent form. Make arrangements for transportation from the hospital after the procedure. You may feel a slight sting when the local anesthetic is injected. After this medicine starts to work, you should feel no pain.
The joint may need to be manipulated to provide a better view, so there may be some tugging on the leg (or arm, if done on the shoulder). After the test, the joint will probably be stiff and sore for a few days. Ice is commonly recommended after arthroscopy to help relieve swelling and pain. Slight activity such as walking can be resumed immediately; however, excessive use of the joint may cause swelling and pain and may increase the chance of injury. Normal activity should not be resumed for several days or longer. Special preparations may need to be made concerning work and other responsibilities. Physical therapy may also be recommended. Depending on your diagnosis, there may be other exercises or restrictions. Your doctor may order this test if you have: Arthroscopy can also help see whether a disease is getting better or worse (this is called monitoring the disease), or whether a treatment is working. Abnormal results may be due to: The diagnostic accuracy of an arthroscopy is about 98%, although x-rays and sometimes MRI scans are taken first because they are noninvasive. Review Date: 7/29/2008 The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed physician should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. Copyright ©2010 A.D.A.M., Inc., as modified by University of California San Francisco. Any duplication or distribution of the information contained herein is strictly prohibited. Information developed by A.D.A.M., Inc. regarding tests and test results may not directly correspond with information provided by UCSF Medical Center. Please discuss with your doctor any questions or concerns you may have.
If you can't stand the heat ...

In the 1880s, the closed range was hooked up to gas and in the 1890s, to electricity. Food historian Rachel Laudan, author of the forthcoming book "Power Cuisine and History," says, "No other change in kitchen technology compares to this - the closed gas or electric stove. It made the kitchen cleaner and pleasanter; you could begin cooking with the turn of a knob instead of needing to allow a couple of hours to get the fire going. It transformed cooking methods."

Ms. Laudan adds that the enclosed stove "led to the great North Euro-American invention, the cake-cookie-pie complex. Before this, baking was a specialist trade. Admittedly, home baking also relied on chemical raising agents, refined white flour, refined white sugar, all of which had become cheap and widely available in the previous half century." Home refrigerators and freezers arrived in American homes somewhat later. In the US, the Shakers built some of the first ice houses, using sawdust and straw to keep large chunks of ice from melting too quickly. Individual families used a smaller version, the icebox, an insulated box kept cool as a large chunk of ice, delivered weekly, slowly melted. Ice-making machines began to be patented in the 1830s. Refrigeration was being developed at about the same time. In 1867, a prototype refrigerated rail car was patented, and soon trains and ships had refrigeration compartments. After World War I, many Americans began replacing their home iceboxes with refrigerators that had a small freezing compartment. "American housewives took to the mechanical refrigerator as fast as their finances would allow," writes Sylvia Lovegren in her book "Fashionable Food." Those early refrigerators were easier to clean and regulated temperature better than the old-fashioned icebox had. Women could store food more easily, even freezing small quantities, and therefore needed to shop less often.
"With the car and the supermarket, the refrigerator made weekly [rather than daily] shopping possible," Laudan says, adding that the home refrigerator "started America's enchantment with the chilled," from cold drinks to ice cream. By the mid-1950s, more than 80 percent of American households had a refrigerator, compared to only 8 percent of English households, Ms. Lovegren notes. Home cooks began buying commercially frozen foods in the 1930s, after Clarence Birdseye started selling his line of "frosted foods." (The word "frozen" was associated with food that had gone bad because of cold weather.) Most of the gadgets modern cooks take for granted were invented in the past 150 years. In the second half of the 1800s, "Eggbeaters, cherry stoners, apple parers and corers, butter churns, meat choppers - all these and more were patented in large numbers," but few households owned them, Strasser writes. Laudan sees the application of the small electric motor to kitchen appliances as an important development. By the 1930s, housewives could buy electric devices ranging from chafing dishes, waffle irons, hot-plates, mixers, and toasters to a free-standing portable oven. "Electric gadgets were the darlings of the 1930s, evidence of the modern age even in the midst of Depression," Lovegren writes. By the 1970s, American cooks had even more gadgets at their disposal, such as the food processor, the microwave, and the slow cooker, best known by the brand name Crock-Pot. Lora Brody is the author of "Lora Brody Plugged In: The Definitive Guide to the 20 Best Kitchen Appliances." Her pick for the most important modern kitchen appliance? "The easy answer would be the food processor, but my answer would be the slow cooker. Nothing has as many applications as the slow cooker." The microwave was the largest and perhaps the most important of the kitchen gadgets that became popular in the 1970s.
While some brush it off as a device to heat water and reheat frozen food, Smith notes that 64 percent of American homes now have a microwave oven. Lovegren says, "I think in this century, the microwave is without a doubt the big evolutionary leap because it's made reheating take-out food and frozen entrees possible," which is leading to "the death of cooking in the modern world." "I think it will take a while, but most people I see around me, except for immigrants or people with strong ethnic backgrounds, don't do any cooking," with the exception of entertaining, she says. Preparing elaborate dishes for dinner parties is "like a fun hobby or an accomplishment like in Jane Austen [novels] when everyone comes over and the ladies show how prettily they can play the piano," Lovegren says. Laudan is more circumspect about how the microwave will affect cooking. "We won't know its full impact for another few decades - inventions take that long to find their place - but it's clearly implicated in the move away from meals and toward snacking." (c) Copyright 2000. The Christian Science Publishing Society
My immigrant ancestor, Nicanor Gonzalo Sanchez-Tereso, was born on 10 January 1791 in Herencia, Spain. He married Anna Marie Weber in 1815 in Bad Kreuznach, Germany. They had at least 7 children between 1819 and 1833 while living in Germany. They immigrated to the US after the revolution and ended up settling in Keokuk County, IA. Sadly, they both died a couple of years after arriving. I decided that I wanted to learn a bit more about the town of Herencia, where the Sanchez-Tereso family lived for at least a couple of hundred years. Nicanor’s ancestor, Juan Sanchez Tereso, was born there in 1620. (Related families that also lived in this area were: Gomez-Lobo, Lopez-Naranjo, Fernandez-Canadas, Martinez-Ojeda, Rodriguez-Polanco, Martinez-Oxeda, Garcia-Navas, Rodriguez Del Tembleque, Diaz De Ubeda, Martinez-Viveros and many more hyphenated names!) Herencia is in the Province of Ciudad Real in the autonomous community of Castile-La Mancha, about 150 km south of Madrid. It has about 9,000 inhabitants. Fairly small. I found a couple of websites for Herencia: www.herencia.net and www.herencia.es. The first seems to have more local news/events. The second had information on history, a map of the town, and pictures. I had to use Google Translate to figure out what the sites said. I have taken Russian, German, and a little French, but Spanish is an absolute mystery to me. The words were translated so that I could get the gist of it, but it just sounded awkward. It’s much better than pulling out a dictionary though, isn’t it? I won’t complain.
- Herencia is translated as “Heritage” or “Legacy” (according to Google Translate). Now, isn’t that a cool name for a town?
- In 1239, after the Battle of Las Navas de Tolosa, the Kingdom of Castile began the repopulation of the Southern Plateau. The town of Herencia is given its charter. It has about 150 residents at this time.
- In 1568, a granary was built.
- They had vineyards and produced a lot of wine.
- In 1604, there is a population crisis because of poor harvests and typhoid epidemics.
- In 1786, there was an epidemic of malaria. Life expectancy at this time period was about 50.
- In 1790, the first of the windmills was built. By the early 1800s, there were 11 windmills in/around Herencia.
- I love this translation: “In 1,798, the master of the alphabet, D. Alfonso García Rosel, teaches 90 children of all ages, including 12 of the poorest, who do not receive any money.” I wonder if my Nicanor (who was born in 1791) may have been one of these children.
- In 1808, they record that there are only 3 remaining windmills. It says that this may be due to the destruction by French troops during the War for Independence.

I also found a site that has an album of pictures from the area. Lots of windmills, reminiscent of Don Quixote. Actually, the setting of Don Quixote is in this vicinity. It was neat to see what the area looks like. I would love to visit someday!! Here is a translation of what the Herencia.es site had to say about the relation of Don Quixote to the area: If the heirs of Cervantes had to collect intellectual property rights for the use of the names mentioned in Don Quixote, his would be one of the largest fortunes. In Legacy there is no corner that does not contain any reference to Cervantes’s fiction, from the stamp of their handmade cheeses, until the inevitable street and Plaza Cervantes. The very name of the people is in itself a valuable legacy. It is best to walk to the Plaza of Spain, where is the Church of the Immaculate Conception, then visit the Church of Our Lady of Mercy and walk the streets, that have a certain symmetry, with white as dominant color of the cityscape. At the time of rest and food it is essential the presence of cheese Inheritance of universal popularity.
Surely there will be no better place to buy a manchego cheese and taste

The most exciting thing that I found is a bit of information on the church where my ancestors attended: the Church of the Immaculate Conception. The church that is currently standing was built in the 18th century, which means that it is the same one that Nicanor and his family attended. I have added visiting this place as number 153 on my life list. Other questions I have about this area are: How was it affected by the Napoleonic Wars? This is the time period that Nicanor (later known as Nicholas) was living there. He would have been 16 when the Peninsular War started and 23 when it ended. Was he a soldier? Would he have gone to Germany as a soldier? Did he move there after the war, looking for a new life? He married in 1815 in Germany, after the wars ended. I really know very little about this time period/place. I think that I only briefly studied it in high school and obviously didn’t retain anything. Everything I know about the Napoleonic Era was learned from watching BBC movies and reading Jane Austen novels. Not a complete education, for sure. Luckily, we will be studying Napoleon and the world in his time this January in our homeschool. Hopefully I will learn something too! I have a wonderful book entitled Historical Atlas of the Napoleonic Era. It has so many wonderful maps and paintings in it. I highly recommend it if you are looking for a book on this time period. I’m obviously not an expert though. I looked at the map of Spain in the chapter titled “The Spanish Ulcer”, and there was definitely chaos all around where the Sanchez-Teresos were living. Even if they didn’t fight in the wars, they had to be affected by the things going on around them. Another thing I need to research is Nicanor’s siblings. Other than their birth and baptismal dates, I know absolutely nothing about them. Did they stay in Spain or did they move also?
I wonder if there are records of soldiers for this time period. Hmm. Things to think about. Any suggestions on where to look for further information? Does anyone know of any good books (in English) that give a good overview on daily life during this time period in Spain? Anyone want to speculate on why Nicanor moved to Germany?
One of the U.K.'s most critically endangered butterflies is making a comeback thanks to a profit-making partnership between private landowners and conservation organizations. The heath fritillary -- a rare species exclusive to the south of England that thrives in cleared woodland environments -- has declined sharply over the past 25 years as forest clearing has become less common. But population numbers are on the rise again after the introduction of a forestry management scheme that enables rural landowners to cash in by creating butterfly habitats on their property. "We were down to 12 colonies (of heath fritillary) in 1995 and most of those were very small," says Dr. Martin Warren, chief executive of Butterfly Conservation, the organization behind the project's implementation. "Since then we've been able to work with landowners to get the management back up again and we are now looking at 25 colonies in the same area." The conservation scheme works by first sourcing locations where there are remnants of heath fritillary colonies. Those who own the land identified during this process -- be they individuals, businesses or wildlife trusts -- are then approached for permission to carry out the conservation work. This primarily consists of clearing the pinpointed areas of dense growth and repopulating them with the lighter foliage the heath fritillary requires to prosper. If landowners agree to take part in the project, they are then able to sell the discarded lumber that accumulates during the clearing and ensuing maintenance process at a profit. "A lot of this is about persuasion," says Warren. "You have to deal with loads of different people who all have their own agenda and financial constraints."
How could anyone be convinced to spend money on any program designed to reduce losses to Dutch elm disease when the disease wasn't even here? Minnesota was considered by many to be too far north for Dutch elm disease to be a problem. It was thought that the smaller European elm bark beetle, which had been the primary vector throughout the eastern states, would have difficulty surviving the harsh winters of Minnesota. It was almost impossible to convince people of the potential hazards of the disease when it was still well removed from the state's borders. It wasn't even possible to convince people to stop planting elms, and nurseries were still promoting the elm as an easily transplanted, fast-growing tree that was resistant to pests and tolerant of harsh urban environments. It was also promoted as a tall, beautiful tree that lasts for more than a century. In fact, nurseries continued producing and selling elms even after the disease entered Minnesota. We will never know exactly when Dutch elm disease first occurred in the state, but we do know it was found in the state for the first time in the early summer of 1961. The disease was identified from branches collected in the Highland area of St. Paul, brought in by a tree service company. As far as was known at that time, only one tree was affected and it had wilted in 1960. More diseased trees were found in 1961 in the Monticello area near the Mississippi River. That several trees were infected at that location suggested that the fungus had been there for at least a year, possibly longer. It is reasonable to assume that the fungus could have been present in one or more of Minnesota's southern tier counties, which were relatively near infested Iowa counties. Dutch elm disease was also brought into Litchfield, probably in 1961.
Many think that it likely arrived there in an automobile of a person returning from a visit to relatives in Illinois, a fact that was learned about accidentally in 1987. While the evidence was circumstantial, it was learned that elm wood was brought to Litchfield from Illinois in that year, and a Litchfield elm died of Dutch elm disease shortly after. The introductions of Dutch elm disease into St. Paul and Monticello were almost certainly the result of people moving the Dutch elm disease fungus, Ceratocystis ulmi, from some relatively distant location, either on beetles or in wood contaminated with the fungus. This belief comes from knowing that the nearest existing confirmed location of Dutch elm disease to St. Paul in 1961 was 100 miles into Wisconsin. The Monticello location was 140 miles distant from the nearest known source. It took 30 years or longer for the fungus to move to Minnesota from Ohio and other eastern states. Thus this state had ample time to take appropriate steps to reduce or delay the ultimate losses. But people failed to believe that the beetles could survive in Minnesota. Our citizens and leaders should have acknowledged Minnesota's obvious reliance on elms in our urban forests and prepared for the possibility of infection. Minnesota had relied almost entirely on the American elm for its parks, its streets, and wherever people wanted shade trees. As early as 1912, in its 30th annual report the Minneapolis Board of Park Commissioners called for planting 2,104 trees, all elms. By the time Dutch elm disease struck in this state, Minnesota had close to 140 million elm trees, and little else, lining its streets and streams. The predominance of elms, as the shade tree of choice, stretched from Iowa to Canada and Wisconsin to the Dakotas.
What could have been done in the 1950s or earlier to minimize the possibility of future elm tree losses in Minnesota? Most obviously, we first could have stopped planting elms. Elms in nurseries should have been destroyed and no new elms planted. A second necessary precaution would have been halting the movement of elm logs, elm firewood, or any form of elm with bark, possibly by regulation, but preferably through the potentially far more effective creation of public support via publicity and public education. Third, sick and dying elms should have been eliminated from cities and parks as much as possible. Fourth, elms should have been discriminated against in wild and forested areas. Every logging operation should have prescribed removal of elm. Whenever possible, these trees should have been utilized or burned. A few ineffective attempts were made to establish procedures which would have reduced the elm population, especially those elms thought to be most vulnerable to attack by bark beetles. These attempts were made by a Dutch elm disease committee formed in the 1950s, with representatives from the Minnesota Departments of Agriculture and Natural Resources, and from the University of Minnesota. The committee considered the steps which should be taken and encouraged appropriate actions. Unfortunately, nurseries were reluctant to cooperate. They continued selling elms, and new suburbs, parks and streets were planted with elms. No formal government restrictions were enacted. Nor was there any publicity urging voluntary restriction of shipment of elm with bark into Minnesota. All attempts to convince the legislature of the importance of initiating control programs, such as sanitation, were to no avail. Some individual legislators were interested and concerned, but other priorities took precedence over concern about a tree disease which was not here and which many thought would be of little consequence.
After a few meetings the Dutch elm disease committee effectively ceased to function, but efforts by its members continued periodically, trying to convince the State Legislature that measures were needed to prepare for the possibility of the arrival of Dutch elm disease. Even after the disease was found in Minnesota there were excellent opportunities to take steps to reduce or slow subsequent disease losses. The state had almost a decade in which the disease remained at low levels and could have been managed. In the city of St. Paul, from 1961 through 1968 only 30 positive cases were officially reported. The disease was not found in Minneapolis until 1963. Unfortunately, enthusiasm for control programs was severely lacking. In fact, it seemed that the slow rate of increase simply confirmed the beliefs held by many that the European elm bark beetle would not survive well in Minnesota, and that the disease would never gain momentum. Beetles which carry the Dutch elm disease-causing fungus are only about one-eighth of an inch long. The holes they leave behind as they burrow into dead elm wood are barely the size of a pen point. As late as 1961, a letter from a University of Minnesota entomologist to the members of the Dutch elm disease committee urged that it was not yet too late to initiate sanitation procedures, but that time was running out: If sanitation measures are not started immediately and effectively the devastation may shock the residents of this area. When this happens it will be too late to do something about the problem. During the decade of the 1960s a maximum effort would have prevented the disastrous losses of elm trees experienced by Minnesota in the 1970s. There were, in fact, unfortunate delaying activities that interfered with proper sanitation. One was the notion that an elm need not be condemned or eradicated unless confirmed by laboratory diagnosis to be positive for Dutch elm disease.
The laboratory exercise was essentially of little value, and dependence on it slowed control programs and provided citizens with a basis for arguing that their tree not be removed. It really made no difference whether an elm tree died from Dutch elm disease or from any other cause: every dead or dying elm should have been eradicated. Bark beetles carrying the Dutch elm disease fungus invaded dead and dying trees irrespective of the cause of a tree's demise, and each new generation of beetles emerged to carry the Dutch elm disease fungus to healthy trees. All species of elms in which bark beetles can live, not just American elms, needed to be included in a sanitation program. Even Siberian elms, which are not often killed by the Dutch elm disease fungus, will harbor bark beetle populations and fungus inoculum. Because Siberian elms are very susceptible to winter injury they often sustain considerable amounts of dieback which, while not disastrously harming the tree, is invaded by bark beetles carrying the Dutch elm disease fungus. The next generation of beetles, their progeny, emerge from these resistant elms carrying large numbers of spores of the disease fungus. Burning has been the most expeditious method of eradicating elm material in which beetles could develop. Unfortunately, a then-growing concern about our environment caused otherwise reasonable restrictions to be enacted against burning. Exceptions should have been granted to allow the burning of elm wood which could not be otherwise utilized. Reasonable numbers of fires and amounts of smoke should have been considered an acceptable environmental price to pay for being able to quickly eliminate large volumes of contaminated elm material.
History has proven that all other systems for disposal of large quantities of elm wood have been both more expensive and far less efficient than burning. If managed properly, in consideration of the energy situation, the people of the state of Minnesota could even have saved considerable amounts of money by burning elm locally, rather than insisting on its being hauled to distant disposal sites. It was not until 1971 that the State Legislature became concerned about Dutch elm disease. Even then, that concern was initiated and sustained by a small core of effective individual legislators. In particular, state representative Tom Berg initiated legislative involvement by forming a committee and holding extensive hearings on the subject prior to the convening of the legislature that year. With its head start on the legislative session, the committee assembled a proposal and prepared a bill for legislative action. It moved slowly through the process, but it was ultimately passed. More than once it appeared that the bill would be tabled or voted down, but its supporters kept alive the legislation which eventually funded and set in place the largest program ever enacted by a single state to deal with a single tree disease. While the bill did technically provide for programs dealing with oak wilt as well as Dutch elm disease, only a minimal effort was expended on the oak wilt problem. At the same time that the State Legislature recognized the seriousness of Dutch elm disease and took action, congressmen from this part of the country responded at the federal level. In October of 1975, then-United States Senator Walter Mondale, of Minnesota, introduced legislation to help at the congressional level. William Steiger, of Wisconsin, introduced the same legislation in the United States House of Representatives. In accordance with the Americans with Disabilities Act, this material is available in alternative formats upon request.
Please contact your University of Minnesota Extension office or the Extension Store at (800) 876-8636.
In 1911 the McClungs and their 4 children moved to Winnipeg, where their fifth child was born. The Winnipeg women's rights and reform movement welcomed Nellie as an effective speaker who won audiences with humorous arguments. She played a leading role in the 1914 Liberal campaign against Sir Rodmond Roblin's Conservative government, which had refused women suffrage, but moved to Edmonton before the Liberals won in Manitoba in 1915. In Alberta she continued the fight for female suffrage and for prohibition, dower rights for women, factory safety legislation and many other reforms. She gained wide prominence from addresses in Britain at the Methodist Ecumenical Conference and elsewhere (1921) and from speaking tours throughout Canada and the US, and was a Liberal MLA for Edmonton, 1921-26. In 1933 the McClungs moved to Vancouver Island, where Nellie completed the first volume of her autobiography, Clearing in the West: My Own Story (1935, repr 1976), and wrote short stories and a syndicated column. In all, she published 16 books, including In Times Like These (1915, repr 1975). Her active life continued: in the Canadian Authors Association, on the CBC's first board of governors, as a delegate to the League of Nations in 1938 and as a public lecturer. Forgotten for a decade, she was rediscovered by feminists in the 1960s. Although some criticized her maternalistic support of the traditional family structure, most credited her with advancing the feminist cause in her day and recognizing the need for further progress such as the economic independence of women. See also Women's Movement. Author: M.E. Hallett Links to Other Sites View Historica’s Heritage Minute devoted to Nellie McClung. The Famous 5 This website focuses on the Famous 5 and their struggle to advance the legal rights of Canadian women. From the Alberta Online Encyclopedia. The “Persons” Case A brief overview of the historic “Persons Case” from the Parliament of Canada website. Are Women Persons?
The “Persons” Case An online feature about the legal implications of the "Persons" Case. From Library and Archives Canada. A profile of Nellie McClung, Canadian writer, suffragette, and activist. From the Calgary Herald feature "Best of Alberta." Charlotte Gray - Nellie McClung Watch a video of Allan Gregg interviewing Charlotte Gray about Nellie McClung and the "mock parliament" episode. From the TVO website. Growing a Race: Nellie L. McClung and the Fiction of Eugenic Feminism See online excerpts from Cecily Devereux's book that provides a historical context for Nellie McClung's views on the sensitive issue of eugenics. From Google Books. Growing a Race: Nellie L. McClung and the Fiction of Eugenic Feminism (review) See an excerpt of a review of Cecily Devereux's book "Growing a Race: Nellie L. McClung and the Fiction of Eugenic Feminism." From the Project MUSE website.
Do drinking giraffes have headaches? Charles Darwin wrote in his Origin of Species that he had no difficulty in imagining that a long drought could have caused some hypothetical short-necked ancestors of the giraffe to stretch their necks continually higher to reach the diminishing supply of leaves. He had no fossil evidence, of course, for such an evolutionary history. He also apparently was not aware of certain problems peculiar to giraffes which make his easy assumption of giraffe evolution even more difficult to accept. The giraffe heart is probably the most powerful in the animal kingdom, because about double normal pressure is required to pump blood up that long neck to the brain. But the brain is a very delicate structure which cannot stand high blood pressure. What happens when the giraffe bends down to take a drink? Does he ‘blow his mind’? Fortunately, three design features have been included in the giraffe to control this and related problems. In the first place, the giraffe must spread his front legs apart in order to drink comfortably. This lowers the level of the heart somewhat and so reduces the difference in height from the heart to the head of the drinking animal. The result is that excess pressure in the brain is less than it would be if the legs were kept straight. Second, the giraffe has in his jugular veins a series of one-way check valves which immediately close as the head is lowered, thus preventing blood from flowing back down into the brain. But what of the blood flow through the carotid artery from the heart to the brain? A third design feature is the ‘wonder net’, a spongy tissue filled with numerous small blood vessels located near the base of the brain. The arterial blood first flows through this net of vessels before it reaches the brain. It is believed that when the animal stoops to drink, the wonder net in some way controls the blood flow so that the full pressure is not exerted on the brain. 
Scientists also believe that probably the cerebrospinal fluid which bathes the brain and spinal column produces a counter-pressure which prevents rupture or leakage from the brain capillaries. The effect is similar to that of a G-suit worn by fighter pilots and astronauts. The G-suit exerts pressure on the body and legs of the wearer under high acceleration and prevents blackout. Leakage from the capillaries in the giraffe’s legs, due to high blood pressure, is also probably prevented by a similar pressure of the tissue fluid outside the cells. In addition, the walls of the giraffe’s arteries are thicker than those in any other mammal. Had Darwin known all these problems peculiar to giraffes, it surely would have given him a headache. Some careful investigations and measurements of blood pressure have recently been made in live giraffes in action. However, the exact manner in which these various factors operate to enable the strange creature to live has still not been clearly demonstrated. Nevertheless, the giraffe is a great success. When he has finished his drink he stands up, the check valves open, the effects of the wonder net and the various counter-pressure mechanisms relax, and all is well. Not even a headache!
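The pressure problem described above comes down to simple hydrostatics: a vertical column of blood of height h adds (or subtracts) a pressure of rho*g*h. Here is a minimal sketch of that arithmetic; the ~2 m heart-to-head height and the textbook blood density are illustrative assumptions, not measurements from the article.

```python
# Rough hydrostatic estimate of the extra arterial pressure a giraffe's
# heart must overcome to lift blood up the neck. Illustrative only:
# the 2 m heart-to-head height and the simple rho*g*h model are assumptions.

RHO_BLOOD = 1060          # density of blood, kg/m^3 (typical textbook value)
G = 9.81                  # gravitational acceleration, m/s^2
MMHG_PER_PA = 1 / 133.322 # conversion from pascals to mmHg

def hydrostatic_pressure_mmhg(height_m):
    """Pressure needed to support a blood column of the given height."""
    return RHO_BLOOD * G * height_m * MMHG_PER_PA

# Standing: the head sits roughly 2 m above the heart.
standing = hydrostatic_pressure_mmhg(2.0)

# Drinking: the head drops roughly 2 m below the heart, so the same
# magnitude of pressure now pushes blood toward the brain instead.
drinking = -hydrostatic_pressure_mmhg(2.0)

print(f"standing: {standing:+.0f} mmHg relative to heart level")
print(f"drinking: {drinking:+.0f} mmHg relative to heart level")
```

The ~156 mmHg result for a 2 m column is consistent with the article's claim that the giraffe heart works at about double normal pressure, and the sign flip on bending down is exactly the swing the check valves and "wonder net" are said to buffer.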
Places to Avoid Planting Trees and Shrubs Planting the right trees and shrubs in the right place is not just an aesthetic strategy. It promotes safety and prevents damage to the plants, nearby buildings and utilities, and relations with the people who live next door. Anticipate the consequences of poorly placed plants — dangerous limbs hanging over your roof or growing into electrical wires, roots clogging your sewer pipe or leach field, inaccessible utility service boxes, unhappy neighbors, and unsafe driving conditions. Pruning efforts to correct the problems after the trees and shrubs mature can damage the plants and leave them looking unnatural and more prone to pests and diseases. Consider the following situations before you plant: Overhead power lines and utilities: The best way to keep your overhead wires clear of tree limbs is to consider the mature height and spread of trees before you plant. The International Society of Arboriculture recommends planting trees that grow no taller than 20 feet directly beneath utility wires. Taller trees should be planted so that their mature canopy grows no closer than 15 feet from the wires. Buried wires and gas lines: Frequently, utility companies bury electric, telephone, and cable television wires underground, especially in new developments. Don’t assume that the wires are buried deeper than your planned planting hole — sometimes, they’re buried just below the surface. Although pipes should be buried at least 3 feet below ground, gas companies prefer a tree-free corridor of 15 to 20 feet on either side of pipes to allow for safety and maintenance. Gas leaks within a plant’s root zone can also damage or kill it. To avoid disrupting underground utilities, many states have laws that require you to contact utility companies that may have wires or pipes on or close to your property before you dig.
Service boxes and wellheads: You may want to disguise your wellhead and the unattractive metal box that the utility company planted in your front yard, but someone will need access to them someday. Plan your shrub plantings so that the mature shrubs won’t touch the box or wellhead. Better yet, allow enough space for someone to actually work on the utilities located in the box without having to prune back your shrubs. Buildings: A strong wind can send branches crashing through your roof. Overhanging limbs also drop leaves that clog your gutters and sticky sap that can stain siding. Keep shrubs at least several feet from your house and plant trees that grow to 60 feet or more at least 35 feet away. Streets, sidewalks, and septic lines: Some trees, such as poplar and willow, grow large roots close to or on the ground’s surface where they heave paving and everything else out of their path. Shallow-rooted trees also compete with lawn grasses and other plants, and make for bumpy mowing. Plant roots usually grow two to three times farther from the tree trunk than the aboveground branches do, so leave plenty of room between the planting hole and your driveway, sidewalk, or septic field for outward expansion. Property boundaries and public rights of way: Your state and municipal governments own the land on either side of all public roads. Many communities and highway departments prohibit planting in the public right-of-way. Contact your local government office for guidelines, or call the State Highway Department if your property borders a state or federal highway. Homeowners commonly plant privacy hedges along their property boundary. If you plan to plant a hedge or row of shrubs or trees between you and the neighbors, avoid future disputes by hiring a professional surveyor to find the actual property lines. When you plant the shrubs, allow enough space so that mature shrubs won’t encroach on the neighboring property. 
You’ll also have room to maintain them from your own yard. Merging traffic: Shrubs and hedges near intersections, including the end of your driveway, must be kept low or planted far enough from the road to allow drivers to see oncoming motorists, bicyclists, and pedestrians.
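The spacing guidelines above can be encoded as simple checks. This is a sketch only: the thresholds (20 ft under wires, 35 ft from buildings for 60 ft trees, roots reaching two to three times the branch spread) come from the text, while the function names and the example tree are my own illustrative choices.

```python
# Simple planting-site checks based on the spacing guidelines in the text.
# Thresholds are from the article; the helper names are illustrative.

def ok_under_wires(mature_height_ft):
    """Trees planted directly beneath utility wires should top out at 20 ft."""
    return mature_height_ft <= 20

def ok_near_building(mature_height_ft, distance_ft):
    """Trees that reach 60 ft or more belong at least 35 ft from buildings."""
    return mature_height_ft < 60 or distance_ft >= 35

def root_reach_ft(canopy_radius_ft):
    """Roots typically extend two to three times the branch spread;
    return the cautious (3x) estimate for clearance planning."""
    return 3 * canopy_radius_ft

# Hypothetical example: a shade tree expected to reach 70 ft with a
# 25 ft canopy radius, planted 30 ft from the house.
print(ok_under_wires(70))        # False: never under the wires
print(ok_near_building(70, 30))  # False: too close to the building
print(root_reach_ft(25))         # 75 ft of potential root spread
```

Keeping the limits in one place like this makes it easy to test a candidate spot against every rule at once before any digging starts.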
Intel working on a new system to boost Stephen Hawking’s typing speed by 10x Stephen Hawking has done wonders for every scientific field. Not only has his own research in physics and cosmology been useful for other scientists, but he has inspired countless people to learn more about the scientific method and the fabric of reality. As we are all well aware, Hawking is paralyzed due to a degenerative disease called amyotrophic lateral sclerosis (ALS). He uses small muscle twitches in his face to select words on a custom computer system so he can communicate. Sadly, his condition has progressed to the point where he can only manage roughly one word per minute. After meeting with Hawking himself, Intel’s CTO Justin Rattner is spearheading a project to improve Hawking’s computer system and increase his words per minute. Hawking can use other muscles in his face, so Intel is using his cheek twitch, mouth movements, and eyebrow movements to allow more nuanced control of the computer. In combination with an improved text prediction engine, and possibly use of facial recognition (think a high-resolution Kinect), the research team is set on getting Hawking back up to his previous five words per minute. If all goes well, the system might even boost that number to upwards of ten. Keep in mind that this research isn’t just for Hawking. The technology developed here can be used in a broader context of smart gadgetry and assistive tech. Elderly and disabled people will undoubtedly benefit heavily from the software and hardware being developed for a person with such severe physical limitations. By adding more sensors like cameras, accelerometers, and microphones to the system while connecting that data to online services like chat programs and social networks, people who once were extremely isolated from society can maintain close personal connections. Facial recognition is getting substantially better.
Not only are companies like Google using it to interact with your tablets and smartphones, but the government is using it to find people. Increasingly, these sensors are being used for entertainment in video games to personalize the experience. The field of biometrics and assistive tech is already large, and it’s only increasing in complexity and capabilities. The medical field has a lot to gain from behavioral biometrics as well. Using computers and sensors to sense changes in gait, metabolism, weight, and heart rate will significantly improve doctors’ ability to diagnose illnesses quickly and accurately. Instead of waiting for symptoms to increase to the point where a patient would notice them, small changes can be picked up extremely early, and treated accordingly. Genetic markers for increased risk for diseases like Parkinson’s disease can be tested for, and those patients could be put on a 24/7 symptom watch. It’s only a matter of time before personal systems using specialized sensors start saving countless lives. This type of technology not only improves lives once disaster strikes, but helps avoid disaster in the first place.
Preimplantation Genetic Diagnosis The term PGD refers to preimplantation genetic diagnosis. This is, in effect, checking the embryo for genetic diseases before replacing it into the uterus. The most common reason for this is that one or both partners are known to carry a genetic trait associated with a severe genetic disease. A typical example is a couple who has given birth to a child with such a severe genetic disease. Some such diseases are uniformly fatal, while others are associated with severe disability. In another situation, the couple may know ahead of time, through a screening process, that they are carriers for a genetic trait. The most common severe genetic diseases in the US include sickle cell anemia, cystic fibrosis, Tay-Sachs disease, and Huntington's disease. How is PGD performed? This process begins with a standard IVF cycle. For full details of this, please go to the IVF section of the web site. Briefly, the ovaries are stimulated with medication. The eggs are harvested under ultrasound guidance. Each egg is injected with a single sperm (ICSI). This is done to prevent the embryo from being covered with sperm DNA, which could contaminate the embryo biopsy. The fertilized eggs (embryos) are then incubated for 3 days. Some embryos will naturally stop dividing. Others will be healthy and continue to divide. Healthy embryos which are at the 6-8 cell stage can then be biopsied. The biopsy technique involves carefully removing one cell and either fixing it to a slide or releasing its DNA for further analysis. The image below is of such a biopsy in progress. Typically the biopsy is done by the IVF program and the genetic material is sent off to a genetics lab, which is frequently in another city. The lab will then try to get results within the next 48 hours. If the results are available in that time frame, we have an opportunity to do a fresh embryo transfer, usually 5 or 6 days after egg retrieval.
If the results cannot be obtained within this time frame, the embryos can be frozen until the results are known. Then a frozen embryo transfer can be performed. Is PGD expensive? PGD is highly complex. It requires two teams of lab staff, a physician, and a genetics expert. Given its complexity, it is surprising that it adds only $4,000.00 to $5,000.00 to the cost of the typical IVF cycle. Is PGD 100% effective? It is not. It is a new technique and cannot test for all genetic defects at once; it can typically test for one at a time. It is too early to say it is 100% effective for testing for that one gene. Most recent studies show that it is more than 90% effective for testing for one genetic defect. How soon can I do PGD? It typically takes 6 months to develop specific probes for the individual gene. Usually blood has to be collected from the parents and tested first. The probes are then developed. What is the pregnancy rate? The pregnancy rate will vary from 50% for young patients to less than 20% for patients in their forties. Are there always normal embryos for transfer? Not always. In most cases some of the embryos will be normal and available for transfer. In some cases all of the embryos can be abnormal and therefore not suitable for transfer. The following is a highly detailed article about PGD. It is not meant to scare you off! It is presented for patients who would like to know more and for health care professionals. It is reproduced with permission from Freedom Drug. Preimplantation Genetic Diagnosis Gina Paoletti-Falcone, RN, BSN Freedom Drug Priority Healthcare The term preimplantation genetic diagnosis, PGD, is actually somewhat self-explanatory. It implies that there will be a genetic diagnosis of something before implantation. In this case, that something would be an embryo, or the egg that could contribute to the formation of an embryo, prior to embryo transfer in an in vitro fertilization (IVF) cycle.
PGD is a laboratory technique that combines the use of IVF, often with intracytoplasmic sperm injection (ICSI), and micromanipulation of eggs or embryos by skilled embryologists to biopsy a cell which subsequently undergoes genetic analysis by one of several techniques. PGD is therefore the earliest prenatal testing available to those trying to conceive who may be at greater risk, for a variety of reasons, of not conceiving at all, of conceiving and losing a pregnancy, or of conceiving a child who will be affected by one of a number of diseases that have their basis in genetic abnormalities. The results allow decisions to be made regarding which embryo(s) would be suitable for transfer to the uterus following IVF to increase the likelihood of the pregnancy and birth of a healthy child. Edwards and Gardner performed the first successful embryo biopsy, sexing rabbit blastocysts, in 1968. Advances in molecular biology and assisted reproductive technologies led to clinical research throughout the 1980s. In 1990 both Handyside and Verlinsky reported on their techniques for PGD. Handyside biopsied embryos at the cleavage stage for sexing by Y-specific DNA amplification in X-linked disorders, while Verlinsky tested polar bodies for autosomal recessive disease. The First International Symposium on Preimplantation Genetics was held in Chicago that same year. Today PGD is a clinical option in many countries throughout the world, with an estimated 1,000 or more healthy children born as a result of this technology that combines assisted reproductive technology, embryology, and genetics. PGD has enhanced the specialty of prenatal diagnosis by allowing couples at risk for having a child with a genetic disease to make choices prior to pregnancy rather than being faced with the agonizing decision of terminating the pregnancy of an affected child.
PGD can be used to screen eggs, sperm and embryos for chromosome abnormalities, and embryos for single gene disorders, sex and human leukocyte antigen (HLA) matching. It is helpful to review some basic information before discussing each of these applications.

Human cells should each contain 46 chromosomes. These chromosomes are string-like structures found in the nucleus, or cell center. Twenty-three chromosomes come from the egg and the other 23 from the sperm that unite to form the embryo. Chromosomes 1 through 22, largest to smallest, are the same for males and females. The 23rd chromosome determines sex: a female has two X chromosomes, inheriting one from her mother and one from her father, while a male has one X chromosome from his mother and one Y chromosome from his father.

Chromosomes are made up of genes, which act as chemical messages that tell cells how to grow and function in the various processes that take place in the human body. There are more than 30,000 different genes, and each cell contains a pair of each, one from the mother and one from the father. Genes are made of DNA arranged in a particular sequence that holds the "code" for that particular gene and its function.

There are four types of nucleotides, the building blocks of nucleic acids. Each nucleotide consists of a 5-carbon sugar (deoxyribose in DNA), a phosphate group, and one of the following nitrogen bases:

- A (adenine)
- G (guanine)
- T (thymine)
- C (cytosine)

DNA consists of two strands of these nucleotides, held together at their bases by hydrogen bonds. The bonds form when the two strands run in opposing directions and twist together into a double helix. Two kinds of base pairings form along the length of the molecule: A-T and G-C. This bonding pattern permits variation in the order of the bases in any given strand. Even though all DNA molecules show the same bonding pattern, each species has unique base sequences in its DNA.
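The A-T and G-C pairing rule described above can be illustrated with a small sketch (the function name is invented for illustration): given one strand, the antiparallel complementary strand is obtained by pairing each base and reading in the opposite direction.

```python
# Sketch of the A-T / G-C base-pairing rule: each base on one strand
# pairs with exactly one partner on the opposite strand.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Return the antiparallel complement of a DNA strand.

    The two strands run in opposing directions, so the paired
    bases are read in reverse.
    """
    return "".join(PAIRING[base] for base in reversed(strand))

print(complement_strand("ATGC"))  # GCAT
```

Because every base has exactly one partner, knowing one strand fully determines the other, which is what makes both DNA replication and probe-based tests like FISH possible.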
This molecular constancy and variation among species is the foundation for the unity and diversity of life (from Biology: The Unity and Diversity of Life, 2001). Disruptions in the "normal" structure (code) or number of genes or chromosomes can have consequences. The goal of PGD is to detect these changes prior to embryo transfer and avoid those consequences.

PGD is usually performed on one or two cells that can be obtained in two ways: polar body biopsy of the egg or blastomere biopsy of the embryo. As an egg matures and undergoes meiotic division, it extrudes two polar bodies. The first polar body is a by-product of the first meiotic division (prior to fertilization) and the second polar body is a by-product of the second division (after fertilization). Fertilization is confirmed by the presence of two pronuclei, about 15-18 hours after insemination with sperm, and the presence of the second polar body in the perivitelline space, the space between the zona pellucida and the cytoplasmic membrane. The most common method for polar body biopsy is to make a slit in the zona pellucida, the outer covering of the egg, using a PZD microneedle and aspirate the polar bodies. The disadvantage of polar body biopsy is that it gives genetic information only about the egg and does not allow testing of the paternal genetic contribution to the embryo. This means that it cannot be used to detect chromosomal abnormalities that occur after fertilization, including translocations that are transmitted paternally, autosomal dominant diseases, or the sexing of embryos.

Blastomere biopsy is the more widely used method to obtain cells for PGD. It allows testing of both the maternal and paternal genetic contributions to the resulting embryo(s). A blastomere is simply a cell from an embryo. Research established that the 8-cell stage is most suitable for blastomere biopsy, which means performing the biopsy on day 3 after egg retrieval, with embryo transfer pushed out to day 5.
On day 3 the blastomeres are still totipotent (undifferentiated, with the potential to develop into any type of cell) and have not yet compacted as in the morula stage. Removing a cell or two therefore will not affect fetal development but simply delays cell division for a couple of hours, at which point the embryo resumes normal division. The embryo is usually incubated in a calcium- and magnesium-free medium for about 20 minutes prior to biopsy to reduce the adherence of one blastomere to another. The biopsied blastomere must have a visible nucleus present. Before the blastomere is removed with the biopsy pipette, an opening is made in the outer covering, the zona pellucida. This is accomplished using either the application of acidic Tyrode's solution, a diode laser or a PZD microneedle.

Once the cell is removed, it must be prepared for one of two techniques used to analyze it. The technique used will be determined, in advance, by the reason for PGD and the test required. FISH, fluorescence in situ hybridization, can be used on both polar bodies and embryos to analyze whole chromosomes, while PCR, polymerase chain reaction, is used to analyze genes on embryos. Preparation for FISH requires that the cell be spread on a slide and fixative applied such that the cytoplasm dissolves, leaving just the nuclear chromosomes. Preparation for PCR requires the cell to be placed in a special tiny PCR tube containing a buffer that allows a reaction for replication and amplification of the genetic signal. All embryos in culture dishes, slides for FISH and PCR tubes must be meticulously prepared and labeled so that unequivocal matching of each embryo with its final PGD report is assured.

FISH uses probes, small pieces of DNA, that are a match for the chromosomes that need to be analyzed. Each probe is labeled with a different color fluorescent dye and then applied to the biopsied cell on the slide.
A coverslip is applied and sealed, and the slide is placed on a slide warmer, then in a humidification incubator. Finally, under a fluorescent microscope, each chromosome color can be counted, and cells/embryos that are normal (2 of each analyzed chromosome) can be distinguished from those that are not.

FISH can be used for:
- Aneuploidy screening in women of advanced maternal age
- Aneuploidy screening for male infertility
- Aneuploidy screening with repeated IVF failure
- Identification of sex in X-linked diseases and for non-medical reasons
- Recurrent miscarriages caused by parental translocations

HINT TO REMEMBER: "BIG FISH" is used to analyze whole chromosomes.

Each biopsied cell contains a tiny amount of DNA, which makes up the genes on the chromosomes. It would be very difficult to accurately read this small amount of DNA. PCR allows the amplification of a specific DNA sequence(s) by using enzymes that allow it to be copied and multiplied billions of times so that it can be read. PCR consists of 3 steps that are repeated 20-40 times:

- Step 1: Denaturation of the two complementary DNA strands at high temperature. This causes the two strands to unwind and separate into two single strands, each serving as a template to build a new double strand.
- Step 2: Annealing at a lower temperature, which allows primers (short complementary pieces of DNA) to connect at either end of the DNA sequence to be amplified.
- Step 3: Extension, in which a heat-resistant DNA polymerase inserts nucleotide building blocks starting at each primer and working inward, thus building two new identical strands.

At the end of this cycle the number of DNA molecules has doubled, and the cycle starts again. The mutation or disease being tested for requires the development of a PCR test specific to it. The test development takes time and generally involves blood samples from the couple.
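The doubling described above is exponential: under idealized conditions, n cycles turn each template molecule into 2 to the power n copies. A minimal sketch (the function name is invented for illustration):

```python
# Idealized PCR amplification: each denature/anneal/extend cycle
# doubles the number of target DNA molecules, so n cycles yield
# templates * 2**n copies. Real-world efficiency is somewhat lower.
def pcr_copies(templates: int, cycles: int) -> int:
    return templates * 2 ** cycles

# A single biopsied cell contributes one copy of each target sequence;
# 30 cycles turn it into over a billion copies.
print(pcr_copies(1, 30))  # 1073741824
```

This exponential growth is why 20-40 cycles are enough to take the DNA of a single cell from unreadable to easily analyzable.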
Once the DNA has been amplified there are a variety of laboratory techniques to screen that gene for the abnormality, such as gel electrophoresis, where a mismatch results in differential migration on the gel, and automated DNA sequencing.

PCR can be used for:
- Single gene defects in autosomal disease
- Single gene defects in male infertility

HINT TO REMEMBER: "Piece C R" is used to analyze specific genes (pieces on chromosomes).

PCR for single gene defects requires the use of ICSI, intracytoplasmic sperm injection, to prevent contamination of the biopsied cell with DNA from surplus sperm that may still be embedded in the zona pellucida at the time of blastomere biopsy if conventional IVF drop insemination was used. The cumulus cells attached to the zona can cause similar problems and should be removed prior to blastomere biopsy. The goal is to ensure that pure, high-quality DNA, not contaminated by another cell or piece of DNA, is available for analysis.

Clinically, PGD can benefit a variety of patients who undergo assisted reproductive technologies specifically for PGD, or who are undergoing assisted reproductive technologies to treat infertility with the addition of PGD to enhance their outcome. Aneuploidy, the most common chromosomal abnormality, simply means having an extra chromosome (trisomy) or a missing chromosome (monosomy). If the egg or the sperm that creates the embryo has an extra or missing chromosome, then that embryo will be affected in the same way. When there are extra or missing large chromosomes, the likelihood of implantation decreases and the spontaneous miscarriage rate increases. When chromosomes 13, 18, 21, X or Y are involved, the pregnancy may implant and continue to develop, resulting in the birth of a child with a chromosome condition that can include physical differences and intellectual disability. Trisomy 21, or Down syndrome, is the most common trisomy.
Others include Patau syndrome (trisomy 13), Edwards syndrome (trisomy 18), Klinefelter syndrome (47,XXY, an extra sex chromosome) and Turner syndrome (45,X, a missing sex chromosome). Trisomies 16, 22, 15 and 21 are commonly found in spontaneous miscarriages. The most common aneuploidies in day 3 embryos involve chromosomes 22, 16, 21, 15 and 17.

The chance of aneuploidy increases with increasing maternal age. Since women are born with their lifetime supply of eggs, the thought is that older eggs are more likely to make mistakes as their chromosomes divide, resulting in a greater percentage of eggs that have either a missing or an extra chromosome. This is likely the explanation for the dramatic decline in pregnancy rates and increase in miscarriage rates as women age, even with assisted reproductive technologies. Studies have shown that more than 20% of embryos from women aged 35-39, and 40-60% of embryos from women 40 and older, are aneuploid. Screening for aneuploidy using FISH on polar bodies or blastomeres could therefore potentially increase implantation and pregnancy rates while decreasing pregnancy loss and the number of pregnancies affected by trisomies or monosomies. Several studies have shown increased implantation rates with aneuploidy screening for 8 chromosomes.

While PGD for aneuploidy significantly decreases the risk of having a child affected by a trisomy or monosomy, it is not possible at this time to test all of the chromosomes. The chromosomes in which monosomies or trisomies are most commonly seen are tested for: 13, 15, 16, 17, 18, 21, 22, X and Y. The accuracy of PGD for aneuploidy is about 90%. Misdiagnosis may occur because of mosaicism, meaning that some of the blastomeres within the embryo are normal and some are abnormal. If a normal blastomere is biopsied, the result could be the transfer of an embryo that carries an abnormality.
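The counting logic behind FISH-based aneuploidy screening can be sketched as follows. This is a simplified illustration, not laboratory software: for each probed chromosome, two fluorescent signals indicate a normal (disomic) complement, one signal suggests monosomy, and three suggest trisomy. The function and labels are invented for the example.

```python
# Hypothetical sketch of FISH signal interpretation for one cell:
# map the number of fluorescent signals seen per probed chromosome
# to a classification. Real analysis also handles artifacts such as
# overlapping or split signals.
def classify_fish(signal_counts):
    labels = {1: "monosomy", 2: "normal", 3: "trisomy"}
    return {chrom: labels.get(count, "abnormal")
            for chrom, count in signal_counts.items()}

result = classify_fish({"13": 2, "18": 2, "21": 3, "X": 2})
print(result["21"])  # trisomy
```

A cell is reported as normal only if every probed chromosome shows exactly two signals; any other count flags the embryo for that chromosome.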
Prenatal testing by either chorionic villus sampling or amniocentesis is currently recommended in any pregnancy after PGD to confirm the diagnosis and rule out any other possible aneuploidies not tested for.

PGD can also be used to detect translocations, a change in the structure of chromosomes. Individuals who have "balanced" translocations are generally unaffected, as there is no extra or missing chromosomal material and the break does not generally disrupt gene function. Typically these people have no medical problems, although some have reduced fertility, likely due to producing eggs or sperm that are "unbalanced". An "unbalanced" translocation is one in which there is extra or missing chromosomal material. An embryo with an unbalanced translocation is less likely to implant, more likely to miscarry if it does implant, or may result in the live birth of a child who will likely have physical or mental problems. Therefore individuals with translocations are at risk for pregnancy loss or for having a child with severe medical handicaps that may be incompatible with life.

Reciprocal translocations affect about 1 in 625 people. This type of translocation involves a break anywhere on two different chromosomes, allowing pieces to be swapped between them. About 1 in 900 people have a Robertsonian translocation, involving chromosomes 13, 14, 15, 21 or 22. These chromosomes have much larger bottom halves, which can fuse together. The risk of having children who are normal, balanced or unbalanced, or of recurrent pregnancy loss, is influenced by the chromosome(s) involved and the size of the fragments exchanged. Polar body biopsy can be used if the woman has a translocation, although blastomere biopsy is more commonly used. FISH analysis is used to identify normal/balanced and unbalanced genotypes.
Analysis of embryos from translocation carriers has shown that:
- Carriers of reciprocal translocations have a high number of unbalanced embryos.
- It may be beneficial to analyze sperm from male translocation carriers before a PGD cycle to determine the percentage of unbalanced sperm, allow estimates of the percentage of embryos that may be unbalanced, and counsel accordingly.
- Carriers of reciprocal translocations have a higher incidence of mosaic and chaotic embryos than those with Robertsonian translocations.
- Infertility in translocation carriers may be caused not only by their unbalanced eggs or sperm but also by a high incidence of aneuploidy involving other chromosomes.
- Lower pregnancy rates in translocation cases are primarily caused by the low number of normal embryos available for transfer after PGD.
- Evsikov et al (2000) showed that roughly equal proportions of normal/balanced (32%) and unbalanced (26%) biopsied embryos made it to the blastocyst stage.

PGD for translocations significantly decreases the likelihood of having a child with an unbalanced translocation, as it is about 90% accurate. Prenatal testing by either chorionic villus sampling or amniocentesis is recommended to account for the error rate as well as to test for other chromosomal conditions not tested for. PGD significantly reduces the chance of pregnancy loss in patients with translocations. According to Munne, patients with translocations who achieved a pregnancy after PGD had experienced miscarriage in more than 90% of their previous pregnancies; after PGD, fewer than 10% of pregnancies resulted in a loss. Munne also noted that female translocation patients produced an average of 9.5 mature eggs, in comparison to 13 mature eggs in females without translocations. On average 65% of embryos are abnormal, and in 22% of cycles there were no normal embryos available for transfer.
In the past, the first indication many couples had that one or both of them carried a genetic mutation was the birth of a child with a serious medical condition, or a history of a relative with a genetic medical condition. Individuals could be tested to see if they "carried" the gene and then counseled as to the odds of having a child with the disease. Prenatal genetic testing by either chorionic villus sampling or amniocentesis then made it possible to diagnose many of these diseases in a fetus during pregnancy. A positive diagnosis placed these couples in the unenviable position of deciding whether to continue with the pregnancy or to terminate at a point when the pregnancy was well established. IVF and micromanipulation for ICSI, as well as the Human Genome Project and the development of PCR for DNA amplification, have all made detection of many single gene disorders using PGD possible.

Single gene disorders are diseases caused by the inheritance of a single defective gene. There are two categories of single gene disorders:
- those that are recessive, in which two defective copies of the gene, one from each parent who carries it, are necessary to have the disease
- those that are dominant, in which only one copy of the defective gene is necessary in order to be affected

Errors in hundreds of different genes are responsible for the hundreds of diseases identified. Many are rare, but some are common enough among certain subgroups of the population that members of those groups should routinely be screened to see if they are carriers, and see a genetic counselor if they are. The following list includes single gene disorders that PGD has been used to screen for.
- Alpha and beta thalassemia
- HLA genotyping
- Cystic fibrosis
- Sandhoff disease
- Sickle cell anemia
- Epidermolysis bullosa
- Gaucher disease
- Adenosine deaminase deficiency
- Tay-Sachs disease
- Glycogen storage disease type IA
- Fanconi anemia types A, C and G
- Adrenal hyperplasia
- Spinal muscular atrophy
- LCHAD deficiency
- Neurofibromatosis types 1 and 2
- Li-Fraumeni syndrome (p53 gene)
- Von Hippel-Lindau disease
- Myotonic dystrophy
- Huntington's disease
- Marfan syndrome
- Osteogenesis imperfecta types I and IV
- Charcot-Marie-Tooth disease type IA
- APP early onset Alzheimer's disease
- Polycystic kidney disease types 1 and 2
- Multiple epiphyseal dysplasia
- Retinitis pigmentosa
- Familial adenomatous polyposis (APC gene)

X-linked diseases:
- Ornithine carbamyl transferase (OTC) deficiency
- X-linked hydrocephalus
- Hemophilia A and B
- Duchenne muscular dystrophy

Both ASRM and ACOG have recommended preconception screening for some of the most common single gene disorders, such as CF and Tay-Sachs, in the at-risk population. In order to do PGD, blood samples from the couple may be needed to confirm the particular mutation and the ability to test for it. Reports of genetic testing are also needed to identify the specific mutation.

Cystic fibrosis is the most common autosomal recessive disease in Caucasians of European descent. Approximately 1 in 25 carries a defective copy of the gene. Because it is a recessive disease, two copies of the defective gene, one from each parent, are necessary to be affected. One copy of the defective gene makes a "carrier". Two carriers have a 25% chance that their child will be affected, a 50% chance that their child will be a carrier and a 25% chance that the child will not have a copy of the defective gene. There are many possible mutations in the CF gene; the most common is deltaF508. A different mutation causes congenital bilateral absence of the vas deferens (CBAVD), a cause of male infertility. Another common autosomal recessive disease is Tay-Sachs.
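The 25%/50%/25% odds for two carriers follow from enumerating the four equally likely allele combinations a child can inherit, one allele from each parent. A minimal sketch (the function name and "N"/"d" allele labels are invented for illustration):

```python
from itertools import product
from collections import Counter

# Enumerate the four equally likely allele combinations for a child
# of two carriers. Each parent has one normal allele "N" and one
# defective allele "d", and passes one of them on at random.
def offspring_odds(parent1=("N", "d"), parent2=("N", "d")):
    outcomes = Counter()
    for a, b in product(parent1, parent2):
        genotype = {a, b}
        if genotype == {"d"}:            # two defective copies
            outcomes["affected"] += 1
        elif "d" in genotype:            # one defective copy
            outcomes["carrier"] += 1
        else:                            # no defective copies
            outcomes["unaffected"] += 1
    total = sum(outcomes.values())
    return {k: v / total for k, v in outcomes.items()}

odds = offspring_odds()
# 25% affected, 50% carriers, 25% unaffected
print(odds)
```

The same enumeration, with one carrier parent and one affected parent, would show why a dominant disorder such as Huntington's carries a 50% transmission risk.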
The odds of carrying the Tay-Sachs mutation are increased among eastern European Ashkenazi Jews; approximately 1 in 27 Jews in the US is a carrier. Hemoglobin diseases are the most common single gene disorders overall, with sickle cell disease common in those of African ancestry and beta thalassemia common in Mediterranean countries/ancestry. Each of these diseases has devastating effects on the affected child and is eventually fatal. Prior to PGD, families with known histories of these diseases were faced with either not having their own children, to avoid transmittal of the disease, or taking a chance, undergoing amniocentesis and being faced with the possible choice of terminating an affected pregnancy or having an affected child. PGD has given these couples the option of testing embryos prior to pregnancy, which could theoretically eliminate the transmission of some of these diseases to the next generation. Additionally, because of preconception screening, families "at risk" (two carriers of a CF mutation) will be alerted to their risk before they ever have a family history of the disease.

Huntington's disease is a late onset dominant single gene disorder. Symptoms usually present after the individual has had children and potentially passed on the single defective gene. Because it is dominant, having a parent with Huntington's disease means a 50% chance of inheriting Huntington's disease. Studies show that presymptomatic genetic testing is not something the majority of those at risk choose, yet given the opportunity they would choose to prevent the transmission of that dominant gene to their children. Some of these couples undergo IVF and PGD in a "nondisclosure" cycle, meaning that they are given no information about the number of eggs or embryos obtained or the results of PGD in their embryos.
They are given no information that would allow them to infer that they have the defective Huntington's gene; they simply have an embryo transfer of disease-free embryos, which could eliminate the disease from the next generation of their family. Despite the relative simplicity of this train of thought, it does raise ethical questions that are difficult to answer.

PGD can also be used to screen embryos as an HLA match for a sibling with a life-threatening disorder. This may be the last resort for families with a child affected by thalassemia, Fanconi's anemia, leukemia and other inherited or sporadic diseases requiring a hematopoietic stem cell transplant. Matched sibling donors are the best candidates, but if none exist, IVF with PGD can provide both screening to prevent the transmission of the disease to another child (if it is an inherited disease) and the HLA-matched sibling to save the life of the existing child using cord blood obtained at birth.

It is apparent, then, that the following patients are most likely to benefit from PGD and the information it provides:
- Couples with a family or personal history of an inheritable genetic disease
- Carriers of single gene disorders
- Women over 35
- Couples with a prior history of repeated pregnancy losses or pregnancies with chromosomal abnormalities
- Carriers of chromosome translocations or abnormalities
- Patients with repeated IVF failure
- Couples with severe male factor infertility

Once the appropriate PGD testing has been done, results are communicated so that decisions about embryo transfer can be made. Embryos will be classified as normal, abnormal or undiagnosed. Because of all the intricate steps involved in both the biopsy and the actual FISH or PCR technologies, there can be technical difficulties that result in a "non-diagnosis".
Reasons for this can include:
- No nucleus in the biopsied cell, and therefore no chromosomes
- A slide fixation error such that cells are lost
- Unknown detection failure
- Failure to amplify the gene due to technical problems at the IVF lab or PGD lab, or an embryo with degraded DNA
- Contamination with foreign DNA

Other limitations and challenges to consider are as follows:
- There may be few or no normal embryos available for transfer.
- There are generally no embryos available for cryopreservation, requiring another fresh IVF cycle.
- Cryopreserved biopsied embryos appear to have a lower implantation rate than non-biopsied cryopreserved embryos.
- There is no guarantee of pregnancy, even in otherwise fertile couples, with the transfer of normal good quality embryos.
- Embryos can only be diagnosed as "normal" for the defect(s) tested.
- There is a very low risk (~0.1%) of damage to the embryo as a result of the biopsy.
- Analysis of a single cell has limitations and an error rate (5-10%) that allows for a small percentage of misdiagnoses. Therefore, if a pregnancy results, prenatal testing in the form of chorionic villus sampling or amniocentesis is still required.

Patients who come to an infertility practice for PGD are very often different from infertility patients. They generally are not infertile and may already have children. They may have a child who is affected by a condition they are trying to prevent in another child. They may or may not have a true understanding of what IVF and PGD entail. They may have no understanding of the time frame involved in a PGD cycle and the many steps involved. They may have no information on the cost or coverage of the PGD cycle. They are generally referred by someone who may or may not have started the educational process of how and why PGD may be beneficial to them. Infertility patients may also require PGD for reasons identified as part of their infertility workup (both partners identified as carriers of a CF gene mutation) or treatment (multiple failed IVF cycles).
Depending on the reason for PGD, the first consult for these patients may be with the reproductive endocrinologist or with the genetics counselor. Additionally, they will need to meet with the financial and nursing staff, and their cycle will also require the involvement of the embryology lab at the practice and a PGD lab.

Every patient needs genetics counseling before their PGD cycle. The genetics counselor can review the genetic basis for the particular clinical situation the patient presents. Discussion may include an overview of the diagnosis, transmission of disorders, likelihood of transmission and ways to test for it. Family and personal medical histories may be discussed and previous genetic testing reviewed. Meeting with the counselor is an integral part of the "informed consent process" for patients undergoing PGD; genetics counselors are the experts in discussing genetics with patients.

The physician meets with the patients to discuss their clinical situation and the application of IVF with PGD. Risks and benefits of IVF, polar body/blastomere biopsy, and FISH or PCR testing, as well as the possibility of no embryos to transfer, pregnancy rates and follow-up testing, all need to be discussed. Consents for all of the above procedures need to be signed. Very often the PGD testing will be done at a laboratory that is a separate entity from the infertility practice, with a separate set of consents to be signed. The physician will need to discuss the most effective method for biopsy and testing with the embryology and PGD labs and clearly document what will be tested, where and how. Very often the PGD lab is not part of the infertility practice and may even be in another state. The relationship between the practice and the PGD lab needs to be clearly spelled out, with defined roles for each entity and a communication plan for the various steps in the process.
Financial issues need to be clearly documented so that all parties involved understand the costs, who is responsible for payment, and to whom. Some PGD labs provide embryologists who come to the center to perform the actual polar body or blastomere biopsy, while other infertility practices have their own embryologists do the biopsy, prepare the cells and ship them to the PGD lab for analysis. Patients may have very little interaction with the lab that will do their genetic testing. Most PGD labs have the final say as to when a patient is clear to start their cycle, based on receipt of consents, pretesting and preparation of probes, completion of genetic counseling and financial arrangements. Depending on the reason for PGD, it may take 8-12 weeks for all of the testing and preparation to be completed.

The PGD lab generally needs to be notified of:
- Start of stimulation
- Anticipated biopsy date
- HCG and egg retrieval dates
- Number of eggs retrieved
- Number of embryos to be biopsied
- All information regarding shipment of the specimens, generally by FedEx or another predetermined courier

There needs to be a defined plan for communication within the PGD team at the infertility practice. The embryology lab needs to be involved in plans for upcoming PGD cycles, including cycle starts and coordination with the PGD lab for egg retrieval and biopsy dates, as well as information regarding eggs and embryos, transport of biopsied cells, communication of results and embryo transfer. There needs to be flexibility within the embryology staff, as expected egg retrieval dates may change based on response to stimulation. Embryology needs to know how to reach the appropriate person at the PGD lab at any time.

Patients need to meet with the financial department at the infertility center to discuss the cost of the procedures they will undergo. Some patients may have coverage for some of the pretesting involved and some for the IVF cycle.
Very few patients will have insurance coverage for the actual PGD process, which can cost somewhere between $2,000 and $5,000. Patients who are planning IVF and PGD can certainly benefit from a consultation with a psychological counselor. They may have issues that need to be discussed in light of their diagnosis and previous experiences. Counselors can help to reinforce the commitment that patients make when planning a PGD cycle in terms of time, money and emotions, and should be available throughout the cycle to help patients cope with the emotional issues treatment can raise.

Nurses play an integral role in the very precise and detail-oriented coordination of PGD cycles. Perhaps the most important word for everyone involved in these cycles to remember is communication. This refers both to the verbal communication that is essential between all the parties involved and to written communication in the form of documentation of all that has been discussed, agreed to and planned. Nurses are pivotal figures in that they generally have the most contact with the patients and are the point person that patients, physicians, embryologists and the PGD lab all look to for assurance that all the appropriate steps have been followed and documented to allow the cycle to proceed successfully.

Some might assume that a PGD cycle is simply an IVF cycle with a few additional laboratory procedures between egg retrieval and embryo transfer. That is a very simplistic and unrealistic assumption, for many reasons. The nursing consult orients patients to the process of IVF and PGD. Very often patients do not expect that they will need the same basic workup (day 3 hormones, infectious disease testing, uterine evaluation, semen analysis) as infertility patients, because they do not consider themselves to be infertility patients.
They may need additional bloodwork, or records of previous genetic testing, in order for the PGD lab to develop testing specific to their clinical situation. Much of the infertility nurse's role is patient education. Despite the fact that these patients have generally met with various other members of the "PGD team" and been counseled and consented, it is very often the nurse who answers the questions that remain unasked or unanswered. The nurse fills in all the details of the journey from point A to point B in the process of IVF and PGD. Medications are discussed and ordered, the stimulation process and protocol are outlined, monitoring is arranged and the expected timetable is covered. It is essential that the nurse have a reasonable understanding of polar body biopsy, blastomere biopsy, FISH and PCR so that they can be explained in terms that patients can understand. Nurses need open communication with the physician regarding the clinical plan for each patient. Some practices may designate specific nurses to handle PGD patients, in the same way that there are usually specific donor egg nurses.

Most patients are anxious to get started and may be overwhelmed and disappointed when they realize all that needs to be done before they can go ahead with the cycle. The nurse reassures and coordinates the various steps. The nurse is, in some respects, the gatekeeper who ensures that all the i's are dotted and t's crossed so that the patient fulfills all the obligations necessary to get the go-ahead from the PGD lab to start their cycle. As the gatekeeper, the nurse is very often the key communicator between the physician, the embryology lab, the PGD lab and the patients. It takes expertise, cooperation, organization, communication and documentation on everyone's part to make a successful PGD program. It takes empathy, compassion and patience to care for the people who can benefit from these technologies.
Defined roles, team meetings and ongoing evaluation of results can help to keep everyone on the same page.

A PGD program can raise issues that may require ethical consideration and discussion. Professor Robert Edwards eloquently summarizes some of the moral issues that PGD forces us to consider:

"A constant worry is the oft repeated charge that these techniques introduce eugenics to human populations rather than helping to avoid inherited diseases in fetuses. Great care is essential to avoid any impression that averting genetic disease in embryos casts any reflection of the value and equality of the handicapped in a modern society. And a final challenge to the democracy of science is that the rich will benefit most from these new advances because health authorities in many countries still crassly decline to fund IVF and PGD despite their overwhelming advantages to so many couples. All these issues have stemmed from the belief that the social advantages of trying to avert genetic disease in children far outweigh the cost of their technologies. There is no doubt that preimplantation genetic diagnosis and other means of averting or alleviating serious inherited disease are bound to offer ever widening opportunities while demanding the closest of ethical attention." ("An Atlas of Preimplantation Genetic Diagnosis", Verlinsky and Kuliev, Parthenon Publishing, 2000)

The Genetics and Public Policy Center, www.dnapolicy.org, released the results of its public opinion survey on genetic testing on February 18, 2005. This is believed to be the largest public opinion survey ever conducted on the topic and was funded by The Pew Charitable Trusts. It included 21 focus groups, 62 in-depth interviews, and 2 surveys with a combined sample size of more than 6,000 people, as well as both in-person and online town meetings.
The report states that: "A majority of Americans believes it is appropriate to use reproductive genetic testing to avoid having a child with a life-threatening disease, or to test embryos to see if they will be a good match to provide cells to help a sick sibling. However, most Americans believe it would be wrong to use genetic testing to select the sex or other non-health related, genetic characteristics of a child. Focus groups and town hall meetings revealed that Americans don't fear technology per se, but rather fear that unrestrained human selfishness and vanity will drive people to use reproductive genetic testing inappropriately such as to select for non-medical but socially desirable characteristics." According to the report, Americans "fear a world in which children are expected to be perfect, and parents are expected to do everything possible to prevent children with genetic disease from being born. For many participants, these technologies raise concerns about how society might treat individuals with disabilities in a world where the birth of disabled persons might be preventable, and where the cost of testing and treatment might lead to disparities in who can afford them." A majority of those surveyed also "wants and expects oversight to ensure safety, accuracy and quality of reproductive genetic testing," but 70 percent of respondents are also "concerned about government regulators invading private reproductive decisions." Only 38% "support the idea of the government regulating PGD based on ethics and morality."

1. Verlinsky, Y. and Kuliev, A. An Atlas of Preimplantation Genetic Diagnosis. Parthenon Publishing, 2000.
2. Verlinsky, Y. et al. "Over a decade of experience with preimplantation genetic diagnosis: a multicenter report." Fertility & Sterility, August 2004, Vol 82, No 2, pp. 292-294.
3. Robertson, J. "Embryo screening for tissue matching." Fertility & Sterility, August 2004, Vol 82, No 2, pp. 290-291.
4. Marik, J.
"Preimplantation Genetic Diagnosis" eMedicine.com January 14, 2005. 5. Cunningham,D. "PGD and the Embryology Lab (what the heck are they doing in there?) powerpoint presentation and inservice for the New England Nurses in Reproductive Medicine February 2004. 6. Keller,M. "Preimplantation Genetic Diagnosis" powerpoint presentation and inservice for the New England Nurses in Reproductive Medicine February 2004 7. Sermon,K. "Current concepts in preimplantation genetic diagnosis (PGD): a molecular biologist's view, Human Reproduction Update, Vol 8, No 1 pp.11-20. 2002 8. www.reprogenetics.com assessed 1/14/05 9. www.givf.com assessed 1/24/05 10. Bielorai,B et al. "Successful umbilical cord blood transplantation for Fanconi anemia using preimplantation genetic diagnosis for HLA match donor" American Journal of Hematology. Dec 2004: 77(4):397-9. assessed on PubMed 1/17/05 11. Kahraman,s. et al. "Clinical aspects of preimplantation genetic diagnosis for single gene disorders combined with HLA typing" Reprod Biomed Online. 2004 Nov;9(5):529-32. assessed on PubMed 1/17/05. 12. Ferraretti,AP et al. "Prognostic role of preimplantation genetic diagnosis for aneuploidy in assisted reproductive technology outcome" Human Reproduction.2004 March;19(3):694-9. assessed on PubMed 1/17/05 13. Gianaroli,L et al. "Preimplantation diagnosis for aneuploidies in patients undergoing in vitro fertilization with a poor prognosis: identification of the categories for which it should be proposed" Fertility & Sterility Nov 1999 Vol 72, pp.837-844. 14. Kahraman,S et al. "The results of aneuploidy screening in 276 couples undergoing assisted reproductive techniques" Prenatal Diagnosis April 2004;24(4):307-11. assessed on PubMed 1/17/05 15. www.rscbayarea.com assessed 2/4/05 16. www.infertilitydoctor.com assessed 2/4/05 17. www.sbivf.com assessed 1/17/05 18. Biology The Unity and Diversity of Life Ninth Edition 2001 Brooks/Cole Thompson Learning Publishers. 19. www.dnapolicy.org assessed 2/18/05 1. 
1. Preimplantation genetic diagnosis testing must always be done in conjunction with an
Answer is A

2. Polar body biopsy involves the removal of one or two polar bodies from:
A. an oocyte
B. a day 1 embryo
C. a day 3 embryo
D. a blastocyst
Answer is A

3. Polar body biopsy tests for:
A. paternal genetic contribution
B. maternal genetic contribution
C. both paternal and maternal genetic contribution
D. sex of the embryo
Answer is B

4. Blastomere biopsy is usually done:
A. as soon as fertilization is confirmed.
B. after ICSI insemination with sperm.
C. on day 3 after egg retrieval when there are generally 8 cells.
D. on day 5 at the blastocyst stage.
Answer is C

5. FISH involves the use of:
A. probes which are small pieces of DNA.
B. a fluorescent microscope to count the chromosomes analyzed.
C. microscope slides and coverslips.
D. all of the above.

6. FISH can be used to test for aneuploidy of polar bodies, sperm or embryos.
Answer is A

7. Polymerase chain reaction allows for:
A. multiplication of chromosomes.
B. insertion of new genes to replace defective genes.
C. amplification of specific DNA sequences.
D. removal of defective genes from embryos so they can be transferred.
Answer is C

8. PGD for aneuploidy:
A. uses fluorescent probes to identify the number of specific chromosomes being
B. cannot test for every chromosome simultaneously at the present time.
C. may help to increase implantation rates in patients with repeated IVF failure.
D. all of the above.
Answer is D

9. PGD eliminates the need for either chorionic villus sampling or amniocentesis.
Answer is B

10. Each embryo that has undergone a blastomere biopsy will have a definitive
Answer is B
Who’s Most Ticklish? Grade Level: 2nd to 4th; Type: Social Science This project determines what category of person is most likely to be ticklish. - Do males or females tend to be more ticklish? - What age(s) of persons tend to be most ticklish? Why can’t we tickle ourselves? Why do people laugh when tickled, even when they don’t like it? Some scientists, approaching it from an evolutionary standpoint, believe that tickling encourages social bonding. Others believe that it is a primitive form of self-defense practice for young children. - A long feather - Test subjects of different ages and genders - Paper and pencil for recording and analyzing data - Record the gender and age of test subject. - Using the feather, tickle the test subject in various commonly-ticklish spots (ear, neck, back of knee, etc.). - Rate the subject’s response to each tickle on a scale of one to five with one being no response and five being extreme ticklishness. - Repeat for all subjects. - Analyze results: On average do males or females tend to be more ticklish? Do younger or older people tend to be more ticklish? Do certain categories of people tend to be ticklish in a particular spot on their body (e.g. You might find that in general boys younger than 7 are ticklish on their knees but not on their ears)? Consider explanations based on scientists' hypotheses of the evolutionary roots of ticklishness. - Extension: Ask test subjects whether they find tickles pleasant or unpleasant. Analyze subjects’ answers according to gender, age, and overall degree of ticklishness. Terms/Concepts: ticklish, gender, age Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. 
For further information, consult your state’s handbook of Science Safety.
For many, the Pakistan Tehreek-e-Insaf (PTI) rally in Lahore indicated a nationalist upsurge — the sudden pride in being a Pakistani who was part of an upbeat political activity. The sense of elation was natural, given that crisis rather than the lack of it has become the rule rather than the exception. An average Pakistani seems to be on a never-ending roller coaster ride. Nations that get sucked into such a whirlwind often lose their sense of making appropriate choices. In fact, the appropriate choice becomes the one which provides instant, though short-term, relief from an immediate crisis. Under the circumstances, the tendency is to deconstruct existing structures, often at the pace of destruction, and replace them with something which is often militantly nationalistic, self-righteous and generally dictatorial in character. Hence, extreme sociopolitical crises result in extreme solutions that may not bring long-term relief but are akin to a shot of morphine that gives an immediate high. One of the best examples of what results from the collapse of a sociopolitical system is the rise of the Third Reich in Germany during the 1930s. Burdened by global recession and a humiliating military defeat, the bulk of middle-class Germany found refuge in Adolf Hitler's ideology. The Fuhrer promised to get rid of the Treaty of Versailles and of unemployment. The silver lining was that once in power, the Nazis would change everything that had been spoiled by the ruling elite of those days. The Weimar government was ferociously accused of capitulating to the enemy. The moral fabric of German society had thinned to such a degree that there was little possibility of questioning Hitler's logic. Thus, the rise of the Nazis was phenomenal. From getting 12 seats in 1928, the Nazi party gained popularity, winning 107 seats in 1930 and 230 in 1932. The sociopolitical and cultural discourse also began to change.
There was greater emphasis on German traditions and values, which the Nazis promised to reinforce. This became extremely popular with the youth and women. The latter played an important role in enhancing the political power of the Nazis, just as we saw in the case of Maulana Fazlullah in Swat. The ascendancy of the Nazis to power was not a reflection of some inherent unreasonableness of the German people but an indicator of the utter collapse of German society. Eager to survive and frustrated by the callousness of a political structure that neither delivered nor engaged in dialogue, middle-class Germany opted for a dictatorial philosophy that had the potential of providing immediate relief. German society at that time had completely lost the sense and ability to transform; hence, temporary transition was the only option. The choice itself indicated the depravity of the then-existing political system, for which the best option was Hitler. Every act of political misdemeanour, such as making concessions to the forces of evil and compromising on the larger public good, comes back to haunt a state and its society. The Nazi party, which was a natural beneficiary of the flawed system, made gains through the excellent use of technology and modern tools of communication. Part of the problem of a weakening political structure is that the stakeholders are unable to reinvent themselves. The crumbling power of the Weimar Republic forced various powerful interest groups to search for a more potent player with the capacity to generate a more gripping ideology, which the Nazis presented in the form of fascism, an extreme form of nationalism. Not that foreign players did not have a hand in Germany's military and economic devastation, but fascism held European powers entirely responsible for the chaos. At one level, the society had become very politicised and, at another, extremely apolitical, because the formula for changing conditions was absolute force and not dialogue and negotiations.
Pragmatism is indeed a double-edged sword. Political survival is necessary, but not at the cost of ideals and values. Hitler was the choice of a society that had forgotten how to negotiate, engage in dialogue and stand up for its principles. In the mid-1930s, when everyone in Germany thought they were transitioning to a safe option, they were actually burning all their boats. Transition does not happen without transformation! Published in The Express Tribune, November 6th, 2011.
A decision tree analysis is a technique in which a diagram (in this case referred to as a decision tree) is used to assist the project leader and the project team in making a difficult decision. The decision tree is a diagram that presents the decision under consideration and, along different branches, the implications that may arise from choosing one path or another. A decision tree analysis is often conducted when a number of future outcomes or scenarios remain uncertain; it is a structured form of brainstorming that can help ensure all factors are given proper consideration during decision making. The analysis takes into account a number of factors, including the probabilities, costs, and rewards of each event and decision to be made in the future. It also uses expected monetary value (EMV) analysis to assist in determining the relative value of each alternative action. This term is defined in the 3rd and 4th editions of the PMBOK.
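As a rough illustration of the expected monetary value arithmetic described above, the sketch below evaluates a hypothetical two-branch decision. The probabilities, costs, and payoffs are invented purely for the example; they are not from the PMBOK.

```python
# Hypothetical two-branch decision evaluated by expected monetary value (EMV).
# All numbers below are invented for illustration.

def emv(outcomes):
    """Expected monetary value: sum of probability * net payoff over outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Branch A: invest 50 up front; 60% chance of a 200 return, 40% chance of only 10.
branch_a = emv([(0.6, 200 - 50), (0.4, 10 - 50)])

# Branch B: do nothing; guaranteed 0.
branch_b = emv([(1.0, 0)])

# The decision tree recommends the branch with the higher EMV.
best = max([("A", branch_a), ("B", branch_b)], key=lambda t: t[1])
print(best[0], round(best[1], 2))  # A 74.0
```

In a full analysis each branch would itself fan out into further decision and chance nodes, with the EMV computed from the leaves back toward the root.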
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or—as here—simply a vector) is a geometric object that has magnitude (or length) and direction and can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by Vectors play an important role in physics: velocity and acceleration of a moving object and forces acting on it are all described by vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can be still represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. It is important to distinguish Euclidean vectors from the more general concept in linear algebra of vectors as elements of a vector space. General vectors in this sense are fixed-size, ordered collections of items as in the case of Euclidean vectors, but the individual items may not be real numbers, and the normal Euclidean concepts of length, distance and angle may not be applicable. (A vector space with a definition of these concepts is called an inner product space.) In turn, both of these definitions of vector should be distinguished from the statistical concept of a random vector. 
The individual items in a random vector are individual real-valued random variables, and are often manipulated using the same sort of mathematical vector and matrix operations that apply to the other types of vectors, but otherwise usually behave more like collections of individual values. Concepts of length, distance and angle do not normally apply to these vectors, either; rather, what links the values together is the potential correlations among them. The word "vector" originates from the Latin vehere, meaning "to carry". It was first used by 18th-century astronomers investigating planetary revolution around the Sun. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space. In pure mathematics, a vector is defined more generally as any element of a vector space. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above-mentioned geometric entities are a special kind of vector, as they are elements of a special kind of vector space called Euclidean space. This article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as geometric, spatial, or Euclidean vectors. Being an arrow, a Euclidean vector possesses a definite initial point and terminal point. A vector with fixed initial and terminal point is called a bound vector. When only the magnitude and direction of the vector matter, the particular initial point is of no importance, and the vector is called a free vector. Thus two arrows and in space represent the same free vector if they have the same magnitude and direction: that is, they are equivalent if the quadrilateral ABB′A′ is a parallelogram.
If the Euclidean space is equipped with a choice of origin, then a free vector is equivalent to the bound vector of the same magnitude and direction whose initial point is the origin. The term vector also has generalizations to higher dimensions and to more formal approaches with much wider applications.

Examples in one dimension

Since the physicist's concept of force has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force F of 15 newtons. If the positive axis is also directed rightward, then F is represented by the vector 15 N, and if positive points leftward, then the vector for F is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δs of 4 meters to the right would be 4 m or −4 m, and its magnitude would be 4 m regardless.

In physics and engineering

Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and which adheres to the rules of vector addition. An example is velocity, the magnitude of which is speed. For example, the velocity 5 meters per second upward could be represented by the vector (0,5) (in 2 dimensions with the positive y axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction and follows the rules of vector addition. Vectors also describe many other physical quantities, such as displacement, linear acceleration, angular acceleration, linear momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field. Examples of quantities that have magnitude and direction but fail to follow the rules of vector addition are angular displacement and electric current. Consequently, these are not vectors.
In Cartesian space

In the Cartesian coordinate system, a vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points A = (1,0,0) and B = (0,1,0) in space determine the free vector pointing from the point x=1 on the x-axis to the point y=1 on the y-axis. Typically in Cartesian coordinates, one considers primarily bound vectors. A bound vector is determined by the coordinates of the terminal point, its initial point always having the coordinates of the origin O = (0,0,0). Thus the bound vector represented by (1,0,0) is a vector of unit length pointing from the origin along the positive x-axis. The coordinate representation of vectors allows the algebraic features of vectors to be expressed in a convenient numerical fashion. For example, the sum of the vectors (1,2,3) and (−2,0,4) is the vector
- (1, 2, 3) + (−2, 0, 4) = (1 − 2, 2 + 0, 3 + 4) = (−1, 2, 7).

Euclidean and affine vectors

In the geometrical and physical settings, it is sometimes possible to associate, in a natural way, a length or magnitude and a direction to vectors. In turn, the notion of direction is strictly associated with the notion of an angle between two vectors. When the length of vectors is defined, it is possible to also define a dot product — a scalar-valued product of two vectors — which gives a convenient algebraic characterization of both length (the square root of the dot product of a vector by itself) and angle (a function of the dot product between any two non-zero vectors). In three dimensions, it is further possible to define a cross product, which supplies an algebraic characterization of the area and orientation in space of the parallelogram defined by two vectors (used as sides of the parallelogram). However, it is not always possible or desirable to define the length of a vector in a natural way. This more general type of spatial vector is the subject of vector spaces (for bound vectors) and affine spaces (for free vectors).
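The component-wise sum above can be checked in a few lines of plain Python (a minimal sketch; the helper name `vec_add` is ours, not a standard library function):

```python
def vec_add(a, b):
    """Add two vectors component-wise, as in the coordinate example."""
    return tuple(x + y for x, y in zip(a, b))

print(vec_add((1, 2, 3), (-2, 0, 4)))  # (-1, 2, 7)
```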
An important example is Minkowski space, which is central to our understanding of special relativity; there, a generalization of length permits non-zero vectors to have zero length. Other physical examples come from thermodynamics, where many of the quantities of interest can be considered vectors in a space with no notion of length or angle. In physics, as well as mathematics, a vector is often identified with a tuple, or list of numbers, which depend on some auxiliary coordinate system or reference frame. When the coordinates are transformed, for example by rotation or stretching, the components of the vector also transform. The vector itself has not changed, but the reference frame has, so the components of the vector (or measurements taken with respect to the reference frame) must change to compensate. The vector is called covariant or contravariant depending on how the transformation of the vector's components is related to the transformation of coordinates. In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement) or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance, such as gradient. If you change units (a special case of a change of coordinates) from meters to millimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm–a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm–a covariant change in value. See covariance and contravariance of vectors. Tensors are another type of quantity that behave in this way; in fact, a vector is a special type of tensor. In pure mathematics, a vector is any element of a vector space over some field and is often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition because they are contravariant with respect to the ambient space.
Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction". The concept of a vector, as we know it today, evolved gradually over a period of more than 200 years. About a dozen people made significant contributions. The immediate predecessors of vectors were quaternions, devised by William Rowan Hamilton in 1843 as a generalization of complex numbers. Initially, his search was for a formalism to enable the analysis of three-dimensional space in the same way that complex numbers had enabled analysis of two-dimensional space, but he arrived at a four-dimensional system. In 1846 Hamilton divided his quaternions into the sum of real and imaginary parts that he respectively called "scalar" and "vector":
- The algebraically imaginary part, being geometrically constructed by a straight line, or radius vector, which has, in general, for each determined quaternion, a determined length and determined direction in space, may be called the vector part, or simply the vector of the quaternion.
Several other mathematicians developed vector-like systems around the same time as Hamilton, including Giusto Bellavitis, Augustin Cauchy, Hermann Grassmann, August Möbius, Comte de Saint-Venant, and Matthew O'Brien. Grassmann's 1840 work Theorie der Ebbe und Flut (Theory of the Ebb and Flow) was the first system of spatial analysis similar to today's system, and had ideas corresponding to the cross product, scalar product and vector differentiation. Grassmann's work was largely neglected until the 1870s. In 1878 William Kingdon Clifford published Elements of Dynamic. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth.
Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures, which banished any mention of quaternions in the development of vector calculus. Vectors are usually denoted in lowercase boldface, as a, or lowercase italic boldface, as a. (Uppercase letters are typically used to represent matrices.) Other conventions include or a, especially in handwriting. Alternatively, some use a tilde (~) or a wavy underline drawn beneath the symbol, e.g. , which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B (see figure), it can also be denoted as or AB. Especially in literature in German it was common to represent vectors with small fraktur letters such as . Vectors are usually shown in graphs or other diagrams as arrows (directed line segments), as illustrated in the figure. Here the point A is called the origin, tail, base, or initial point; point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction. On a two-dimensional diagram, a vector perpendicular to the plane of the diagram is sometimes desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram.
These can be thought of as viewing the tip of an arrow head-on and viewing the vanes of an arrow from the back. In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (an n-tuple). These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components (or scalar projections) of the vector on the axes of the coordinate system. As an example in two dimensions (see figure), the vector from the origin O = (0,0) to the point A = (2,3) is simply written as The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation is usually not deemed necessary and very rarely used. In three-dimensional Euclidean space (or ), vectors are identified with triples of scalar components: - also written Another way to represent a vector in n dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them: These have the intuitive interpretation as vectors of unit length pointing along the x, y, and z axes of a Cartesian coordinate system, respectively. In terms of these, any vector a in can be expressed in the form: where a1, a2, a3 are called the vector components (or vector projections) of a on the basis vectors or, equivalently, on the corresponding Cartesian axes x, y, and z (see figure), while a1, a2, a3 are the respective scalar components (or scalar projections). In introductory physics textbooks, the standard basis vectors are often instead denoted (or , in which the hat symbol ^ typically denotes unit vectors). In this case, the scalar and vector components are denoted respectively ax, ay, az, and ax, ay, az (note the difference in boldface).
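The expansion of a vector as a1·e1 + a2·e2 + a3·e3 over the standard basis can be made concrete with a small sketch (the helper name `from_components` is ours, chosen for illustration):

```python
# Standard basis vectors of three-dimensional space.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def from_components(a1, a2, a3):
    """Recover a = a1*e1 + a2*e2 + a3*e3 by component-wise arithmetic."""
    return tuple(a1 * x + a2 * y + a3 * z
                 for x, y, z in zip(e1, e2, e3))

print(from_components(2, 3, 5))  # (2, 3, 5)
```

The result equals the tuple of scalar components, which is exactly why the coordinate representation works.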
Thus,

As explained above, a vector is often described by a set of vector components that add up to form the given vector. Typically, these components are the projections of the vector on a set of mutually perpendicular reference axes (basis vectors). The vector is said to be decomposed or resolved with respect to that set. However, the decomposition of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected. Moreover, the use of Cartesian unit vectors such as as a basis in which to represent a vector is not mandated. Vectors can also be expressed in terms of the unit vectors of a cylindrical coordinate system () or spherical coordinate system (). The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry, respectively. The choice of a coordinate system doesn't affect the properties of a vector or its behaviour under transformations. A vector can also be decomposed with respect to "non-fixed" axes which change their orientation as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively normal and tangent to a surface (see figure). Moreover, the radial and tangential components of a vector relate to the radius of rotation of an object. The former is parallel to the radius and the latter is orthogonal to it. In these cases, each of the components may in turn be decomposed with respect to a fixed coordinate system or basis set (e.g., a global coordinate system, or inertial reference frame).

Basic properties

The following section uses the Cartesian coordinate system with basis vectors and assumes that all vectors have the origin as a common base point. A vector a will be written as Two vectors are said to be equal if they have the same magnitude and direction. Equivalently, they will be equal if their coordinates are equal.
So two vectors are equal if

Addition and subtraction

Assume now that a and b are not necessarily equal vectors, but that they may have different magnitudes and directions. The sum of a and b is The addition may be represented graphically by placing the tail of the arrow b at the head of the arrow a, and then drawing an arrow from the tail of a to the head of b. The new arrow drawn represents the vector a + b, as illustrated below: This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, this point will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c). The difference of a and b is Subtraction of two vectors can be geometrically defined as follows: to subtract b from a, place the tails of a and b at the same point, and then draw an arrow from the head of b to the head of a. This new arrow represents the vector a − b, as illustrated below: Subtraction of two vectors may also be performed by adding the opposite of the second vector to the first vector, that is, a − b = a + (−b).

Scalar multiplication

A vector may also be multiplied, or re-scaled, by a real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector. If r is negative, then the vector changes direction: it flips around by an angle of 180°.
For example, r = −1 reverses a vector and r = 2 doubles its length. Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a − b = a + (−1)b.

Length

The length of the vector a can be computed with the Euclidean norm

||a|| = sqrt(a1^2 + a2^2 + a3^2),

which is a consequence of the Pythagorean theorem, since the basis vectors e1, e2, e3 are orthogonal unit vectors. This happens to be equal to the square root of the dot product, discussed below, of the vector with itself:

||a|| = sqrt(a ∙ a).

Unit vector

A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â. To normalize a vector a = [a1, a2, a3], scale the vector by the reciprocal of its length ||a||. That is:

â = a / ||a|| = [a1/||a||, a2/||a||, a3/||a||].

Null vector

The null vector (or zero vector) is the vector with length zero. Written out in coordinates, the vector is (0, 0, 0). It is commonly denoted 0 (in boldface) or simply 0. Unlike any other vector, it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector which is a multiple of the null vector). The sum of the null vector with any vector a is a (that is, 0 + a = a).

Dot product

The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b and is defined as:

a ∙ b = ||a|| ||b|| cos θ,

where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means that a and b are drawn with a common start point, and then the length of a is multiplied with the length of that component of b that points in the same direction as a.
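The norm, normalization, and the relation ||a|| = sqrt(a ∙ a) are easy to check numerically. Below is a minimal Python sketch (helper names are my own):

```python
import math

def dot(a, b):
    # sum of products of corresponding components
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    # Euclidean length: the square root of the dot product of a with itself
    return math.sqrt(dot(a, a))

def normalize(a):
    # scale a by the reciprocal of its length to get a unit vector
    n = norm(a)
    return tuple(x / n for x in a)

a = (3.0, 4.0, 0.0)
assert norm(a) == 5.0                         # Pythagorean theorem: sqrt(9 + 16)
assert math.isclose(norm(normalize(a)), 1.0)  # a unit vector has length one
```

Note that `normalize` would divide by zero for the null vector, matching the remark that the null vector cannot be normalized.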
The dot product can also be defined as the sum of the products of the components of each vector:

a ∙ b = a1b1 + a2b2 + a3b3.

Cross product

The cross product (also called the vector product or outer product) is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as

a × b = ||a|| ||b|| sin(θ) n,

where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (−n). The length of a × b can be interpreted as the area of the parallelogram having a and b as sides. In components, the cross product can be written as

a × b = (a2b3 − a3b2)e1 + (a3b1 − a1b3)e2 + (a1b2 − a2b1)e3.

For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below).

Scalar triple product

The scalar triple product (also called the box product or mixed triple product) is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by (a b c) and defined as:

(a b c) = a ∙ (b × c).

It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not enclose a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed.
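A short Python sketch of the component formulas (function names are my own) confirms the perpendicularity, right-handedness, and volume interpretations:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # components from the 3-by-3 determinant expansion
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def triple(a, b, c):
    # box product (a b c) = a . (b x c)
    return dot(a, cross(b, c))

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert cross(e1, e2) == e3              # completes a right-handed system
assert dot(cross(e1, e2), e1) == 0      # result is perpendicular to both arguments
assert triple(e1, e2, e3) == 1          # unit cube volume, right-handed triple
assert triple(e1, e2, (1, 1, 0)) == 0   # coplanar vectors are linearly dependent
```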
In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows. The scalar triple product is linear in all three entries and anti-symmetric in the following sense:

(a b c) = (c a b) = (b c a) = −(a c b) = −(b a c) = −(c b a).

Multiple Cartesian bases

All examples thus far have dealt with vectors expressed in terms of the same basis, namely, e1, e2, e3. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. For example, using the vector a from above,

a = a1e1 + a2e2 + a3e3 = un1 + vn2 + wn3,

where n1, n2, n3 form another orthonormal basis not aligned with e1, e2, e3. The values of u, v, and w are such that the resulting vector sum is exactly a.

It is not uncommon to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In order to perform many of the operations defined above, it is necessary to know the vectors in terms of the same basis. One simple way to express a vector known in one basis in terms of another uses column matrices that represent the vector in each basis along with a third matrix containing the information that relates the two bases. For example, in order to find the values of u, v, and w that define a in the n1, n2, n3 basis, a matrix multiplication may be employed in the form

u = c11 a1 + c12 a2 + c13 a3,
v = c21 a1 + c22 a2 + c23 a3,
w = c31 a1 + c32 a2 + c33 a3,

where each matrix element cjk is the direction cosine relating nj to ek. The term direction cosine refers to the cosine of the angle between two unit vectors, which is also equal to their dot product.
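As a hedged illustration (plain Python; the helper names and the choice of n basis, a 90° rotation of the e basis about e3, are mine), the direction cosine matrix recovers the components u, v, w of a vector known in the e basis:

```python
import math

def direction_cosine_matrix(n_basis, e_basis):
    # c[j][k] is the direction cosine relating n_j to e_k,
    # i.e. the dot product of the two unit vectors
    dot = lambda p, q: sum(x * y for x, y in zip(p, q))
    return [[dot(nj, ek) for ek in e_basis] for nj in n_basis]

def to_n_basis(C, coords):
    # matrix multiplication: components in the e basis -> components in the n basis
    return tuple(sum(C[j][k] * coords[k] for k in range(3)) for j in range(3))

e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# example n basis: the e basis rotated 90 degrees about e3
s, c = math.sin(math.pi / 2), math.cos(math.pi / 2)
n = [(c, s, 0), (-s, c, 0), (0, 0, 1)]

C = direction_cosine_matrix(n, e)
u, v, w = to_n_basis(C, (2.0, 3.0, 5.0))  # a = 2e1 + 3e2 + 5e3
assert math.isclose(u, 3.0) and math.isclose(v, -2.0) and math.isclose(w, 5.0)
```

The same vector a has components (2, 3, 5) in the e basis and (3, −2, 5) in the rotated n basis, as the text's change-of-basis formula predicts.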
By referring collectively to e1, e2, e3 as the e basis and to n1, n2, n3 as the n basis, the matrix containing all the cjk is known as the "transformation matrix from e to n", or the "rotation matrix from e to n" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from e to n" (because it contains direction cosines). By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines relating the successive bases is known.

Other dimensions

With the exception of the cross and triple products, the above formulae generalise to two dimensions and to higher dimensions. For example, addition generalises to two dimensions as

a + b = (a1 + b1)e1 + (a2 + b2)e2

and in four dimensions as

a + b = (a1 + b1)e1 + (a2 + b2)e2 + (a3 + b3)e3 + (a4 + b4)e4.

A seven-dimensional cross product is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is, however, no natural way of selecting one of the possible such products.

Vectors have many uses in physics and other sciences.

Length and units

In abstract vector spaces, the length of the arrow depends on a dimensionless scale. If it represents, for example, a force, the "scale" is of physical dimension length/force. Thus there is typically consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1 m:50 N and 1:250 respectively. Equal length of vectors of different dimension has no particular significance unless there is some proportionality constant inherent in the system that the diagram represents. Also, the length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance.

Vector-valued functions

Often in areas of physics and mathematics, a vector evolves in time, meaning that it depends on a time parameter t.
For instance, if r represents the position vector of a particle, then r(t) gives a parametric representation of the trajectory of the particle. Vector-valued functions can be differentiated and integrated by differentiating or integrating the components of the vector, and many of the familiar rules from calculus continue to hold for the derivative and integral of vector-valued functions.

Position, velocity and acceleration

The position of a point x = (x1, x2, x3) in three-dimensional space can be represented as a position vector whose base point is the origin:

x = x1e1 + x2e2 + x3e3.

The position vector has dimensions of length.

Given two points x = (x1, x2, x3) and y = (y1, y2, y3), their displacement is the vector

y − x = (y1 − x1)e1 + (y2 − x2)e2 + (y3 − x3)e3,

which specifies the position of y relative to x. The length of this vector gives the straight-line distance from x to y. Displacement has the dimensions of length.

For constant velocity v the position at time t will be

x(t) = tv + x0,

where x0 is the position at time t = 0. Velocity is the time derivative of position. Its dimensions are length/time.

Force, energy, work

Vectors as directional derivatives

A vector may also be defined as a directional derivative. Consider a function f(x^α) and a curve x^α(τ). Then the directional derivative of f along the curve is

df/dτ = Σ_α (∂f/∂x^α)(dx^α/dτ),

where the index α is summed over the appropriate number of dimensions (for example, from 1 to 3 in 3-dimensional Euclidean space, from 0 to 3 in 4-dimensional spacetime, etc.). Then consider a vector tangent to the curve:

t^α = dx^α/dτ.

The directional derivative can be rewritten in differential form (without a given function f) as

d/dτ = Σ_α t^α ∂/∂x^α.

Therefore any directional derivative can be identified with a corresponding vector, and any vector can be identified with a corresponding directional derivative. A vector can therefore be defined precisely as a differential operator:

a ≡ Σ_α a^α ∂/∂x^α.

Vectors, pseudovectors, and transformations

An alternative characterization of Euclidean vectors, especially in physics, describes them as lists of quantities which behave in a certain way under a coordinate transformation. A contravariant vector is required to have components that "transform like the coordinates" under changes of coordinates such as rotation and dilation.
The vector itself does not change under these operations; instead, the components of the vector make a change that cancels the change in the spatial axes, in the same way that coordinates change. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector, like the coordinates, would reduce in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that a coordinate vector x is transformed to x′ = Mx, then a contravariant vector v must be similarly transformed via v′ = Mv. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities. For example, if v consists of the x-, y-, and z-components of velocity, then v is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include displacement, velocity, electric field, momentum, force, and acceleration.

In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix as the coordinate transition is equivalent to defining a contravariant vector to be a tensor of contravariant rank one. Alternatively, a contravariant vector is defined to be a tangent vector, and the rules for transforming a contravariant vector follow from the chain rule.
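The transformation rule v′ = Mv can be illustrated in a short Python sketch (the particular matrix and the box example are my own illustration, following the argument above):

```python
def apply(M, v):
    # transform the components: v' = M v
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

# a rotation of the coordinate system by 90 degrees about the z-axis
M = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]

# velocity components transform with the coordinates, so velocity is contravariant
velocity = (1.0, 0.0, 0.0)
assert apply(M, velocity) == (0.0, -1.0, 0.0)

# the (length, width, height) triple of a box is NOT contravariant:
# physically rotating the box by 90 degrees merely swaps length and width,
# which is not what the coordinate transformation M produces
box = (2.0, 3.0, 1.0)
rotated_box = (3.0, 2.0, 1.0)
assert apply(M, box) != rotated_box
```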
Some vectors transform like contravariant vectors, except that when they are reflected through a mirror, they flip and gain a minus sign. A transformation that switches right-handedness to left-handedness and vice versa like a mirror does is said to change the orientation of space. A vector which gains a minus sign when the orientation of space changes is called a pseudovector or an axial vector. Ordinary vectors are sometimes called true vectors or polar vectors to distinguish them from pseudovectors. Pseudovectors occur most frequently as the cross product of two ordinary vectors.

One example of a pseudovector is angular velocity. Driving in a car and looking forward, each of the wheels has an angular velocity vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the reflection of this angular velocity vector points to the right, but the actual angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors.

See also

- Affine space, which distinguishes between vectors and points
- Array data structure or Vector (Computer Science)
- Banach space
- Clifford algebra
- Complex number
- Coordinate system
- Covariance and contravariance of vectors
- Four-vector, a non-Euclidean vector in Minkowski space (i.e. four-dimensional spacetime), important in relativity
- Function space
- Grassmann's Ausdehnungslehre
- Hilbert space
- Normal vector
- Null vector
- Tangential and normal components (of a vector)
- Unit vector
- Vector bundle
- Vector calculus
- Vector notation
- Vector-valued function

- Ivanov 2001
- Heinbockel 2001
- Ito 1993, p. 1678; Pedoe 1988
- The Oxford English Dictionary (2nd ed.). London: Clarendon Press. 2001. ISBN 9780195219425.
- Ito 1993, p. 1678
- Thermodynamics and Differential Forms
- Michael J.
Crowe, A History of Vector Analysis; see also his lecture notes on the subject.
- W. R. Hamilton (1846) London, Edinburgh & Dublin Philosophical Magazine, 3rd series, 29: 27
- U. Guelph Physics Dept., "Torque and Angular Acceleration"
- Kane & Levinson 1996, pp. 20–22

- Apostol, T. (1967). Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra. John Wiley and Sons. ISBN 978-0-471-00005-1.
- Apostol, T. (1969). Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications. John Wiley and Sons. ISBN 978-0-471-00007-5.
- Kane, Thomas R.; Levinson, David A. (1996), Dynamics Online, Sunnyvale, California: OnLine Dynamics, Inc.
- Heinbockel, J. H. (2001), Introduction to Tensor Calculus and Continuum Mechanics, Trafford Publishing, ISBN 1-55369-133-4
- Ito, Kiyosi (1993), Encyclopedic Dictionary of Mathematics (2nd ed.), MIT Press, ISBN 978-0-262-59020-4
- Ivanov, A.B. (2001), "Vector, geometric", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Pedoe, D. (1988). Geometry: A Comprehensive Course. Dover. ISBN 0-486-65812-0.
- Aris, R. (1990). Vectors, Tensors and the Basic Equations of Fluid Mechanics. Dover. ISBN 978-0-486-66110-0.
- Feynman, R., Leighton, R., and Sands, M. (2005). "Chapter 11". The Feynman Lectures on Physics, Volume I (2nd ed.). Addison Wesley. ISBN 978-0-8053-9046-9.

- Hazewinkel, Michiel, ed. (2001), "Vector", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Online vector identities (PDF)
- Introducing Vectors: A conceptual introduction (applied mathematics)
- Addition of forces (vectors) Java Applet
- French tutorials on vectors and their application to video games
Biomedical engineers at UC Davis have developed a plug-in interface for the microfluidic chips that will form the basis of the next generation of compact medical devices. They hope that the "fit to flow" interface will become as ubiquitous as the USB interface for computer peripherals. UC Davis filed a provisional patent on the invention Nov. 1. A paper describing the devices was published online Nov. 25 by the journal Lab on a Chip.

"We think there is a huge need for an interface to bridge microfluidics to electronic devices," said Tingrui Pan, assistant professor of biomedical engineering at UC Davis. Pan and graduate student Arnold Chen invented the chip and co-authored the paper.

Microfluidic devices use channels as small as a few micrometers across, cut into a plastic membrane, to carry out biological or chemical tests on a miniature scale. They could be used, for example, in compact devices for medical diagnosis, food safety or environmental monitoring. Cell phones with increasingly sophisticated cameras could be turned into microscopes that could read such tests in the field. But it is difficult to connect these chips to electronic devices that can read the results of a test and store, display or transmit them.

Pan thinks that the fit-to-flow connectors can be integrated with a standard peripheral component interconnect (PCI) device commonly used in consumer electronics, while an embedded micropump will provide on-demand, self-propelled microfluidic operations. With this standard connection scheme, chips that carry out different tests could be plugged into the same device — such as a cell phone, PDA or laptop — to read the results.

The work was supported by a National Science Foundation CAREER award to Pan, and a fellowship to Chen from UC Davis.
See also the Browse High School Equations, Graphs, Translations Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Direct and indirect variation. Slope, slope-intercept, standard form. - Evaluating Composite Functions [1/31/1996] If f(x) = 3, and g(x) = x^2 + 1 divided by the square root of (x^2 -1), how do you evaluate the composite function of f composed with g (f circle - Finding a Point Equidistant From Two Other Points [8/18/1996] Point A is (-5,-3), and point B is (-1,-5); to be equidistant from A and B, what should the value of k be for the point (3,k)? - Finding a Point on a Circle [5/28/1996] How do I find the y1 value? - Finding Asymptotes [07/06/1998] In the equation y = (m/x) + c, how do the values of m and c affect the graph? How do you find the asymptotes? - Finding Equations of Parabolas [12/12/1996] Determine the equation of the parabola whose vertex is at (1,3) and whose directrix is y=x. - Finding Focus and Directrix [04/12/2001] How can you find the focus and directrix when given a formula for a - Finding Intervals in Trig Graphs [10/25/1998] Can you help me with graphing sine and cosine functions? How do you mark off the intervals? - Finding Point of Intersection [05/08/2001] Is there a formula to tell if two lines, given by point coordinates, cross and at what coordinates they cross? - Finding the Center of a Circle [06/06/1999] How can I find the center and radius of a circle that is in the form: Ax^2 + Cy^2 + Dx + Ey + F = 0? - Finding the Equation of a Curve [10/02/2000] I'm looking for an equation for a curve that describes the way people store information about pending events over time. I would also like to know how to find the equation of any curve. - Finding the Equation of a Line [01/08/1997] Given either a point and a slope or two points, how do you find the equation of a line in both point-slope and standard form? 
- Finding the Equation of a Parabola [04/28/2001] How can I find the equation of a parabola whose zeros are at 0 and 6, and whose minimum value is -9? - Finding the Equations of Two Circles [9/30/1995] Find the equation of the circles with the center at (4,-7) and touching (externally and internally) the circle with the equation: (X^2)+(Y^2)+4X- - Finding the Table's Diameter [05/05/2001] A circular table is pushed into the corner of a room so that it touches both walls. On the edge of the table is a scratch 8" from one wall and 9" from the other wall. What is the diameter of the table? - Finding the Zeros of a Function [05/12/1999] How would I find the zeros of P(x) = (x^2-3)(2x-7)^2(x-1)^3, and what is meant by "state the multiplicity of each"? - Finding Three Lines that Form a Triangle [08/30/2001] Find an equation for each of the lines that goes through the point (3,2) forming a triangle of area 12 with the coordinate axes. - First Principle Hyperbolas [01/05/1999] How do you solve for the standard equation of a hyperbola given its - Folium of Descartes and Parametric Equations [11/23/1998] How do you plot an implicit function, such as the folium of Descartes, with the equation y^3 + x^3 = 3xy? - Formula for a Helix [07/08/1997] What is the formula that gives the height above the ground of a point on - The General Polynomial Function [09/17/1998] How do you relate a polynomial like y=6x-2 to the general polynomial function? How do I graph and find the inverse of y=sqrt(2)? - Generating Fractal Equations [7/1/1996] Is there any way to generate a fractal equation from a set of numbers? - Given n Points, find a Polynomial Function... [6/27/1996] Find a polynomial F(x) of degree less than n so that the graph of F passes through all of the points. - Graphing Absolute Values [07/03/2002] I have a function, f(x), that I can graph, but I don't understand how to graph |f(x)|. - Graphing a Circle [04/14/1998] I would like to know how to graph the equation of a circle. 
- Graphing a Function with Asymptotes [07/05/1998] How do you find the asymptotes of f(x) = (2x + 1)/(x - 3)? How do you use the asymptotes to graph the function? - Graphing a Line Given a Point on It and Its Slope [03/07/2004] I am completely confused about how to graph lines. The question is asking me to graph the line that goes through the point (0,2) with a slope m = 1/4. I am not sure what slope m = 1/4 means. - Graphing and Understanding Conic Equations [12/11/1995] How can I sketch and find the vertices of this conic equation? - Graphing an Ellipse [11/20/1998] How do you graph an ellipse? What is the equation? - Graphing an Equation [11/08/1997] Please explain how to graph this equation: y = a + b(x) + c(x^2) + ... - Graphing a Parabola [4/3/1995] How do you graph the quadratic equation, y=3(x+1)^2 -20? - Graphing Complex and Real Numbers [02/26/2003] Since on the Cartesian plane we can only graph real zeros and real solutions, are we truly graphing the function when we omit the complex and imaginary zeros and solutions? - Graphing Direct Variation [02/27/2003] How do you graph direct variation? I thought you had to have two coordinates to graph a line. - Graphing Equations [08/15/1997] I know the coordinates of these equations, but what are the names and how do I graph them? - Graphing f(2x) and f(|x|) [09/03/2003] Given f(x), how do you graph f(2x) and f(|x|)? - Graphing Inequalities [06/05/2003] How do you graph y + 4x is less than 20 ? - Graphing Intercepts [11/06/1997] How would I find the intercept for a problem such as 3x-2y = 12? - Graphing Linear Equations [07/14/1998] Can you give me a step by step guide to graphing linear equations? - Graphing Linear Equations [12/26/2001] Give the intercepts of 7x - 2y = 2. - Graphing Multivariable Polynomials [02/20/2005] I have a few questions about multivariable polynomials. Say we have the polynomial x + y + z, can this be graphed? I also really don't know anything about the z axis or how to graph on it. 
- Graphing Parabolas [01/16/1997] How do you know how to graph a parabola from looking at its equation?
(February 9, 2012) GreenHouse builds compost tumblers SHREVEPORT, LA — GreenHouse students met in the community garden behind Centenary Square Sunday, February 5, to build new compost tumblers for the campus. "This is a way of life for me," said Bonnie Bernard, sophomore/junior biology and chemistry major. "We make something positive and productive out of what people have just thrown away. The bins and wooden stands were scrap that we collected, and the community garden is standing on land that no one thought was useful." Due to the GreenHouse students' efforts, Centenary now has two compost tumblers and a traditional compost three-bin system to collect waste. The tumblers and bin break down green material such as vegetable waste and brown material like paper and leaves to create compost, which will be used to fertilize the community garden. "This project takes care of a sustainability issue that is local and close-to-home," said junior and GreenHouse resident Krista McKinney. "Every day we are throwing away a ton of food. Now, we can collect some of the waste, compost it, and fertilize our garden." GreenHouse participants can earn academic credit by participating in the "Sustainability Projects Lab," designing and implementing a sustainability project on campus. Bernard along with McKinney is leading the project to create more composters and to collect materials for composting from the dining hall in Bynum Commons. They have created weekly rotating teams to collect the waste and load the composters. Centenary's GreenHouse living learning community is open to students who are interested in environmental issues and sustainability. GreenHouse students live and study together through team-taught learning labs, service learning and internship opportunities, and special events.
Plant scientists find mechanism that gives plants ‘balance’

MSU plant biologist Sheng Yang He studied rice plants for a research project designed to ultimately improve a plant's ability to grow while at the same time defend itself against pests and other stresses. The project was detailed in a paper published in the Proceedings of the National Academy of Sciences.

Apr. 23, 2012

EAST LANSING, Mich. — When a plant goes into defense mode in order to protect itself against harsh weather or disease, that’s good for the plant, but bad for the farmer growing the plant. Bad because when a plant acts to defend itself, it turns off its growth mechanism.

But now researchers at Michigan State University, as part of an international collaboration, have figured out how plants can make the “decision” between growth and defense, a finding that could help them strike a balance – keep safe from harm while continuing to grow.

Writing in the current issue of the Proceedings of the National Academy of Sciences, Sheng Yang He, an MSU professor of plant biology, and his team found that the two hormones that control growth (called gibberellins) and defense (known as jasmonates) literally come together in a crisis and figure out what to do.

“What we’ve discovered is that some key components of growth and defense programs physically interact with each other,” he said. “Communication between the two is how plants coordinate the two different situations. We now know where one of the elusive molecular links is between growth and defense.”

This is important because now that scientists know that this happens, they can work to figure out how to “uncouple” the two, He added.

“Perhaps at some point we can genetically or chemically engineer the plants so they don’t talk to each other that much,” He said. “This way we may be able to increase yield and defense at the same time.”

In this way, He said plants are a lot like humans.
We only have a certain amount of energy to use, and we have to make wise choices on how to use it.

“Plants, like people, have to learn to prioritize,” he said. “You can use your energy for growth, or use it for defense, but you can’t do them both at maximum level at the same time.”

The work was done on two different plants: rice, a narrow-leafed plant, and Arabidopsis, which has a broader leaf. This was significant because it demonstrated that this phenomenon occurs in a variety of plants.

He was one of the lead investigators on an international team of scientists that studied the issue. Other participating institutions included the Shanghai Institutes for Biological Sciences, Hunan Agricultural University, the University of Arkansas, Duke University, Yale University and Penn State University.

Funding was provided by the National Institutes of Health; the Chemical Sciences, Geosciences Division, Office of Basic Energy Sciences, Office of Science, Department of Energy; the Howard Hughes Medical Institute; and the Gordon and Betty Moore Foundation. He is a Howard Hughes Medical Institute/Gordon and Betty Moore Foundation Investigator. Earning the prestigious honor last year, He became one of only 15 in the country.

Michigan State University has been working to advance the common good in uncommon ways for more than 150 years. One of the top research universities in the world, MSU focuses its vast resources on creating solutions to some of the world’s most pressing challenges, while providing life-changing opportunities to a diverse and inclusive academic community through more than 200 programs of study in 17 degree-granting colleges.

Tom Oswald, Media Communications, Office: (517) 432-0920, Cell: (517) 281-7129, [email protected]

###
Today is the Feast Day of St. Irenaeus of Lyons, an early church father and apologist whose writings have been very influential in the life of the Church and the discipline of theology. Father Barron speaks about this great Saint here.

Last year, I participated in the annual meeting of the Academy of Catholic Theology, a group of about fifty theologians dedicated to thinking according to the mind of the church. Our general topic was the Trinity, and I had been invited to give one of the papers. I chose to focus on the work of St. Irenaeus, one of the earliest and most important of the fathers of the church. Irenaeus was born around 125 in the town of Smyrna in Asia Minor. As a young man, he became a disciple of Polycarp who, in turn, had been a student of John the Evangelist. Later in life, Irenaeus journeyed to Rome and eventually to Lyons where he became bishop after the martyrdom of the previous leader. Irenaeus died around the year 200, most likely as a martyr, though the exact details of his death are lost to history. His theological masterpiece is called Adversus Haereses (Against the Heresies), but it is much more than a refutation of the major objections to Christian faith in his time. It is one of the most impressive expressions of Christian doctrine in the history of the church, easily ranking with the De trinitate of St. Augustine and the Summa theologiae of St. Thomas Aquinas.

In my Washington paper, I argued that the master idea in Irenaeus’s theology is that God has no need of anything outside of himself. I realize that this seems, at first blush, rather discouraging, but if we follow Irenaeus’s lead, we see how, spiritually speaking, it opens up a whole new world. Irenaeus knew all about the pagan gods and goddesses who stood in desperate need of human praise and sacrifice, and he saw that a chief consequence of this theology is that people lived in fear.
Since the gods needed us, they were wont to manipulate us to satisfy their desires, and if they were not sufficiently honored, they could (and would) lash out. But the God of the Bible, who is utterly perfect in himself, has no need of anything at all. Even in his great act of making the universe, he doesn’t require any pre-existing material with which to work; rather (and Irenaeus was the first major Christian theologian to see this), he creates the universe ex nihilo (from nothing). And precisely because he doesn’t need the world, he makes the world in a sheerly generous act of love. Love, as I never tire of repeating, is not primarily a feeling or a sentiment, but instead an act of the will. It is to will the good of the other as other. Well, the God who has no self-interest at all, can only love. From this intuition, the whole theology of Irenaeus flows. God creates the cosmos in an explosion of generosity, giving rise to myriad plants, animals, planets, stars, angels, and human beings, all designed to reflect some aspect of his own splendor. Irenaeus loves to ring the changes on the metaphor of God as artist. Each element of creation is like a color applied to the canvas or a stone in the mosaic, or a note in an overarching harmony. If we can’t appreciate the consonance of the many features of God’s universe, it is only because our minds are too small to take in the Master’s design. And his entire purpose in creating this symphonic order is to allow other realities to participate in his perfection. At the summit of God’s physical creation stands the human being, loved into existence as all things are, but invited to participate even more fully in God’s perfection by loving his Creator in return. The most oft-cited quote from Irenaeus is from the fourth book of the Adversus Haereses and it runs as follows: “the glory of God is a human being fully alive.” Do you see how this is precisely correlative to the assertion that God needs nothing? 
The glory of the pagan gods and goddesses was not a human being fully alive, but rather a human being in submission, a human being doing what he’s been commanded to do. But the true God doesn’t play such manipulative games. He finds his joy in willing, in the fullest measure, our good. One of the most beautiful and intriguing of Irenaeus’s ideas is that God functions as a sort of benevolent teacher, gradually educating the human race in the ways of love. He imagined Adam and Eve, not so much as adults endowed with every spiritual and intellectual perfection, but more as children or teenagers, inevitably awkward in their expression of freedom. The long history of salvation is, therefore, God’s patient attempt to train his human creatures to be his friends. All of the covenants, laws, commandments, and rituals of both ancient Israel and the church should be seen in this light: not arbitrary impositions, but the structure that the Father God gives to order his children toward full flourishing. There is much that we can learn from this ancient master of the Christian faith, especially concerning the good news of the God who doesn’t need us! In addition, here is the YouTube video of Father Barron's commentary on St. Irenaeus: St. Irenaeus, pray for us! Father Barron is the Director of Word on Fire Catholic Ministries.
CitiSense is a portable, box-like sensor that can be deployed to provide local air quality information to everyone within range of the sensor, not just those carrying it. The device provides feedback on things like ozone, nitrogen dioxide, and carbon monoxide levels – the most common pollutants present in vehicle exhaust. The sensor can relay information to your phone using the EPA’s air quality ratings, from green (good) to purple (unsafe), allowing people to gauge conditions for their daily activities. This information could be especially helpful for people suffering from chronic conditions like asthma, or for those on whom air conditions have an increased effect during physical exertion, such as runners and others exercising outdoors. Researchers at UC San Diego believe this system could be implemented to provide a much more thorough data collection system on urban air pollution than the EPA currently provides. The sensors currently cost around $1,000 per unit to make, but much of the technology and computer software is already developed, which should reduce costs down the line. Additionally, the data collected can be relayed to home computers and other devices, allowing for a better understanding of urban pollution patterns, which tend to concentrate along major transportation routes rather than dissipate evenly. The project aims to make the ‘invisible visible,’ and hopefully everyone a little safer from pollution.
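The color banding the article refers to can be sketched in code. This is an illustrative example, not CitiSense software: the function name and structure are assumptions, while the numeric breakpoints follow the EPA's published Air Quality Index categories (0-50 green through 201-300 purple, with maroon above that).

```python
# Illustrative sketch of the EPA Air Quality Index color scale the
# article mentions. The breakpoints are the EPA's published AQI
# category bounds; the function itself is hypothetical, not CitiSense code.
def aqi_color(aqi: int) -> str:
    """Map an AQI value to its EPA color category."""
    bands = [
        (50, "green"),    # Good
        (100, "yellow"),  # Moderate
        (150, "orange"),  # Unhealthy for Sensitive Groups
        (200, "red"),     # Unhealthy
        (300, "purple"),  # Very Unhealthy
        (500, "maroon"),  # Hazardous
    ]
    for upper, color in bands:
        if aqi <= upper:
            return color
    return "maroon"  # values above 500 are beyond the official scale

print(aqi_color(42))   # a low reading falls in the "green" (Good) band
print(aqi_color(210))  # a high reading falls in the "purple" (Very Unhealthy) band
```

A phone app receiving raw sensor readings would first convert pollutant concentrations to an AQI number per the EPA's breakpoint tables, then apply a banding like this one for display.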
Nocturnal World (AKA Night Hike) Students will explore Glen Helen at night and learn about the special adaptations of nocturnal animals. Sensory awareness activities during the hike will teach students how to use their senses to navigate in the dark. On clear nights, students will stargaze and learn constellations. Objectives for the Nocturnal World Lesson - Become aware of, comfortable with, and respectful of the nocturnal world and the creatures that are active within it. - Describe adaptations of various nocturnal animals. - Describe Earth’s rotation and moon phases. - Observe at least two circumpolar constellations. Science Standards Covered by the Nocturnal World Lesson Earth and Space Sciences A.1 Describe how night and day are caused by Earth’s rotation. A.2 Explain that Earth is one of several planets to orbit the sun, and that the moon orbits the Earth. A.4 Explain that stars are like the sun, some being smaller and some larger, but so far away that they look like points of light.
Biblical Commentary on the Old Testament, by Carl Friedrich Keil and Franz Delitzsch, [1857-78], at sacred-texts.com Increase in the Number of the Israelites. Their Bondage in Egypt - Exodus 1 The promise which God gave to Jacob on his departure from Canaan (Gen 46:3) was perfectly fulfilled. The children of Israel settled down in the most fruitful province of the fertile land of Egypt, and grew there into a great nation (Exo 1:1-7). But the words which the Lord had spoken to Abram (Gen 15:13) were also fulfilled in relation to his seed in Egypt. The children of Israel were oppressed in a strange land, were compelled to serve the Egyptians (Exo 1:8-14), and were in great danger of being entirely crushed by them (Exo 1:15-22). To place the multiplication of the children of Israel into a strong nation in its true light, as the commencement of the realization of the promises of God, the number of the souls that went down with Jacob to Egypt is repeated from Gen 46:27 (on the number 70, in which Jacob is included, see the notes on this passage); and the repetition of the names of the twelve sons of Jacob serves to give to the history which follows a character of completeness within itself. "With Jacob they came, every one and his house," i.e., his sons, together with their families, their wives, and their children. The sons are arranged according to their mothers, as in Gen 35:23-26, and the sons of the two maid-servants stand last. Joseph, indeed, is not placed in the list, but brought into special prominence by the words, "for Joseph was in Egypt" (Exo 1:5), since he did not go down to Egypt along with the house of Jacob, and occupied an exalted position in relation to them there. After the death of Joseph and his brethren and the whole of the family that had first immigrated, there occurred that miraculous increase in the number of the children of Israel, by which the blessings of creation and promise were fully realised.
The words פּרוּ ישׁרצוּ (swarmed), and ירבּוּ point back to Gen 1:28 and Gen 8:17, and יעצמוּ to עצוּם גּוי in Gen 18:18. "The land was filled with them," i.e., the land of Egypt, particularly Goshen, where they were settled (Gen 47:11). The extraordinary fruitfulness of Egypt in both men and cattle is attested not only by ancient writers, but by modern travellers also (vid., Aristotelis hist. animal. vii. 4, 5; Columella de re rust. iii. 8; Plin. hist. n. vii. 3; also Rosenmüller a. und n. Morgenland i. p. 252). This blessing of nature was heightened still further in the case of the Israelites by the grace of the promise, so that the increase became extraordinarily great (see the comm. on Exo 12:37). The promised blessing was manifested chiefly in the fact, that all the measures adopted by the cunning of Pharaoh to weaken and diminish the Israelites, instead of checking, served rather to promote their continuous increase. "There arose a new king over Egypt, who knew not Joseph." ויּקם signifies he came to the throne, קוּם denoting his appearance in history, as in Deu 34:10. A "new king" (lxx: βασιλεὺς ἕτερος; the other ancient versions, rex novus) is a king who follows different principles of government from his predecessors. Cf. חדשׁים אלהים, "new gods," in distinction from the God that their fathers had worshipped, Jdg 5:8; Deu 32:17. That this king belonged to a new dynasty, as the majority of commentators follow Josephus (Note: Ant. ii. 9, 1. Τῆς βασιλείας εἰς ἄλλον οἶκον μεταληλυθυίας.) in assuming, cannot be inferred with certainty from the predicate new; but it is very probable, as furnishing the readiest explanation of the change in the principles of government. The question itself, however, is of no direct importance in relation to theology, though it has considerable interest in connection with Egyptological researches.
(Note: The want of trustworthy accounts of the history of ancient Egypt and its rulers precludes the possibility of bringing this question to a decision. It is true that attempts have been made to mix it up in various ways with the statements which Josephus has transmitted from Manetho with regard to the rule of the Hyksos in Egypt (c. Ap. i. 14 and 26), and the rising up of the "new king" has been identified sometimes with the commencement of the Hyksos rule, and at other times with the return of the native dynasty on the expulsion of the Hyksos. But just as the accounts of the ancients with regard to the Hyksos bear throughout the stamp of very distorted legends and exaggerations, so the attempts of modern inquirers to clear up the confusion of these legends, and to bring out the historical truth that lies at the foundation of them all, have led to nothing but confused and contradictory hypotheses; so that the greatest Egyptologists of our own days, - viz., Lepsius, Bunsen, and Brugsch - differ throughout, and are even diametrically opposed to one another in their views respecting the dynasties of Egypt. Not a single trace of the Hyksos dynasty is to be found either in or upon the ancient monuments. The documental proofs of the existence of a dynasty of foreign kings, which the Vicomte de Rougé thought that he had discovered in the Papyrus Sallier No. 1 of the British Museum, and which Brugsch pronounced "an Egyptian document concerning the Hyksos period," have since then been declared untenable both by Brugsch and Lepsius, and therefore given up again. Neither Herodotus nor Diodorus Siculus heard anything at all about the Hyksos, though the former made very minute inquiry of the Egyptian priests of Memphis and Heliopolis.
And lastly, the notices of Egypt and its kings, which we meet with in Genesis and Exodus, do not contain the slightest intimation that there were foreign kings ruling there either in Joseph's or Moses' days, or that the genuine Egyptian spirit which pervades these notices was nothing more than the "outward adoption" of Egyptian customs and modes of thought. If we add to this the unquestionably legendary character of the Manetho accounts, there is always the greatest probability in the views of those inquirers who regard the two accounts given by Manetho concerning the Hyksos as two different forms of one and the same legend, and the historical fact upon which this legend was founded as being the 430 years' sojourn of the Israelites, which had been thoroughly distorted in the national interests of Egypt. - For a further expansion and defence of this view see Hävernick's Einleitung in d. A. T. i. 2, pp. 338ff., Ed. 2 (Introduction to the Pentateuch, pp. 235ff. English translation).) The new king did not acknowledge Joseph, i.e., his great merits in relation to Egypt. ידע לא signifies here, not to perceive, or acknowledge, in the sense of not wanting to know anything about him, as in Sa1 2:12, etc. In the natural course of things, the merits of Joseph might very well have been forgotten long before; for the multiplication of the Israelites into a numerous people, which had taken place in the meantime, is a sufficient proof that a very long time had elapsed since Joseph's death. At the same time such forgetfulness does not usually take place all at once, unless the account handed down has been intentionally obscured or suppressed. If the new king, therefore, did not know Joseph, the reason must simply have been, that he did not trouble himself about the past, and did not want to know anything about the measures of his predecessors and the events of their reigns.
The passage is correctly paraphrased by Jonathan thus: non agnovit (חכּים) Josephum nec ambulavit in statutis ejus. Forgetfulness of Joseph brought the favour shown to the Israelites by the kings of Egypt to a close. As they still continued foreigners both in religion and customs, their rapid increase excited distrust in the mind of the king, and induced him to take steps for staying their increase and reducing their strength. The statement that "the people of the children of Israel" (ישׂראל בּני עם lit., "nation, viz., the sons of Israel;" for עם with the dist. accent is not the construct state, and ישראל בני is in apposition, cf. Ges. 113) were "more and mightier" than the Egyptians, is no doubt an exaggeration. "Let us deal wisely with them," i.e., act craftily towards them. התחכּם, sapientem se gessit (Ecc 7:16), is used here of political craftiness, or worldly wisdom combined with craft and cunning (κατασοφισώμεθα, lxx), and therefore is altered into התנכּל in Psa 105:25 (cf. Gen 37:18). The reason assigned by the king for the measures he was about to propose, was the fear that in case of war the Israelites might make common cause with his enemies, and then remove from Egypt. It was not the conquest of his kingdom that he was afraid of, but alliance with his enemies and emigration. עלה is used here, as in Gen 13:1, etc., to denote removal from Egypt to Canaan. He was acquainted with the home of the Israelites therefore, and cannot have been entirely ignorant of the circumstances of their settlement in Egypt. But he regarded them as his subjects, and was unwilling that they should leave the country, and therefore was anxious to prevent the possibility of their emancipating themselves in the event of war. - In the form תּקראנה for תּקרינה, according to the frequent interchange of the forms הל and אל (vid., Gen 42:4), the termination נה is transferred from the feminine plural to the singular, to distinguish the 3rd pers. fem.
from the 2nd pers., as in Jdg 5:26; Job 17:16 (vid., Ewald, 191c, and Ges. 47, 3, Anm. 3). Consequently there is no necessity either to understand מלחמה collectively as signifying soldiers, or to regard תּקראנוּ, the reading adopted by the lxx (συμβῆ ἡμῖν), the Samaritan, Chaldee, Syriac, and Vulgate, as "certainly the original," as Knobel has done. The first measure adopted (Exo 1:11) consisted in the appointment of taskmasters over the Israelites, to bend them down by hard labour. מסּים שׂרי bailiffs over the serfs. מסּים from מס signifies, not feudal service, but feudal labourers, serfs (see my Commentary on Kg1 4:6). ענּה to bend, to wear out any one's strength (Psa 102:24). By hard feudal labour (סבלות burdens, burdensome toil) Pharaoh hoped, according to the ordinary maxims of tyrants (Aristot. polit., 5, 9; Liv. hist. i. 56, 59), to break down the physical strength of Israel and lessen its increase - since a population always grows more slowly under oppression than in the midst of prosperous circumstances - and also to crush their spirit so as to banish the very wish for liberty. - ויּבן, and so Israel built (was compelled to build) provision or magazine cities (vid., Ch2 32:28, cities for the storing of the harvest), in which the produce of the land was housed, partly for purposes of trade, and partly for provisioning the army in time of war; - not fortresses, πόλεις ὀχυραί, as the lxx have rendered it. Pithom was Πάτουμος; it was situated, according to Herodotus (2, 158), upon the canal which commenced above Bubastis and connected the Nile with the Red Sea. This city is called Thou or Thoum in the Itiner. Anton., the Egyptian article pi being dropped, and according to Jomard (descript. t. 9, p. 368) is to be sought for on the site of the modern Abassieh in the Wady Tumilat. - Raemses (cf. Gen 47:11) was the ancient Heroopolis, and is not to be looked for on the site of the modern Belbeis.
In support of the latter supposition, Stickel, who agrees with Kurtz and Knobel, adduces chiefly the statement of the Egyptian geographer Makrizi, that in the (Jews') book of the law Belbeis is called the land of Goshen, in which Jacob dwelt when he came to his son Joseph, and that the capital of the province was el Sharkiyeh. This place is a day's journey (or, as others affirm, 14 hours) to the north-east of Cairo on the Syrian and Egyptian road. It served as a meeting-place in the middle ages for the caravans from Egypt to Syria and Arabia (Ritter, Erdkunde 14, p. 59). It is said to have been in existence before the Mohammedan conquest of Egypt. But the clue cannot be traced any farther back; and it is too far from the Red Sea for the Raemses of the Bible (vid., Exo 12:37). The authority of Makrizi is quite counterbalanced by the much older statement of the Septuagint, in which Jacob is made to meet his son Joseph in Heroopolis; the words of Gen 46:29, "and Joseph went up to meet Israel his father to Goshen," being rendered thus: εἰς συνάντησιν Ἰσραὴλ τῷ πατρὶ αὐτοῦ καθ' Ἡρώων πόλιν. Hengstenberg is not correct in saying that the later name Heroopolis is here substituted for the older name Raemses; and Gesenius, Kurtz, and Knobel are equally wrong in affirming that καθ' Ἡρώων πόλιν is supplied ex ingenio suo; but the place of meeting, which is given indefinitely as Goshen in the original, is here distinctly named. Now if this more precise definition is not an arbitrary conjecture of the Alexandrian translators, but sprang out of their acquaintance with the country, and is really correct, as Kurtz has no doubt, it follows that Heroopolis belongs to the γῆ Ῥαμεσσῆ (Gen 46:28, lxx), or was situated within it. But this district formed the centre of the Israelitish settlement in Goshen; for according to Gen 47:11, Joseph gave his father and brethren "a possession in the best of the land, in the land of Raemses."
Following this passage, the lxx have also rendered גּשׁן ארצה in Gen 46:28 by εἰς γῆν Ῥαμεσσῆ, whereas in other places the land of Goshen is simply called γῆ Γεσέμ (Gen 45:10; Gen 46:34; Gen 47:1, etc.). But if Heroopolis belonged to the γῆ Ῥαμεσσῆ, or the province of Raemses, which formed the centre of the land of Goshen that was assigned to the Israelites, this city must have stood in the immediate neighbourhood of Raemses, or have been identical with it. Now, since the researches of the scientific men attached to the great French expedition, it has been generally admitted that Heroopolis occupied the site of the modern Abu Keisheib in the Wady Tumilat, between Thoum = Pithom and the Birket Temsah or Crocodile Lake; and according to the Itiner. p. 170, it was only 24 Roman miles to the east of Pithom, - a position that was admirably adapted not only for a magazine, but also for the gathering-place of Israel prior to their departure (Exo 12:37). But Pharaoh's first plan did not accomplish his purpose (Exo 1:12). The multiplication of Israel went on just in proportion to the amount of the oppression (כּן = כּאשׁר prout, ita; פּרץ as in Gen 30:30; Gen 28:14), so that the Egyptians were dismayed at the Israelites (קוּץ to feel dismay, or fear, Num 22:3). In this increase of their numbers, which surpassed all expectation, there was the manifestation of a higher, supernatural, and to them awful power. But instead of bowing before it, they still endeavoured to enslave Israel through hard servile labour. In Exo 1:13, Exo 1:14 we have not an account of any fresh oppression; but "the crushing by hard labour" is represented as enslaving the Israelites and embittering their lives. פּרך hard oppression, from the Chaldee פּרך to break or crush in pieces. 
"They embittered their life with hard labour in clay and bricks (making clay into bricks, and working with the bricks when made), and in all kinds of labour in the field (this was very severe in Egypt on account of the laborious process by which the ground was watered, Deu 11:10), כּל־עבדתם את with regard to all their labour, which they worked (i.e., performed) through them (viz., the Israelites) with severe oppression." כל־ע את is also dependent upon ימררו, as a second accusative (Ewald, 277d). Bricks of clay were the building materials most commonly used in Egypt. The employment of foreigners in this kind of labour is to be seen represented in a painting, discovered in the ruins of Thebes, and given in the Egyptological works of Rosellini and Wilkinson, in which workmen who are evidently not Egyptians are occupied in making bricks, whilst two Egyptians with sticks are standing as overlookers; - even if the labourers are not intended for the Israelites, as the Jewish physiognomies would lead us to suppose. (For fuller details, see Hengstenberg's Egypt and the Books of Moses, p. 80ff. English translation). As the first plan miscarried, the king proceeded to try a second, and that a bloody act of cruel despotism. He commanded the midwives to destroy the male children in the birth and to leave only the girls alive. The midwives named in Exo 1:15, who are not Egyptian but Hebrew women, were no doubt the heads of the whole profession, and were expected to communicate their instructions to their associates. ויּאמר in Exo 1:16 resumes the address introduced by ויאמר in Exo 1:15. The expression על־האבנים, of which such various renderings have been given, is used in Jer 18:3 to denote the revolving table of a potter, i.e., the two round discs between which a potter forms his earthenware vessels by turning, and appears to be transferred here to the vagina out of which the child twists itself, as it were like the vessel about to be formed out of the potter's discs. 
Knobel has at length decided in favour of this explanation, at which the Targumists hint with their מתברא. When the midwives were called in to assist at a birth, they were to look carefully at the vagina; and if the child were a boy, they were to destroy it as it came out of the womb. וחיה for חייה, from חיי; see Gen 3:22. The ו takes kametz before the major pause, as in Gen 44:9 (cf. Ewald, 243a). But the midwives feared God (ha-Elohim, the personal, true God), and did not execute the king's command. When questioned upon the matter, the explanation which they gave was, that the Hebrew women were not like the delicate women of Egypt, but were חיות "vigorous" (had much vital energy: Abenezra), so that they gave birth to their children before the midwives arrived. They succeeded in deceiving the king with this reply, as childbirth is remarkably rapid and easy in the case of Arabian women (see Burckhardt, Beduinen, p. 78; Tischendorf, Reise i. p. 108). God rewarded them for their conduct, and "made them houses," i.e., gave them families and preserved their posterity. In this sense to "make a house" in Sa2 7:11 is interchanged with to "build a house" in Sa2 7:27 (vid., Rut 4:11). להם for להן as in Gen 31:9, etc. Through not carrying out the ruthless command of the king, they had helped to build up the families of Israel, and their own families were therefore built up by God. Thus God rewarded them, "not, however, because they lied, but because they were merciful to the people of God; it was not their falsehood therefore that was rewarded, but their kindness (more correctly, their fear of God), their benignity of mind, not the wickedness of their lying; and for the sake of what was good, God forgave what was evil." (Augustine, contra mendac. c. 19.) The failure of his second plan drove the king to acts of open violence. He issued commands to all his subjects to throw every Hebrew boy that was born into the river (i.e., the Nile).
The fact, that this command, if carried out, would necessarily have resulted in the extermination of Israel, did not in the least concern the tyrant; and this cannot be adduced as forming any objection to the historical credibility of the narrative, since other cruelties of a similar kind are to be found recorded in the history of the world. Clericus has cited the conduct of the Spartans towards the helots. Nor can the numbers of the Israelites at the time of the exodus be adduced as a proof that no such murderous command can ever have been issued; for nothing more can be inferred from this, than that the command was neither fully executed nor long regarded, as the Egyptians were not all so hostile to the Israelites as to be very zealous in carrying it out, and the Israelites would certainly neglect no means of preventing its execution. Even Pharaoh's obstinate refusal to let the people go, though it certainly is inconsistent with the intention to destroy them, cannot shake the truth of the narrative, but may be accounted for on psychological grounds, from the very nature of pride and tyranny which often act in the most reckless manner without at all regarding the consequences, or on historical grounds, from the supposition not only that the king who refused the permission to depart was a different man from the one who issued the murderous edicts (cf. Exo 2:23), but that when the oppression had continued for some time the Egyptian government generally discovered the advantage they derived from the slave labour of the Israelites, and hoped through a continuance of that oppression so to crush and break their spirits, as to remove all ground for fearing either rebellion, or alliance with their foes.
A Catalyst article about diamonds. The element carbon exists in a number of allotropic forms, but diamonds have always held a special allure, whether it be for their hardness or for their transparency. The article examines how they can be made artificially and looks at some of their uses. This article is from Catalyst: GCSE Science Review 2007, Volume 17, Issue 4. Catalyst is a science magazine for students aged 14-19 years. Annual subscriptions to print copies of the magazine can be purchased from Mindsets. HEALTH and SAFETY Any use of a resource that includes a practical activity must include a risk assessment. Please note that collections may contain ARCHIVE resources, which were developed at a much earlier date. Since that time there have been significant changes in the rules and guidance affecting laboratory practical work. Further information is provided in our Health and Safety guidance.
June 9, 2008 Exclusive: Indonesia — A Civil War Between Islamists And Moderates?: Part One of Two Indonesia is widely described as a “moderate” Islamic nation. In many ways this has been true. Recently, however, a conflict has been brewing between those who support moderate interpretations of Islam and those who support hardline and intolerant forms. This conflict has even been seen by some commentators to be pushing Indonesia to the very brink of a civil war. Today and tomorrow, I will try to explain the background of this conflict, whose causes belong as much to politics as they do to religion. Indonesia is certainly the most populous Muslim nation in the world. Its total population is around 235 million, with 85% of this figure being Muslim. The official language (Bahasa Indonesia) is a version of Malay, but other regional tongues exist on various islands. As an archipelago, Indonesia comprises a total of 17,508 islands, many of which were part of the Dutch East Indies. Indonesia sought independence from the Netherlands immediately following World War II. After 1949, the Dutch accepted Indonesia as a nation. The first ruler of Indonesia was Sukarno, who had declared independence in August 1945. He was overthrown in a coup led by General Suharto (Soeharto), who ruled from March 1968 until he was forced to resign in May 1998. Under Suharto’s rule, there was widespread corruption. Suharto’s son Tommy (Hutomo Mandala Putra) grew rich from embezzlement. Even when he was found guilty of the murder of Syaifuddin Kartasasita (the judge who convicted him of corruption), Tommy Suharto only served four years in jail. The current president of Indonesia is Susilo Bambang Yudhoyono, who has been in power since 2004. His government has been weak when dealing with the demands of Islamists. During Yudhoyono’s presidency many areas of Indonesia have introduced bylaws which enforce Islamist laws.
These laws were introduced following pressure from Islamist groups such as the Front Pembela Islam (Islamic Defender’s Front). Even though these bylaws are unconstitutional, Yudhoyono is either too politically weak or too indifferent to oppose them. During the three decades that Suharto was in power, Islamist groups and movements were, along with communist groups, viciously suppressed. With Indonesia comprising varying cultural groups, the influence of totalitarians such as communists or religious supremacists would naturally lead to conflict. Two groups came into existence following the end of Suharto’s rule. The strident Islamism expressed by these groups has threatened to destroy the values of religious tolerance and pluralism that are promised by the constitution (called “Pancasila”) of Indonesia. Article 29, b, of the Indonesian constitution reads: “The State guarantees all persons the freedom of worship, each according to his/her own religion or belief.” Both of these Islamist groups are said to have tacit support from senior figures within the military as well as the judiciary and police. Laskar Jihad (Lashkar Jihad) was led by Jaffar Umar Thalib. This group, which allegedly was formed with the approval of members of the military and the government in 2000, was the main instigator of sectarian violence during the Moluccan War, which lasted from the end of 1998 until 2002. This war pitted fanatical Islamists against Christians, and at least 9,000 people, mostly Christian, were killed. The fighting was worst on the large island of Sulawesi and in the Moluccan islands (the Spice Islands). Thalib urged his followers to wage an attack upon Christian villagers in Soya on the island of Ambon. On Friday April 26, 2002, Thalib spoke to Laskar Jihad followers outside Ambon’s biggest mosque. He urged a religious war against Christians, saying: “From today, we will no longer talk about reconciliation.
Our … focus now must be preparing for war — ready your guns, spears and daggers.” Two days later, Laskar Jihad invaded the mainly Christian village of Soya on Ambon Island. Men, women and children were stabbed, beaten to death, burned and decapitated. Even babies did not escape machete attacks. The Soya massacre took place even though other Islamist groups had signed a peace deal with Christians on February 12, 2002. This deal was called the Malino Accord. It was brokered by Yusuf Kalla (who is now the vice president of Indonesia), and was intended to put an end to the Moluccan War. Laskar Jihad refused to acknowledge the terms of the Malino Accord. Thalib’s vigilantes had also driven away Christian landowners in Maluku province, sharing their lands as “booty” among Laskar Jihad and Muslims from outside the province. Thalib himself had fought the Soviets in Afghanistan from 1988 to 1989 and had met Osama bin Laden. He had been educated at the Mawdudi Institute in Lahore, Pakistan, before dropping out and joining the Afghan Mujahideen. He ran an Islamic boarding school (pesantren) called Ihya’us Sunnah Tadribud Du’at on the large island of Java. Thalib allegedly supervised an illegal Shari’a court which stoned a man to death, but though he was arrested for this, Thalib was never prosecuted. Following the Soya atrocity, Thalib was prosecuted for inciting religious violence but, bizarrely, he was acquitted. Laskar Jihad announced it was officially disbanding in October 2002, but in 2003 it was waging war against the native peoples of West Papua. This territory, the western end of New Guinea, was never ceded by the Dutch; it was annexed by Indonesia in 1963 and officially recognized by the UN as “Indonesian” in 1969. Very few indigenous West Papuans consider themselves to be Muslim. FPI — The Islamic Defenders Group While Laskar Jihad continues to operate in secret, away from the prying eyes of the media, the Front Pembela Islam has been blatantly courting publicity.
The Front Pembela Islam or Islamic Defenders Front was founded in August 1998, only three months after Suharto was ousted from power. The uniformed members of this group in their white jackets and hats appear indistinguishable from the vigilantes of Laskar Jihad. Their motives are the same — to impose a strict interpretation of Islam as the sole religion of Indonesia and to ignore or destroy the rights of those they deem to be non-Muslims. The BBC stated in 2003 of the FPI: “Unlike other groups it is not fighting for an Islamic state, but it does want to establish strict Sharia law.” Yet its subsequent actions in enforcing Islamist local bylaws to be imposed on all citizens, including non-Muslims, belie the BBC’s claims. At the time the group had claimed that it was suspending its activities, while its founder was awaiting trial for inciting his followers to carry out raids on social establishments. The founder of the group is Al Habib Muhammad Rizieq bin Hussein Shihab, more commonly described as Habib Rizieq Shihab. From its inception, the FPI began to make its presence felt in the main cities of Indonesia. During the holy month of Ramadan, members of the group would attack bars and clubs that were seen to be flouting the conventions of Islam. In 2001 Rizieq organized a series of attacks against American interests, targeting businesses he believed were supportive of, or funded by, the United States. Even though Saudi-educated Habib Rizieq Shihab could have received seven years for inciting his followers to violence, when he was found guilty he was only jailed for seven months. Upon his release from Salemba Penitentiary in Central Jakarta on November 19, 2003, the FPI became more intransigent. The group, according to the now-defunct MIPT Terrorism Knowledge Base, apparently funds itself via extortion from businesses. In October 2004, during Ramadan, hundreds of FPI members attacked a restaurant and bar in the south of Jakarta, Indonesia’s capital city.
They also raided a pool hall. Apparently, when the attacks took place, police who were nearby took no action against the vigilantes. Though there is little to distinguish it from the core group, the paramilitary wing of the FPI, which carries out the raids on bars, is known as the Laskar Pembela Islam (Islam Defenders’ Army). The FPI as a whole now has some 200,000 members, based in at least 22 of Indonesia’s 33 provinces. On December 26, 2004, a massive tsunami devastated the province of Aceh, located on the northwestern tip of Sumatra Island. Relief workers came to the area to help alleviate the local population’s plight. A less positive addition to the relief work was the arrival of Islamist groups. These included the Laskar Mujahideen, which had been involved in killing Christians in the Moluccan War. The Indonesian Mujahideen Council, whose spiritual head is the controversial cleric Abu Bakar Bashir, arrived as well, along with the Front Pembela Islam. The arrival of Islamist groups had been spurred on by a grim announcement from the largest group of Indonesian clerics. On January 14, 2005, the Majelis Ulama Indonesia (Indonesia Ulemas Council or MUI) warned that there would be a Muslim backlash if any of the Christian relief workers in the tsunami-devastated region of Aceh attempted to proselytize. Fox News reported on January 21, 2005 on the intimidation of relief workers in Aceh by Islamists: “Hasri Husan, a leader of the Islamic Defenders Front, a militant Muslim group that is operating a refugee camp in Banda Aceh, made his feelings clear. ‘We will chase down any Christian group that does anything beyond offering aid,’ he said before making a slashing motion across his throat.” In July 2005, the Majelis Ulama Indonesia issued a “fatwa” containing 11 decrees, which decried activities involving interfaith, pluralist and “liberal” thought.
The fatwa declared that liberal interpretations of Islam, secularism and pluralism were un-Islamic and therefore forbidden. This ruling was seen by some as generating a climate of intolerance in Indonesia. On September 21, 2005, a community of Ahmadis was attacked in Sukadana in West Java. No individuals were hurt, but a mob of 1,000 fanatical Muslims carrying swords and sharpened bamboo stakes ran through the village. At least 70 homes and six mosques were badly damaged. Only five people were arrested. The attack upon the Ahmadi sect in 2005 closely mirrors recent events in Indonesia. In October 2005, Strategy Page reported: “Armed men claiming to belong to organizations like the ‘Islamic Defender Front’ continue to attack Christians, threatening to burn down houses and kill people if, in one instance, Catholics do not stop holding prayer services in their homes.” The Ahmadiyah or Ahmadiyya are Muslims, but they are treated by orthodox Islam as heretics. They revere the founder of their sect, Ghulam Ahmad Qadiani (1835-1908), and because many Ahmadis believe their founder was a prophet, orthodox Muslims regard them as heretics. They are barred from entering Mecca for the Haj pilgrimage, and in Pakistan blasphemy laws prevent them from proselytizing. In Bangladesh, political parties in the last coalition government supported attacks against the sect. In January of this year, the MUI (Indonesia Ulemas Council) declared that the Ahmadi sect was “deviant.” On Thursday January 3, 2008, a group claiming to represent 50 Islamic organizations petitioned the attorney-general of Indonesia, demanding that the Ahmadiyyah be abolished. The two main national Muslim groups, Nahdlatul Ulama and Muhammadiyah, which have respectively 40 million and 30 million members, apparently also supported the motion. The Indonesian Muslim Brotherhood (GPMI) sent Ahmad Sumargono as a delegate.
On Sunday April 20th this year, thousands of Muslims marched in Jakarta, demanding that the Ahmadiyah sect be banned. A statement read: “We call on President Susilo Bambang Yudhoyono to immediately issue a presidential decree disbanding the Ahmadiyyah organization, confiscate its assets and demand its members and followers to disband and return to the true teachings of Islam.” Instead of declaring that such calls to ban a religious group were in contravention of the terms of the constitution, the president did nothing. A few days before the April 20th march, a government-sponsored committee had agreed that the Ahmadiyah were “deviant” and recommended that the group be officially abolished. The decision was approved by the attorney-general’s office. This is not the first time that President Yudhoyono has stood by while his government acts in ways that contradict the constitution. In March 2006, one of his ministers openly condemned the Ahmadis. Maftuh Basyuni, the Indonesian Minister of Religious Affairs (pictured), had said that the Ahmadiyah sect should stop calling itself “Islamic” and should declare itself a new religion altogether, adding: “If they refuse to do so, they should return to Islam by renouncing their beliefs.” A month later, on April 17th, the minister repeated his comments. A group calling itself the National Alliance for Freedom of Religion and Faith (AKKBB) demanded that Maftuh Basyuni retract his comments within a week or face legal consequences. The minister ignored the deadline. Basyuni was educated in Saudi Arabia and appears to share that nation’s contempt for “deviant” forms of Islam. A complaint was registered with the police against him for “insulting and slandering… the members of the Ahmadiyah community,” but no action appears to have been taken. Basyuni remains employed as Religious Affairs Minister in Susilo Bambang Yudhoyono’s government.
The Religious Affairs Minister’s comments against the Ahmadiyah had come at a particularly sensitive time. In February 2006, a month before, a community of Ahmadis had been physically attacked on the island of Lombok, adjoining Bali. Almost 200 Ahmadis had been forced to live as refugees. One said of the minister’s comments: “It’s ridiculous to suggest that we form a new religion. We are Muslims who pray five times a day, fast during Ramadan, and believe in the same Quran.” The 187 Ahmadi refugees later discussed claiming asylum in Australia. This year, the Indonesian government has allowed the resentments between orthodox Muslims and those they deem to be heretical to reach dangerously tense levels. On the morning of April 28th this year, a mob of 300 individuals attacked an Ahmadi mosque in Sukabumi district in West Java. The mosque was burned to the ground. Three days earlier, a group of Muslim activists had gathered outside the mosque, demanding that it remove any mention of Islam from its signboard. On the afternoon of Sunday June 1, 2008, the National Alliance for Freedom of Religion and Faith (AKKBB) held a rally in Jakarta to support the right of the Ahmadiyah sect to exist, free from persecution. The date was significant, as it was a national holiday called Pancasila Day. “Pancasila”, the founding principle of the constitution, means literally “five principles”, which are: 1) belief in one supreme God; 2) a just and civilized humanity; 3) the unity of Indonesia; 4) democracy guided by consensus arising from deliberation among representatives; and 5) social justice for all Indonesians. The Front Pembela Islam was also holding a rally on the same day, to protest against fuel price rises. The two groups met at Monas Square, where the National Monument is situated. Here the FPI launched an attack upon the members of the National Alliance for Freedom of Religion and Faith using bamboo sticks. Seventy people were injured, seven of them seriously.
Witnesses claimed that members of the FPI had shouted: “If you are defending Ahmadiyya, you must be killed.” On the following day President Yudhoyono awoke from his political torpor to condemn the attacks made by the Front Pembela Islam. There were calls from inside the country and abroad for the FPI to be abolished. Habib Rizieq Shihab showed no remorse about the incident at Monas Square. He appeared before reporters and openly told his followers on June 2nd to prepare for war. He said: “I have ordered all members of the Islamic Force to prepare for war against the Ahmadiyah (sect) and their supporters. We will never accept the arrest of a single member of our force before the government disbands Ahmadiyah. We will fight until our last drop of blood.” He added: “We will not accept Islam to be defiled by anyone. I prefer to be in prison or even be killed than accepting Islam to be defiled.” On Wednesday last week, 58 members of the Front Pembela Islam were arrested at their headquarters in Central Jakarta. Habib Rizieq Shihab accompanied the arrested individuals as they were taken to a police station. There, he too was arrested. One individual among the FPI’s leadership, called Munarman, is still on the run. The Indonesian police have finally acted to put a stop to the FPI, a group that has been openly practicing violence and intimidation. The action comes too little and too late. The current government has vacillated while extremists have eroded people’s basic rights and freedoms, and now the country is in danger of succumbing to violence. In Part Two, I will show how the Indonesian authorities have colluded with violent forces rather than confront them head-on. In some instances, it appears that the government and the military have deliberately encouraged a climate of tension and potential conflict.
June 13, 2008

Exclusive: Indonesia — A Civil War Between Islamists And Moderates?: Part Two of Two

In Part One I described how the Front Pembela Islam (Islamic Defenders’ Front or FPI) had threatened to make war on the minority Islamic sect called the Ahmadiyah. On June 1st, FPI members violently attacked a procession of the National Alliance for Freedom of Religion and Faith (AKKBB), who support the rights of the Ahmadiyah. Several FPI members, including leader Habib Rizieq Shihab, were arrested on Wednesday June 4th in a police operation that involved 1,500 officers. Most FPI members were released shortly afterwards, but Habib Rizieq Shihab and seven others remain in police custody. The Ahmadiyah (also called Ahmadi or Ahmadiyya) revere their founder Mirza Ghulam Ahmad, with many regarding him as a prophet. This places them in the category of Muslim “heretics,” as traditionally Mohammed is the last prophet of Islam. The Indonesian Ahmadiyah have recently officially stated that they regard their founder not as a prophet but as a pious Muslim. Their protestations have been ignored by the Indonesian government. The FPI’s threats against the Ahmadiyah worsened this year after the nation’s leading group of clerics, the Majelis Ulama Indonesia (Indonesia Ulemas Council or MUI), declared that the Ahmadis were “deviant.” On July 27, 2005, the same council had denounced all liberal and pluralist interpretations of Islam and condemned the Ahmadiyah, a fatwa that led to violence. The Ahmadiyah in Sukadana in West Java were attacked. Government bodies suggested that they would ban the Ahmadiyah movement, even though such an action contravened the 1945 constitution, which is based upon a set of principles known as Pancasila. On Monday June 9th this week, about 5,000 Muslim protesters demonstrated in front of the presidential palace in Jakarta. They called for the Ahmadiyah to be disbanded.
They also called for the seven members of the FPI in police custody, including leader and founder Habib Rizieq Shihab, to be released. The group that protested on Monday is called the Peaceful Alliance against Islam’s Defilement (ADA API). It comprises various Islamist factions, including Hizb ut-Tahrir and the notoriously violent Forum Betawi Rempug (Betawi Brotherhood Forum or FBR). Noer Muhammad Iskandar, who led the demonstration on Monday, told the crowd: “Muslims’ demand for disbandment of the deviant Ahmadiyah sect is not a violation of religious freedom because Ahmadiyah has defiled Islamic teachings by recognizing Mirza Ghulam Ahmad as the last prophet, instead of the Prophet Muhammad.” Alliances between extremists have been a key feature of recent attempts to push Indonesian society towards Islamic “orthodoxy.”

The Government Restricts Ahmadiyah

On the evening of Monday June 9th this year, the Religious Affairs Minister, Maftuh Basyuni, issued a decree. Basyuni was educated in Saudi Arabia (where the Ahmadiyah are banned from visiting Mecca) and has previously urged the Ahmadiyah to abandon their claims to be Muslim. Basyuni’s decree, backed by President Susilo Bambang Yudhoyono’s cabinet, told the Ahmadiyah that they must stop spreading their religion or face five-year jail terms on charges of blasphemy. The decree was co-signed by Hendarman Supandji, the Attorney General. The MUI (Indonesia Ulemas Council) has vowed to uphold the government’s decree against the Ahmadiyah sect by spying on the group and reporting its activities. It issued a statement which read: “If Ahmadiyah disobeys the decree, or continues its deviant activities, we will report it to the authorities and recommend that the president disband Ahmadiyah.” The MUI has deliberately attempted to undermine religious tolerance in Indonesia.
In May 2005 the MUI encouraged the arrest of three Christian women under the Child Protection Act for inviting Muslim children to a “Happy Sunday” event run by their church. The women were jailed for three years on September 1, 2005. The MUI first issued a fatwa against the Ahmadiyah in 1981, with another in 2001. In 2001 the secretary general of the MUI was Din Syamsuddin. Since 2005, Syamsuddin has been president of the “moderate” Muhammadiyah movement, which has 30 million members. He has lately attempted to be publicly diplomatic about the Ahmadiyah: in April this year, he said that the Ahmadiyah should be persuaded to return to conventional Islam. Syamsuddin is a potential candidate in next year’s presidential elections. The July 2005 fatwa from the MUI that condemned deviant, pluralist and liberal forms of Islam affected not only the Ahmadiyah. Christian communities, particularly in West Java, became targets of a group calling itself the Anti-Apostasy Alliance (AGAP). This alliance includes the Front Pembela Islam, and it exploited a 1979 ruling by former president Suharto to declare churches illegal. The SKB, or Joint Ministerial Decree, declared that religious buildings should have proper permits; it was originally introduced to prevent Islamists from building mosques. The SKB stated that before a religious building was constructed, the community’s neighbors should be consulted. The MUI, which receives $600,000 annually from the Indonesian government, would pressure local people to disapprove of such buildings. In the month after the July 2005 fatwa, at least 35 churches in West Java were closed down. In March 2006 the SKB was revised. The revised law made it more difficult for minority groups such as Christians and Ahmadiyah to construct places of worship: a place of worship must have a minimum of 90 members and receive approval from 60 neighbors of another faith.
On Wednesday last week, when 59 members of the FPI were arrested, some individuals avoided capture. The leader of the FPI wing that carried out the attack on June 1st remained at large. This man, Munarman, surrendered himself to police late on Monday night this week. He claimed that his mission to outlaw the “infidel” Ahmadiyah sect had achieved its goal. The Ahmadiyah have been in Indonesia since the 1920s. To become an Ahmadi, a vow is taken to “harm no one.” What seems bizarre to Western minds is that a group which is peaceful and has not initiated violence is outlawed, while a group (the FPI) that is openly violent, and has publicly called for war to be made on the Ahmadiyah, remains “legal.” On February 14th this year, Front Pembela Islam cleric Ahmad Sobri Lubis addressed a large crowd at a rally in Banjar, West Java. A video of his speech (in Bahasa Indonesia) can be found on the internet. The language used by Sobri Lubis is uncompromising. “Kill! Kill! Kill!,” Sobri Lubis told the rally. “It is halal to spill the blood of Ahmadiyah. If any of you should kill Ahmadiyah as ordered by us, I personally, as well as the FPI, will take responsibility.” Lubis is the secretary general of the Front Pembela Islam. He urged followers to kill Ahmadiyah members because they defile Islam, and he dismissed human rights as cat excrement. Also attending the rally was Muhammad Al Khathath, head of the Forum Umat Islam (FUI). Abu Bakar Bashir also spoke at the rally. Bashir was jailed for giving consent to the 2002 Bali bombing, in which 202 people died. He was released on June 13, 2006, and following an appeal, his conviction was overturned by Indonesia’s Supreme Court on December 21, 2006. Bashir formerly ran the Indonesian Mujahideen Council (Majelis Mujahidin Indonesia or MMI). Calls for the deaths of those they oppose have been a hallmark of FPI activities for most of the time the group has been in existence.
In October 2000, two years after being founded, armed members of the FPI patrolled Sukarno-Hatta International Airport. Their spokesman, Zainuddin, said: “If we find any Israelis, we will first try to persuade them to leave, but if they refuse, we will slaughter them.” Two months later, on December 13, 2000, FPI violence led to the death of a civilian. The group was intimidating residents of an alleged red-light district in Cikijing, Subang regency, in West Java and raiding entertainment centers. The vigilantes found women who they claimed were prostitutes. They cut the women’s hair short and then began attacking homes in the neighborhood. When one young man objected, he was stabbed to death. The day after the stabbing, locals burned the house of Saleh Al Habsy, the local FPI leader. On that Friday (December 15, 2000), the FPI under the leadership of Alawy Usman attacked a police station in Cikoko, 55 miles east of Jakarta, the capital. Three police officers were seriously injured. Usman later claimed that a rock had been thrown from the police station as his vigilantes passed. The rock caused one member to fall; assuming he had been shot, the mob attacked the police station. No one was charged for the fatal stabbing in Cikijing. The FPI’s threats to kill Christians have continued even after the violence that took place on Pancasila Day (June 1st) this year. On June 4th in Tangerang in West Java, church leader Bedali Hulu was threatened with death by FPI members. The threats were made as he visited his elderly mother-in-law. The FPI has been able to act with virtual impunity. Its attacks on business premises rarely brought arrests, and when arrests have happened, prosecutions rarely followed. Islamic vigilante groups in Indonesia are connected with political figures or parties. In 1998, the FPI was linked with a voluntary militia called PAM Swakarsa. This militia was funded by B. J. Habibie, the President of Indonesia who succeeded Suharto.
PAM Swakarsa and the FPI were used by the government and military to harass and intimidate student opponents of the government and of the military figures supporting Habibie. PAM Swakarsa was founded in 1998 by Abdul Gafur, who was then deputy speaker in the government. Gafur still plays a role in politics, albeit a corrupt one. The FPI is still said to be linked to the military. It has close links with other fanatical and quasi-paramilitary factions in Indonesia, such as the MMI, which was founded by Abu Bakar Bashir, and the Forum Umat Islam, founded in 1999, when it was linked to President Habibie and used to fight against students loyal to Megawati (Sukarno’s daughter). In 2006, the FPI took on a battle that had been initiated by the MMI (Majelis Mujahideen Indonesia): the attack upon Indonesian Playboy. In January, Avianto Nugroho announced that he had gained the rights to publish an Indonesian version of the famous magazine, though he made clear that it would contain no nudes. The MMI chairman, Irfan Awas, declared that Playboy was pornographic and that its publication in Indonesia would damage the nation’s morals, even without nudity. The first issue was intended to appear in March, but was delayed. The first Indonesian edition of Playboy, edited by Erwin Arnada, appeared on April 7, 2006. FPI members protested outside the magazine’s editorial offices in Jakarta. Alawi Usman, who had led the 2000 attack upon the Cikoko police station, said: “If within a week they are still active and sell the magazine, we will take physical action.” Tubagus Muhamad Sidik, another FPI activist, said: “Even if it had no pictures of women in it, we would still protest it because of the name… Our crew will clearly hound the editors.” Indonesian radio stations buzzed with callers, many of them complaining about Playboy’s lack of raunchiness.
One caller quipped: “It’s sinful to read Playboy if there’s no nudity!” Less than a week after initial publication, FPI members violently attacked the offices of the magazine. (On Sunday February 19, 2006, about 400 FPI members had already tried to storm the American Embassy over the Danish cartoons, throwing stones at the embassy.) On April 12, 2006, about 300 FPI members stoned the building in South Jakarta where Playboy was put together. Attempts were made to smash through the iron gates outside the building, and policemen were attacked. The violence forced Velvet Media Group, which published Playboy, to vacate its offices; the magazine eventually moved to Bali. The editor of Indonesian Playboy, Erwin Arnada, was taken to court, charged with indecency. When one of the clothed models from the first edition, Andhara Early, appeared in the South Jakarta courthouse in January 2007, protesters insulted her. Andhara too was charged with indecency. As she left the building she was called a prostitute who would go to Hell. Others shouted: “I hope your daughter gets raped.” Andhara Early and another model, Kartika Oktavini Gunawan, were acquitted. On Thursday April 5, 2007, Erwin Arnada was also acquitted. The Front Pembela Islam is well known for its campaigns of violence and intimidation. In February 2006, while the Danish cartoon crisis was going on, members of the FPI and the Anti-Apostasy Movement were intimidating foreigners in Bandung, West Java. 27 activists were arrested outside the Holiday Inn in Bandung. The activists were asking foreigners what they thought of the cartoons. “If they support the cartoons, we will have no other choice but to ask them to leave Indonesia,” one activist said. The Front Pembela Islam also influences politics in Indonesia at a local and national level. At the start of 2006, numerous local administrations introduced Islamic bylaws.
In Tangerang, near Jakarta, a law was introduced stating that any woman found alone outside after 7 pm was a prostitute. A Muslim woman, Lilis Lindawati, was one of the first to become a victim of this law. In late February 2006, as she waited for a bus to take her home, the pregnant wife and mother of two children was arrested. She had just finished work as a waitress, around 8 pm. She was placed in a cell and taken to court the following day. In court she was made to empty the contents of her purse. Lipstick fell out. Judge Barmen Sinurat told her: “There is powder and lipstick in your bag. That means you’re lying to say that you are a housewife. You are guilty. You are a prostitute.” The judge fined Mrs. Lindawati $45, but as she had only her bus fare home, she was forced to spend three days in jail. Mayor Wahidin, who introduced the law, is the brother of Hassan Wirajuda, the Indonesian foreign minister. He said of Mrs. Lindawati’s case: “She could not prove she is not a prostitute. It is true when my men arrested her she was not committing adultery, but why does she put on such make-up?” Mrs. Lindawati later sued the mayor of Tangerang, but the outcome of her suit is unknown. In Depok, south of Jakarta, similar laws were being introduced. These had been brought in after the local administration had consulted with the FPI and the Indonesian Ulemas Council (MUI). Indonesian researcher Syaiful Mujani has claimed that such bylaws are unconstitutional and illegal. In South Sulawesi, laws were introduced under which female civil servants are forced to wear Islamic clothing and government employees must be able to read and write Arabic. On Saturday April 22, 2006, a meeting of the Indonesian Youth Circle claimed that Islamists and Muslim hardliners were threatening Indonesia’s democracy. Zuly Qodir of Muhammadiyah said: “Now the sectarian groups are pressing their agenda to change Indonesia into a theocratic state.
They seek to formalize Islam as the state ideology.” At that time, a controversial act was being introduced in the nation’s parliament, called the Anti-Pornography Bill, which would have standardized aspects of the Islamist bylaws throughout the nation. This proposed law was opposed by former president Kyai Haji Abdurrahman Wahid (Gus Dur); as a result, on May 23, 2006, FPI members forced him off a stage at a rally in Purwakarta, West Java. The bill would have outlawed kissing in public, with a five-year jail sentence for those found guilty. Exposing certain areas of the body, such as the stomach, thigh or hip, could have invoked a 10-year jail sentence and a $50,000 fine. On the island of Lombok, Muslim women protested against the bill. Yenny Wahid, a Muslim women’s rights campaigner, said of the bill: “This is an attempt by some people to import Arab culture to Indonesia.” When women condemned the draft Anti-Pornography Bill they were harassed by the FPI’s allies, the Betawi Brotherhood Forum (FBR). The Front Pembela Islam helped to organize mass rallies in favor of the repressive bill, which would have destroyed the tourist trade in places such as Bali and would have discriminated against Hindus, Christians and the indigenous peoples of West Papua. The bill was “watered down” in February 2007, but it appears not to have been fully introduced into law. The potential “civil war” between moderate and hardline Muslims that has been highlighted by the Ahmadiyah/FPI problems reflects a more basic struggle: the struggle between Islamism and democracy. The current government is not, it seems, prepared to alienate or antagonize the Islamist minority. As a result, it has chosen to make the lives of a peaceful group, the Ahmadiyah, more difficult. Faced with widespread demands to ban or outlaw the Front Pembela Islam, the government of Indonesia does nothing.
Many of the leading Islamists in Indonesia — Ja’far Umar Thalib of Laskar Jihad, Abu Bakar Bashir, spiritual leader of the terrorist group Jemaah Islamiyah, and Habib Rizieq Shihab — are of Arab descent. They value neither Indonesia’s cultural diversity, nor the Pancasila principles, nor the 1945 constitution. There are many in the Indonesian military who appear happy to see the country’s democracy break down so that they can gain power under martial rule. The current president, Susilo Bambang Yudhoyono, appears to have no desire to uphold the principles of the constitution. He will be fighting a presidential election next year. When he was elected in 2004, Yudhoyono was believed to be firm in a time of crisis. That firmness is no longer visible. He has vacillated while others in his government, including Attorney General Hendarman Supandji, have sought to remove Indonesia’s democratic foundations. Yudhoyono has become weak in the face of Islamist activism. In 2003, he wooed women voters with his voice, producing an album of love songs entitled “My Longing for You.” Such a stunt will do him no favors in the 2009 elections. He has bowed down to Islamist pressure and failed to uphold his nation’s democracy and constitution. He has even apparently been hoodwinked by a mountebank who claimed to have a scheme to make energy from water. While Islamist bylaws were being introduced across Indonesia, sometimes following pressure from the Front Pembela Islam, Yudhoyono’s government did nothing. According to legal expert Denny Indrayana, sharia-based bylaws can be revoked by presidential decree: “Based on Law No. 32/2004, the government can make a decision 60 days after local administrations give bylaws for review.” The recent decision to severely curtail the activities of the peaceful and law-abiding citizens in the Ahmadiyah movement has struck a sour note inside Indonesia and beyond.
Already the group has suffered persecution in West Java and on the island of Lombok. Between 2005 and 2008 at least 25 Ahmadiyah mosques have been destroyed. The decree has been criticized by Islamists such as Abu Bakar Bashir because it stops short of a complete disbandment of the Ahmadiyah. Human Rights Watch condemned the move and urged the Indonesian government to uphold the pluralist values of the constitution. Adnan Buyung Nasution, a prominent lawyer who acts as an advisor to President Yudhoyono, said: “I would say this is the beginning of a further war between Indonesians who want to maintain a secular state, an open democratic society, and those who want to dominate (and turn) the country into a Muslim country.” The Indonesian rights group Kontras has also condemned the decree. Usman Hamid, coordinator of Kontras, has said: “The government has not been able to protect citizens from violence, from prosecutions committed by hard-line groups. This is a serious, serious problem in Indonesia… we have been able to achieve several political reforms, political freedom. But the case of Ahmadiyah undermines the image of reform even more starkly because religious freedom has been attacked after 10 years of reform in Indonesia.” The ideological war being fought now in Indonesia is between two diametrically opposed systems: Islamism and democracy. So far, the Islamists appear to be winning.

Adrian Morgan is a British-based writer and artist who has written for Western Resistance since its inception. He also writes for Spero News. He has previously contributed to various publications, including the Guardian and New Scientist, and is a former Fellow of the Royal Anthropological Society.
Copyright © 1965 by Roger Lynds

This photo was taken by Roger Lynds at Kitt Peak, Arizona, on the morning of 1965 October 29. It was a 4-minute exposure. The two stars to the left of the comet's head are Delta and Eta Corvi (magnitudes 3.0 and 4.3, respectively), while the star a little way up and just right of the tail is Gamma Corvi (magnitude 2.6). The tail extends into Crater in this picture, with a length of about 17°. (Special thanks to Jeannette Barnes (NOAO/Tucson) for relaying my request to use this picture to Roger Lynds.)

Kaoru Ikeya and Tsutomu Seki independently discovered this comet on 1965 September 18.8, within about 15 minutes of each other. It was then just west of Alpha Hydrae. The magnitude was estimated as 8, and the comet was described as diffuse, with a condensation. The first confirmation was obtained on September 19.79, when the Smithsonian Astrophysical Observatory station at Woomera, Australia, obtained a photograph showing the comet at magnitude 8. The comet was quickly recognized as a sungrazer and brightened rapidly. It reached magnitude 5.5 by October 1 and magnitude 2 by October 15. On the latter date the tail was 5 degrees long. The comet was closest to the Sun (perihelion) on October 21 (0.008 AU). It became visible in broad daylight on October 21 to anyone who blocked the sun with their hand. The maximum magnitude may have been around -10 or -11. Japanese astronomers using a coronagraph on Mount Norikura reported that the comet disrupted into three pieces just 30 minutes prior to perihelion.

Copyright © 1965 by F. Moriyama and T. Hirayama

This photo was taken by F. Moriyama and T. Hirayama (Tokyo Astronomical Observatory, Mitaka, Japan) at the Norikura Corona Station on 1965 October 21. They used a 12-cm coronagraph and Fuji Panchroprocess plates behind a Mazda VG1B color filter. This was a 4-second exposure.

The comet's tail was longest at the end of October and early November, when observers reported lengths of 20 to 25 degrees.
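The jump from magnitude 8 at discovery to perhaps -10 near perihelion is easier to appreciate as a brightness ratio. As a rough illustration (the magnitudes above are estimates, and this helper function is not part of the original page), each step of 5 magnitudes corresponds by definition to a factor of exactly 100 in brightness:

```python
def brightness_ratio(m_faint, m_bright):
    """Brightness ratio implied by two astronomical magnitudes.

    Lower magnitudes are brighter; a difference of 5 magnitudes
    is defined as a factor of exactly 100 in brightness.
    """
    return 10 ** (0.4 * (m_faint - m_bright))

# Discovery (~magnitude 8) versus daylight peak (~magnitude -10):
# an 18-magnitude difference, a factor of roughly 16 million.
print(f"{brightness_ratio(8, -10):.3g}")
```

So within about a month the comet brightened by a factor on the order of ten million, which is why it went from a telescopic object to one visible next to the Sun in daylight.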
Two definite nuclei were photographed on November 4, with a third suspected. The comet was last definitely detected on January 14, 1966, although images were suspected on Baker-Nunn plates exposed on February 12. The orbital period is 880 years. There is a chance this was a return of the great comet of 1106, which was seen in broad daylight in Europe.
Written for the KidsKnowIt Network by:

Fossils are the preserved remains of plants or animals. For such remains to be considered fossils, scientists have decided they have to be over 10,000 years old. There are two main types of fossils: body fossils and trace fossils. Body fossils are the preserved remains of a plant or animal's body. Trace fossils are the remains of an animal's activity, such as preserved trackways, footprints, fossilized egg shells, and nests.

When asked what a fossil is, most people think of petrified bones or petrified wood. These form through a process called permineralization. For bone to be permineralized, the body must first be quickly buried. Second, ground water fills up all the empty spaces in the body; even the cells get filled with water. Third, the water slowly dissolves the organic material and leaves minerals behind. By the time permineralization is done, what was once bone is now a rock in the shape of a bone. Unlike what you see in cartoons, dogs wouldn't be interested in these bones.

When an animal or plant dies, it may fall into mud or soft sand and make an impression or mark in the dirt. The body is then covered by another layer of mud or sand. Over time, the body falls apart and is dissolved. The mud or sand can harden into rock, preserving the impression of the body and leaving an animal- or plant-shaped hole in the rock. This hole is called a mold fossil. If the mold becomes filled over time with other minerals, the result is called a cast fossil.

A simple experiment can show you how this works. Take some clay and press a seashell or some other object into the clay. Pull the seashell out of the clay and you will see a detailed impression of your seashell in the clay. If, over time, the clay hardened into rock, the result would be a mold fossil. But really, who has millions of years to wait to make their own fossil? Here's the quick way. Pour plaster of Paris, dental stone, or other plaster into the mold.
Wait for it to harden and you have just made your own cast fossil.

Another type of fossil is a resin fossil. Resin is sometimes called amber. Plants, mostly trees, secrete a sticky substance called resin. Sometimes insects, other small animals, or bits of plants get stuck in the sticky resin. The resin hardens over time and is preserved in rock, making a fossil.

Quetzalcoatlus was one of the largest flying animals ever to have inhabited the Earth. Its wingspan was over 40 feet (12 m). Quetzalcoatlus' neck alone was 10 feet (3 m) long. This huge flying reptile is believed to have been a scavenger, picking at the carcasses of dead dinosaurs on the ground.
Posted November 11, 2002, Atlanta
Communications & Marketing Contact: Lisa Grovenstein

Massive structures could be built in space simply by using radio waves that create force fields to move materials and assemble them into various structures. Once bonded in place, the structures could lay the groundwork for human settlement in space and a space-based economy, according to Narayanan Komerath, an aerospace engineer at Georgia Tech.

A large number of objects can be arranged into shapes to form structures in reduced-gravity environments using radio and electromagnetic waves, according to Komerath, who is a professor in Georgia Tech’s School of Aerospace Engineering. The structures could range from micrometer-scale discs to kilometer-scale habitats.

Komerath recently presented his team’s work in Atlanta during a conference of the NASA Institute for Advanced Concepts (NIAC), which explores ideas that could potentially result in funding from NASA. The team, which named the project “Tailored Force Fields,” found that structures could be built in small, enclosed gas-filled containers using sound waves. But in the vacuum of space, electromagnetic waves could be used.

“The development of a comprehensive space-based economy is the best way to achieve the goals of human exploration and development of space,” Komerath said. “In such an economy, humans would gradually find more reasons to invest in space-based businesses and eventually to live and work in space for long periods, interacting for the most part with other humans located in other space habitats.”

Concepts for extracting materials and power from the Moon and asteroids are already being developed. But Komerath says the idea of using force fields could solve some of the long-term problems of inhabiting space, such as the construction of a massive shield to protect humans from radiation, the danger and expense of humans laboring in space, and skepticism about the prospects for building an economy in space.
According to Komerath’s idea, robotic craft would be sent to the asteroid belt to break up an asteroid into small pieces. Formations of satellites would follow and form a radio-wave resonator that would begin moving the debris into various structures. Komerath estimates that it would take approximately one hour to form a rubble cloud into a 50-meter-long enclosed structure, which could be held in place for another 12 hours while the pieces are fused together.

Sound Waves as Construction Machines

The idea follows earlier flight experiments conducted by the team that tested the effects of intense sound on a variety of particles in near-zero-gravity conditions. Results from the technique – called “acoustic shaping” – proved the basic theory that sound waves could form raw material into walls of a specified shape. These experiments were performed inside rectangular boxes containing various materials, including Styrofoam pieces, porous grains, aluminum oxide spheres and aluminum spheres, both on the ground and aboard NASA’s KC-135 Reduced Gravity Flight Laboratory.

Komerath says that light is already used in microscopes to hold nano-sized particles and microwaves could shift millimeter-sized material, but radio waves would be needed to move brick-sized stones. An engineer by training, Komerath admits that such a concept sounds alien to most engineers, who are taught to think “faster, lighter and smaller” as well as “cheaper and better” for anything related to space.

Komerath’s findings were gathered after a six-month feasibility study funded by a grant from the NIAC. Komerath estimates that a demonstration experiment could be ready for space flight by 2009.

The Georgia Institute of Technology is one of the world's premier research universities. Ranked seventh among U.S.
News & World Report's top public universities and the eighth best engineering and information technology university in the world by Shanghai Jiao Tong University's Academic Ranking of World Universities, Georgia Tech’s more than 20,000 students are enrolled in its Colleges of Architecture, Computing, Engineering, Liberal Arts, Management and Sciences. Tech is among the nation's top producers of women and minority engineers. The Institute offers research opportunities to both undergraduate and graduate students and is home to more than 100 interdisciplinary units plus the Georgia Tech Research Institute.
The Making of Paul: Constructions of the Apostle in Early Christianity Fortress Press 2010 The influence of the apostle Paul in early Christianity goes far beyond the reach of the seven genuine letters he wrote to early assemblies. Paul was revered–and fiercely opposed–in an even larger number of letters penned in his name, and in narratives told about him and against him, that were included in our New Testament and, far more often, treasured and circulated outside it. Richard Pervo provides an illuminating and comprehensive survey of the legacy of Paul and the various ways he was remembered, honored, and vilified in the early churches. Numerous charts and maps introduce the student to the "family" of Pauline and anti-Pauline Christianities. - Bibliographical references - Title: The Making of Paul: Constructions of the Apostle in Early Christianity - Author: Richard I. Pervo - Publisher: Fortress Press - Publication Date: 2010 - Pages: 400 About Richard I. Pervo Richard I. Pervo, retired Professor of New Testament and Christian Studies at the University of Minnesota, is author of Rethinking the Unity of Luke and Acts, and most recently, Dating Acts: Between the Evangelists and the Apologists. He lives in Saint Paul, Minnesota.
A World of First Nations, Learning This last installment of the six-part series from the Tyee Solutions Society details how, outside British Columbia, other indigenous people and other jurisdictions are building a record of academic success. [Editor's note: This Tyee Solutions Society Series has explored how innovative educators across British Columbia are reversing a century of educational exclusion for kids of First Nations or aboriginal heritage—the fastest-growing demographic in the province. For the most part, those innovations were based on ideas from beyond British Columbia's borders, with a local twist. In her concluding report for this series, Katie Hyslop finds a few more ideas from away that could be worth a try here at home.] British Columbia may have one of the largest aboriginal populations in the country, but it isn't the only province whose first peoples are struggling to reconcile the history of colonialism, forced assimilation and abuse found in the residential school system with the need to educate and prepare their children for life in the 21st century. Nor is ours the only post-colonial government yet to find the right formula for helping those populations get an education. But that has meant that many of the ideas being implemented by indigenous educators in British Columbia today were taken from other First Nations, across Canada and the world, who are struggling with the same task. Inspiring innovations reported earlier in this series, such as balancing education and culture through the development of language nests and immersion schools, are the result. Here are a few more examples of successful or promising programs that indigenous people and public governments nearby and around the world are using to increase academic success. 
Alberta leads the way in aboriginal-focused schools

Alberta has the third highest population of aboriginal people in Canada, and it's growing fast: the demographic jumped 23 percent to nearly 250,000 people from 2001 to 2006. It's also the youngest population, with almost one-third of aboriginal people under 14 years old; aboriginal children make up almost nine percent of students in the Edmonton public school board.

But according to Assistant Superintendent Bruce Coggles, that percentage should be a lot higher. "There's a fairly significant number of First Nations students that for whatever reason choose not to self-identify. They just want to be mainstream students and not want to be part of a sub-group within a system," says Coggles. "Even if we're aware that they're First Nations, they're not part of the population that's tracked." This applies to aboriginal academic rates, too, so the district doesn't calculate aboriginal graduation rates.

Nonetheless, in 1999, the district knew aboriginal students weren't graduating at a high enough rate, and were dropping out far too often. "We just felt we were losing too many students, and had to find a means or a strategy to increase our success level. We know it's a growing part of our population, and we just weren't satisfied with the results. So we were looking at alternatives," says Coggles.

The school board created an aboriginal task force to find solutions. One of the standout programs it created is Amiskwaciy Academy, an aboriginal-focused Grade 7 to Grade 12 school—the first of its kind in Canada. Located in the old Edmonton Municipal Airport terminal in the city's centre, Amiskwaciy Academy opened in 2002 to all students in the district, although the approximately 300 students currently enrolled are all First Nations. There are two elders attached to the school who provide spiritual guidance and student counselling; there's a sweat lodge. Classes are offered on aboriginal drumming, where students have the chance to make their own drums.
And every school day starts with a Cree song and a drumming circle for all the students, teachers and staff in the main foyer of the school. There are aboriginal-focused options for curriculum, but unlike in more rural areas, Coggles says, most of the students at Amiskwaciy are urbanized and don't connect to indigenous traditions in the same way as their rural peers. "You make assumptions about heritage and assume that there's interest and knowledge in all the traditional ways. There isn't always. So rather than make things mandatory, there's opportunities for students to choose options that can have more traditional content in it," he says.

Initially, the school put more emphasis on recognition of culture and traditions, as well as providing kids with positive role models by hiring a majority of aboriginal teachers—approximately two-thirds of the faculty. The aim was to improve student self-esteem and attendance, and then move on to academics. But Coggles says it's time to switch focus to academics, "because [Amiskwaciy's graduates] have to be able to put their results beside the other high schools and show that their results can be just as good as anybody." That could mean fewer aboriginal teachers in the future. "At this point," says Coggles, "we're kind of saying the first thing we need is really strong teachers in their subject areas, and if we can get that person and have them First Nations as well, then that's the best of both worlds. But the first and more important thing is to have highly qualified teachers."

Again, the Edmonton public school board doesn't track aboriginal academics, so there are no numbers to prove students in the Amiskwaciy program are improving. But Coggles sees signs of success. "We've gone through a few growing pains in getting it established, but we've got more stability," he says. "We're pleased that we're moving in the right direction, but we've got a ways to go yet."
Hawaiian schools make a comeback

At about the same time that British Columbia began forcing First Nations parents to send their children to English-only residential schools, much the same thing was happening in Hawaii. Native Hawaiians were forced into public, English-speaking schools starting in 1896, when their language—ʻōlelo Hawaiʻi—was made illegal. Although there are differing accounts of what this meant—some sources assert that Hawaiian language newspapers continued to be published, while others claim parents were reprimanded for speaking the language to their children—the ban succeeded in reducing ʻōlelo Hawaiʻi from the island's predominant language to one spoken today by six percent of the population, according to the American Community Survey. But that's not an entirely accurate number, as there are no statistics available on the number of fluent speakers versus those who know a few words and phrases.

What's clear is that until their language was outlawed, native Hawaiians had maintained a very successful education system on their own, says Kau'ilani Sang, an educational specialist with the State Department of Education's Hawaiian Language Immersion Program. "Hawaiians have had 'schools' since before Western contact," Sang says. "The imposition of Western beliefs on education, and the banning of the Hawaiian language in schools, led to a demise of one of the most literate nations in the world, with thousands of text documents written in Hawaiian by Hawaiians," she wrote in an e-mail to the Tyee Solutions Society.

A change in the state constitution in 1978, however, not only made ʻōlelo Hawaiʻi an official state language, but also mandated a state duty to provide natives with education in their own language and culture. Within a decade, the government began operating Hawaiian immersion schools. Today there are 21 such schools within the public education system, as well as immersion early childhood education and university programs.
The 21 immersion schools integrate indigenous Hawaiian language, culture and history into the curriculum and are taught by a majority of indigenous teachers. In order to teach there, educators must complete a teaching degree, a four-year language program and a Hawaiian studies degree. "The foundation of most immersion schools is to teach the language with the belief that without the culture you cannot teach the language," Sang says. "If you look at the vision of most of the schools, they're trying to produce proficient Hawaiian language speakers by the time they leave in the 12th Grade."

Only about 2,000 students are enrolled in the 21 schools. With a native Hawaiian population of 80,337, most children go to mainstream schools where teachers are unprepared to include native language and culture in the curriculum. It's just one of the drawbacks to operating a native education system as a subset of the larger public education system.

"It's limited to the preparedness of those on staff," says Sang, "to be compassionate towards trying to revitalize Hawaiian language. And because we're confined to different federal and state laws, our vision for what we're trying to do sometimes gets pushed to the side. So when it comes down to decision-making, there's a lot of advocacy that goes on outside of the system to try and get the system to understand what we're trying to do. And that's the difficult part, because while the system is required to do the job they don't necessarily have the skills or the buy-in to make the right decision."

There are pluses, however, like access to facilities and Department of Education infrastructure. And the program is working: students are leaving the program as fluent speakers, although there are no statistics on indigenous graduation rates in Hawaii, either. "I'm pretty confident that the success rate in terms of graduation is relatively high—I almost want to say 100 percent.
I haven't heard of any teacher or student [who] has failed, but I have heard that students have dropped out," says Sang.

Parental support of education is key: Simon

When it comes to the vitality of traditional language, the Inuit are the exception to the rule: as of 2006, 69 percent of Inuit could converse in their language, with half speaking it regularly at home. But the majority of Inuit children attend public schools, where the main language of instruction is English or French. This hasn't resulted in a loss of traditional language, but it does prove difficult for students whose first language is Inuktitut.

Struggles with language only add to the socio-economic issues facing many Inuit families, such as poverty, overcrowded housing, substance abuse and physical and sexual abuse. As a result, 75 percent of Inuit never finish high school, among the worst academic outcomes in Canada.

Inuit leaders from across the four Inuit Arctic regions that span northern Canada from Labrador to the Yukon—Inuvialuit, Nunavut, Nunavik and Nunatsiavut—came together with the federal, provincial and territorial governments to develop a National Strategy on Inuit Education, released in mid-June. The strategy calls for mobilizing parent support for education, increasing bilingual (Inuit and English or French) curriculum and instructors, investing in early education, providing external social supports to students, investing in Inuit-centred curriculum and resources, establishing an Inuit writing system, creating an Inuit university and improving methods for measuring and assessing student success.

Mary Simon, president of Inuit Tapiriit Kanatami (ITK), the national organization representing the Inuit, has targeted results within a decade.
"We hope that within five to ten years we will significantly close the gap in high school graduation rates with southern Canada, and experience a corresponding increase in the number of Inuit who graduate from university," Simon wrote in an email to Tyee Solutions Society.

The strategy puts particular emphasis on parental support, saying Inuit organizations and public governments can only do so much to encourage academic success. The rest is up to parents, to motivate and support their child's participation in the school system. That can be a challenge for some families. "Parents who had negative experiences with the residential school system," Simon observes, "are less likely to be supportive of their children in the current education system. Our National Inuit Education Strategy wants to address this issue by engaging parents in the education of their children, and working with parents to ensure support for students in school."

The plan is still in its early stages. A National Centre for Inuit Education is set to open this fall, followed by the appointment of an Inuit education secretariat to develop the implementation plan, dictating who is responsible for what actions, and its cost. But Simon is confident that will be accomplished by early next year.

Nation to First Nation education aid

Canada's federal government has, to be generous, ground to make up with the country's aboriginal population. It was the Federal Crown that partnered with Christian organizations to introduce residential schools as an overt and nearly successful attempt to erase the "Indian" in children by severing them from their language, culture and families. With the exception of some publicly funded private schools, churches are out of the game today, but the feds still oversee aboriginal education on reserves. And if indigenous education advocates are right, the system they're running is still failing First Nations kids. That may, finally and perhaps, be about to change.
This past June, the Government announced a new partnership with the Assembly of First Nations: a Joint Action Plan for aboriginal education, to start with a national panel discussion on what that should entail. The panel has two indigenous representatives—George Lafond, former chief of the Saskatoon Tribal Council and former special assistant to the federal minister of Indian and Northern Affairs, and Caroline Krause, an aboriginal educator formerly with the Vancouver school board and the University of British Columbia's faculty of education—and one Caucasian, Scott Haldane, president and CEO of YMCA Canada. The panel has already begun travelling across the country, speaking with aboriginal parents, children, chiefs, councils and elders; regional and national First Nations organizations; the private sector; the provinces; as well as any interested private parties. They're expected to deliver two reports: a mid-way progress report, and a final report with recommendations by 2011.

Here in British Columbia, aboriginal education advocates give the provincial government some credit for recognizing before their federal counterparts that education is something aboriginals need to be involved with. Victoria has made Aboriginal Education Agreements—pacts signed between school boards and local native governments specifying aboriginal content in school—mandatory for all 60 B.C. school districts. The Ministry of Education has introduced First Nations English, social studies, and math courses. And they've signed an agreement with the First Nations Education Steering Committee and the federal government that says aboriginals have the right to teach their own children. They even passed an act to solidify the agreement.

"The inclusion of authentic aboriginal histories and knowledge throughout the B.C. curriculum enriches the educational experience of all students.
Culturally relevant learning allows for the inclusion of local traditional knowledge, histories and aboriginal languages and is key to improving success and achievement for aboriginal students," a Ministry of Education spokesperson told Tyee Solutions Society via email.

Yet critics say it's still not enough. The provincial government doesn't provide enough funding to adequately support language revitalization programs, for one concern. Few students enrol in First Nations-focused courses in secondary school, in part, critics say, because the ministry has failed to inform parents and students that the course credits qualify for post-secondary admission. And Aboriginal Enhancement Agreements may be mandatory, but there is no system in place to hold districts to their promises or to measure their progress. Aboriginal British Columbians are confident they have the knowledge and skills to educate their own children, but not the resources to do so.

After years of broken promises from post-colonial governments, matched by broken lives for thousands of First Nations and Métis British Columbians, what's common to these stories of collaboration is that, while indigenous people may have the know-how to bring their children out of the academic shadows, it's much easier on all of us if they don't have to do it alone.

Katie Hyslop, Tyee Solutions Society; reporting made possible through the support of the Vancouver Foundation, McLean Foundation and the British Columbia Teachers’ Federation (funders neither influence nor endorse the particular content of Tyee Solutions’ reporting).

Click here to view the first in this series. Click here to view the second in the series. Click here to view the third in the series. Click here to view the fourth in the series. Click here to view the fifth in the series.
Text by Candace Kanes
Images from Maine Historical Society, Camp Winnebago, Camp Runoia, Good Will-Hinkley Home, and Eliot Baha'i Archives

Wohelo, Kippewa, Mataponi, Runoia, Winnebago, Agawam, Kawanhee, Takajo, Mechuwana, and Indian Acres. Summer camps once appealed to urban youths for their exposure to nature and the simple life. Soon, they focused instead on skills, sports, loyalty, and camp spirit. Summer-long camps and special purpose camps remain popular in Maine. Most go far beyond summer amusements for young people. The images suggest how the camps -- and campers -- have changed throughout the twentieth century as well as how ideas about the purposes of the camps have developed.
No starter pistol announces the beginning of a new technological era. There are no cannon blasts or tower bells ringing forth the end of the old and dawn of the new. And yet, if the previous ten years were "The Internet Decade," then the next decade may be dubbed the "Age of the Intranet."

Intranets are digital communication networks linking devices, such as computers or handheld devices, to each other and to network-based applications and services, often within a specific geographical location. Much as the global Internet has interconnected computer networks, Intranets provide local connectivity, services, and applications to their users. Intranets are often home or office networks used to interconnect computers. In this chapter, we explore the notion of a "community Intranet" -- an expanded network of networks spanning a neighborhood, municipality, or geographic region.

By amplifying community interconnectedness, Intranets promise to enable new forms of political and democratic engagement that expand upon present-day networks and models of cooperation. Intranets are often decentralized and ad hoc, with no one entity owning the entire infrastructure or controlling expansion of or access to it. These arrangements create new challenges for surveillance and command and control, as well as new opportunities for participatory media and information dissemination. Intranet systems supplant old notions of networking geographic places by allowing people to be both networked and an integral part of the infrastructure -- the creation of "device-as-infrastructure" networks. These peer-to-peer communications systems provide unprecedented opportunities, as well as serious concerns, for the future of community organizing, political activism, media production, and communication research.
Even as evidence accumulates demonstrating how these technologies encourage civic engagement, their social trajectory is far from determined, and the possibility for a more dystopian outcome cannot be dismissed. While drawing from real-world case studies, including community and municipal wireless networks, Indymedia, the iPhone, geo-locational applications and services, and next-generation wireless devices, this chapter documents the emergence of Intranet technologies, discusses their implications for research, and explores policy implications at this critical juncture in telecommunications development and policy making.

The Intranet Potential

Intranet-enabled communications have the potential to accelerate fundamental changes begun in the 1980s with the advent of widespread public use of pagers and cell phones. Like these cellular systems, Intranets are often fully functioning communications networks, connecting participants to one another and to locally run services and applications within homes and offices at the local, state, regional, and even national levels. Just as businesses connect computers via a Local Area Network (LAN) to share Internet connectivity and access to printers and file servers, community Intranets connect devices to form a community-wide LAN.

Intranet technologies create new possibilities for how information is produced, disseminated, and archived by creating a peer-to-peer infrastructure that parallels the rise of peer-to-peer technologies, services, and applications. Sharing media, educational content, and public services via local telecommunications systems, Intranets provide web resources for their respective communities, ranging from the mundane, such as e-mail, webhosting, and filesharing, to the more innovative, such as streaming micro-broadcasting, video chat-rooms, temporary device-hosted LANs, and audio and video telephony. While some Intranets are geographically bounded, others are regional or even global in nature.
Often, Intranets rely on darknets and friend-to-friend networking clients (i.e., peer-to-peer file-sharing networks predicated upon social networking and trust) -- one example embraced by the open source community is Nullsoft's WASTE, a decentralized file-sharing and IM program -- but increasingly they are focused on providing useful services, applications, and media to local communities (Biddle, England, Peinado, & Willman, 2002).

Using local Intranets, communities can set up forums for political debate, artistic display, and educational fare. Streaming video and audio from local events -- from town council and PTA meetings to annual music festivals -- have created entirely new media services and information-sharing options for residents. Intranets enhance the services of local government, education, and civic organizations, allowing online voter registration and real-time directions to polling stations, bill payment and live tax advice, access to school homework and teacher lesson plans, public service announcements, online newspapers and radio, and instant webcasting of emergency alerts.

Public safety and social service groups, local schools, churches, and municipalities are already beginning to embrace the potential of these technologies, and recent shifts in municipal wireless business models have just begun to tap Intranets' potential. And, as municipal networking business models continue to evolve, Intranet services and applications will increase in importance, becoming meaningful differentiators among different implementation options.
Figure 1: Illustration of a Mesh Network

Schools can set up a local wireless network and broadcast a student-produced news program or a theatrical play; a housing project can establish an online media forum to feature local artists, upcoming events, job listings, or educational opportunities; social workers out in the field and municipal workers can dynamically update their caseload files and task lists as they travel around town; religious organizations can webcast services to residents whose health prevents them from attending; electrical, water, gas and parking meters can be remotely read; and, automated congestion-pricing of vehicles and optimal traffic light configurations can be ascertained in real-time.

Perhaps one of the most exciting prospects of Intranet-enabled communications is an enhanced potential for community journalism. As national media outlets increasingly omit local news, Intranets may facilitate a municipal service that provides a daily digital community news bulletin, replete with local beat reporting and investigative news. Community news, most likely delivered via a municipal or community wireless network offering ubiquitous high-speed broadband, could be treated as a public utility, provided each morning in the form of an informational service, and supported by local tax revenues. As local broadcast and print news continue to be eviscerated by national market pressures, the potential for Intranets to provide local journalism will be increasingly valuable.

Many of the projects already utilizing Intranet technologies aim to change the cultural economy of the Internet by creating resources for people to democratize media distribution and information dissemination, often via existing infrastructures and off-the-shelf networking hardware.
Unlike the Internet, which disproportionately favors well-capitalized publishers like NYTimes.com and CNN.com, Intranets' reliance on local networks allows low-capital users to host websites, e-mail lists and accounts, stream audio and video programs, and create dynamic media for local consumption -- creating an integrated system that provides affordable access to everyone from independent musicians and journalists to teachers and civic officials (Young, 2003). The remainder of this chapter provides a glimpse of community Intranets' implications for reconceptualizing the theory and study of emergent communications technologies, with the goal of not just providing a thought-piece on what might be, but also showing how these technologies are already being used and studied.

Community Intranet Case Studies: CUWiN & Chambana.net

An exemplar of Intranet technology is the Champaign-Urbana Wireless Network (CUWiN; see cuwin.net), launched by a coalition of wireless developers in 2000 with a mission to "connect more people to Internet and broadband services; develop open-source hardware and software for use by wireless projects world-wide; and, build and support community-owned, not-for-profit broadband networks in cities and towns around the globe." Although the CUWiN Foundation is a non-profit organization headquartered in the small town of Urbana, Illinois, it has received considerable national and international attention during its years of successful open-source development. Through the ongoing support of the Acorn Active Media Foundation (see: acornactivemedia.org), CUWiN has integrated the wireless network with a host of different services. In Spring 2000, a group of software programmers, radio techies, system administrators, and community activists began discussing ways to set up a community-operated wireless network using widely available, off-the-shelf hardware.
After two years of intensive work, CUWiN's software development allowed the first multi-hop, bandwidth-sharing wireless cloud to become operational, creating shared access to the Internet from multiple locations. This milestone marked the first time a single Internet connection (in this case, donated by the Urbana-Champaign Independent Media Center Foundation; see: ucimc.org) was utilized from houses located half a kilometer from one another, with traffic routed through an intermediary wireless node. This technology later became known as "mesh" wireless networking. CUWiN went on to deploy additional wireless routers (often called "nodes") in the community and develop the system software to deal with real-world conditions. This initial deployment brought CUWiN's first major press coverage and created opportunities for over two dozen new organizations to partner with the project. Realizing that scaling up the system would require major upgrades, CUWiN received an exploratory grant in 2003 from the Threshold Foundation to buy much-needed additional equipment as a proof-of-concept for deployment in impoverished communities. Since then, CUWiN has been building a new generation of hardware chosen for its durability, price, and suitability for this application. The initial exploratory grant allowed CUWiN to double the number of nodes in the test-bed network to try out software improvements under in-vivo conditions. Continuing collaborations with the Acorn Active Media Foundation have led to the development of lower-cost equipment and the implementation of free public Internet access in areas throughout downtown Urbana. In 2004, CUWiN received a $200,000 grant from the Information Program of the Open Society Institute to develop networking software as a model case for transfer to other communities.
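The "mesh" routing idea described above -- traffic hopping through intermediary nodes until it reaches a node with an Internet connection -- can be sketched in a few lines of code. The following is an illustrative model only: the node names and link topology are hypothetical, and CUWiN's actual routing software is far more sophisticated.

```java
import java.util.*;

// Illustrative sketch of multi-hop mesh routing: find a hop path from a
// node to the gateway by relaying through intermediate wireless nodes.
// This is NOT CUWiN's routing software; names and links are hypothetical.
public class MeshRoute {
    // Adjacency list: which nodes are within radio range of each other.
    static Map<String, List<String>> links = new HashMap<>();

    static void link(String a, String b) {
        links.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
        links.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
    }

    // Breadth-first search: returns a shortest hop path, or null if unreachable.
    static List<String> route(String from, String to) {
        Map<String, String> prev = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(from);
        prev.put(from, from);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (node.equals(to)) {
                // Walk the predecessor chain backwards to rebuild the path.
                LinkedList<String> path = new LinkedList<>();
                for (String n = to; !n.equals(from); n = prev.get(n)) path.addFirst(n);
                path.addFirst(from);
                return path;
            }
            for (String next : links.getOrDefault(node, List.of())) {
                if (!prev.containsKey(next)) {
                    prev.put(next, node);
                    queue.add(next);
                }
            }
        }
        return null; // no chain of radio links reaches the destination
    }

    public static void main(String[] args) {
        // Houses A and B are out of range of the gateway but in range of a relay.
        link("gateway", "relay");
        link("relay", "houseA");
        link("houseA", "houseB");
        System.out.println(route("houseB", "gateway")); // [houseB, houseA, relay, gateway]
    }
}
```

Real mesh protocols must also handle link quality, node churn, and routing loops, but the shortest-hop-path search above captures the core idea: no single house needs a direct link to the Internet connection, only a chain of neighbors.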
That same year the Center for Neighborhood Technology began using CUWiN's software in North Lawndale, an economically disadvantaged, predominantly minority Chicago neighborhood, to help bridge the digital divide by bringing broadband connectivity to many residents for the first time. Over 50 different communities worldwide are considering using CUWiN's software, and key facets of CUWiN's technology have been integrated into many open source wireless technologies. As these open source technologies have continued to stabilize, the number of organizations and communities looking to use them for Internet service and Intranet applications in their neighborhoods has increased dramatically.

Figure 2: CUWiN Coverage Map

Today, the CUWiN project has over 200 members and 100 developers, and has deployed systems in multiple locations around Illinois, across the United States, and internationally. In CUWiN's local community, there is a long waiting list to join the network, and the City of Urbana has allocated funding to build additional nodes that extend the CUWiN network in the downtown region. This may represent the first time that a municipality has actively deployed an open-source, open-architecture wireless solution, thus helping to further advance these technologies. As isolated wireless "clouds" grow within the City of Urbana, distinct areas will merge, creating a single trans-neighborhood, interconnected wireless community. This conglomeration of distinct wireless clouds creates a community Intranet capable of providing multi-media services to network users, as described in more detail in the next section. CUWiN has also formed numerous partnerships with university research laboratories to develop next-generation wireless technologies, and is now working with a diverse array of groups, from the Council for Scientific and Industrial Research in South Africa to the University of Illinois, Urbana-Champaign (UIUC) in its own back yard.
Community Intranets are as diverse as the constituencies they serve. In Urbana-Champaign, Illinois, the Chambana.net project is a proving ground for next-generation Intranet services and applications. Chambana.net is built and maintained by the Acorn Active Media Foundation (Acorn) and creates a community LAN that interconnects the local mesh wireless network with multi-media resources located at the Urbana-Champaign Independent Media Center (UCIMC). Chambana.net hosts scores of websites and web portals for local organizations, runs hundreds of e-mail lists serving tens of thousands of users, integrates an IRC server and file storage capabilities, streams audio and video, provides telephony capabilities, and is a platform for darknet participants and local IT developers. By directly integrating the local low-power FM radio station, WRFU 104.5 FM (Radio Free Urbana), the project allows such innovations as the streaming of live shows from the performance venue, simulcast through the radio station and over the Internet. By harnessing the system's Intranet capabilities, local and global participants can communicate via chat servers with audience members, sound engineers, and others. The project also allows Intranet participants to access video files they have been editing at the UCIMC's production center from a laptop at a local cafe. Soon, social networking and geolocation multi-media web portal functionality will allow media producers to upload their work to local wireless hotspots and comment on each other's work.

Figure 3: The Chambana.net Infrastructure and Community Intranet

The power of Intranets lies in their potential for supporting new forms of communication. As the functionality of mobile devices increases, Intranet usage will expand as well. Together, CUWiN and the Chambana.net project provide a natural laboratory for Intranet services and applications.
By integrating media production and information dissemination, they support a return to localism -- the potential to blend participatory media production with the reach of regional networking. Intranets represent a clear shift away from the broadcast model, enabling two-way flows of information, community and shared ownership of communications infrastructure, and more services and applications than existing telecommunications systems. In Urbana, CUWiN, Chambana.net, the Acorn Active Media Foundation, the UCIMC, WRFU 104.5 FM (see: wrfu.net), and a host of allied organizations are using these new technologies to advance democratic communications. In response to these emergent digital communications, vibrant new strands of communications theory have begun to coalesce, while traditional barriers between participatory action, policy and regulatory debate, and technological innovation are breaking down. Feedback loops among developers, implementers, policy reformers, and community organizers are placing pressure on decision-makers in Washington, DC to substantially reform our telecommunications policies to better match on-the-ground realities. Telecommunications reforms in 2008 are set to extend Intranet capabilities from the margins of "techno-geekery" into the mainstream. Current battles over access to the television white spaces (unused frequencies between existing broadcast channels; see Meinrath & Calabrese, 2007) have pitted public interest groups (who want to foster more democratic communications) and hi-tech firms (who want to sell next-generation wireless equipment) against the National Association of Broadcasters and its allies (who want to protect their current business models and prevent competition). The 700MHz spectrum auction that concluded in March 2008 created, for the first time, an "open platform" band that requires the license-holder (in this case Verizon) to open its network to all compatible devices.
With Google's Android phone and the continuing work of the Open Handset Alliance (a coalition of over 30 corporations working on next-generation open cellular hardware), community Intranets are poised to become an everyday part of normal life. And with the increasing functionality of next-generation hand-held computers, device-as-infrastructure networking is rapidly becoming an everyday reality. Meanwhile, contemporary researchers are increasingly drawing from current telecommunications and regulatory deliberations, familiarizing themselves with new and emergent technologies, and immersing themselves in the communities they study. A promising development in communications theory since the late 1990s is the emergence of the field of community informatics (CI), which is particularly well-suited to address issues raised by Intranet technologies and digital media-production practices. Howard Rheingold underscores the importance of formulating a new field as a potential starting point for new communications theories that is "based on actual findings by people who have tried to use online media in service of community, then reported on their results." He notes that "In the absence of such systematic observation and reporting by serious practitioners, public discussion will continue to oscillate between ideological extremes, in a never-ending battle of anecdotal evidence and theoretical rhetoric" (Keeble & Loader, 2002, p xx). Emphasizing the cross-disciplinary and emergent aspects of CI, Keeble and Loader (2002) define "community informatics" as: ...a multidisciplinary field for the investigation and development of the social and cultural factors shaping the development and diffusion of new ICTs and its effects upon community development, regeneration and sustainability. It thereby combines an interest in the potentially transforming qualities of new media with an analysis of the importance of community social relations for human interaction. 
(p. 3)

CI draws from a wide range of source material and expertise, based on the understanding that new and emergent technologies often fall outside traditional disciplinary boundaries. Therefore, expertise concerning their use, impacts, and diffusion is found through a participatory action research methodology (Meinrath, 2004). Similarly, since CI is, first and foremost, involved in the systematic study of contemporary technologies and social phenomena, it relies on the work of "community activists, webmasters and Internet enthusiasts, policy-makers, digital artists, science-fiction writers, media commentators [in addition to] a wide variety of academics including sociologists, computer scientists, communications theorists, information systems analysts, political scientists, psychologists, and many more" (Keeble & Loader, 2002, p. 3). Beyond media research, these new technologies also have a profound impact on the formation and nature of communities. One of the most overlooked facets in determining whether a technological innovation is empowering to its users is whether it is open and how that openness is operationalized.

Open vs. Closed Technologies and Network Architectures

At its heart, one of the most significant barriers to the potential of Intranets comes down to the differences between closed and open technologies. These notions often bring to mind issues related to open source and proprietary software (e.g., Linux versus Windows), but the distinction is more encompassing. Stolterman (2002) defines the important attributes thusly: A closed technology is one that does not allow the user to change anything after it has been designed and manufactured. The structure, functionality and appearance of the artifact are permanent...The technology is a relatively stable variable in social settings.... An open technology allows the user to continue changing the technology's specific characteristics, and to adjust, and or change its functionality.
When it comes to an open technology, changes in functionality pose a question not only of change in the way the existing functionality is used or understood but also of a real change in the artifact's internal manifestation. (Stolterman, 2002, p. 45) The Internet, generally speaking, was conceived as and remains an open and designable technology. One can "add, embed, contain or surround the artifact with other technology in a way that radically changes it" (Stolterman, 2002, p. 45). This aspect has contributed to the successes of so-called "Web 2.0" applications. Unfortunately, this openness is also under attack, as moves by Comcast to block BitTorrent communications, the blocking of pro-choice text messaging by Verizon, and the editing of a live Pearl Jam concert by AT&T all exemplify. The "gentlemen's agreements" that have been sold as "solutions" (promises that these corporations will not engage in these practices again) do nothing to prevent these sorts of anti-competitive, anti-free speech, and anti-democratic actions from being repeated at a later date. Thus, a growing number of public interest organizations worry that by abdicating their responsibility to prevent this sort of corporate malfeasance, the Federal Communications Commission and other regulatory agencies are all but guaranteeing that these sorts of behaviors will continue. In fact, without the landmark Carterfone decision to allow interconnection of "foreign attachments" to the AT&T telephone network, wireline communications may well have taken a different turn -- even preventing the emergence of the Internet in its present form. Prior to Carterfone, the FCC tariff governing interconnecting devices stated, "No equipment, apparatus, circuit or device not furnished by the telephone company shall be attached to or connected with the facilities furnished by the telephone company, whether physically, by induction or otherwise" (FCC 68-661).
The growth and successes of the Internet are predicated upon an open architecture (Cooper, 2004; Kahin & Keller, 1997) that facilitates the interconnection of a variety of different devices and technologies (Louis, 2000; Meinrath, 2005). While AT&T may have wanted end-to-end control over every part of its network, the FCC wisely concluded that the best interests of the general public would be served by ensuring that innovation could not be stifled by AT&T and that end-users could decide for themselves which devices and technologies they wanted to attach to the telephone network. In fact, the Internet stands as a remarkable reminder of the potential power (and problems) of network effects (Hiller & Cohen, 2002; Nuechterlein & Weiser, 2007) and the promise that this new "networked information economy" (Benkler, 2003) makes possible. As Benkler (2003) sums up: For over 150 years, new communications technologies have tended to concentrate and commercialize the production and exchange of information, while extending the geographic and social reach of information distribution networks... The Internet presents the possibility of a radical reversal of this long trend. It is the first modern communications medium that expands its reach by decentralizing the distribution function. Much of the physical capital that embeds the intelligence in the network is diffused and owned by end users. (p. 1250) While thus far true for much of the wireline communications infrastructure, this analysis breaks down within the wireless realm (Meinrath & Pickard, 2008). Wireless communications are a particularly interesting case study since the transport medium -- the public airwaves -- is not only publicly owned, but also, for data communications in particular, often unlicensed (Meinrath et al., 2005). Yet the wireless systems that an increasing number of Internet participants use to connect to the Internet remain closed technologies (Nuechterlein & Weiser, 2007).
The 2007 deal inked by Apple and AT&T is a classic example of the problems with this approach. Apple's iPhone was available for use only on AT&T's network, even though the hardware could have worked on other carriers' compatible networks. Likewise, AT&T allows only certain services and applications to run on the iPhone, even though the device could run many additional programs that would be useful for end users. Innovative iPhone owners and entrepreneurs have already found ways to unlock the device, and consumer groups have launched campaigns to get iPhone limitations removed (see, for example, freetheiphone.com), but the extra work and cost are borne by end-users as a result of anti-competitive business practices. By comparison, the superiority of open architectures is immediately apparent: An open architecture means fewer technological restrictions and, thus, the ability to explore more options. In an open architecture, there is no list of elements and protocols. Both are allowed to grow and change in response to changing needs and technology innovation...With an open architecture you are not making bets on a specific direction the technology will take in the future. You are not tied to a specific design or a particular vendor or consortium roadmap, so you can evaluate and select the best solution from a broad and energetic competitive field. Competition facilitates innovation and reduces equipment and implementation costs. (Waclawsky, 2004, p. 61) With data communications networks, the costs of closed architectures are particularly devastating because they impact almost every communications medium. As Tim Wu (2007) documents, wireless cellular carriers may be the worst purveyors of closed technologies: The wireless industry, over the last decade, has succeeded in bringing wireless telephony at competitive prices to the American public.
Yet at the same time, we also find the wireless carriers aggressively controlling product design and innovation in the equipment and application markets, to the detriment of consumers. In the wired world, their policies would, in some cases, be considered simply misguided, and in other cases be considered outrageous and perhaps illegal. (Wu, 2007, p. 1) Luckily, open architecture cellular devices are just around the corner. Projects like OpenMoko.org are working to develop "the world's first integrated open source mobile communications platform," and the Open Handset Alliance is committed to creating a cellular platform that supports innovation (though how open this hardware platform will be is still to be determined). In fact, the appeal of these open systems is so strong that both Verizon and AT&T have declared their intention to run open networks (though the details of their "openness" had yet to be released as of this writing). Even these approximations of openness are steps away from a fully proprietary infrastructure and towards a more open, interoperable, and innovation-supporting one. Within the data communications realm, most municipal and enterprise 802.11 (WiFi) and 802.16 (WiMAX) wireless networks today are entirely proprietary. For example, a Motorola 802.11 system will not interoperate directly with a Tropos system, which will not interoperate directly with a Meru system, which will not interoperate directly with a Meraki system, and so on. In fact, most consumers have no idea that the links they rely on to access Internet and Intranet services lock geographical areas into path dependencies with specific vendors (and their specific capabilities and limitations). Disconcertingly, in an era when interoperability of applications, services, and communications is assumed, and the communities that people participate in are geographically dispersed, the immediate and long-term ramifications of this geospatial lock-in remain almost entirely unexplored.
Closed technologies have the potential to constrain the positive potentials of Intranets if their widespread adoption stems more from an emphasis on corporate profits than on maximizing wireless networks' public benefits. Unlike the Internet, these wireless "last-mile" links can prevent users from extending the network (e.g., using bridges and routers), adding applications (e.g., VoIP, P2P, IRC, IM), interconnecting additional services (e.g., streaming servers, distributed file storage, local webhosting), or connecting directly with one another. The wireless medium is a de facto throwback to the era when AT&T controlled which devices could be connected to its network and, thus, which technologies would be developed. For unsuspecting communities and decision-makers, the long-term effects of wireless lock-in may be more detrimental than any policy previously witnessed in telecommunications history. Thus far, regulatory bodies and decision-makers remain unwilling to address these fundamental concerns, even though, as Nuechterlein and Weiser (2007) document, telecommunications history is rife with cautionary tales of regulatory inaction. Within this context, communications researchers, in particular, have an opportunity to both study and positively impact the future of U.S. telecommunications. By facilitating interventions in telecommunications policy, engaging with community media activists, and emphasizing how the democratic potentials of new technologies are dependent on sound public policies, the praxis of contemporary academics can help shift the trajectory of global communications and shake the foundations of current and future Intranet practices.

The Challenges of the Intranet Era

Intranet technologies hold much promise, but they also require meaningful changes in how we study communication and craft media policy as we implement these new telecommunications systems.
Communications departments are increasingly adding new media strands to study emergent technologies, and we are beginning to see more research addressing issues like privacy and surveillance; Intranet vs. Internet services and applications; social networking; wired and wireless network neutrality; technology convergence, empowerment, and independence; digital divides and inclusion efforts; digital rights management; and current and pending telecommunications proceedings (Lessig, 2001, 2006; Wu, 2007; Meinrath & Pickard, 2008). The emergent roles of Intranets enabled by the Internet, digital television, cell phones, PDAs, and other digital media are providing a powerful set of tools that challenge and shift social and economic behavior. While it is easy to slip into a perspective where we see these changes as a positive global phenomenon, the vast majority of humanity -- over five billion people as of 2008 -- do not have Internet access and are not directly participating in this "information revolution," and these divides have implications that have only just begun to be studied. Today, computer-mediated communication is, according to OECD, ITU, Pew, FCC, and other statistics, far more prevalent among the affluent and highly educated. Meanwhile, the rural-urban divide in Internet connectivity, contrary to rosy press reports, may actually be worsening in the United States. While we often look to emergent technologies and new media for their "potential of being used as a liberatory and empowering tool by many people and...for the disadvantaged and excluded to 'challenge entrenched positions and structures'" (Keeble & Loader, 2002, p. 5), these new communications media are actually little-understood and grossly under-utilized (Cooper, 2003, 2006; Pickard, 2008).
Media activists have played a pivotal role in deploying these technologies and in opening up and shaping policy debates regarding community Intranets and other forms of Internet-enabled communication (Pickard, 2006a, 2006b). As they expand the boundaries of what is possible with new technologies, they increasingly engage with policy debates, including spectrum ownership, network neutrality and open access to the Internet, privacy, surveillance, and intellectual property law. Likewise, the political and regulatory battles of the next few years will determine the trajectory of communications development for generations to come. Non-profit organizations like the New America Foundation, Free Press, Public Knowledge, and the Media Access Project are often on the front lines of debates that will affect the lives of all U.S. residents and reverberate around the globe. These groups are often battling telco incumbents with orders of magnitude more funding, hundreds of lobbyists, and enormous PR war chests. Top-down telecommunications reforms are critically important to the creation of a more just society, but given the systematic under-resourcing of public interest organizations, grassroots implementation of next-generation communications infrastructures is a useful and much-needed strategy for illustrating the potential benefits that national reforms could facilitate. In summary, community Intranets hold great promise, but the onus is on researchers and their allies to document the positive effects of these new technologies for civic engagement and democratized communications. Attentiveness to the issues discussed in this chapter will help scientists, activists, decision-makers, and practitioners better understand the intersections among the technologies, uses, and policies of Intranet-enabled communications. Communications research strives to shed light on changes in how we interact and communicate.
As observers and participants during this time of rapid change, we have a responsibility to the global community to develop sound public policy dedicated to social justice. Keeping up with the rapidity of change is certainly a challenge, but while many outcomes of these shifts in communications are still to be determined, the opportunity to help shape the technologies and policies of next-generation communications and positively impact the day-to-day lives of millions of people has never been greater.

References

Anderson, Janna (2005). Imagining the Internet. Lanham: Rowman & Littlefield Publishers, Inc.
Barney, Darin (2000). Prometheus Wired. Chicago: University of Chicago Press.
Benkler, Yochai (2003). Freedom in the COMMONS: Towards a Political Economy of Information. Duke Law Journal, 52, 1245.
Benkler, Yochai (2006). The Wealth of Networks. New Haven, CT: Yale University Press.
Biddle, P., England, P., Peinado, M., & Willman, B. (2002). The Darknet and the Future of Content Distribution. Available online at: http://msl1.mit.edu/ESD10/docs/darknet5.pdf
Bradner, S., claffy, kc, & Meinrath, S. (2007). The (un)Economic Internet? IEEE Internet Computing, May-June 2007, 53-58.
Brock, Gerald (2003). The Second Information Revolution. Cambridge: Harvard University Press.
Castells, Manuel (2001). The Internet Galaxy. Oxford: Oxford University Press.
Cooper, Mark (2003). Media Ownership and Democracy in the Digital Information Age. Washington, DC: Consumer Federation of America.
Cooper, Mark (2004). Open Architecture as Communications Policy. Stanford: Center for Internet and Society.
Cooper, Mark (2006). The Case against Media Consolidation. New York: Donald McGannon Center for Communications Research.
De Bernabé, Fernando Gil (2004). Connected Homes. London: Premium Publishing.
Federal Communications Commission (1968). FCC 68-661, "In the Matter of Use of the Carterfone Device in Message Toll Telephone Service." Accessed June 20, 2007. Available online at: http://www.uiowa.edu/~cyberlaw/FCCOps/1968/13F2-420.html
Hiller, Janine and Ronnie Cohen (2002). Internet Law & Policy. Englewood Cliffs: Prentice Hall.
Howley, Kevin (2005). Community Media. Cambridge: Cambridge University Press.
Kaczorowki, Willi (2004). Connected Government. London: Premium Publishing.
Kahin, Brian and James Keller (1997). Coordinating the Internet. Cambridge: MIT Press.
Keeble, Leigh and Brian Loader (2002). Community Informatics. New York: Routledge.
Lessig, Lawrence (2001). The Future of Ideas. New York: Random House.
Lessig, Lawrence (2006). Code: Version 2.0. New York: Basic Books.
Louis, P. (2000). Telecommunications Internetworking. New York: McGraw-Hill Professional.
Meinrath, Sascha D. (2004). "Reactions to Contemporary Activist-Scholars and the 'Midwestern Mystique': A case study for utilizing an evolving methodology in contentious contexts."
Meinrath, Sascha D. (2005). "Wirelessing the World: The Battle over (Community) Wireless Networks." In McChesney, Robert, et al., The Future of Media. New York: Seven Stories Press, pp. 219-242.
Meinrath, Sascha D., Bahl, V., Carter, K., Cooper, M., Scott, B., & Westervelt (2005). "'Openness' and the Public Airwaves." Plenary panel at the MobiHoc Conference, Urbana, IL, May 26, 2005. In Proceedings of the 6th ACM International Symposium on Mobile Ad-hoc Networking and Computing. New York: ACM.
Meinrath, Sascha D. & Calabrese, Michael (2007). Unlicensed Broadband Device Technologies: "White Space Device" Operations on the TV Band and the Myth of Harmful Interference. New America Foundation.
Meinrath, Sascha D. & claffy, kc (2007, Sept.). The COMMONS Initiative: Cooperative Measurement and Modeling of Open Networked Systems. Paper presented at the Telecommunications Policy and Research Conference, Washington, DC.
Meinrath, Sascha D., & Pickard, V. W. (2008). The New Network Neutrality: Criteria for Internet Freedom. International Journal of Communication Law and Policy, 12, 225-243.
Nuechterlein, Jonathan E. and Philip Weiser (2007). Digital Crossroads. Cambridge: The MIT Press.
Pickard, Victor W. (2006a). Assessing the Radical Democracy of Indymedia: Discursive, Technical and Institutional Constructions. Critical Studies in Media Communication, 23(1), 19-38.
Pickard, Victor W. (2006b). United yet Autonomous: Indymedia and the Struggle to Sustain a Radical Democratic Network. Media Culture & Society, 28(3), 315-336.
Pickard, Victor W. (2008). Cooptation and Cooperation: Institutional Exemplars of Democratic Internet Technology. New Media and Society, 10(4), 625-645.
Poster, Mark (2001). What's the Matter with the Internet? Minneapolis: University of Minnesota Press.
Preston, Paschal (2001). Reshaping Communications. Thousand Oaks: Sage.
Stolterman, E. (2002). "Creating community in conspiracy with the enemy." In Keeble, Leigh and Brian Loader, Community Informatics. New York: Routledge.
Sunstein, Cass (2001). Republic.com. Princeton: Princeton University Press.
Waclawsky, John C. (2004). Closed Systems, Closed Architectures, & Closed Minds. Business Communications Review.
Waclawsky, John C. (2005). Where do System Standards go from Here? Business Communications Review.
Wu, Tim (2007). Wireless Net Neutrality: Cellular Carterfone and Consumer Choice in Mobile Broadband. Working Paper #17. New America Foundation, Wireless Futures Program.
Young, David (2003). Personal Communication. December 3, 2003.

A version of this paper will be published in Kevin Howley (Ed.), Globalization and Communicative Democracy: Community Media in the 21st Century, London: Sage Publications.
This activity will have you create a brand new actor class and add an object of that class using the Greenfoot IDE. In this activity, you will:
- Create a new class - Kangaroo - and cut and paste code to give it content.
- Compile your new class.
- Create a new object of this class and add it to the wombatWorld.
- Modify some lines of the Kangaroo class, compile, add it to wombatWorld, and see how the code changes affect behavior.
Skills you will learn include:
- Being able to use the Greenfoot IDE to create a new Actor class.
- Learning more about classes, objects, and methods.
- Learning some basic programming concepts and constructs.
- Learning how to use Sun's online Java programming manuals.
Steps:
- Start up Greenfoot.
- Make sure you have opened the Wombats scenario. If not, click Scenario->Open… and select Wombats.
- Access the context menu (right-click on Windows, ctrl-click on Mac) on the Actor class located on the right side of the IDE. Select New Subclass.... Name your new class Kangaroo, select an image from the category animals: kangaroo, and click Ok.
- Right-click on Kangaroo. You'll notice that there is no new Kangaroo(). That's because we need to add code to this class before we can create objects. Instead, select Open editor. To keep this simple, select all the code (click at the top and drag the mouse to the bottom of the existing code), then click the Cut button. It will be easier to just replace all this code for this activity.
- Double-click on class Wombat. Select all the source code (similar to what we did for Kangaroo), but this time click the Copy button.
- Go back to the editor for Kangaroo. Click Paste. We now have a copy of all the Wombat code in Kangaroo.
- But this is a Kangaroo, NOT a wombat. From the editor for Kangaroo, select the menu Tools -> Replace… The Find dialog will pop up. In the text area labeled Find, type Wombat. In the text area labeled Replace, type Kangaroo. Click Replace All to replace all occurrences of Wombat with Kangaroo.
- Compile, add wombats, kangaroos, and leaves, and run the scenario. We have created our first new class!
- At this point, please refer to the document Let's Make a Kangaroo - Digging Deeper. The Digging Deeper document contains some code that you will need to copy and paste into your program. Open the Digging Deeper page in another tab or window, and continue with the following steps.
- To class Kangaroo, add the method turnRandom(). Double-click on class Kangaroo. From the Digging Deeper document, copy the code for the turnRandom() method (the code will be displayed in a light blue box). At the bottom of the Kangaroo class, just before the final closing '}', paste the turnRandom() code.
- In the method act(), change the call from turnLeft() to turnRandom(). This will cause your Kangaroos to turn in a random direction, while your Wombats will continue to always turn left.
- Compile, place objects, and run the scenario. Note the differences in behavior between Wombat and Kangaroo.
- Read through the Digging Deeper document for details about the code changes you just made.
- Please read the following instructions for preparing your work prior to submission for this activity.
- Select Scenario -> Export. Choose Webpage (from Publish, Webpage, Application) and select a location to save the scenario as an applet (a jar file and a .html file).
- Submit the exported jar file for this activity.
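Since act() simply calls turnRandom() instead of turnLeft(), the whole behavior change comes down to choosing a random number of left turns. The official code is in the Digging Deeper document; the following is only a plain-Java sketch of the idea. The 0-3 direction encoding, the field names, and getDirection() are illustrative assumptions here, not Greenfoot's actual Actor API:

```java
import java.util.Random;

public class Kangaroo {
    // Direction encoded as 0..3 (say east, south, west, north).
    // This encoding is an assumption for the sketch, not Greenfoot's API.
    private int direction = 0;
    private static final Random random = new Random();

    // Turn 90 degrees to the left: one step through the four directions.
    public void turnLeft() {
        direction = (direction + 1) % 4;
    }

    // Turn left a random number of times (0 to 3), so the kangaroo
    // ends up facing a randomly chosen direction.
    public void turnRandom() {
        int turns = random.nextInt(4); // 0, 1, 2, or 3
        for (int i = 0; i < turns; i++) {
            turnLeft();
        }
    }

    public int getDirection() {
        return direction;
    }

    public static void main(String[] args) {
        Kangaroo k = new Kangaroo();
        k.turnRandom();
        System.out.println("Facing direction " + k.getDirection());
    }
}
```

In the real scenario, turnRandom() would use Greenfoot's random-number helper rather than java.util.Random, but the control flow is the same.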
(NaturalNews) Outbreaks of asthma and allergies have increased considerably since the early 1980s. Asthma statistics show a jump of 74% for children between the ages of 5-14 years and 160% for children under four years old, according to the National Institutes of Health. Additionally, one of every four children in the U.S. also suffers from some type of allergy. With annual costs in the billions, researchers offer a glimpse of hope for a natural cure. Earlier this month, published findings in Pediatric Allergy and Immunology from a seven-year study of 460 Spanish children concluded that a definitive link exists between symptom-free children and a diet rich in "fruity vegetables" and fish. Fruity vegetables are those that grow from a blossom on a plant grown from seed; such veggies include tomatoes, zucchini, eggplants, green beans, cucumbers and butternut squash, among others. Scientists explain that the protective effects of this type of diet were irrefutable, and were very specific to this kind of vegetable. Researchers tested different types of foods such as dairy, meats and vegetables, but only fruity vegetables and fish were beneficial to these conditions. Although this is not the first study to link a diet rich in fish and vegetables to health improvement, the findings here are quite powerful, as the researchers followed the children from the womb until age six, taking the mother's dietary habits into consideration among other factors. The incidence of asthma and allergies was reduced significantly in children consuming more than 2 oz of fish and 1½ oz of fruity vegetables a day. Anita Khalek resides in North Carolina. As a total wellness advocate, she is a passionate believer in the healing power of Nature and is inspired by local, organic and fresh foods to nurture her family and friends. Anita is currently working on several projects, including a cookbook. Visit her blog for fresh, healthy recipes at myFreshLevant.com.
There are many reasons for man to be grateful for fire. The two biggest are that it provides us with a means of cooking and a source of heat. Via gas, fire still provides us with heat and cooking, under far greater control than when early humans first harnessed it tens of thousands of years ago. Fire is something for us to be grateful for in many ways, but it can also be a scary thing that causes tremendous damage each year. We can still have a sense of gratefulness towards those who work to fight fires. As technology advances, the fight to put out unwanted fires seems to keep getting better. Accidental fires can happen for many reasons. Some happen through nature, from lightning and other sources, while others come from man-made causes or arson. Most of us take much of what we have in our daily lives for granted. Fire is something we don't usually consciously think about; it's always present in our lives. We use fire to heat our homes and, with a gas stove, to cook our food. When we roll our garbage cans out to the curb once or twice per week, do we think about how the garbage is disposed of? It's picked up by a garbage truck and eventually brought to a landfill or burned in an incinerator. Fire can also be used in many other ways: it provides light and energy, and blacksmithing and glass making both require fire. Without fire, most of the things we take for granted would come to a halt. Although most of us don't use it directly on a daily basis, fire provides us with most of the things we now consider necessities to live, thrive and survive.
Universal XSS (UXSS) is a particular type of Cross-Site Scripting that is triggered by exploiting flaws inside browsers, instead of leveraging vulnerabilities in insecure web sites. One of these UXSS flaws was disclosed earlier today on the Russian forum rdot. The flaw takes advantage of the Data URI Scheme to execute script using the MIME type 'text/html', which makes the browser render it as a webpage. So, how would an attacker exploit this fancy new bug? The first trick here is to use the Data URI Scheme in combination with another (less dangerous) flaw called "Open Redirection", which happens when an attacker can use a webpage to redirect the user to any URI of his choice. So if you don't have one of these "Open Redirection" bugs on your website, you're safe, right? Not so fast. There are websites made exclusively for this purpose: URI shorteners like bit.ly and tinyurl.com. Here's a proof-of-concept link on tinyurl: http://tinyurl.com/operauxss. If you open this link in Opera, you will find yourself looking at an alert box saying "tinyurl.com". Hang on, there's more! The original author of the forum post, M_script, pointed out that you could take this one step further. This is where the clever part of this vulnerability comes into play. If you embed a script in the payload that calls the method location.reload() in Opera, it will update the current domain to the original domain where the link was clicked. This means that an attacker may execute script not only from the domain containing the open redirect, but also from all domains allowing links to other domains. Yes, you read that right. Here's a proof-of-concept link with the second stage of this vulnerability: http://tinyurl.com/operauxssstep2. Other browsers block redirects to the Data URI Scheme or change the domain where the script is executed from, avoiding the XSS issue. What can you do to protect yourself against this bug?
If you don’t want to change browser, you can head over to Tools->Preferences->Advanced->Network and uncheck the checkbox labeled “Enable automatic redirection”. Update: Opera has now released a patch for this problem. Update your Opera browser to version 12.10. By: Mathias Karlsson
Biological Warfare (cont.) Tularemia is an infection that can strike humans and animals. It is caused by the bacterium Francisella tularensis. The disease causes fever, localized skin or mucous membrane ulcerations, regional swelling of lymph glands, and occasionally pneumonia. G.W. McCoy discovered the disease in Tulare County, Calif., in 1911. The first confirmed case of human disease was reported in 1914. Edward Francis, who described transmission by deer flies via infected blood, coined the term tularemia in 1921. It has been considered an important biological warfare agent because it can infect many people if dispersed by the aerosol route. Rabbits and ticks most commonly spread tularemia in North America. In other areas of the world, tularemia is transmitted by water rats and other aquatic animals. The bacteria are usually introduced into the victim through breaks in the skin or through the mucous membranes of the eye, respiratory tract, or GI tract. As few as 10 virulent organisms introduced through the skin by a bite, or 10-50 organisms breathed into the lungs, can cause infection in humans. Hunters may contract this disease by trapping and skinning rabbits in some parts of the country. Signs and Symptoms Tularemia has six major forms: Victims with the most common form, the ulceroglandular type, typically have a single papulo-ulcerative lesion with a central scar (often at the site of a tick bite) and associated tender regional lymphadenopathy (swollen lymph nodes). A sore up to 1 inch across may appear on the skin in a majority of people and is the most common sign of tularemia. If the bite associated with infection was from an animal carrying the disease, the sore is usually on the upper part of a person's body, such as on the arm. If the infection came from an insect bite, the sore might appear on the lower part of the body, such as on the leg. Enlarged lymph nodes are seen in a majority of victims and may be the initial or the only sign of infection.
Although enlarged lymph nodes usually occur as single lesions, they may appear in groups. Enlarged lymph nodes may come and go and last for as long as three years. When swollen, they may be confused with buboes of bubonic plague. The glandular form of the disease has tender regional lymphadenopathy but no identifiable skin lesion. Oculoglandular tularemia presents as conjunctivitis (the whites of the eyes are red and inflamed), increased tearing, photophobia, and tender enlarged lymph nodes in the head and neck region. Pharyngeal tularemia presents with a sore throat, fever, and swelling in the neck. The most serious forms of tularemia are typhoidal and pneumonic disease. Patients with typhoidal disease can have fever, chills, anorexia, abdominal pain, diarrhea, headache, myalgias, sore throat, and cough. Patients with pneumonic tularemia have mostly pulmonary findings. Many patients with pulmonary findings have underlying typhoidal tularemia. Tularemia can be diagnosed by growing the bacteria in the laboratory from samples taken of blood, ulcers, sputum, and other body fluids. Serological tests (done to detect antibodies against tularemia), direct fluorescent antibody (DFA) staining of clinical specimens, and polymerase chain reaction (PCR) tests on clinical specimens are available from specialized labs. Victims with tularemia who do not receive appropriate antibiotics may have a prolonged illness with weakness and weight loss. Treated properly, very few people with tularemia die. For patients with severe disease, a 14-day course of streptomycin or gentamicin is recommended. For patients with mild to moderate disease, oral ciprofloxacin or doxycycline is recommended. In children with mild to moderate disease, gentamicin is often recommended. However, despite the concerns over side effects in children, some clinicians may recommend oral treatment with ciprofloxacin or doxycycline.
Although laboratory-related infections with this organism are common, human-to-human spread is unusual. Victims do not need to be isolated from others. There is no recommendation for prophylactic treatment of people going into areas where tularemia is more common. In fact, in the case of low-risk exposure, observation without antibiotics is recommended. No vaccine against tularemia is currently available; new vaccines are under development. In the event of a biological attack using Francisella tularensis, the recommendation is to treat exposed people who are not yet ill with 14 days of oral doxycycline or ciprofloxacin. Medically Reviewed by a Doctor on 6/30/2016
Around 400 new cases of oral cancer are diagnosed each year in Ireland, and two people die each week from the disease, yet awareness is very low. Oral cancer is the sixth most common cancer among men, and it kills more people than cervical cancer and malignant melanoma. Patients on the Medical card and PRSI schemes are entitled to a free check-up once a year and should avail of it. Your dental examination includes a free oral cancer screening to detect the early signs of this devastating disease. What are the signs and symptoms of oral cancer? - The most significant sign to look for is an ulcer or sore in the mouth that is not healing. [Images: oral cancer of the lip; oral cancer of the tongue] Also look out for: - Unexplained bleeding in the mouth - Unexplained numbness, loss of feeling, or pain/tenderness in any area of the face, mouth, or neck - Persistent sores on the face, neck, or mouth that bleed easily and do not heal within two weeks - A soreness or feeling that something is caught in the back of the throat - Difficulty chewing, swallowing, speaking, or moving the jaw or tongue - Hoarseness, chronic sore throat, or changes in the voice - Ear pain - A change in the way your teeth or dentures fit together – a change in your "bite" - Dramatic weight loss If you notice any of these changes, contact your dentist immediately for a professional examination. I recently noticed a whitish patch in my mouth. Is this oral cancer? Probably not, but this whitish patch may be leukoplakia. Leukoplakia, a condition caused by excess cell growth, can form on the cheeks, gums, or tongue. Leukoplakia is commonly seen in tobacco users, in people with ill-fitting dentures, and in those who have a habit of chewing on their cheek. This condition can progress to cancer. Red patches in the mouth (called erythroplakia) are less common than leukoplakia but have an even greater potential for being cancerous.
Any white or red lesion in your mouth should be evaluated by your dentist. Who gets oral cancer, and what are the risk factors? Everybody has some risk, but people who both smoke and drink alcohol have a higher risk of cancer than those who only drink or only use tobacco products. The risk of developing oral cavity and pharynx cancers increases with both the amount and the length of time tobacco and alcohol products are used. Men face twice the risk of developing oral cancer as women, and men over age 50 face the greatest risk. Risk factors for the development of oral cancer include: - Cigarette, cigar, or pipe smoking — Smokers are six times more likely than non-smokers to develop oral cancers. - Use of smokeless tobacco products (for example, dip, snuff, or chewing tobacco) — Use of these products increases the risk of cancers of the cheek, gums, and lining of the lips. - Excessive consumption of alcohol — Oral cancers are about six times more common in drinkers than in non-drinkers. - Family history of cancer - Excessive exposure to the sun — especially at a young age What can I do to prevent oral cancer? You can take an active role in preventing oral cancer or detecting it early, should it occur. - Conduct a self-exam regularly. Using a bright light and a mirror, look at and feel your lips and the front of your gums. Tilt your head back and look at and feel the roof of your mouth. Pull your cheeks out to view the inside of your mouth, the lining of your cheeks, and the back gums. Pull out your tongue and look at all surfaces. Examine the floor of your mouth. Look at the back of your throat. Feel for lumps or enlarged lymph nodes on both sides of your neck and under your lower jaw. Call your dental clinic if you notice any changes in the appearance of your mouth or any of the signs and symptoms mentioned above. - See your dentist on a regular schedule.
Even though you might be conducting frequent self exams, sometimes dangerous spots or sores in the mouth can be very tiny and difficult to see on your own. We recommend oral cancer screening exams every three years for people over age 20 and annually for those over age 40. During your next dental appointment, ask your dentist to perform an oral exam. Early detection can improve the chance of successful treatment. - Don’t smoke or use any tobacco products and drink alcohol in moderation. (Refrain from binge drinking.) - Eat a well balanced diet. - Limit your exposure to the sun. Repeated exposure increases the risk of cancer on the lip, especially the lower lip. When in the sun, use UV-A/B-blocking sun protective lotions on your skin as well as your lips. It is important to note that more than 25% of all oral cancers occur in people who do not smoke and who only drink alcohol occasionally.
By J. Matthew Roney The U.N. Food and Agriculture Organization (FAO) projects that the world's wild fish harvest will fall to 90 million tons in 2012, down 2 percent from 2011. This is close to 4 percent below the all-time peak haul of nearly 94 million tons in 1996. The wild fish catch per person has dropped even more dramatically, from 17 kilograms (37.5 pounds) per person at its height in 1988 to 13 kilograms in 2012—a 37-year low. While wild fish harvests have flattened out during this time, the output from fish farming has soared from 24 million tons in the mid-1990s to a projected 67 million tons in 2012. Over the last several decades, as demand for fish and shellfish for food, feed, and other products rose dramatically, fishing operations have used increasingly sophisticated technologies—such as on-vessel refrigeration and processing facilities, spotter planes, and GPS satellites. Industrial fishing fleets initially targeted the northern hemisphere's coastal fish stocks; then, as stocks were depleted, they expanded progressively southward, on average close to one degree of latitude annually since 1950. The fastest expansion was during the 1980s and early 1990s. Thereafter, the only frontiers remaining were the high seas, the hard-to-reach waters near Antarctica and in the Arctic, and the depths of the oceans. The escalating pursuit of fish—now with gross revenue exceeding $80 billion per year—has had heavy ecological consequences, including the alteration of marine food webs via a massive reduction in the populations of larger, longer-lived predatory fish such as tunas, cods, and marlins. Unselective fishing gear, including longlines and bottom-scraping trawls, kills large numbers of non-target animals like sea turtles, sharks, and corals. As of 2009, some 57 percent of the oceanic fish stocks evaluated by FAO are "fully exploited," with harvest levels at or near what fisheries scientists call maximum sustainable yield (MSY).
If we think of a fish stock as a savings account, fishing at MSY is theoretically similar to withdrawing only the accrued interest, avoiding dipping into the principal. Some 30 percent of stocks are "overexploited"—they have been fished beyond MSY and require strong management intervention in order to rebuild. The share of stocks in this category has tripled since the mid-1970s. A well-known example of this is the Newfoundland cod fishery that collapsed in the early 1990s and has yet to recover. This leaves just 13 percent of oceanic fish stocks in the "non-fully exploited" category, down from 40 percent in 1974. Unfortunately, these remaining stocks tend to have very limited potential for safely increasing the catch. These FAO figures describe 395 fisheries that account for some 70 percent of the global catch. Included are the small minority that have undergone the time-consuming and expensive process of formal scientific stock assessment, with the remainder being "unassessed" fisheries. There are thousands more unassessed fisheries, however, that are absent from the FAO analysis. In a 2012 Science article, Christopher Costello and colleagues published the first attempt to characterize all of the world's unassessed fisheries. The authors report that 64 percent of them were overexploited as of 2009. The top 10 fished species represent roughly one quarter of the world catch. Nearly all of the stocks of these species are considered fully exploited (most of these fish have more than one geographically distinct stock), including both of the major stocks of Peruvian anchovy, the world's leading wild-caught fish. Stocks that are overexploited and in need of rebuilding include largehead hairtail—a ribbon-like predator caught mainly by Chinese ships—in its main fishing grounds in the Northwest Pacific. (See data.) Despite the unsustainable nature of current harvest levels, countries continue to subsidize fishing fleets in ways that encourage even higher catches.
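The savings-account analogy can be made concrete with the classic Schaefer surplus-production model, in which a stock's annual growth is logistic and peaks at half the carrying capacity; fishing at MSY harvests exactly that peak growth, leaving the "principal" intact. The sketch below uses purely illustrative parameter values, not data from any real fishery:

```java
public class SchaeferMsy {
    // Surplus production (annual growth, in tons) of a stock at biomass b,
    // under the logistic (Schaefer) model with intrinsic growth rate r
    // and carrying capacity k.
    public static double surplusProduction(double r, double k, double b) {
        return r * b * (1.0 - b / k);
    }

    // Maximum sustainable yield: production peaks at b = k/2,
    // giving MSY = r * k / 4. Harvesting at this rate removes only
    // the "interest"; harvesting more eats into the "principal".
    public static double msy(double r, double k) {
        return r * k / 4.0;
    }

    public static void main(String[] args) {
        double r = 0.5;        // illustrative growth rate per year
        double k = 1000000.0;  // illustrative carrying capacity, tons
        System.out.printf("MSY = %.0f tons per year at biomass %.0f tons%n",
                msy(r, k), k / 2.0);
    }
}
```

The model also shows why an overexploited stock rebuilds slowly: once biomass is pushed well below k/2, annual production falls, so even a sharply reduced catch leaves little surplus for recovery.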
Governments around the world spend an estimated $16 billion annually on increasing fleet size and fish-catching ability, including $4 billion for fuel subsidies. Industrial countries spend some $10 billion of that total. More than $2 billion is spent by China, whose 15-million-ton catch is nearly triple that of the next closest country, Indonesia. The world’s fisheries reveal a classic case of diminishing returns. In a 2012 paper published in the journal Fish and Fisheries, scientists found that overall engine power for the world fishing fleet has grown 10-fold since 1950, while the total catch has grown just fivefold. (In Asia, home to 3.2 million of the estimated 4.4 million fishing vessels worldwide, the growth was 25-fold.) In other words, ships now have to use twice as much energy to catch a ton of fish as they did 60 years ago. Seafood plays a vital role in world food security. Roughly 3 billion people get about 20 percent of their animal protein from fishery products. It is perhaps unsurprising that fish account for half or more of animal protein consumption in small island developing countries, but the same is true for some much more populous countries, such as Bangladesh and Indonesia (home to a combined 400 million people). With the wild catch no longer increasing, aquaculture has emerged as the world’s fastest-growing animal protein source, soon to overtake beef in total tonnage. China, which has raised carp for millennia, produced nearly 37 million tons of farmed fish in 2010, which was 60 percent of the world total. Six of the world’s top 10 farmed fish are carp species, either filter feeders or those fed a largely plant-based diet. But a commonly cited drawback of aquaculture is that wild-caught forage fish—smaller plankton consumers that support the higher levels of the food chain—are often turned into fishmeal and oil used to feed farmed predatory fish, such as salmon and shrimp. 
In fact, a caught Peruvian anchovy's main fate is to be fed to farmed fish, pigs, and chickens. And while the share of the wild catch fed to farmed fish has declined since the mid-1990s, scientists recently have called for a reduction in fishing pressure on forage fish by as much as half, to well below MSY. They note that if poor environmental conditions lead to poor spawning success in a given year, a much lower catch would provide a buffer against collapse and against ripples up the food chain. Recent developments in the Peruvian anchovy fishery help illustrate the vulnerability of forage fish: Warm Pacific Ocean waters associated with a mild El Niño were implicated in a 40 percent drop in the fish's population between 2011 and 2012. In response, Peru, which hauls in over 80 percent of the total harvest, cut its allowed catch for the upcoming season by two thirds to its lowest level in 25 years. The country's top fisheries regulator admitted, "Technically, we should have said the quota is zero." There is hope for rebuilding the world's fisheries. In several well-studied regional systems, multiple fisheries have bounced back from collapse after adopting a combination of management measures. These include restricting gear types, lowering the total allowable catch, dividing shares of the catch among fishers, and designating marine protected areas (MPAs). Around coral reefs in Kenya, for example, communities removed beach seine nets and co-managed a network of "no-take" zones. The result was an increase in total fish biomass, size per fish, and fishers' incomes. Worldwide, 8.1 million square kilometers of MPAs have been designated—an area larger than Australia but covering only about 2 percent of the oceans. Well-designed and managed MPAs offering varying levels of protection provide multiple ecological and social benefits, but marine reserves where fishing is excluded entirely are most effective.
A 2010 study of no-take reserves in Australia’s Great Barrier Reef showed up to a doubling of fish abundance and size within them, as well as increased fish populations outside reserve boundaries. In June 2012, Australia announced that it would increase its number of reserves of all kinds from 27 to 60, protecting one third of its waters. At an 1883 international fisheries exhibition, Thomas Huxley, president of the British Royal Society, said, “Probably all the great sea fisheries are inexhaustible; that is to say that nothing we do seriously affects the number of the fish.” This view prevailed well into the twentieth century. Faced now for several decades with evidence to the contrary, the world has made some progress. But securing a future for world fisheries, especially in a time of warming and acidifying seas, means moving much more quickly to put scientific advice into practice. # # # Data and additional resources at www.earth-policy.org.
Braeburn is one of the most important commercial apple varieties. It originated in New Zealand in the 1950s, and by the last decades of the 20th century had been planted in all the major warm apple-growing regions of the world. Braeburn accounts for 40% of the entire apple production of New Zealand. Even in conservative Washington state, the most important apple-producing area of the USA, where Red Delicious and Golden Delicious have always held sway, Braeburn is now in the top 5 varieties produced. The reasons for this success are not difficult to pinpoint. Braeburn has all the necessary criteria for large-scale production: it is fairly easy to grow, produces heavily and early in the life of the tree, it stores well, and withstands the handling demands of international supply chains. What marks it out from the competition is flavour. Braeburn was the first modern apple variety in large-scale production where the flavour was genuinely on a par with the older classic apple varieties. Braeburn's depth of flavour makes its main competition - Red Delicious and Golden Delicious - seem one-dimensional in comparison. At a time when consumers were starting to look for something less bland in their weekly shopping, Braeburn was the right apple at the right time. The commercial success of Braeburn has opened the way for the development of many new apple varieties where flavour is now one of the main selection criteria. Braeburn was one of the first "bi-coloured" varieties, a characteristic now regarded as essential for sales success. In comparison the first wave of supermarket apple varieties were either bright red (Red Delicious) or shades of solid green (Golden Delicious and Granny Smith). This combination of modern colouring and flavour means that Braeburn was effectively the first of the new-wave of modern apple varieties. 
The first Braeburn tree was discovered growing in New Zealand in the 1950s, and is named after Braeburn Orchards, where it was first grown commercially. It is generally thought to be a seedling of a variety called Lady Hamilton. The other parent is not known, but is popularly believed to be Granny Smith - quite possible given the time and location of its discovery, but there seems to be no scientific evidence to confirm this theory. When conditions are right there is no doubt that Braeburn is a first-class dessert apple. It easily outstrips its late 20th century peer group (Golden Delicious, Granny Smith, Red Delicious) with a richness and complexity of flavour that they cannot match. In fact, in many ways Braeburn is now the benchmark apple variety against which all other commercial varieties should be ranked. It is crisp, without being hard, and very juicy. It snaps cleanly to the bite, and there is an immediate rush of strong apple flavours. The overall flavour is sharp and refreshing but with a good balance of sweetness - and never sugary. There is occasionally a hint of pear-drops to the flavour of a new-season Braeburn (a characteristic which is more prominent in its offspring Jazz). Braeburn is at its best when cooled slightly below room temperature, and if you get a good one it really reminds you why you like eating apples. If there is a downside to Braeburn, it is probably poor quality control. Braeburn is grown throughout the warm apple-growing regions of the world, and it also keeps well in storage. As a result there can be quite a variation in the quality and flavour of Braeburn apples reaching the consumer from different countries and at different times of the year. Since Braeburn is too old to be trade-marked, there is little control over the "brand" - quite a contrast with the rigorously controlled production of Pink Lady, for example. Of the southern hemisphere producers, we think Braeburns from Chile are often good - at their best in June.
Braeburn is also widely grown in Europe, and France seems to have the best climate for producing good ones - try them in November. A number of sports of the original Braeburn have been developed, including: Hidala, Mahana Red, Royal Braeburn, Hillwell, and Southern Rose. Braeburn's other weakness is that whilst it is not difficult to grow, it is difficult to grow in an organic regime - although this is also true of most of its competitors. Apple varieties which have been developed for disease resistance and therefore more amenable to organic production such as Topaz - in many ways quite similar to Braeburn - have not achieved the same commercial success. Braeburn stores very well, and apples for cold store are generally picked whilst still slightly immature. Whilst some apples improve in store, Braeburn is arguably at its best soon after picking. Some growers and supermarkets offer premium tree-ripened Braeburns from time to time and these are worth trying. They are likely to have more red and less green colouring than conventionally stored apples. Braeburn is grown commercially in the southern UK, but it really needs a warmer climate and longer growing-season than is usually possible here. According to UK government DEFRA statistics, in July 1994 there were about 194 hectares of Braeburn orchards in the UK - compared with 669 hectares for Gala and more than 3,000 hectares for Cox. Even early varieties with little shelf-life such as Discovery (300 hectares) and Worcester Pearmain (213 hectares) are grown more extensively than Braeburn. Whilst UK supermarkets are under some pressure to source apples from within the UK, it is perhaps questionable whether growing varieties like Braeburn, which are not really suited to the UK climate, is the best solution. 
However, to partly contradict this view, another view is that the marginal UK climate can actually produce better flavour in an apple compared to ones grown in more temperate European climates (notably France or Italy). On balance we think the main problem with UK-grown Braeburn is not so much the lack of sunlight, but the shorter growing season. Braeburn is a relatively easy variety for the backyard orchardist. It likes a warm but not hot climate. It can be grown successfully in the southern UK, and most parts of the USA. In the 21st century Braeburn faces competition as supermarkets start to offer a much wider choice of apple varieties - not least from one of its own offspring, Jazz (a cross between Braeburn with pollen from Gala). Compared to the last decades of the 20th century when just a few apple varieties dominated world production, the market is now much more diverse. However when properly grown and marketed Braeburn is such a good apple variety that it is likely to remain one of the leading varieties for many years to come.
Apparently I care about how our ancestors spent their time after dusk. Wow, that sounded dirty. I didn’t mean it to be. Well, I suppose I did because did you know that they used to hang cow dung at the foot of their beds to deter fleas? That everyone washed only their feet before bed and everything was disgusting? That clean linen (and remember people gave birth and died on beds) was almost unheard of in the lower classes? That spatial constraints forced entire families to sleep in one or two beds, that any visiting parties became a bedfellow and ALSO shared a bed? It goes on and on. People had a better sense of using their other senses to wade through the dark: especially merchants returning home by horse after a long market day. In the same vein, however, and so unfortunately, our forebears were prone to any and all kinds of accidents: falling in ditches, losing their footing and ending up in a well. Thank the lord for flashlights. Night was a time of superstition, of theft and murder, of a world unhindered by social obligations for the most part (those crept in among the upper classes during the 18th century, and the Industrial Revolution changed it all) where sleep and rest were a God-given gift after the toil of the day. The book speaks to everything about night. Everything that kept our forebears ticking after the clock settled beyond the dusk hours. The most interesting bit of the extensive research? The fact that we used to sleep in bi-modal and segmented patterns.
Get this, blog readers, research (which cites great works of literature, primary sources and numerous firsthand accounts, from Barnaby Rudge to Jane Eyre and Chaucer) proves that our ancestors went to bed at 9 or so and woke up in the MIDDLE OF THE NIGHT for a few hours, thus concluding what they called “First Sleep.” While awake, they would smoke tobacco, talk to their bedfellows, engage in “other” night-time activities (wink-wink, nudge), even visit neighbours, say their prayers (special matins created for early morning that fit so tightly into the research here) before settling into SECOND SLEEP. With the Industrial Revolution, gas-lamps and electricity, with the rise of coffee houses and the ability for those who were respectable to embark on social escapades outside of the region of the local pub or tavern (where hooligans and ladies of the night reigned supreme), people began going to bed later and sleeping through the night. No First Sleep and Second Sleep with a couple of interesting hours of waking interval betwixt. No. Just sleeping straight through. We’ve changed a lot, and this book gave me the best sort of glance into the secrets of yesteryear. There is fascinating research in here; but a lot of it and a lot of citations. So if you are willing to spend some time meandering through extensive musings on night in centuries of yore, then this is your book.
The occurrence of multiple phenotypes within a sex of a single species has long puzzled behavioral ecologists. Male red-backed fairy-wrens Malurus melanocephalus exhibit 3 behaviorally distinct types in their first breeding season: breed in bright nuptial plumage, breed in dull plumage, or remain as an unpaired auxiliary (helper) with dull plumage. The retention of dull plumage by auxiliaries and dull breeders is an example of delayed plumage maturation (DPM), a widespread phenomenon in birds whose costs and benefits are not well understood. At a mechanistic level, DPM might allow dull males either to deceptively mimic females (female mimicry hypothesis) or to honestly signal their subordinate status (status-signaling hypothesis). DPM might function via either mechanism to provide ultimate benefits relative to developing nuptial plumage by increasing reproductive success, survival, or both. In this study, we tested the hypothesis that DPM is related to increased male survival in the red-backed fairy-wren via either female mimicry or status signaling. Aviary-based experiments revealed that dull males were perceived as male, which is consistent with the status-signaling hypothesis but contradicts the female mimicry hypothesis. Further aviary and field-based experiments also revealed that dull males were socially subordinate to bright males and received less aggression than bright males, further evidence for status signaling. However, male survival was not related to plumage coloration or breeding status. These findings indicate that male plumage coloration signals social status but that dull plumage does not afford a net survival advantage, perhaps because plumage color is a conditional strategy.

Feminine Mimicry and Masquerade I - Columbia University

Some of the most spectacular and best studied cases of Batesian polymorphism are found in swallowtails, and in some species only the female is mimetic (see an example in Fig. 5).
This peculiar tendency to sex-specific polymorphism seems to be restricted to butterflies (Papilionidae and Pieridae), and virtually no other case of sex-limited mimicry seems to be reported for other insects (except for male-limited mimicry in some moths). Female-limited mimicry was often viewed as a result of negative frequency dependence: if mimicry is restricted to one sex, the effective mimetic population size is only about half that of a nondimorphic species, reducing deleterious effects of parasitism on the warning signal. But this group-selection argument cannot in itself explain why females tend to become mimetic more often than males and why mechanisms arise that restrict the mimicry to one sex. However, more proximal, individual-selection arguments are not lacking. First, mimicry may be more beneficial to one sex than to the other. For instance, female butterflies have a less agile flight because of egg load and a more “predictable” flight when searching for oviposition sites, and they suffer higher rates of attack by visual predators. Second, male wing patterns can be constrained by sexual selection, via either female choice or male-male interactions: males could not evolve Batesian mimicry without losing mating opportunities.

[FIGURE 5: Female-limited mimicry in Perrhybris pyrrha (Pieridae), Eastern Peru. The female (top) is a Batesian mimic of the tiger-patterned Ithomiines and Heliconiines (see Fig. 3), while the male (bottom) has retained a typical pierid white coloration. Scale bar, 2 cm.]

In experiments with North American swallowtails (of which only females mimic B. philenor), male P. glaucus painted with the mimetic pattern had a lower mating success than normal yellow males; similarly, painted P. polyxenes males had a lower success in male-male fights and therefore held lower-quality territories around hilltops.
In these insects, the wing coloration appears to bear signals directed either to conspecific males or to predators, which creates a potential conflict leading to sex-limited polymorphism. It is interesting to note that Papilio and Eurytides species that mimic Parides (Papilionidae) in South America do not exhibit female-limited mimicry; different modes of sexual selection (e.g., absence of territoriality) may operate in the forest understory habitat. In a different ecological setting, diurnal males of the North American silkmoth C. promethea are exposed to visual predators, and mimicry of B. philenor is limited to males; female Callosamia fly at night and benefit more by crypsis during the day.

Bluegill males are characterized by a discrete life history polymorphism. "Parental" males mature at age 7, and have several reproductive years before they die. These males use their caudal fins to construct bowl-shaped nests in colonies that may be as small as 10 nests, but may have more than 100. Parental males then court females and defend and care for the young for the length of the care-giving period (7-10 days). These males are approximately 20cm long and have a mass of close to 200g. "Cuckolder" males, in contrast, mature at age 2. They do not provide care for the young, and instead fertilize eggs opportunistically using one of two different tactics. "Sneaker" males are small (7-10cm total length) individuals 2-3 years old. When a female has entered a nest to spawn with the nest-guarding parental male, sneakers swim very quickly into the nest, usually beneath the spawning pair, and release their sperm. "Satellite" males (also called "female mimics") are larger (10-14cm total length) and older (4-6 years). Rather than sneak into the nest, satellites mimic the appearance and behaviour of females.
They enter the nest while a parental male and a female are in the act of spawning, but release sperm rather than eggs, thus stealing fertilizations from the parental male. In approximately 90% of cases, parental males are alone with females in the nest. However, in the cases in which males are in competition for fertilizations, cuckolders fertilize the majority of eggs released. On average, approximately one-fifth of larvae in a parental's nest were sired by cuckolder males. Females provide no care to the developing larvae. They mature at age 4 and enter colonies as a shoal to spawn. A parental male will have multiple females visit his nest, and females will visit the nests of multiple males. In Lake Opinicon, bluegill breed in June, with some spawning bouts occurring in late May and early July. Most adults participate in multiple spawning bouts in a year.
At the opening of Chapter 5, Ralph Touchett knocks on his mother's door eagerly. His mother is described as being more fatherly, his father as more motherly. Ralph's father, Mr. Daniel Touchett, is described as having adopted England as his country because he found it sane and accommodating. Yet he also had no great desire to render himself less American. Ralph therefore spent many terms at an American school, has a degree from an American university, but he also spent three years at Oxford. Ralph is therefore well accustomed to English manners, and appears English from the outside, but his mind is described as enjoying independence. (The implication is that this independence of mind is an American quality.) He did well at Oxford, but he was prevented from having a successful career in England because he was American. Ralph admires his father but has no aptitude for banking himself. Ralph appreciates his father's "fine ivory surface" mostly -- that is, his father's impenetrability to the ideas of others, his father's "originality" (31). Mr. Daniel Touchett has been successful because he is less pliant than many other Americans. Ralph had worked briefly at his father's bank before he caught a violent sickness; he is a consumptive. (Today this is known as tuberculosis.) This is a deadly disease, but it is described optimistically, insofar as Ralph believes he will survive quite a few winters. He always goes abroad during the winter because of this disease. He comforts himself with the thought that he had not really had ambition to do much in his life in the first place. One winter, though, he stayed too long in England, and arrived more "dead than alive" (32) in Algiers. After this scare, his attitude changed: he no longer felt he had to struggle to distinguish himself. His friends then know him as more serene. He does, though, still have the prospect of being in love in his future, although he has forbidden himself an "expression" of this.
Ralph Touchett converses with his mother about Isabel, and he jokes that he speaks about her like a piece of "property" (33). He asks what she means to do with her. Mrs. Touchett answers practically, when Ralph has asked the question in the abstract. Mrs. Touchett talks about buying her clothing, bringing her to Paris, and so forth. "I should like to know what you mean to do with her in a general way," Ralph responds (33). Mrs. Touchett tells Ralph where she found Isabel. She thinks Isabel may be a "genius," but she does not yet know in what (35). Ralph asks if she is a genius in flirting, as Lord Warburton has suggested to him, but Mrs. Touchett thinks that is not where Isabel's talents lie. Ralph delights in the idea that Isabel might be a "puzzle" to Lord Warburton. Ralph persists in asking what Mrs. Touchett plans to "do" with her, and then asks if she plans to get her to marry someone in Europe. Mrs. Touchett responds, "She's perfectly able to marry herself," implying that she does not plan on assisting her in that regard. Mrs. Touchett does not know if Isabel is already engaged. Ralph then goes to show Isabel around the house. He watches her inspecting some of the art in their gallery, and he notes that she has "taste" and a judging eye (37). He also notes that she has a great passion for knowledge. Isabel wants to know if there is a ghost in their mansion. Ralph responds that their house is "dismally prosaic" and that there is no romance there "but what you [Isabel] may have brought with you" (38). Isabel asks if there are more people around their house, saying that she liked Ralph's father and his friend, Lord Warburton. She also likes Mrs. Touchett, she declares, because Mrs. Touchett does not expect one to like her. She goes on to assert that she likes Ralph too, even though he is the opposite of Mrs. Touchett in caring what others think of him.
Isabel and Ralph conclude the conversation by agreeing that the great point is to be as happy as possible, and that one does not need to suffer. In Chapter 6, the narrator embarks on more description of Isabel's history, and others' perceptions of her. He describes Isabel as being an active young person "of many theories" (40) with a "finer mind" than most others, and a "larger perception" of facts. One of her aunts, Mrs. Varian, once started a rumor that she was writing a novel, but Isabel had never attempted to do so. She is not a novelist, for she has no talent for expression. She is not exactly a genius, but she does regard herself highly, thinking that people are right in treating her as superior. Thus the narrator says she might be guilty of the "sin of self-esteem" (41). Her actual thoughts, though, are described as a "tangle of vague outlines which had never been corrected by the judgment of people speaking with authority" (41), and she had her own, stubborn way in her own unclear opinions. It seems to be her philosophy that life is only worth living if one thinks well of one's self. According to her, the worst thing that could happen to her seems to be that she might cause injury to someone else. She is unaware of the evil in the world, and flatters herself that she would never sink to the dangers of inconsistency which high self-esteem often brought: "Her life should always be in harmony with the most pleasing impression she could produce; she would be what she appeared, and she would appear what she was" (42). She sometimes even imagines finding herself in a difficult situation, so that she might arise as the hero of the occasion, and prove herself. The narrator muses that Isabel might be a subject worthy of scientific criticism were it not for her tendency to awaken the reader's tenderness. Isabel has a friend named Henrietta Stackpole, who is a journalist and financially independent.
Henrietta is representative of a progressive woman who has "clear-cut views" and is very radically liberal. Isabel thinks Henrietta is proof that a woman can be independent and happy. In respect to her own independence, Isabel's "deepest" (44) thought is described as being the belief that she could give herself completely if a man should present himself as a husband, but she also finds the image "formidable" more than "attractive" (44). Her mind only hovers around this thought. She sometimes even feels that she is immodest in being so happy, thinking others less fortunate than herself. Overall she returns to the general theory that a young woman whom everyone thinks is clever needs to get a "general impression of life" (44). Isabel and her uncle get along quite well. She asks him many questions, and he provides her with a great number of answers about British politics and manners, and neighborhood gossip. Isabel wonders if his description of things accords with the descriptions in books; and he responds that he would not know, having always been interested in finding things out in their natural form. Isabel notes that the people in Europe are not very kind to girls in novels, and wonders if they will similarly abuse her. Mr. Touchett notes that he was once incorporated into a novelist's description of England in a caricatured form -- and thus, people in novels are not always depicted accurately. He also expresses to Isabel that one advantage of being an American in Europe is that one does not belong to any class, unlike Europeans, who all belong to a class. Isabel thinks she will not be successful in Europe if Europeans prove to be conventional. Mr. Touchett thinks it's already been settled that Isabel will be a "success" in Europe. The narrator, in Chapter 6, portrays Isabel in a less flattering light. She is naïve and thinks highly of herself even though she has never been put to the test.
He foreshadows that such a test, though, will come, and that it will try her philosophy that she can really appear as she really is. Will Isabel end up being a hypocrite in appearing to be something she is not? Henry James' early and mid-career novels often bring up the "American theme," in which an American goes to Europe, bringing some freshness, innocence, money, moral Puritanism and hope to a decadent culture. These Americans are often disappointed in their expectations though. The Touchetts are depicted as an American family who are successful in Europe in spite of their American qualities. Ralph, though, is notably not quite a success. He has money and European manners, but he has not married into the aristocratic class. When Isabel and her uncle discuss the prospect of Isabel's "success," the actual pathway to success seems unclear. It is altogether possible that Isabel conceives of such success in abstract terms, such as her own like-ability, and that Mr. Touchett is thinking of it practically -- in terms of her ability to marry into the upper echelons of society, and to achieve the same respect that a European aristocrat would achieve. Isabel's mind does not seem quite capable of formulating the concrete idea of marriage, and instead her desire for love seems to be frightening to her, a very vague idea that she does not want to see assume concrete form, since it might threaten her notion of independence. Either way, it is a difficult task to be "successful" in Europe, because Americans were seen as coming from a less-respected culture that lacked tradition. Ralph's observation that Isabel has good taste in painting foreshadows that Isabel will find herself interested in European aesthetics, a conventional aspect of European culture, even while she contradictorily critiques "conventionality" in a general sense.
Jane Stotts Time Line

Rowe Woods and the Cincinnati Nature Center will stand as a memorial to Stan Rowe and Carl Krippendorf. Stan was on the board of the National Audubon Society when it was left a large estate in Greenwich, Connecticut, together with a substantial amount of money "to educate people in the ways that nature works and how they should be safeguarded." This resulted in the formation of Audubon's first Nature Center. Cincinnati Nature Center at Rowe Woods, Milford, Ohio. Stan's mind was prepared, therefore, when in 1965, as he told the story later, "Karl Maslowski (noted nature photographer and writer) came to me and said, 'You know that Carl and Mary Krippendorf died a few weeks ago and it would be a shame to have that wonderful piece of property split up into residential lots.' I said it would be an excellent location for one of the new Nature Centers. We went at once to see the Krippendorf daughter, Rosan Adams, and she was delighted with the idea. She said, 'I wish I could give it to you but I can't. I will sell it for the amount that was used in my parents' estates.'" Rosan and her sister gave up a lot to leave home. Stan Rowe dedicated himself wholeheartedly to the project: developed plans with the help of the National Audubon Society, raised practically all the funds needed for the purchase, planning and development of the property (raised $2 million of endowment funds after he had passed age 80), organized a board and made the dream a reality. "When he had the idea someone ought to give, he never let up on it,"
History of Track and Field

Transcript of History of Track and Field

at a sports festival in Athens, Greece. The marathon was not an event of the ancient Olympic Games; it is a modern event that was first introduced in the modern Olympic Games of 1896 in Athens. From 776 BC, the Olympic Games were held every four years for almost 12 centuries. Track and field athletics in the U.S. dates back to the 1860s. The Intercollegiate Association of Amateur Athletes of America, the nation's first national athletic group, held the first collegiate races in 1873, and in 1888 the Amateur Athletic Union (which governed the sport for nearly a century) held its first championship. As track developed as a modern-day sport, a major issue for all athletes was their status as amateurs. For many years track and field was considered an amateur sport! Athletes could not accept training money or cash prizes. Track and field is one of the oldest sports. During the Middle Ages track and field disappeared. The true development of track and field as a modern sport started in England during the 19th century, and in 1849 the Royal Military Academy held the first organized track and field meet of modern times.
In 1896 the first modern Olympic Games were held in Athens, Greece, from April 6th to April 15th. They were held there because Ancient Greece was the birthplace of the Olympic Games. In 1913 the International Amateur Athletic Federation (IAAF) was formed by representatives from 16 countries. The IAAF was charged with establishing standard rules for the sport, approving world records, and ensuring that the amateur code was adhered to; it continues to carry out these duties today. The participation of women in track and field is a recent development. In 1921 representatives from 6 countries formed an athletic federation for women, which merged with the IAAF in 1936. Participation by women has grown rapidly in many countries, particularly in the United States, where many schools have added women's track and field to their athletic programs. The first event ever run in track and field was 600 feet long, for men only. Now as you can see, this is not an amateur sport! But if you can not do these events as amazingly as we can... we are professionals. Just keep trying and join track, it's amazing!!!
|Bible Research > Interpretation > Cross-References| One of the fundamental principles of Protestant biblical interpretation is that "Scripture is its own best interpreter." Luther expressed this principle with the words, Scriptura sui ipsius interpres ("Scripture is its own expositor"), and it was summed up by the authors of the Westminster Confession thus: "The infallible rule of interpretation of Scripture is the Scripture itself: and therefore, when there is a question about the true and full sense of any Scripture ... it must be searched and known by other places that speak more clearly." For this reason the most important feature of any edition of the Bible (aside from the quality of the translation itself) is the system of cross-references provided in the margin, which helps the reader to find out the meaning of any hard place by "comparing spiritual things with spiritual" (1 Cor 2:13). A good set of cross-references, when used diligently and with intelligence, will make much commentary unnecessary. One of the most useful study editions of the English Bible ever published, the Thompson Chain-Reference Bible, has nothing but subject headings and cross-references in the margin, with index numbers pointing to a topical concordance in the back of the volume. Many a student has found that with the patient use of this convenient system, the Bible is virtually self-interpreting. Other less elaborate "Reference" editions will serve the same purpose for students of the English Bible. The cross-references ordinarily published in editions of the New American Standard Bible are especially full and helpful, and another very good set of references is to be found in the "Classic Reference Bible" edition of the English Standard Version. But the best resource by far in this department is the Treasury of Scripture Knowledge, first published in the 1800's and available in many reprints. 
This volume provides over a half million cross-references, with most verses of the Bible having more than a dozen references each. Students who are able to use a Greek New Testament will find invaluable help in the cross-references given in the side margins of the Nestle-Aland editions (though not in the UBS editions which use the same text). Frederick Danker in the third edition of his book Multipurpose Tools for Bible Study (1970) gives ten pages (27-36) to showing how helpful these margins can be, and says, "These are virtually inexhaustible mines of information. The average student is unaware of their potentialities, and many a preacher has wearied himself in vain while the answer to the problems in his text lay a few centimeters to the right." Likewise, the lateral margins of the Expositor's Greek Testament give much help in the form of cross-references, quite aside from the exegetical commentary at the bottom of the page. Indeed we might say that the cross-references nearly always suffice to explain the text, without the commentary, because it really is true that "Scripture is its own expositor." Such cross-references will go far to explain nearly everything in the Bible chiefly because, as Ray Van Leeuwen puts it, "The language, imagery, narratives and poetry of Scripture are pervasively cross-referential." The verbal "cross-references" are there in the very language of Scripture, and these were apparent to diligent students long before anyone had the idea of dividing the text into numbered verses and filling the margins with references. Many times students have grasped the correct interpretation of a difficult expression by remembering a parallel usage somewhere else in Scripture, or have hit upon the right interpretation of a passage by comparing it with another passage. 
What we have now in the margins of our reference Bibles is the scholarly deposit of generations of such insights, laid out for our inspection -- if we will only take the time to look them up. The cross-referential nature of the Bible is most plainly seen by English readers who have become familiar with the words of an essentially literal translation, because the original "cross-references" are the verbal details which are reproduced in a literal translation. For this reason, the habitual use of a literal translation gives students the same "referential" capability which is given by cross-references. Below is an excerpt from Leland Ryken's book, The Word of God in English: Criteria for Excellence in Bible Translation (Wheaton, Illinois: Crossway, 2002), pp. 149-151, in which the cross-referential advantages of an essentially literal translation are explained more fully. Some principles of biblical interpretation belong to the realm of general hermeneutics -- principles that apply to the interpretation of any text, whether in the Bible or the Harvard Classics. Other principles apply specifically to the Bible and are known as special or particular hermeneutics. The subject of the unified network of cross-references and foreshadowings and echoes that we find in the Bible is perhaps the preeminent example of special hermeneutics. As an entry into this complex subject, I would ask you to picture the pages of a Bible with cross-references listed in the margin. I would note first that the Bible is the only book I know where this format regularly appears. Even after we have eliminated the somewhat arbitrary listing of passages that express similar ideas or simply use identical words, we are left with an anthology of diverse writings that are unified by an interlocking and unified system of theological ideas, images, and motifs. Together the diverse elements make up a single composite story and worldview known as salvation history. 
Biblical interpretation has legitimately been preoccupied with tracing the intricacies of this system of references. Of particular importance has been the use that New Testament writers make of the Old Testament. Often a New Testament writer will evoke an Old Testament passage in such a way as to show its fulfillment in the New Testament, though many different scenarios also exist. To cite a random example, the poet in Psalm 16 at one point expresses his trust in God's providence and goodness with the claim that 'you will not abandon my soul to Sheol, or let your holy one see corruption' (verse 10, ESV). In the book of Acts we find a sermon of Paul in which he quotes this verse and applies it to Christ (Acts 13:35-39). The relevance of this to Bible translation is that although biblical interpretation insists on the importance of the network of cross-references, some Bible translations and translation theories do a much better job of retaining the system of cross-references than other translations do. It is easy to see why dynamic equivalent translations have been nervous about the New Testament metaphors and technical theological vocabulary that are rooted in Old Testament religious rituals. The New Testament references are frequently odd and difficult. That modern readers will find such references easy to understand is out of the question. But to remove them from sight violates a leading tenet of biblical hermeneutics. Many of the New Testament references of which I speak pick up something from the Old Testament system of sacrifices and offerings and turn it to metaphoric use in discussing some aspect of the Christian faith. James 1:18 provides a typical example: 'Of his own will he brought us forth by the word of truth, that we should be a kind of firstfruits of his creation' (ESV). The mention of firstfruits is an evocative allusion to one of the three most important annual festivals in Old Testament religion. The firstfruits were the first portions of a crop. 
It is impossible to overemphasize how evocative the first portion of a crop is in an agrarian society. (From my childhood on a farm I can remember the thrill of seeing the radishes that appeared on the supper table every spring as the first produce of our garden.) In the Old Testament religious rituals, firstfruits were presented to God as part of the annual harvest festival known as the Feast of Weeks (also called Pentecost). When New Testament writers refer to believers as God's firstfruits, they are tying into a multilayered set of associations between believers and the firstfruits of Old Testament offerings to God. The first wave of believers were literally first -- the first of a long line of subsequent believers. In addition to these metaphoric meanings, by using the Old Testament frame of reference the New Testament writers were participating in the grand drama of unifying images and motifs that thread their way through the Bible. All of this gets lost in the following renditions of James's statement that believers are 'a kind of firstfruits of his creation': By excising the reference to firstfruits, these translations eliminate the way in which James's statement positions itself in the unifying story of the Bible as a whole. The scholar who has written on this most incisively is Ray Van Leeuwen, who provides further examples and concludes this about a good translation: 'By consistency in rendering biblical expressions and metaphors, it helps readers see the unity and coherence of Scripture, how one part echoes or enriches another.' (1) And again, The language, imagery, narratives and poetry of Scripture are pervasively cross-referential. Much of the New Testament material consists of quotations, paraphrases, or allusions to Old Testament texts ... My argument is thus that the massive text we call the Bible is itself the primary context of meaning within which we must find the meaning of each smaller unit of text. 
(2) Special hermeneutics tells us to respect the interrelatedness of Old Testament and New Testament references. Some dynamic equivalent translations fail to show that respect. Contrariwise, essentially literal translations and some dynamic equivalent translations preserve the network of cross-references. These translations assume that Bible readers will find the inner and outer resources to ascertain the meaning of a reference to firstfruits. Translations that are unwilling to make that assumption and that aim for immediate comprehension by an uninitiated reader are compelled by their very theory to abandon a hermeneutical principle that is a central tenet of evangelical hermeneutics, thereby obscuring the meaning of the original. 1. Raymond C. Van Leeuwen, 'We Really Do Need Another Bible Translation,' Christianity Today, October 22, 2001, p. 34. 2. Raymond C. Van Leeuwen, 'On Bible Translation and Hermeneutics,' in After Pentecost: Language and Biblical Interpretation, ed. Craig Bartholomew et al. (Grand Rapids: Zondervan, 2001), pp. 306-307.
What does Z mean in German? This page is about the meanings of the acronym/abbreviation/shorthand Z in the International field in general and in the German terminology in particular. What does Z mean? - omega, Z (noun): the ending of a series or sequence
Estimation Fascination (division version) is a great math center activity that will help students with the tricky skill of estimating. This is a very important skill before students start using the standard division algorithm because it will allow them to check their answers for reasonableness. Additionally, estimating with division can be difficult because students cannot just round, they must find compatible numbers. In this center activity, students will build a number using the included number cards. Students will then change their numbers to nearby numbers that are compatible so that they can use mental math to make an estimation of the quotient of the two numbers they originally drew. This activity is quick and easy to put together for a fun and helpful math center. This center is nice because it helps practice multiple skills beyond estimating like changing expanded form to standard form (when drawing the number cards), finding compatible numbers, and practicing mental math. Please see the preview file for complete product details. This activity is aligned to the following TEKS: 5.3.A This activity is aligned to the following Common Core State Standards: 4.OA.A.3 ****Looking for more multiplication and division estimation activities?**** Save money by getting this product in a bundle! Estimation: Multiplication and Division Activity Bundle
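The compatible-numbers strategy this center practices can be sketched in a few lines of code. The helper name and example values below are my own illustration, not part of the product:

```python
# Sketch of the "compatible numbers" estimation strategy for division.

def compatible_estimate(dividend, divisor):
    """Estimate dividend / divisor by snapping the dividend to a nearby
    'compatible' number that the divisor divides evenly."""
    # Find the largest power-of-ten multiple of the divisor
    # that does not exceed the dividend.
    step = divisor
    while step * 10 <= dividend:
        step *= 10
    # Snap the dividend to the nearest multiple of that step...
    compatible = round(dividend / step) * step
    # ...so the quotient comes out as a friendly, mental-math number.
    return compatible // divisor

# 3,462 / 6: 3,462 is close to 3,600, and 3,600 / 6 = 600 (exact answer: 577).
print(compatible_estimate(3462, 6))   # -> 600
```

This mirrors what students do by hand: change the dividend to a nearby number the divisor goes into evenly, then divide mentally to check the reasonableness of a worked answer.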
Michael Phelps is an amazing athlete. The most-awarded Olympian of all time, he has collected an incredible amount of hardware for his display case: 23 gold medals, three silver medals and two bronze medals are nothing to scoff at. He's an icon, an ambassador for his sport and, to quote Jay Z, not just a businessman, but a business, man. Yet, despite all of his accomplishments, because of the Eurocentric nature of swimming, he will never achieve the level of international fame occupied by Usain Bolt. After Simone Manuel became the first black woman to win a gold medal swimming in the women’s 100-meter freestyle, much was made of the way in which swimming pools have divided Americans. In large metropolitan areas, there were usually segregated spaces for white and black swimmers. Part of why black Americans are stereotyped as being unable to swim comes from racialized assumptions about bone density, but in reality the stereotype stems from the lack of access black would-be swimmers had to decent places to learn the skill. In fact, swimming is such a racially contentious issue in American history that on June 18, 1964, Horace Cort captured a famous photograph showing a man pouring acid into a swimming pool to stop a "swim-in" planned by black and white protesters in St. Augustine, Fla. The protesters were trying to draw attention to racially segregated recreational facilities. And yet, America is not alone. Segregated pools are not uniquely American, and racialized propaganda about what happens when an influx of black and brown people gain access to swimming pools is not hard to find in Europe. All over the world, access to swimming pools can be a contentious issue. Furthermore, swimming is an activity that requires not only access but also leisure and economic means. That is part of why I think Phelps fails to appeal to a world audience. He is a white man in a sport dominated by white people. 
As Bomani Jones mentioned on The Right Time, Americans love Michael Phelps because he is distinctly American. He gives us bragging rights over other countries. He represents us well, but if you asked anyone outside of America—hell, if you asked anyone not living in the suburbs—who they’d rather emulate, the answer would not be Phelps—it’d be Usain Bolt. The 6-foot-5 runner from Jamaica is a once-in-a-lifetime talent. He has won the gold medal in the 100-meter in three straight Olympic Games, and he has done so in startling fashion. In 2008, as a 21-year-old, he set a new world record in the 100-meter by running it in 9.69 seconds. He set another world record that year by running the 200-meter in 19.30 seconds, and won a third gold in the 4x100-meter relay. In London, he won gold again in the 100-meter and 200-meter and was part of the relay team that set the world record in the 4x100-meter relay. And in the Rio de Janeiro Olympic Games, he won gold in all of his individual events again, making him the first man ever to win gold in three straight Olympics in the 100-meter and 200-meter. Bolt’s athletic brilliance is awe-inspiring. We may never see another like him again. Yet Bolt is a worldwide phenomenon not just because of how many gold medals he has amassed but also because of the sport in which he dominates. Phelps is a star in a sport that has class and racial barriers. Bolt is a star in a sport in which anyone can participate if you have shoes—and sometimes shoes are not even required. Talking to the BBC, Bolt was asked what he hoped his legacy would be as an Olympian. His response was telling. “I want to be among the greats,” he said. “Muhammad Ali, Pelé and the like. So, to do that, I have to show up and perform.” He did not say he hoped he would be remembered alongside Phelps. In fact, Phelps never came up as a great Olympian. 
This is no shade to Phelps, but it does speak to the fact that there is a difference between being an American great and being a great athlete in the eyes of the world. Ali excelled in boxing, where there have been few historical obstacles to accessing the sport. Pelé is widely regarded as the best professional footballer ever to play the game, and soccer remains a sport with few economic and racial barriers to entry. Bolt stated that he wanted to be remembered among these men because they were considered phenomena in sports played by the whole world—not just by those who have the privilege necessary to gain access. Phelps is a great Olympian—no one is denying that—but he will never be Usain Bolt. The sprinting phenom from Jamaica means something to people around the world that a swimmer never could. Phelps is great, but "the Big Man From Kingston" is the GOAT. Lawrence Ware is a progressive writer in a conservative state. A frequent contributor to Counterpunch and Dissent magazine, he is also a contributing editor of NewBlackMan (in Exile) and the Democratic Left. He has been featured in the New York Times and has discussed race and politics on HuffPost Live, NPR and Public Radio International. Ware’s book on the life and thought of C.L.R. James will be published by Verso Books in the fall of 2017. Follow him on Twitter.
The following message was posted on my homeschool group page and credited to Dr. Vinay Goyal. It is a lot of common sense that is not so common. Thought it might help you all stay well this fall and winter. The only portals of entry of the H1N1 virus are the nostrils and mouth/throat. In a global epidemic of this nature, it's almost impossible not to come into contact with H1N1 in spite of all precautions. Contact with H1N1 is not so much of a problem as proliferation is. While you are still healthy and not showing any symptoms of H1N1 infection, in order to prevent proliferation, aggravation of symptoms and development of secondary infections, some very simple steps, not fully highlighted in most official communications, can be practiced (instead of focusing on how to stock N95 masks or Tamiflu):
1. Frequent hand-washing (well highlighted in all official communications). This is not a joke. Make it a ritual habit... make it part of your daily routine... DO NOT BE LAZY...!!!
2. "Hands-off-the-face" approach. Resist all temptations to touch any part of the face (unless you want to eat, bathe or slap).
3. Gargle twice a day with warm salt water (use Listerine if you don't trust salt). H1N1 takes 2-3 days after initial infection in the throat/nasal cavity to proliferate and show characteristic symptoms. Simple gargling prevents proliferation. In a way, gargling with salt water has the same effect on a healthy individual that Tamiflu has on an infected one. Don't underestimate this simple, inexpensive and powerful preventative method.
4. Similar to 3 above, clean your nostrils at least once every day with warm salt water. If this method is not possible, blowing the nose hard once a day and swabbing both nostrils with cotton buds dipped in warm salt water is very effective in bringing down the viral population.
5. Boost your natural immunity with foods that are rich in Vitamin C (citrus fruits). If you have to supplement with Vitamin C tablets, make sure they also contain Zinc to boost absorption.
6. 
Drink as much warm liquid as you can. Drinking warm liquids has the same effect as gargling, but in the reverse direction: they wash proliferating viruses off the throat into the stomach, where they cannot survive, proliferate or do any harm.
This blog serves as a current awareness resource for recently published federal and Wisconsin government publications. Wednesday, November 30, 2016 Immigrant Voting in the United States "In recent decades, immigration has driven population growth more than natural increase. Therefore, it is useful to examine the degree to which immigration status shapes the voting-eligible population, or “electorate.” A new report released today from the U.S. Census Bureau examines a number of generational characteristics, including voting patterns. In 2012, there were 214.8 million U.S. residents who satisfied both the age and citizenship requirements for voting. The Constitution stipulates that voters must be at least 18 years of age and U.S. citizens by birthright or naturalization...."
It has been a long and tumultuous road to the legalization of marijuana in California. Over the years, there have been a number of significant developments on the legislative, legal, and cultural fronts with regard to marijuana possession, use, and cultivation.
The Medical Marijuana Program
California was the first state to implement a medical marijuana program. This program was the result of two legislative measures: Proposition 215 and Senate Bill 420, passed in 1996 and 2003 respectively. Proposition 215, or the Compassionate Use Act, was passed on the strength of a 55% majority vote. This measure essentially gave certain individuals the legal right to cultivate or purchase marijuana for medical reasons upon the recommendation of a physician. The law is intended to benefit patients with cancer, AIDS and other chronic health conditions. Senate Bill 420 (also known as the Medical Marijuana Protection Act), for its part, was enacted to institute a medical marijuana ID card system. This measure was signed into law by Governor Gray Davis.
Decriminalization of the Possession of Marijuana
A significant step toward the legalization of marijuana occurred in July 1975, when the possession of small amounts of marijuana was decriminalized with the passing of Senate Bill 95. This bill essentially rendered the possession of 28.5 grams of marijuana or less a civil offense rather than a criminal one. The offense merits only a $100 fine, although the fine assessments added in the state of California place the total closer to $480. Offenses involving possession of larger amounts, possession on school property or possession for purposes of cultivation merit higher penalties. To this day, the possession of small amounts of marijuana remains decriminalized in California.
The Substance Abuse and Crime Prevention Act (Proposition 36)
Another significant development in the history of marijuana legalization in California was the passage of Proposition 36 in 2000. 
Known as the Substance Abuse and Crime Prevention Act, this law essentially gave first- and second-offense violators the opportunity to seek drug treatment instead of being subjected to a trial and a possible prison term.
The Marijuana Control, Regulation, and Education Act
In 2009, the Marijuana Control, Regulation, and Education Act was introduced by Tom Ammiano. This bill essentially lifted state penalties for marijuana possession, use, and cultivation for individuals over 21 years of age. With the approval of the bill by the Assembly Public Safety Committee in January 2010, a bill that effectively legalized marijuana had, for the first time in the United States, passed through a legislative body. The bill was approved on the strength of a 4 to 3 vote. However, the Marijuana Control, Regulation, and Education Act was not approved in the Assembly. Ammiano intended to reintroduce the bill if Proposition 19 (the Regulate, Control and Tax Cannabis Act) was passed later in 2010. With the defeat of Proposition 19 in November 2010, however, the Marijuana Control, Regulation, and Education Act was not pursued.
Senate Bill 1449
September 2010 saw the passing of Senate Bill 1449 into law. Signed into law by Governor Arnold Schwarzenegger, the bill essentially reduced the possession of an ounce of marijuana from its previous misdemeanor status to an infraction. This offense does not warrant a criminal record, nor does it require a court appearance, involving only a $100 fine. Senate Bill 1449 took effect on January 1, 2011.
Dr. John Radcliffe, known by contemporaries as 'the Aesculapius of his age', had a great reputation as a physician in the late seventeenth century, and has proved to be one of Oxford University's leading benefactors. His memory is perpetuated in an observatory, two libraries, a college quadrangle, a square, a road, travelling fellowships for medical students and two hospitals. The son of an attorney, he was born in Wakefield, Yorkshire in 1650. He matriculated at University College, Oxford when he was thirteen and was elected to a fellowship at Lincoln College in 1669, aged only eighteen. He became a lecturer in Logic in 1671 and in Philosophy in 1672. While Radcliffe was living at Lincoln, Dr. Bathurst (the President of Trinity College, Oxford) called upon him and was surprised to see so few books in his room. Radcliffe is said to have pointed to a skeleton and a couple of vials, declaring 'This is Radcliffe's library!' He was obliged to resign from his post at Lincoln in 1677, when under the statutes of the College he was called to take holy orders. Embittered, he bequeathed nothing to Lincoln in his will; however, he did show some affection for the college in 1684 when the Senior Common Room was being furnished with 'a fine dark chestnut wainscoting'. He gave £10, a contribution more than double that of any other donor. He took a subsequent degree in Medicine, graduating in 1675, and established himself as a physician. He gained a reputation as an excellent diagnostician, apparently more on the basis of instinct than of technique! He did have much success with smallpox, as he believed that fresh air was a preferable cure to blood letting. It has been said that he secured his fame through his bluntness. His recipe for success was to 'use mankind ill'. Radcliffe practised first in Oxford, but in 1684 he moved to London, where he was soon earning twenty guineas a day. In 1690 Radcliffe was elected Member of Parliament for Bramber, and for Buckinghamshire in 1713. 
Despite being a Jacobite, he became physician to William III and Mary, and attended the king frequently until 1699 when, examining his skinny frame and swollen ankles, he offended the monarch by remarking, 'I would not have your Majesty's two legs for your three kingdoms.' Despite this, by 1707 he was worth £80,000. On 1st November 1714 he died of apoplexy at his house in Carshalton. William Macmichael asserts that his dread of the populace, and the want of company in the country village to which he had retired and which he did not dare leave, shortened his life. He is buried at The University Church of St Mary, Oxford. In his will, he left property to University College to found two medical travelling fellowships. A further £5,000 bequest to this college enabled it to build a new quadrangle. Radcliffe did stipulate that the architecture must be 'answerable to the [seventeenth century] front already built'. His trustees allocated £4,000 to establish the Radcliffe Infirmary, where building began in 1758. The original Radcliffe Infirmary in Woodstock Road opened on St Luke's Day 1770. Now it has been largely superseded by the John Radcliffe Hospital, a modern building of concrete and glass on Headington Hill. Until 1885, the Radcliffe Infirmary was a University institution, governed by University officers and often staffed by Fellows of the colleges. There is a plaque in the old infirmary commemorating the first use of penicillin on a patient in the Briscoe ward on 12th February 1941. This represented the culmination of two years of research at the Sir William Dunn School of Pathology under the leadership of Professor Florey (also an old Lincolnite) to isolate and purify the exudate from the mould Penicillium notatum, the bactericidal effects of which had first been noted by Sir Alexander Fleming. Money was also bequeathed for enlarging St. 
Bartholomew's Hospital in London, and another grant was made to build Oxford's Radcliffe Observatory, which, designed by James Wyatt, was erected eighty years after its donor's death. It is an unusual building modelled on the ancient Tower of the Winds in Athens (100 B.C.E), and became an astronomical observatory and lecture room for the University. When Green College, housed in the Radcliffe Observatory, was founded in 1977, it was originally going to be named after John Radcliffe, but instead reflects the generosity of its founder Cecil Green. Radcliffe also left £40,000 in his will towards the building and endowment of a new library. When one academic heard this, he remarked rather cuttingly that this was 'like a eunuch founding a seraglio!' The library was to stand on the site of a conglomeration of modest houses occupying the space bounded by St. Mary's church, Brasenose and All Souls colleges and the schools. Nicholas Hawksmoor's original plan was that the square would be devoid of buildings save a central statue. Radcliffe's bequest altered this and the square is now home to one of Oxford's most distinctive buildings, the Radcliffe Camera, which Sir Nikolaus Pevsner called 'England's most accomplished domed building.' The idea of the rotunda originated with Hawksmoor, but he died in 1736, so the final designs belonged to James Gibbs, even though he was a Catholic and a Scot! Gibbs wanted the library to be 'a public building seen by all sorts of people who come to Oxford from different parts of the world.' In 1927 the Camera was taken over by the University and became part of the Bodleian Library, a copyright library and the main University library. It now houses the reading rooms for English Literature and Language, History and Theology. In 1901, the Radcliffe Science Library, in the University Parks, was opened and now has an extensive collection of medical literature on its shelves. 
Radcliffe claimed relationship with the Earl of Derwentwater and assumed the Derwentwater coat of arms. After he died, the College of Heralds refused to accept this, forbidding the Derwentwater arms from being displayed on any buildings erected from his estate. Oxford University ignored this point blank, and the arms appear in University College's Radcliffe Quadrangle, the Radcliffe Science Library and on the ceiling of the Lower Reading Room in the Radcliffe Camera.
Sources:
The Life of Dr. John Radcliffe, Campbell R. Hone, Faber and Faber Ltd, 1950
The Gold Headed Cane, William Macmichael, Thomas Davidson - Whitefriars, 1828
Oxford, Jan Morris, Oxford University Press, 2001
Oxford, a Cultural and Literary Companion, David Horan, Signal Books, 1999
Odette Orlans, 14th October 2004
About 1.2 billion of the world’s people don’t have access to electricity, while 2.8 billion rely on wood or other solid fuels to cook and heat their homes. The resulting indoor air pollution killed over 3.5 million people in 2010. About 1.8 billion people gained electricity connections between 1990 and 2010, but this was only slightly ahead of global population growth of 1.6 billion. The pace of electricity expansion needs to double to reach everyone by 2030. An even faster rate of expansion in safe cooking solutions is needed to reduce the share of households using solid fuels from the current 41% to zero. The carbon cost of such expansion is low: universal electricity access would increase global carbon dioxide emissions by less than one percent. Sustainable Energy for All, a global coalition of governments, the private sector, civil society and international organizations, aims to deliver universal access to electricity and safe cooking solutions, while also doubling the share of renewable energy in the global energy mix from its current 18% to 36%. The initiative also seeks to double the rate of improvement in energy efficiency, reducing the compound annual growth rate of energy intensity to -2.6%. It seeks to reach these targets by 2030. The initiative was launched in 2011 by United Nations Secretary-General Ban Ki-moon, who now chairs its Advisory Board with World Bank Group President Jim Yong Kim. The Advisory Board comprises distinguished leaders and experts from around the world who have pledged to act on this vision of a sustainable energy future. The initiative is supported in its work by a global facilitation team led by Kandeh Yumkella, the UN Secretary-General’s Special Representative for Sustainable Energy.
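As a back-of-envelope illustration of the efficiency target, energy intensity compounding at the stated -2.6% annual rate can be projected directly. The 20-year horizon and function name below are my own assumptions for illustration:

```python
# Sketch: projecting energy intensity under a -2.6% compound annual growth rate.

def intensity_after(years, cagr=-0.026, start=1.0):
    """Energy intensity relative to its starting level after
    `years` of compound change at the given annual rate."""
    return start * (1 + cagr) ** years

# Over a 20-year horizon (e.g. 2010 to 2030) at -2.6% per year,
# intensity falls to about 59% of its starting level:
print(round(intensity_after(20), 3))  # -> 0.59
```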
The State of Research on Theories of Money and Credit: The Debates in the Journal Economy & Society (Article) / Contemporary Debates on the Theory of Money and Credit in Economy & Society. 泉正樹; 結城剛志. Vol. 58, March 2016, The Economic Society of Saitama University (埼玉大学経済学会). This study analyses the major approaches to frameworks for understanding money. These approaches include those taken by the Marxian, post-Keynesian, and neo-classical schools, and sociologists. The theory of money and credit involves deeply controversial issues. Since the 1970s, financial speculation has been spreading more deeply within global capitalism. The sub-prime mortgage loan problem in the United States was one consequence of this phenomenon. The situation demands an inquiry into the basic question, 'What is money?' In the 2000s, the journal Economy & Society presented an interdisciplinary exchange of opinions and criticism with respect to the traditional understanding of money in mainstream economics, that is, money as the medium of exchange. From a sociological viewpoint, Zelizer (2000) emphasises that money has 'special' implications when viewed with regard to different situations, thus it cannot be encapsulated by any single concept. On the other hand, from the viewpoint of post-Keynesian economics, Ingham (2001, 2004) insists that money is the social relation between debts and credits as represented by the money of account. However, from a Marxist viewpoint, Lapavitsas (2005b) understands money as the 'monopolization of the ability to buy'. Thus, the concept of money has been interpreted in various ways by researchers in different disciplines. Nevertheless, these researchers all conclude that 'fiat money' is one of the conditions of money. However, some Japanese Marxian political economists have developed an alternative view which states that pure 'fiat money' cannot be explained in principle and does not exist in practice. On the basis of these Japanese studies, we analyse the relationship among these views and attempt to unravel the basic question, 'What is money?'
Mubarak Al-Sabah: The Foundation of Kuwait
Publisher: I.B. Tauris (April 2014) | ISBN 9781780764542 | Hardcover | 16x24x3.4 cm | English | 304 pages | Genre: Politics - World
Amidst political upheaval and the decline of the Ottoman Empire, the State of Kuwait emerged as an independent country under British protection in 1899, with Sheikh Mubarak Al Sabah widely credited as the instrument of its foundation. But the path to power for Mubarak was not a simple or smooth one. The author presents an original perspective on the difficulties and controversies surrounding Mubarak's ascension. With unparalleled insights and access to original sources she reveals the life, personality and politics of a man who, determined to secure a distinctive Kuwaiti state, helped to shape the modern Middle East. This biography provides a comprehensive overview of a time of significant political and social change in the Gulf, when development, diplomacy, economics, finance and trade were both routes to political independence
Nanotechnology is concerned with materials at the nanometre (0.000 000 001 m) scale, and is expected to be the basis of many of the key innovations of the 21st century. It bridges all aspects of science, touching medicine, physics, engineering and chemistry, and promises the development of new processes and materials with unique properties. IOM is at the centre of the European research initiative to ensure that new nanomaterials are safe. Our researchers work closely with the SAFENANO team in providing services to industry and others. Exposure to engineered nanomaterials IOM has expertise in measuring and estimating the airborne concentration of nanoparticles using mathematical models. We have state-of-the-art portable instrumentation and laboratory space for conducting controlled experiments on nanomaterials, often related to consumer products. Potential hazards from nanomaterials We have a team of particle and fibre toxicologists and other scientists working on improving our understanding of the hazard from nanomaterials. Our research ranges from particle characterisation, through modelling of the dose at critical target organs, to the interpretation of the results of toxicology tests using sophisticated statistical analysis. Our data scientists have helped build databanks containing extensive research outputs. Through our research (see www.enpra.eu) we have been developing a novel integrated approach for risk assessment of nanomaterials. We have adapted the traditional chemical risk assessment approach to be relevant to nanoparticles. This approach relies upon hazard identification, dose-response assessment and exposure assessment, along with risk characterisation and risk management. 
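The risk-characterisation step of a traditional chemical risk assessment is often expressed as a ratio of the estimated exposure to a no-effect reference level. A minimal sketch of that idea follows; the function name, units and threshold values are illustrative assumptions, not IOM's methodology:

```python
# Sketch: risk characterisation as an exposure-to-reference ratio.

def risk_quotient(exposure_mg_m3, reference_level_mg_m3):
    """Characterise risk as estimated airborne exposure divided by a
    no-effect reference concentration; a quotient >= 1 flags a concern."""
    return exposure_mg_m3 / reference_level_mg_m3

rq = risk_quotient(exposure_mg_m3=0.02, reference_level_mg_m3=0.1)
print(rq >= 1.0)  # -> False: below the threshold, so no concern is flagged
```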
Life Cycle Assessment (LCA) is a method for estimating and assessing the resource usage and environmental and health impacts attributable to the entire life cycle of a product - from raw material extraction, through material production and manufacturing, to use and to end-of-life treatment and final disposal. Use of a "life cycle approach" in relation to nanotechnology can help identify where the maximum health impact could occur, providing the opportunity for process or design modifications to minimise these impacts.
In this article we will learn about the parallel operation of transformers: how to connect two transformers in parallel, and the conditions required for parallel operation. Parallel operation of transformers Parallel operation simply means that two or more transformers are connected to the same supply bus bars on the primary side and to a common bus bar or load on the secondary side. This requirement is frequently encountered in practice. The reasons that necessitate parallel operation are as follows. Reasons for parallel operation of transformers These are some of the reasons for the parallel operation of transformers; from these we can understand why parallel operation is needed. 1. Non-availability of a single large transformer to meet the total load requirement. 2. The power demand might have increased over time, necessitating augmentation of the capacity. More transformers connected in parallel will then be pressed into service. 3. To improve reliability. Even if one of the transformers develops a fault or is taken out for maintenance or repair, the load can continue to be served. 4. To reduce the spare capacity. If many smaller transformers are used, one transformer can serve as a spare; if only one large transformer is feeding the load, a spare of a similar rating has to be available, so the problem of spares becomes more acute with a single large unit. 5. When transportation problems limit the installation of large transformers at a site, it may be easier to transport smaller ones to the site and work them in parallel. The figure shows the physical arrangement of two single-phase transformers working in parallel on the primary side. Transformer A and transformer B are connected to the input voltage bus bars and, after ascertaining the polarities, they are connected to the output or load bus bars.
Certain conditions have to be met before two or more transformers are connected in parallel and share a common load satisfactorily. These are the essential conditions for the parallel operation of transformers: 1. The voltage ratio of both transformers must be the same. 2. The per-unit impedance of each transformer must be the same. 3. The polarity of both transformers must be the same, so that there is no circulating current between the transformers. 4. The phase sequence of both transformers must be the same, and no phase difference must exist between the voltages of the transformers. These conditions are required for the parallel operation of (single-phase) transformers. Conditions for parallel operation of transformers 1. Same voltage ratio Generally the voltage ratio and the turns ratio are taken to be the same. If the ratio is large, there can be considerable error in the voltages even if the turns ratios are nearly the same. When the primaries are connected to the same bus bars, if the secondaries do not show the same voltage, paralleling them would result in a circulating current between the secondaries. A reflected circulating current will also be present on the primary side. So the same voltage ratio is required for the parallel operation of transformers. 2. Per-unit impedance Transformers of different ratings may be required to operate in parallel. If they have to share the total load in proportion to their ratings, the larger transformer has to draw more current. The voltage drop across each transformer has to be the same by virtue of their connection at the input and output ends. Thus the larger transformer must have the smaller ohmic impedance and the smaller transformer the larger ohmic impedance; the impedances must be in the inverse ratio of the ratings. As the voltage drops must be the same, the per-unit impedance of each transformer on its own base must be equal.
In addition, if the active and reactive power are required to be shared in proportion to the ratings, the ratio of resistance to reactance (the X/R ratio) of both transformers must be the same for proper load sharing. 3. Polarity of connection The polarity of connection in the case of single-phase transformers can be either the same or opposite. If the polarities are opposite, the two voltages add inside the loop formed by the secondaries and a short circuit results. In the case of polyphase banks, it is possible to have a permanent phase error between the phases; the turns ratios in such groups can be adjusted to give very close voltage ratios, but phase errors cannot be compensated. A phase error of 0.6 degrees gives rise to about a one percent difference in voltage. Hence the polarity connection of polyphase transformers must be proper; this is essential for the parallel operation of transformers. 4. Phase sequence The phase sequence of operation becomes relevant only in the case of polyphase systems. Polyphase banks belonging to the same vector group can be connected in parallel. A transformer with a +30 degree phase angle can, however, be paralleled with one with a -30 degree phase angle if the phase sequence is reversed for one of them at both the primary and secondary terminals. These are the conditions for the parallel operation of transformers; if you find anything incorrect above, please comment below. To learn more about the parallel operation of transformers, please see this video.
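The inverse-impedance load-sharing rule described above can be sketched numerically. The following Python snippet is illustrative only (the function name and the example ratings are assumptions, not taken from the article): it refers each transformer's per-unit impedance to a common kVA base and splits the load in inverse proportion to impedance.

```python
def load_sharing(s_load, transformers, s_base=None):
    """Split a load (kVA) between parallel transformers.

    transformers: list of (rated_kva, z_pu_on_own_base) tuples.
    Each per-unit impedance is first referred to a common base;
    the load then divides in inverse proportion to impedance.
    """
    if s_base is None:
        s_base = transformers[0][0]
    # refer each per-unit impedance to the common base
    z_common = [z * (s_base / s_rated) for s_rated, z in transformers]
    admittances = [1.0 / z for z in z_common]
    total = sum(admittances)
    return [s_load * y / total for y in admittances]

# 500 kVA and 1000 kVA units, both 0.05 pu on their own base, 1200 kVA load
shares = load_sharing(1200, [(500, 0.05), (1000, 0.05)])
print([round(s, 1) for s in shares])  # [400.0, 800.0]
```

With equal per-unit impedances on their own bases, the two units share the 1200 kVA load as 400 kVA and 800 kVA, i.e. exactly in proportion to their ratings, which is the condition the article describes.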
The last major effect of the Enlightenment was the Latin American Revolutions. Dominated by the Creoles, these movements saw Spain lose all of her major colonies in the New World. Simon Bolivar gained independence for five nations on his own. The effect of these new caudillos has shaped Latin American history to this day. So, here is how the New World gained its independence.
FOURCC is short for "four character code" - an identifier for a video codec, compression format, color or pixel format used in media files. A character in this context is a 1-byte/8-bit value, so a FOURCC always takes up exactly 32 bits/4 bytes in a file. Another way to write FOURCC is 4CC (4 as in "four" character code). The four characters in a FOURCC are generally limited to the human-readable characters in the ASCII table. Because of this it is easy to convey and communicate which FOURCCs are used within a media file. AVI was the first widely used media file format to use FOURCC identifiers for the codecs used to compress the various video/audio streams within the file. Some of the better-known FOURCCs include "DIVX", "XVID", "H264" and "DX50", but these are just a few of the hundreds in use. To find out which FOURCCs are used within a media file, you need an application specialized in opening and inspecting media files. In our FOURCC identifier section we have several such applications (all free) available for download. You may refer to our video codecs section for a long list of which FOURCCs identify which video codecs. For audio codecs it is not FOURCCs that are used, but rather audio tags - identifiers that each denote one specific audio codec or one type of audio compression scheme. An audio tag is just an integer decimal value (32 bits), alternatively often specified as a HEX value. Your best bet at locating an audio codec given the audio tag is probably through this list.
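To illustrate how a FOURCC maps to its 32-bit value, here is a small Python sketch (the helper names are mine, not from any particular media library). RIFF-based formats such as AVI store the four ASCII characters in file order, which reads back as a little-endian integer:

```python
def fourcc(code: str) -> int:
    """Pack a 4-character code into its 32-bit little-endian integer,
    the form in which it is stored in RIFF/AVI headers."""
    if len(code) != 4:
        raise ValueError("FOURCC must be exactly 4 characters")
    return int.from_bytes(code.encode("ascii"), "little")

def fourcc_to_str(value: int) -> str:
    """Unpack a 32-bit integer back into its 4-character code."""
    return value.to_bytes(4, "little").decode("ascii")

print(hex(fourcc("XVID")))             # 0x44495658
print(fourcc_to_str(fourcc("H264")))   # H264
```

Note how the bytes appear reversed in the hex value ('D', 'I', 'V', 'X' from the most significant byte down), which is simply the little-endian layout.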
What and How Are Kids Reading? 1. The English teachers at our school have been noticing a gradual loss of reading and writing skills over the last five years. While the "above-average" students still exist in good numbers, there seem to be more students with "very-low" reading competency. 2. My colleagues and I on the 7th grade team have noticed more students each year who are struggling with vocabulary and reading comprehension skills, so that even in math, they struggle with understanding the questions asked of them. 3. Everywhere you look outside of the classroom, students are reading a lot, but it's mostly text messages, instant messages, emails, teen-related blogs and websites. Teens are often seen viewing screens yet are very rarely seen reading a book. (Some are calling this generation of kids the "children of the screen.") 4. Our Academic Dean gave all the English teachers a copy of chapter 2 of Mark Bauerlein's "The Dumbest Generation" to read. 5. I watched PBS's Frontline documentary "Digital Nation" again, which shows how fragmented our digital lives have become and how hard it is for us to concentrate for sustained amounts of time on reading, writing, discussing, or anything. We're all becoming a little ADD because we are constantly interrupted. Some Startling Statistics – The literary reading rate (the percentage of those who read any books in a year) for 18-24 year-olds plummeted from 60% to 43% in the 20 years from 1982 to 2002. – In 2005, 15-24 year-olds spent just 8 minutes a day doing any kind of reading activity (backs of cereal boxes, video game instructions, internet articles, anything). – The percentage of 17 year-olds who "never or hardly ever" read for fun more than doubled from 1984 to 2004, from 9 percent to 19 percent. – 25% of high school graduates who have gone to college never read a word of literature, sports, travel, politics, or anything else for their own enjoyment.
(All of the above statistics are from various studies cited in The Dumbest Generation by Mark Bauerlein.) The New Attitude About Books Here are some excerpts from Bauerlein's book specifically about reading. "It's a new attitude, this brazen disregard of books and reading. Earlier generations resented homework assignments, of course…but no generation trumpeted a-literacy (knowing how to read, but choosing not to) as a valid behavior of their peers…Today's generation wears anti-intellectualism on its sleeve, pronouncing book reading an old-fashioned custom. [One student said,] "My dad is still into the whole book thing. He has not realized that the Internet kind of took the place of that." "In her world, reading is counterproductive. Time spent reading books takes away from time keeping up with youth vogues, which change every month. To prosper in the hard-and-fast cliques in the schoolyard, the fraternities, and the food court, teens and 20-year-olds must track the latest films, fads, gadgets, YouTube videos, and television shows. To know a little more about popular music and malls, to sport the right fashions, and host a teen blog, is a matter of survival… Heavy readers miss out on activities that unify their friends." And the problem is that it's a zero-sum game. You cannot have it all. There are only so many hours in a week. By adding all this technology into the daily life of teens, there is not enough time or motivation left over for reading books. What's Causing This? In my opinion, and many others', it is caused by the meteoric rise of personal technology in our lives every day, all day. Watch excerpts of the PBS Frontline documentary "Digital Nation": Section 1 – "Distracted by Everything"; Section 5 – "The Dumbest Generation". A New Literacy or a New Illiteracy? There are new terms being invented and used by language teachers now: e-literacy, viewer literacy, and digital literacy. This is to give credit for what kids are learning.
Many teachers are fully embracing all these new modes of communication, but at what cost? Bauerlein responds with, "However much the apologists proclaim the digital revolution and hail teens for forging ahead, they have not explained a critical paradox. If the young have acquired so much digital proficiency, and if the digital technology exercises their intellectual faculties so well, then why haven't knowledge and skill levels increased accordingly? "If the Information Age solicits quicker and savvier literacies, why do so many new entrants into college and work end up in remediation? If their digital talents bring the universe of knowledge into their bedrooms, why don't they handle knowledge questions better? "Digital habits have mushroomed, but reading scores for teens remain flat, and measures of scientific, cultural, and civic knowledge linger at abysmal levels." What Can Teachers and Parents Do About It? As far as I can tell, the answer to this problem is not complex. Kids need to read more books. They need to read books that are interesting to them and on their reading level. This will yield wonderful results. Benefits of Reading for Fun "Books afford young readers a place to slow down and reflect, to find role models, to observe their own turbulent feelings well-expressed, or to discover moral convictions missing from real situations. Habitual readers acquire a better sense of plot and character, an eye for the structure of arguments, and an ear for style." "The more you read, the more you can read. Reading researchers call it the Matthew Effect, in which those who acquire reading skills in childhood read and learn later in life at a faster pace than those who do not.
They have larger vocabularies, which means they do not stumble with more difficult texts, and they recognize the pacing of stories and the form of arguments, an aptitude that does not develop as effectively with other media… A sinister corollary to the cognitive benefit applies: the more you don't read, the more you can't read. (Bauerlein)" Here are some ideas we are kicking around at school: – Make a bulletin board with reading lists, book reviews, faculty favorites, and a "Who's Reading What?" section. – Read a book at some point in the year in science and social studies, not just in language arts. – Construct a blog all about books, including student discussion forums and student-generated videos. – Give out prizes to those who are caught reading a book or for meeting a goal (i.e. 1,000 pages in a school year + 1,000 pages in a summer). – DEAR (Drop Everything And Read) time weekly, or even daily. Teachers need to drop everything and read too, in order to be good role models. – Require all students to bring a reading book along with their daily planner to every class. Give small rewards to those who do. – Set up a "Free Books" shelf or rack in a central location where students can exchange old books for different ones. Remediation Possibilities for Weak Readers: • Summer reading class with a reading specialist • Remedial reading program for home use (home school curriculum) • "Classics Illustrated" graphic novels, Roald Dahl novels, and other easy and fun books that are not frustrating to weak readers • Books on tape to aid the reading process Your Help Needed If you have any other ideas, I welcome your thoughts. This is an open process for me. I am looking for more ideas and thoughts on this topic. So, how do you get your children or your students to read more? How can we help kids embrace a balanced lifestyle in which reading has a place?
Those were also the days of Edmund Gosse’s terrible childhood. No fiction of any kind, religious or secular, was admitted into the house. In this it was to my Mother, not to my Father, that the prohibition was due. She had a remarkable, I confess to me still somewhat unaccountable impression that to 'tell a story', that is, to compose fictitious narrative of any kind, was a sin. - Gosse, Father and Son (1907) Though Gosse was a child in the 1850s, his upbringing was a sign of things to come. By the 1860s, certain types of fiction were considered incredibly dangerous, especially the sensation novel. Drawing from gothic and romantic literary fiction, the sensation novel appealed to the uncultured masses through shocking themes, like adultery, theft, kidnapping, insanity, bigamy, forgery, seduction, and murder. Other than relying on shock and awe, sensation novels separated themselves from other more respectable genres by using common settings familiar to their poorer middle- and working-class audiences. By the 1880s, the sensation novel was replaced by the ‘penny dreadful,’ known as ‘dime novels’ in the United States. Stanford's Dime Novel and Story Paper Collection celebrates the era, which “benefited from three mutually reinforcing trends: the vastly increased mechanization of printing, the growth of efficient rail and canal shipping, and ever-growing rates of literacy.” I’ve always thought of these as serial novels, published in story-based (I hesitate to use the word literary) publications. Again, their low price made them accessible to the uncultured reading masses, filling their minds with the kind of trash Mother Gosse sought to protect her son from in the 1850s. In the 1890s, the British newspaper and publishing magnate, Alfred Harmsworth, decided to take on the corrupting influence of the penny dreadfuls, but wound up creating the same thing for less: The Half-penny Marvel, The Union Jack and Pluck, all priced at a half-penny. A.A.
Milne once said, "Harmsworth killed the penny dreadful by the simple process of producing the ha'penny dreadfuller." But Harmsworth made something wonderful happen. Meet Sexton Blake, who, in the 1890s, was known as “the poor man’s Sherlock Holmes.” Though written by many hands (Harmsworth owned the rights), the first Sexton Blake story, “The Missing Millionaire,” was published on 20 December 1893 and really sought to capitalize on the popularity of Sherlock Holmes, whom Arthur Conan Doyle was already getting tired of writing about. Like Sherlock Holmes, Sexton Blake was a consulting detective operating out of Baker Street. Blake’s Watson is called Tinker, who didn’t appear until 1904 and is more like Batman’s Robin. Blake’s love interest is Yvonne Cartier and his housekeeper is Mrs. Bardell. Blake is a little more clean-cut than Holmes, but is actually an educated medical doctor, like the real Sherlock Holmes. The Sexton Blake franchise continued on through the 1960s, and the official fan website calls for a revival in the 21st century. So why was Sherlock Holmes considered literature, while Sexton Blake was considered trashy? My focus is on the 1890s, so I’m not going to comment on what might have happened to the franchise in the 20th century, but the fact that it actually was a franchise might have had something to do with it. Maybe having multiple authors had something to do with it? During my undergraduate degree, I took a class in which we read novels of many shapes; my fellow students found that they had a lower opinion of what a book would be like, based on its shape. If it looked like a Harlequin romance book, they expected it to be trashy. They were often surprised by what they found inside. Maybe readers in the 1890s didn’t take Blake seriously because of the cheap quality of the publications?
What lies behind the concept of BFM? As many people know, there are two distinct varieties of fat in the body: essential fats on one hand, and body fats on the other. The former are known to be in charge of the body's metabolism and are formed and stored in one's lungs, kidneys, liver, heart, and other organs, as well as in specific parts of the nervous system. The latter, also known as storage fats, are actually loose connective tissue, i.e. adipose tissue, and are responsible for energy storage in the form of fat. Another distinct and extremely important function of these fats is their role in protecting the internal organs, as well as providing the body with energy in the moments when it craves it the most. Despite these evident benefits, if they build up in too large quantities, they become the number one culprit in the occurrence of a quite serious weight-related condition known as obesity. Directly related to this is a person's recommended healthy body weight, which depends on that person's height. For those who seek to calculate and determine their proper body weight, the most efficient and accurate tool, used in the great majority of cases, is the body mass index, or BMI for short. People whose BMI is at or above 25 kg/m2 are regarded as having serious weight-related problems; they are considered overweight. Furthermore, if a person's BMI is 30 kg/m2 or higher, this person is referred to as obese. Obesity is a far more serious problem than being simply overweight, and is one of those conditions that can seriously damage one's health and in some cases even cause life-threatening side effects.
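The BMI rule described above is simple to compute: weight in kilograms divided by the square of height in metres, compared against the 25 and 30 kg/m2 thresholds the text gives. A minimal Python sketch (the function names are mine, for illustration only):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(bmi_value: float) -> str:
    # Thresholds from the text: 25 kg/m2 overweight, 30 kg/m2 obese
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight"

# e.g. an 85 kg person who is 1.70 m tall
print(round(bmi(85, 1.70), 1), classify(bmi(85, 1.70)))  # 29.4 overweight
```

The same person at 95 kg would cross the 30 kg/m2 line and be classified as obese, which is the distinction the paragraph draws.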
Monitoring and tools With a person's fitness and overall health in mind, it must be emphasized that, in terms of importance, the percentage of total body fat is regarded as more essential than the person's overall body weight. Exactly for this reason, monitoring one's body fat becomes an important means by which a person can not only track body fat levels (i.e. percentage) but also control them in the most proper manner. Considered to be the most accurate, and thus most reliable, body fat monitoring methods are: - Bioelectrical Impedance Analysis - Near Infrared Interactance - Dual Energy X-ray Absorptiometry - Calipers (i.e. skinfold measurement)
A Brief History Of Foston Foston lies on an ancient crossroads, situated on the northern edge of the Vale of Belvoir, nearly six miles northwest of Grantham, national grid reference SK 85 42. Foston has a linear scattering of houses sited on the upper edge of a north-south fault line, in theory giving flood-free living conditions, but having a line of springs which emerge from the outcropping limestone. The local subsoil is mainly clay, although there is a considerable variety of soils within the parish. Foston is traditionally a farming village. The River Witham runs to the north of the village and the A1 trunk road bisects the parish at the south-western edge, isolating half a dozen village dwellings. The Foston Beck runs along the eastern border and the parish covers around 850 hectares. Neighbouring villages include Long Bennington, Westborough, Allington and Marston. In Roman times there was a settlement in the parish, and it is thought that it may have developed from a late Iron Age farmstead situated close to the Fallow Ford at the end of Fallow Lane. A Roman villa was excavated in late Victorian times (1891-1896); it was on slightly raised ground near to the present forded crossing of the River Witham. Rev. Henry Faulkner Allison (who was Curate of Foston from 1891 to 1896) found various pieces of Roman pottery, and Mr J Dable discovered numerous Roman coins in 1973. These coins dated from Nero (AD 37) to Constantius II (306). Other artefacts have also been found, including two Dolphin brooches and one Trumpet brooch along with a Stud brooch. The Saxons settled in this district and extensive remains have been excavated on Loveden Hill, which can be seen to the east of the village. Another pointer to a possible Saxon settlement within the parish is to be found in the name Foston itself - i.e. the first “ton” or settlement on the road north from Grantham, FOTR-TUN.
The combination of a Scandinavian personal name and a Saxon ending found in Ekwall's interpretation of the village name may reflect a Saxon settlement taken over by a Viking headman. Foston is from the Old Scandinavian Fos+ton, "farmstead of Fotr". [A. D. Mills, “A Dictionary of English Place-Names,” Oxford University Press, 1991] The Normans took over an existing agricultural society and organised it. We give the name “feudalism” to this system, in which land is held in return for the performance of services. The manor was the farming unit, and evidence of the manorial system is extensive in Foston. Foston was part of the manor of Long Bennington and appears in the 1086 Domesday Book as “Foztun”. Agricultural practices in the Middle Ages were based upon the open field system, with the land divided into strips. The general line of the holdings would run in an east-west direction. An Enclosure Act, in a plan dated 3rd March 1796, affected Long Bennington and Foston. The present pattern of land usage is very nearly that formed from the old strip system at enclosure. Modern agricultural machinery has made it necessary to remove hedges, thus recreating the appearance of the large open fields of the pre-enclosure era. The English Civil War started in 1642; 1643 was the year Oliver Cromwell won his first victory over the Royalists at Grantham. The Royalists were defeated in 1647, and King Charles I was executed two years later. The monarchy was restored with Charles II in 1660. The Wesleyan Methodists chapel The Anglican Church of St Peter dates back to at least the 13th century, although in 1858 it received substantial restoration and was partially rebuilt under the direction of Charles Kirk. The parish register for burials and baptisms begins in 1626, several years before Oliver Cromwell proposed the civil registration of births, deaths and marriages. Marriages were included in the Long Bennington register until 1766.
There was a Wesleyan Methodist chapel (on the right) situated on Chapel Lane. However, it was taken down and bungalows and houses were built on the site. A National School was built next to the church in 1847 to hold eighty children, and it later became a Church School. The attendance at the time was only about forty. An earlier school building adjoined the south wall of the churchyard. The school was closed in 1987 and subsequently converted into a home. Children up to the age of eleven are bussed to the Long Bennington Church of England School. The Foston Post Windmill The Foston Post Windmill was one of the oldest in Britain, dating back to 1624. It was demolished in 1966. It had been sited on the A1 at the cottage known as Mill House. Before that it was sited at Mill Close on Allington Lane, and earlier still it stood immediately below the old Post Office on Newark Hill. At the time these mills were considered portable. Old buildings constructed from mud and stud have been demolished, although most new dwellings have been built on the sites of older dwellings, with the exception of Wilkinson Road. In 1967-69 around forty houses were built on land belonging to Mr Wilkinson, whose family had farmed in the village for a hundred years. New houses were also built on the site of The Black Boy Inn (left). From 1979 into the 1980s eight houses were built on Highfield Close, and since then individual houses have been fitted in along Main Street, Church Street, Newark Hill and Long Street. Foston is currently undergoing a series of small developments of individual properties in Long Street and Back Lane. The Black Boy Inn Coaching was at its height in the early 19th century, and Foston had several inns and staging houses. The Duke William public house was at the Allington crossroads close to the service station area. The Black Horse still exists in the centre of the village as a private house and flats.
The Black Boy was demolished in 1967; it stood at the junction of Tow Lane and Newark Hill. With the advent of the railway, Foston lost any importance it may have had as a staging post, and the village was bypassed in the 1920s, isolating it from any through traffic. At the turn of the century, when the Great North Road ran through the village, the vehicles on the road were mainly horse-drawn. Travelling circuses would pass through the village, along with occasional dancing bears and travellers from all walks of life. During the First and Second World Wars, Foston remained a thriving village. Sadly, during the Great War eight men from the village lost their lives in the armed forces, and in the Second World War two more men were lost. During that time American medics stationed at Allington frequented the public houses in the village. Italian prisoners of war worked on some of the farms, replacing villagers who had left to go to war or to work in Grantham at the armament factories. The Old Shop and Bakery Up until the time the school closed, Foston remained a busy, more or less self-supporting village. By the beginning of the twenty-first century the story is very different. Foston has lost all her services, including the doctor's surgery, Post Office, pub, and shop (which was also a bakery); however, the villagers are fighting back with the Parish Plan. More history here
Internet Safety Guidelines Student Health & Nursing Services Bullying Resources for Everyone: Kids, Teens, Young Adults, Parents, Educators and the Community Hampshire County Schools Policy 5517.01 - Harassment, Intimidation or Bullying How to Report Harassment, Intimidation or Bullying in Hampshire County Schools 8 ways to Banish Bullying: Gulp! What to do When Your Child is the Bully: Top 5 Tips for Dealing with Bullying in Your Family: Signs Your Child is a Bully or Being Bullied: OMG-The Mean Girl Saga Starts at Age 4: Cyberbullying-Specific Articles: Stop Cyber Bullying: Cyberbullying is Parents’ #1 Fear: How to be a Plugged-in Parent: Bullying Statistics for 2011 Are you a bully? (Quiz for kids) Are you a bully? (Quiz for girls) Are you a bully or a bully in the making? (Quiz for all) Stop Bullying (links for Kids, Teens, Young Adults, Parents, Educators, and In the Community) Kids' Health - Dealing with Bullies Teens Health - Dealing with Bullying It's My Life - Friends - Bullying - What is bullying? Kids Against Bullying Big Bully - Are you Cyberbullying? Are you a Cyberbully? (Take the survey to find out.) STOP Cyberbullying - what is it? how does it work? why cyberbully? prevention - take action - what's the law? Cyberbullying Research Center ... identifying the causes and consequences of cyberbullying iSafe - Cyberbullying - Dig Deeper American Academy of Child and Adolescent Psychiatry - Facts for Families about Bullying Free PowerPoint Presentations about Bullying - great for classroom teachers or group leaders How to Report Bullying on Facebook What to do about bullying - Tips for parents Other helpful links: Internet Safety Links for Parents - Look for the new links about Facebook and YouTube! Hampshire County Schools 111 School Street Romney, WV 26757 Dr. Jeffrey R. Crook Phone: (304) 822-3528 Fax: (304) 822-3540
Monday, September 15, 2008 Narrative Statement project 1 I have learned and gained a better understanding and perspective of how storyboarding, animations, and narratives are told, and how to go about telling a narrative through a series of frames. In this project I also learned how to ride a Freebord and how to assemble one frontwards and backwards. Through my process and viewing my classmates' work, I also know that narratives can be told many ways depending on the narrator. In the beginning I enjoyed experimenting with different methods of depicting my two actions through drawing, collaging, thinking outside the box with metaphors, and photography, which was my choice of medium in the end. After choosing photography, I learned that editing is key to creating a good narrative. Changing the pace of a narrative by increasing or decreasing the frames between scenes, and transitioning from action to action and scene to scene, can make the viewer or audience become engaged in your narrative, especially when they are forced to use their imagination because a story doesn't tell them everything.
Francis Cottington, 1st Baron Cottington He was the fourth son of Philip Cottington of Godmanston in Somersetshire. According to Hoare, his mother was Jane, daughter of Thomas Biflete, but according to Clarendon, "a Stafford nearly allied to Sir Edward Stafford", through whom he was recommended to Sir Charles Cornwallis, ambassador to the court of Philip III of Spain, becoming a member of his suite and acting as English agent on the latter's recall, from 1609 to 1611. In 1612 he was appointed English consul at Seville. Returning to England, he was made a clerk of the council in September 1613. His Spanish experience rendered him useful to King James, and his bias in favour of Spain was always marked. He seems to have promoted the Spanish policy from the first, and pressed on Diego Sarmiento de Acuña, conde de Gondomar, the Spanish ambassador, the proposal for the Spanish marriage, in opposition to the French marriage, for Prince Charles (later King Charles I). After his return he was appointed secretary to Prince Charles in October 1622, and was knighted and made a baronet in 1623. He strongly disapproved of the prince's expedition to Spain, as an adventure likely to upset the whole policy of marriage and alliance, but was overruled and chosen to accompany him. His opposition greatly incensed George Villiers, 1st Duke of Buckingham, and still more his perseverance in the Spanish policy after the failure of the expedition, and on Charles I's accession Cottington was through his means dismissed from all his employments and forbidden to appear at court. The duke's assassination, however, enabled him to return. He was a Roman Catholic at least at heart, becoming a member of that communion in 1623, returning to Protestantism, and again declaring himself a Roman Catholic in 1636, and supporting the cause of the Roman Catholics in England. On 12 November 1628 he was made a privy councillor, and in March 1629 appointed chancellor of the exchequer.
In the autumn he was again sent as ambassador to Spain; he signed the peace treaty of 5 November 1630 and subsequently a secret agreement arranging for the partition of the Dutch Republic between Spain and England in return for the restoration of the Palatinate. On 10 July 1631 he was created Baron Cottington of Hanworth in Middlesex. In March 1635 he was appointed master of the Court of Wards and Liveries, and his exactions in this office added greatly to the unpopularity of the government. He was also appointed a commissioner for the Treasury, together with William Laud, and a fierce rivalry sprang up between the two men. However, in their personal encounters Cottington nearly always had the advantage, because he practised great reserve and possessed great powers of self-command, an extraordinary talent for dissembling, and a fund of humour. Laud completely lacked these qualities, and although really possessing much greater influence with Charles, he was often embarrassed and sometimes exposed to ridicule by his opponent. The aim of Cottington's ambition was the place of lord treasurer, but Laud finally triumphed and secured it for his own nominee, Bishop Juxon, when Cottington became "no more a leader but meddled with his particular duties only." He continued, however, to take a large share in public business and served on the committees for foreign, Irish, and Scottish affairs. In the last, appointed in July 1638, he supported the war, and in May 1640, after the dismissal of the Short Parliament, he declared it his opinion that at such a crisis the king might levy money without the Parliament. His attempts to get funds from the City of London were unsuccessful, and he had recourse instead to a speculation in pepper. He had been appointed constable of the Tower, and he now prepared the fortress for a siege. In the trial of Strafford in 1641, Cottington denied on oath that he had heard him use the incriminating words about "reducing this kingdom". 
When the parliamentary opposition became too strong to be any longer defied, Cottington, as one of those who had chiefly incurred their hostility, hastened to retire from the administration, giving up the court of wards in May 1641 and the chancellorship of the exchequer in January 1642. He rejoined the king in 1643, took part in the proceedings of the Oxford Parliament, and was made lord treasurer on 3 October 1643. He signed the surrender of Oxford in July 1646, and being excepted from the indemnity retired abroad. He joined Prince Charles at the Hague in 1648, and became one of his counsellors. In 1649, together with Edward Hyde, Cottington went on a mission to Spain to obtain help for the royal cause, having an interview with Cardinal Mazarin at Paris on the way. They met, however, with an extremely ill reception, and Cottington found he had completely lost his popularity at the Spanish court, one cause being his shortcomings and waverings in the matter of religion. He now announced his intention of remaining in Spain and of keeping faithful to Roman Catholicism, and took up his residence at Valladolid, where he was maintained by the Jesuits. He died there on 19 June 1652, his body being subsequently buried in Westminster Abbey. He had amassed a large fortune and built two magnificent houses at Hanworth near Heathrow and Fonthill near Tisbury, Salisbury. Cottington was evidently a man of considerable ability, but the foreign policy he pursued was opposed to the national interests and futile in itself. According to Clarendon's verdict "he left behind him a greater esteem of his parts than love of his person." He married in 1623 Anne, a daughter of Sir William Meredith and the widow of Sir Robert Brett. All his children predeceased him, and his title became extinct at his death.
- Chisholm 1911.
- Hunneyball, Paul. "COTTINGTON, Sir Francis, 1st Bt. (c. 1579-1652), of Charing Cross, Westminster and Hanworth, Mdx.; later of Fonthill Gifford, Wilts." History of Parliament. The History of Parliament Trust. Retrieved 2 February 2016.
- George Edward Cokayne (1900). Complete Baronetage, Volume 1.
- Walter, Henry (1834). A History of England: Extending from the accession of James I to the abdication of James II. J.G.F. & J. Rivington. p. 85.
- Strafford's Letters, ii. 52.
- Granger, James (1824). A Biographical History of England. Baynes. p. 273.
- "Cottington". Westminster Abbey. Retrieved 2 February 2016.
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Cottington, Francis Cottington, Baron". Encyclopædia Britannica. 7 (11th ed.). Cambridge University Press. p. 254. This cites:
  - Stephen, Leslie, ed. (1888). "Dart, John". Dictionary of National Biography. 14. London: Smith, Elder & Co., and authorities there quoted
  - Clarendon's State Papers and Life
  - Strafford's Letters
  - Gardiner's Hist. of England and of the Commonwealth
  - Hoare's Wiltshire
  - Laud's Works, vols. iii.-vii.
  - Winwood's Memorials: A Refutation of a False and Impious Aspersion cast on the late Lord Cottington
  - John Dart, Westmonasterium, i. 181 (epitaph and monument).
- Firth, Charles Harding (1887). "Cottington, Francis". In Stephen, Leslie. Dictionary of National Biography. 12. London: Smith, Elder & Co.
- Pogson, Fiona. "Cottington, Francis, first Baron Cottington (1579?–1652)". Oxford Dictionary of National Biography (online ed.). Oxford University Press. doi:10.1093/ref:odnb/6404. (Subscription or UK public library membership required.)
Department: Guest Editorial

When the Patient Protection and Affordable Care Act (P.L. 111–148) is fully implemented, the need for free healthcare clinics should diminish greatly; however, I am skeptical about whether this will actually happen. This new law mandates that most U.S. citizens and legal residents have health insurance. Medicaid coverage will be expanded to include those with incomes up to 133% of the Federal Poverty Level. People who do not qualify for Medicaid or Medicare and do not have health insurance through their employers will be required to purchase health insurance, but there will be tax incentives to help alleviate the cost. Those who do not purchase health insurance will be required to pay a tax penalty starting in 2014. However, there are several exemptions to this health insurance mandate, including financial hardship. A summary of the new healthcare reform law can be found at http://www.kff.org/healthreform/8061.cfm.1 In 2007, 45 million Americans had no health insurance, and 89% of the uninsured lived in families headed by workers.2 Most of those without health insurance are the working poor. They make too much money to qualify for Medicaid and do not receive health insurance through their employers. They often have difficulty meeting their monthly expenses and are unable to afford health insurance. Although the new healthcare reform law includes tax benefits to offset the cost of purchasing health insurance, monthly payments will most likely be necessary. When a choice must be made between paying the monthly health insurance bill or buying groceries, basic human needs will take precedence. It appears that once again, this vulnerable group may have difficulty securing affordable health insurance. Hopefully, the provisions within the new law will help most of the working poor to purchase affordable coverage. It has been my experience that many patients who use free healthcare clinics have recently lost their jobs and health insurance.
These patients may eventually qualify for Medicaid; until then, free healthcare clinics provide a way to receive primary healthcare services. Applying for Medicaid is difficult for those with transportation issues and physical and/or cognitive limitations. I foresee that implementing the new healthcare law may be burdensome and time-consuming for new enrollees. Free and accessible healthcare will still be needed when life situations such as unemployment occur. Although I have expressed some pessimism toward the new healthcare reform law, it is a step in the right direction. Clearly, other options besides employment-based health insurance must be made available. Those of us caring for patients in free healthcare clinics see the struggles they face every day. Because the need for free healthcare exists and will for the foreseeable future, I encourage all NPs and other providers to locate the free clinics in their area and volunteer their services. These clinics cannot exist without the support of healthcare providers such as NPs. Helping those in need is extremely gratifying. Not only do individual patients benefit from the services provided, but the community as a whole becomes a healthier place to live. My hope is that in the near future, all Americans will have access to affordable, quality healthcare and the need for free clinics will cease. Until that time, NPs need to assume leadership roles in our evolving healthcare system. NPs have the knowledge and skills to effect change in healthcare policy, and now is the time to formulate a plan with other policymakers to meet the needs of the most vulnerable members of society.
How and Why You Should be Making Ghee

This post may contain Affiliate Links. Thank you for your support!

What is this mysterious ghee or clarified butter? First, ghee is the Sanskrit name for clarified butter. It is simply butter that has had all of the milk solids removed. It is simmered until all that remains after straining is a pure combination of fats.

Benefits of Eating Ghee

1. Ghee is great for those with a dairy intolerance or allergy

A number of people who cannot have dairy can have ghee because, as mentioned above, all of the milk proteins have been removed. I have a niece who cannot have dairy or she will become asthmatic. I never realized how much we eat dairy until we were around them for the holidays! No cheese on pizza, no butter in the mashed potatoes, no mac-n-cheese. At least with ghee the family can easily cook with "butter" and she can have toast, etc., with ghee. Not everyone does perfectly well with ghee (possibly because not all milk proteins were removed?), so I can't wait to see how it goes!

2. It is a nutrient-rich food

Yes, I called this awesome fat "food". When you buy fresh, non-pasteurized (not heated to a high temperature) and non-homogenized (not zapped so that the fat stays mixed and doesn't rise to the top) milk, all the nutrients are still intact. It's great for drinking. Then, you can use a turkey baster and suck up that cream from the top and use it to make butter!! THEN you can use some of that awesome fresh butter to make ghee!! How great is that!
- "Omega-3s (monounsaturated fats) are healthy forms of fat that can be found in ghee, in addition to other fatty acids like conjugated linoleic acid and butyric acid, both of which have positive health benefits in the body."1
- It is also high in vitamins A, D, and E.

3. Ghee may lower bad cholesterol

Ghee may decrease bad cholesterol (LDL and very-low-density lipoprotein, as well as triglycerides) when taken at a rate of 2 Tbsp or less a day.
Ghee seems to affect the metabolism of cholesterol.2

4. Ghee protects the heart

A study done in 2003 showed that ghee helped keep arteries from hardening by lowering bad cholesterol and raising good cholesterol.3

5. It doesn't have to be refrigerated

Yes, just like coconut oil, it can be kept in your cupboard or on your counter for months. This is because there is no protein remaining, only a pure combination of fats.

6. Ghee has a high smoke point

Many oils will begin to break down at medium to high temperatures, releasing free radicals. For example, olive oil should not be used for frying, grilling, or other high-temperature cooking for this very reason. On the other hand, ghee (and coconut oil) have high smoke points, meaning that they can reach very high temperatures before beginning to smoke or break down. This makes ghee (and coconut oil) great for cooking.

How to Make Ghee

Ghee is very simple to make.

1. Place your butter (from grass-fed cows for better nutrition) in a saucepan over low heat.
2. Melt the butter.
3. Skim any "foam" (milk protein) off the top using a slotted spoon. Get as much as you can.
4. Continue to cook for 20 minutes on low heat.
5. The rest of the milk proteins will sink to the bottom and begin to brown.
6. Once the mixture is clear with solids on the bottom, pour it through a cheesecloth (or a coffee filter, or an old t-shirt) into a jar.
7. Keep on the counter or in a cupboard until needed. It will keep for many months at room temperature.
You can save the toasted milk solids for stir-fries and other dishes that you want to add a slight nutty flavor to.
The order in which children are born into a family is a fact, but the effect that this order has on their personality and psychology is not a science. Many factors influence the results of birth order, including the sex of the children, the physical size of the children, and, most importantly, the spacing between them. The family is the largest influence on a person's development, more so than institutions or cultures outside of the family. Therefore, the influence of birth order in the family structure is worth looking at. However, keep in mind when searching for reliable sources on birth order that the only way to discuss birth order is to generalize and stereotype. Take, for example, the following prompt from The University of Maine.

Where Do You Fit?
- Perfectionist, reliable, list-maker, well-organized, critical, serious, scholarly
- Mediator, fewest pictures in the family photo album, avoids conflict, independent, extreme loyalty to the peer group, many friends
- Manipulative, charming, blames others, shows off, people person, good salesperson, precocious

If you identified with the characteristics in the first list, you may be an only child or a first-born. If the second list fits you better, chances are you are a middle child. And if the last list fits you best, you may be the youngest or baby in the family. Birth order is not a simple system stereotyping all first-borns as having one personality, all second-borns another, and all last-born kids a third. Instead, birth order is about tendencies and general characteristics that may often apply. Other things also influence birth order.2

The underlying factor in birth order is parental attention. The first-born experienced 100% of each parent's attention, while the second-born will never know that reality, experiencing only 50% of each parent's attention. The expectations and implications of birth order are in the hands of the parents.
If you expect the first-born to do more difficult chores than the second-born based solely on age, while typically siding with the younger child in disputes, then you may notice a personality development that follows suit. Putting extra time and attention into treating the children as individuals, and giving them each the attention they deserve independent of each other, is a good place to start. Expectations are best adjusted so that the older child is not naturally expected to get higher grades while the younger child is expected to cause youthful trouble. Holding all children to the same standards, while valuing them as individuals with separate interests and talents, goes a long way in avoiding Second Child Syndrome.
Reading “The Purpose of the Church” In 1948, most congregations and houses of worship in the United States were segregated (separated) by the color of their members’ skin. Some were segregated by law; others by custom or by a lack of actively trying to welcome and include all people. The First Unitarian Society of Chicago was one of these congregations. Although their church was located in a neighborhood with many African Americans, only whites could join, according to the written by-laws (rules) of the church, and according to custom. The day came that many members began to believe they needed to take action against racism, if they really wanted to live their values and principles. The minister, the Reverend Leslie Pennington, was ready for this day and ready to take action. So was James Luther Adams. James Luther Adams was a famous liberal theologian and social ethicist — a person who studies religion, beliefs, and values. Doctor Adams taught at the Meadville Lombard Theological School, right across the street from the First Unitarian Society of Chicago. And he was a member of the congregation’s board of directors — a leader in the congregation. Along with some others, Reverend Pennington and James Luther Adams proposed a change in the church’s by-laws to desegregate the church and welcome people whatever the color of their skin. They wanted to include, not exclude. They saw this as a way to put their love into action. When the congregation’s Board of Directors considered the desegregation proposal, most of them supported it. However, one member of the board objected. “Your new program is making desegregation into a creed,” he said. “You are asking everyone in our church to say they believe desegregating, or inviting, even recruiting people of color to attend church here is a good way to tackle racism. What if some members don’t believe this?” Desegregation was a very controversial topic. In 1948, anything about skin color and racism was controversial. 
Some people, even some who supported African Americans in demanding their civil liberties, believed in a separate, but equal policy which kept people apart based on their skin color. Respectful debate ensued at the First Unitarian Society of Chicago. Both sides felt, in their hearts, that their belief was right. Perhaps they were so busy trying to be heard they forgot to listen. And so, they kept on talking. The debate went on in the Board of Directors’ meeting until the early hours of the morning. Everyone was exhausted and frustrated. Finally, James Luther Adams remembered that we should be listening twice as much as talking. He asked the person who had voiced the strongest objection, “What do you say is the purpose of this church?” Suddenly, everyone was listening. Everyone wanted to hear the answer to this crucial question. Probably, the person who objected was listening especially hard to his own heart, as well as to the words he had heard from other Board members through the long discussion. The Board member who opposed opening the church to people of color finally replied. “Okay, Jim. The purpose of this church is to get hold of people like me and change them.” The First Unitarian Society of Chicago successfully desegregated. Sermon – The Limits of Diversity Over this past year, Marc and I had a few philosophical conversations we kept coming back to – one of many joys of having multiple ministers in a church – lots of opportunity for philosophical debate. One of the questions we often returned to has been the philosophical question of unity. Given life’s incredible diversity, given the unfathomable cultural differences present across this world, the unimaginable vastness of the universe – is there any reasonable way that we Unitarians today, can affirm life’s oneness? One of us would throw out such a question, and then inevitably, the other of us would bring up Stephen Prothero’s book, God is Not One. 
The Enlightenment gave us the idea of oneness, after all, but today's post-modern world likely requires a less romantic concept of reality – life is broken, fragmented, diverse, random. As Prothero proposes – God is not one. We are not one. This is the theology that one of my professors in seminary, Edward Antonio, would often assert. God is not one. Or at least Professor Antonio would start there. Perhaps because he was talking mostly to Western monotheists, and part of the point of any good introductory theology course is to shake you out of your assumptions. Western monotheists would be a reasonable description – by the way – of the Unitarian Universalists in the room, since at least historically we differentiated ourselves so much on the point of God's unity that we took it into our name – the Unit-arians. And although UUs today may offer a variety of opinions on the idea of god, we mostly come to agreement around the idea of Life itself being One, an idea of an underlying unity at the heart of Reality. Even science seems to indicate that at some basic level – all life is one. Professor Antonio said, however, that this whole Unity business is a problem. Because, in most cases, "Unity" simply means we are ascribing our perspective onto others. Looking for our oneness, we assume we are the same in ways we are actually not, which often makes the minority or less powerful ways of being or doing completely invisible. We assume the most common way is the right way, and historically this perspective has paved the way for cultural and religious imperialism and colonialism. Professor Antonio's approach, particularly informed by his experience as a colonized African, encouraged us Western monotheists to stop emphasizing reality's ultimate unity and instead recognize its vast diversity. All paths are not simply different routes up the same mountain. God is Not One. God is many, multiple, diverse, divergent.
Taking this in – taking in life's profound diversity, how profoundly different each person and each expression of life truly is, how ever-changing and growing the spirit of life is – there seems to be only one appropriate response, which is humility. We have a small sliver of truth! We hold such a little piece of life's reality. Our experience is so limited, and diversity and possibility so vast! Knowing this, all questions and inquiries, all interactions with others, all building of relationship – all of these suddenly require such care and attention, such an awareness that we cannot know what the other knows simply because they are another human in our proximity. Knowing and accepting our diversity leads necessarily to humility. Perhaps predictably, once my professor saw this recognition and practice of humility in his students, he would pretty quickly encourage us to articulate a new and more nuanced theology of unity. Both of these things are true, after all – unity and diversity. Actually, this combination offers one of my favorite ways to conceive of the divine. On the one hand, life is ridiculously, overwhelmingly diverse; there are not many paths up one mountain, and perhaps not even many paths up different mountains, but maybe when we look more fully, we realize that there are mountains, and plains, and deserts, and oceans and sky. And yet, on the other hand, holding that diversity – somehow, mysteriously, impossibly – is some kind of unity. Some kind of link. Some kind of profound connection across all time and space and incarnation. I don't pretend to understand it, but I feel it. I believe it. We are one. And – we aren't. A couple years ago, I was serving a church that was struggling with its diversity, specifically its theological diversity. They wanted to resolve their differences and more clearly define themselves on the humanist-theist spectrum.
No matter how often I reminded them that as Unitarian Universalists, we came together not over shared beliefs but over shared promises – that we were covenantal rather than creedal – they just couldn't let it go. I urged them to be open to the ways they could, that they must, learn from one another's experience and understanding – that we all hold a piece of the truth, and we need each other to approach wholeness. They'd nod their heads and say they got it, but when it came right down to it, they couldn't escape that desire to be with "like-minded" people. People who believe and think as they did. Finally I decided to preach a sermon on the value of pluralism as a source of truth and authority in Unitarian Universalism. I told them: Officially, Unitarian Universalists claim six sources – first, direct and personal experience; second, words and deeds of prophetic women and men; third, wisdom from all the world's religions; fourth, Jewish and Christian teachings; fifth, humanist teachings; and finally, our sixth source, the spiritual teachings of earth-centered religions and the teachings of the natural world. But, I suggested, in a world where we realize that we all hold just a piece of the truth, I would propose another necessary source of wisdom and authority – a seventh source – which is the gathered community. The community who has promised to walk together, in right relationship, in mutual obligation. Without being a part of such a community, truth is not fully discernible, and life is less than whole. And, I told them, critical to a meaningful use of the gathered community as a source of knowledge and truth is the presence of diversity. The more diversity the better.
The more places where our individuality and self-expression meet up with the boundaries of another individual and their self-expression, the more possibility for the creative interchange that allows us all to grow and change, to release ourselves more fully from our illusions, to get just a little closer to Reality. My sermon went over great, and looking back, I think it helped empower the congregation in its differences – rather than seeing them as a problem to be solved. It went great, except for one little unintended consequence. One of our members, who was particularly proud of his capacity to see things from a totally different perspective than any other person in the church, and to then make a point of hijacking the whole community to accommodate his personal opinion, came to talk to me about the sermon. Let me confess, this person was someone I really hoped I could reach with the sermon, someone I hoped would hear it as a call to be more open to changing and growing with others – a call to humility, perhaps. So when he came to see me, and it turned out he loved it, wanted a copy, wanted to make sure people who weren't there that Sunday would know about it – well, I was feeling really proud and self-congratulatory, like – yes! I did it! And then he told me what he loved about it. "I loved it," he said, "because it helped underscore just how important it is that in every meeting, every gathering, every time I ever come to something for the church, I will be sure to represent 'the Devil's Advocate.'" It took everything in me not to tell him just how right that label was from my perspective. I didn't. I didn't roll my eyes or sigh in exasperation. What I did instead was listen and try to keep the conversation going. Ministry is always a process. A longer process for some than others. But the other thing I did was begin to plot my follow-up sermon. It took me nearly two years, and a move to a new congregation, but this Sunday is that follow-up sermon.
Which brings me to an illustration that my colleague, the Rev. Robert Latham – coincidentally my co-minister during my first year of ministry in my former congregation – likes to refer to whenever the topic of diversity comes up. It's from a Hagar the Horrible cartoon. Do you know Hagar the Horrible? Well, this one shows all the vikings in a boat, rowing. Some of the vikings are using one end of the paddle. Some, the other. Some are going in one direction. Some the other. Some are paddling fast. Some slowly. And the boat – you can imagine – is sitting motionless, despite lots and lots of viking activity. And the caption has the one guy saying to the guy calling out the instructions: "Will you please stop saying 'different strokes for different folks'?!" Whether we are talking about ultimate reality and the spirit of life, or the practical communal incarnations of these holiest visions, this Hagar the Horrible cartoon reminds us that diversity is not the whole story – any more than the whole story is a matter of unity and conformity. For a moment, let's think back to our singing earlier. The four songs. All different, from different traditions. We each chose the song that spoke to us, representing our free capacity to choose our own path. Diversity. And yet, in order for the music to work, we had to sing in a shared rhythm. We had to concede to a shared key. We had to follow the leader of our chosen song. We had to limit the expressions of diversity. Robert and Hagar the Horrible and our singing together all remind us that at some point, diversity becomes a danger rather than a gift. At some point, in order for a community to grow and for life to flourish, we must discern appropriate limits for our diversity, and with humility construct a robust practice and theology of unity.
For a community to grow and life to flourish, we must acknowledge a boundary where we stop saying – everything anyone does or says we affirm out of respect for diversity – and instead say – if you act like this, if you believe like this, if you behave like this, it's not going to work here. For a community to grow, it must have boundaries – things we can point to and say – here is the edge of what it means to be a Unitarian Universalist, or what it means to be a member of Foothills Unitarian Church – and anything beyond this threatens our capacity to move forward together. Perhaps you have said – or heard it said – that Unitarian Universalists can believe whatever they want. The story we shared about the church in Chicago offers a great counterpoint to this idea. Somewhat paradoxically, in order to affirm the value of diverse community, that church had to draw a boundary around the acceptable limits of membership in their community. They had to say that diversity required a limit. The question that the board member asked – "What if some members don't believe this?" – is a question we in an intentionally pluralistic community have had to wrestle with throughout our history. When do we affirm and accommodate, and when do we set aside individual perspective, opinion, or comfort for the good of the whole? In order for the Chicago church to navigate where and how to draw the boundaries of diversity in community, they went back to what their core purpose was – and in that return, they had to acknowledge that their core purpose could not be to affirm each person's individual beliefs. In fact, it was the board member who was most personally challenged by the move to desegregation who acknowledged that the church's purpose had to be transformation – which I would say begins with the humble recognition that some or even many of our ideas and practices might need to change so that we can all be our best, most loving selves.
I find that Unitarian Universalist congregations – which is to say, people – struggle to draw the boundaries at the appropriate and helpful limits of diversity in two main ways. First, as individuals, we fail to engage that practice of humility. We approach our congregations, or our friends and family, as if everyone agrees with us, or should. And then, when things don't go as we think they should, we are confident that the church – or our friends, or family – need us to bring them back on course. Now maybe you've already in your mind thought of someone you know who fits this description. But what I'd ask you to do is to imagine that I'm talking about you. Or me. Because we are all well-trained in this tendency. Our educational systems teach us to prioritize certainty, and our consumer culture tells us we should get it "our way." And so though we might like to believe that these not-so-humble trouble-makers are someone else, the practice of humility begins by acknowledging that at any time any one of us might be that board member Lenny gave voice to earlier – that person ascribing our way as everyone's way, that dear one who insists on our personal preferences, failing to remember that the true purpose of the church is to help us all become our best selves – which likely means letting go of our personal preferences! The second way I see Unitarian Universalists – that is to say, people – struggling to set the appropriate limits for diversity falls into a category we might describe as "conflict avoidance." Although that's not what we usually think or say we're up to when we are missing this limit – we call it affirming individual freedom, affirming an individual's dignity, or respecting individuality. In the name of our first principle, we can let all kinds of inappropriate and destructive behaviors fly – in our churches, in our communities, in our families.
We sacrifice the good of the whole because one among us lacks the self-awareness or self-control to recognize that the mission of the congregation is not to serve their personal agenda. The Chicago church could have easily stepped away from that desegregation conversation when it met resistance, placing a higher value on one person’s discomfort than on their community’s greater vision for the world. Instead, they stayed in the difficult dialogue and asserted the boundaries of the church’s mission. We fail our communities when we avoid drawing these boundaries appropriately – but we also fail our friends who are acting out of covenant. It does none of us any good to be affirmed in our destructive, selfish behaviors by our religious community. Discerning the limits of diversity in community requires careful and loving consideration. It is – like everything – a practice. To help us in our practice, I want to conclude this sermon by reminding us of words often attributed to Francis David, the 16th-century Transylvanian Unitarian: “We need not think alike to love alike.” Contemporary Unitarian Universalist minister and scholar Alice Blair Wesley says that in our congregations we often assert how much we need not think alike, but we fail to fully discern and articulate what it means to love alike. “Not thinking alike” is where we can celebrate diversity; “loving alike” is where we must draw its limit. James Luther Adams and his church in 1948 decided that “loving alike” meant upholding desegregation. Loving alike today helps us assert the full inclusion of gay, lesbian, bisexual, transgender and queer people in our congregations. Loving alike means there are boundaries around our community, things that describe what it means to be a Unitarian Universalist, a member of the Foothills Unitarian Church. As we walk this path together with people who may not think alike, may our loving alike allow us to be a place where we can discover our best selves. 
And together may we grow in our capacity to love one another, the world, and the spirit of life itself. In its vast diversity, and in our mysterious and ultimate unity.
Drinking coffee has become a routine for many people as early as breakfast time, since it gives them the energy to accomplish the tasks ahead of them. Some people even find themselves in a bad mood if they miss their morning cup. Nonetheless, to this day, even devoted coffee lovers are not well aware of the many nutrients found in coffee that bring about a lot of health advantages. Recent research studies have shown that regularly taking in coffee decreases a person's chances of getting Parkinson’s disease, dementia, stroke, certain kinds of cancer, arrhythmia, and type 2 diabetes. These studies have also suggested that as coffee consumption increases, so do its benefits. Dr Thomas DePaulis, a researcher at the Institute for Coffee Studies based at Vanderbilt University, has concluded that coffee is not detrimental to a person's health at all. The institute monitors different coffee research programs being conducted across the globe and has established that there is minimal harm from drinking coffee. In six of the coffee studies they closely monitor, regular coffee drinkers had an 80 percent lower chance of developing Parkinson’s disease, and in three of those studies the risk fell further the more coffee a participant drank. Additional studies have shown that taking in a minimum of 2 cups of coffee a day reduces the risk of getting colon cancer by 25% as well as the risk of getting liver cirrhosis by 80%. Further studies have also suggested that coffee can reduce the effects brought about by common vices: your chances of getting lung cancer or liver cirrhosis are decreased if you drink coffee regularly, even if you are a smoker or a drinker. 
How are type 2 diabetes and Parkinson’s disease connected to drinking coffee? Coffee's health benefits come from the biochemical ingredients it contains. Caffeine, for instance, is an antioxidant in coffee that has been shown to affect the body in a beneficial way. Compared to other sources of caffeine taken in the morning, coffee has been found to have the highest caffeine content: a cup of coffee delivers roughly three and a half times more caffeine than hot chocolate or a cup of tea. High caffeine levels have been shown to have a positive effect on reducing one’s risk of developing Parkinson’s disease. And because of the elevated levels of chromium and magnesium in coffee, insulin is used more effectively by the body, which is why the risk of developing type 2 diabetes is also reduced.
Rangoli is found in front of all Hindu households in India, and whichever state or region you visit, you can see a variety of rangoli in front of houses, temples, religious institutions such as mathas and Veda Patashalas, Kalyana Mantapas, and even in front of venues hosting social and religious events. Rangoli is drawn on all auspicious occasions such as marriages, Gruhapravesha and Nishchitarta (engagement ceremony). The tradition of drawing on the floor and/or in front of houses is called Rangoli. Therefore, it is also called floor art, and it is an integral part of an Indian household. Rangoli is called by different names in different regions. There can be innumerable designs, and there are designs for every occasion. In earlier days, Rangoli was created with Hittu, which attracted sparrows. Now, they are drawn with white chalk powder, and on occasions colours are used. One of the leading haridasas of Karnataka, Jagannatha Dasa of Manvi, was well-known as Rangoli Dasa as he was an expert at this art. He would draw the image of Hari and other Gods as he sang about them and their glories. We find different Rangoli designs in front of Raghavendra Swamy Mathas. The above image of a Rangoli drawing is from the Raghavendra Swamy Temple in Jayanagar 4th Block. We saw that a woman had drawn a beautiful Rangoli of Rayaru in front of the Tulasi Katte, the platform where Tulasi is planted. The Rangoli shows Rayaru holding the Veena, his favourite musical instrument. Rayaru, like his father Thomanna Bhat and his grandfather Kanakachala Bhatta, was an expert Veena player. Both his grandfather and father were at the court of the Vijayanagar Emperor. We decided to take a photograph of the Rangoli drawing before it disappeared.
Nearly a year after the Japanese tsunami and subsequent meltdown at the Fukushima nuclear plant, the good news is that the risk from radiation doesn’t seem to be as high as many initially feared. Take the Pacific Ocean, for example, where most of the radioactive fallout from the plant eventually ended up. Nicholas Fisher, a marine science professor at New York’s Stony Brook University, took samples of the seawater three months after the accident. He found levels of radiation that were elevated, but still just a fraction of the amount of radioactivity sea life is exposed to from naturally occurring potassium in seawater. As Fisher told CNN in an interview: The total radiation in the marine organisms that we collected from Fukushima is still less than the natural radiation background that the animals already had, and quite a bit less. It’s about 20%. So that’s good news for the Pacific Ocean—and those of us who might want to take fish from it. Early reports also suggest that the radiation risk in the area surrounding Fukushima may be less severe as well, though scientists will need more time to be certain. But what about the U.S.? Thousands of Americans bought potassium iodide pills in the wake of the Fukushima meltdown, afraid that a radioactive cloud was on its way to the U.S. Did any Fukushima fallout make it over here—and if so, was it enough to cause any harm? It turns out that you can rest easy. In a new study, researchers from the U.S. Geological Survey found that only minute levels of radiation from Fukushima reached the U.S. in the immediate aftermath of the accident. As part of the National Atmospheric Deposition Program (NADP), USGS scientists looked for radiation at 167 sites around the country a few weeks after Fukushima. Just about 20% showed levels of radiation from the plant—and those levels were minimal at most, well below any threat to human health. 
From USGS director Marcia McNutt: Japan’s unfortunate nuclear nightmare provides a rare opportunity for U.S. scientists to test an infrequently needed national capability for detecting and monitoring nuclear fallout over a wide network. Had this been a national incident, NADP would have revealed the spatial and temporal patterns of radioactive contamination in order to help protect people and the environment. The greatest concentration of radiation was found on the West Coast, which makes sense. Radioactive particles can be carried in the high atmosphere for thousands of miles, but when they meet a rain system, they can fall to the Earth as precipitation, spreading the radiation—hence the term “fallout.” Some fallout from Fukushima was found as far away as the East Coast and Europe in the month after the accident, but the levels were so low that researchers likely wouldn’t have detected the radiation without looking. The positive, as McNutt put it, is that the accident gave the USGS a chance to work with its radioactive detection network, which could come in handy if another accident occurs—especially one in the U.S. Which of course brings us to the big question from Fukushima: could it happen here? That remains unanswered, but anyone interested in the issue should read a new article in Prevention by the journalist Chanan Tigay. Tigay spent months reporting inside the Diablo Canyon nuclear power plant in California, which sits near four earthquake faults. The piece looks at the risks of living next to one of the biggest nuclear plants in the U.S.—Diablo provides power for 3 million homes—as well as the benefits, and asks whether nuclear power is worth it: Diablo Canyon’s watchdogs have long worried about the plant’s proximity to the faults, but in the wake of Fukushima, fears have escalated. 
Mothers for Peace, an antinuclear activist group that has opposed the operation at Diablo Canyon since the start, says its in-box has swelled with questions about safety. Sen. Sam Blakeslee, a California Republican who represents San Luis Obispo and holds a PhD in earthquake studies, says he can understand why. “This entire area is a patchwork of faults,” he says. “It was probably an imprudent location to site a nuclear power plant.” Diablo Canyon and Fukushima have that in common–but so do many other power plants. The production of nuclear power relies on an abundance of water: That’s why all but one of America’s 104 nuclear reactors sit on oceans, lakes, or rivers. And earthquakes of varying strength have been detected near most US nuclear power plants. Diablo Canyon has a strong safety record: It has never had a nuclear emergency or meltdown, it’s designed to withstand a 7.5-magnitude earthquake, and it routinely self-reports problems, which experts consider to be a sign that the system is working as it should. What’s more, nature has cooperated: Diablo Canyon has never been overrun by a tsunami or damaged by an earthquake. Even so, it’s hard to shake off the echo of similar assurances–assurances now known as the “safety myth”–that the Japanese were given before Fukushima. Now, in that catastrophe’s aftermath, the fundamental question looms: How safe is safe enough when you’re talking about nuclear energy? I don’t know the answer yet—but I do know the article is well worth a read.
I was consulting with a Montessori school that is making a tremendous effort to improve its pedagogy, and we spent two full days reviewing the Montessori language theory and material presentations. You probably already know that there are three main areas in the Montessori language program for 3 to 6+ year-olds:
- Spoken language
- Preparation for writing
- Reading

Of course, before you can really do anything beyond the oral language work, you have to steep the child in practical life, especially all of those preliminary exercises. (This school spent 3 days with me on that endeavor last summer.) Anyway, it took the entire first day just to go over theory and spoken language presentations, and we could have spent a second day on it. Now I know that in some Montessori teacher training programs, these lessons get really short shrift. However, since they are the basis of the entire language program, I thought I'd give you a few highlights here. First, here's a picture of the spoken language shelves that we set up. Everything always goes from top to bottom, left to right. So, the top left has a framed piece of art to remind the Guide to give the Conversations at a Picture lessons. Next, there is a small photo album of pictures from the class in action. This is to remind the Guide to give True Story lessons. Next there is a lyric card... this is just the lyrics to a song you want to be sure to sing that week, printed on card stock and standing up in a postcard holder. You won't use this when you sing the song with the children, but it is on the shelf as a reminder to the Guide to sing the song. Swap the song every week. Finally, the top shelf has a poetry card. Just like the lyric card, this is a short poem printed on card stock that is meant to remind the Guide to recite that poem (poems must be memorized... don't read it!) with the children. Shelf two has a basket with objects for the Sound Game. 
You can have an empty basket here and use it to collect objects from around the room or you can prepare objects in advance. Both strategies are good. Next, two baskets of vocabulary cards. Shelf 3 starts with a basket of visual matching objects (nice, real miniatures or actual objects... no pink elephants) followed by a tray with matching cards and then a tray with two packets of matching cards for matching/sorting work. To put this in context, I should probably give you the list of spoken language presentations. Remember that the spoken language lessons require formal presentations just like every other material in the environment. And formal presentation means we need to practice (with our co-teacher/assistant if we have one) ahead of time so our lessons are seamless. Here is a list of the lessons we covered:
- Natural conversations
- Conversations at a picture
- Poetry (memorized)
- Songs (memorized)
- Reading classified books (e.g., At the Market)
- Reading literature (e.g., The Dot)
- True stories
- Listen & Do
- Furnishings & surroundings
- Names of the exercises
- Objects within exercises
- Classifications (e.g., all brushes)
- Entire exercise (all parts)
- Double commands (touch the third rod and pick up the smallest brown stair)
- Spoken classification (e.g., think of fruits)
- Question games
- News periods
- Holiday discussions
- 3-period lessons
- Parts of the body
- Sensorial material language
- Classified vocabulary cards (groups of things)
- Discussing (elicit definition)
- Sorting & Matching
- Sound games
- Level 1: Beginning sounds
- Level 2: Beginning & ending sounds
- Level 3: Beginning, ending, and any other sounds
- Level 4: Beginning, ending, and every other sound
- Notes: Lasts 3-5 minutes; Always teacher led; When children are consistently successful with Level 1, introduce the sandpaper letters; When they have mastered the sandpaper letters AND are consistently successful with Level 4 sound games, introduce the movable alphabet

We discussed 
the theory behind each lesson, gave a model lesson, and then practiced giving the lesson in small teams. It was an incredibly eye-opening experience. Here are some more photos of the entire language area. First, the children's library, where they can go whenever they like to look at a book. Such a calm and lovely space. I wish we had had a little lamp to add in next to the chair. Next, two pictures to give you an overall sense of the language area. Now, specific images of the preparation for writing shelves (sandpaper letters, movable alphabets, metal insets, and chalkboards). And the Reading shelves... these are a bit too crowded IMHO, but we were very pressed to set up the area in time for the workshop... your environment should probably have two shelves to cover what we did in one. I think an awesome staff development program might take one of these lessons to discuss, present, and practice at each professional development meeting. You could also easily bring poetry to life in your school by asking a different Guide to prepare a short, memorized poem to recite/teach to the staff at each staff meeting. Poetry, of course, is an oral tradition, and so poems should be memorized before being presented to the children. Keep them short at first (just one or two stanzas) to increase the possibility of success and bolster enthusiasm (the same goes for introducing poetry to the children).
Kissing: It really is all about chemistry By Julie Steenhuysen CHICAGO (Reuters) - Valentine Lotharios beware: There's a lot riding on a kiss, new studies on the science of smooching suggest. Researchers said kissing sets off a complex set of chemical reactions, and in some cases, a bad kiss could be the kiss of death for a burgeoning romance. "A kiss is a mechanism for mate assessment," said Helen Fisher of Rutgers University in New Jersey, who is presenting her findings on Saturday at the American Association for the Advancement of Science meeting in Chicago. Fisher, an anthropologist, told a news briefing that kissing is something more than 90 percent of human societies practice, but scientists are just beginning to understand the science of kissing, which is known as philematology. One theory of kissing is that it is intended to promote bonding. Wendy Hill, a researcher at Lafayette College in Pennsylvania who is presenting her findings at the meeting, set out to test this on college students. She was looking for changes specifically in oxytocin, a "love" hormone linked to feelings of sexual pleasure, bonding and maternal care. Since oxytocin has been known to lead to decreases in the stress hormone cortisol, she decided to look at that as well, she told reporters on Friday. The researchers studied 15 heterosexual college couples between 18 and 22 who were assigned either to go off and kiss in a room in the college health center or just to hold hands and talk to each other for 15 minutes. Blood and saliva tests showed that men in the kissing group had a burst of oxytocin, but in women, levels of this hormone fell. "Cortisol levels for everyone declined," Hill said.
Safety Topic: Safety Tips According to IAPA President and CEO Maureen Shaw, there are five simple steps employers should take to promote safety and health in the workplace: - Show commitment to workplace health and safety at the CEO and senior management level. - Have workers participate in workplace safety efforts. - Have an effective joint health and safety committee. - Comply with legislative regulations. - Provide training and education for employees in occupational health and safety. “The sad fact is that workplace fatalities, injuries and diseases are preventable, but too many organizations haven’t yet made that commitment to making workers’ health and safety a real business priority,” Shaw adds. Reported in: Occupational Hazards
If you live in the southern United States, you may be pleased to hear that fire ants have met their match in another invader from South America, the crazy ant (Nylanderia fulva).
Photographed by April Nobile, 6/8/07
Edward LeBrun, Nathan Jones and Lawrence Gilbert of The University of Texas at Austin found that the crazy ants can detoxify the venom of the fire ants, rendering the latter weaponless in the ensuing battle. The crazy ants do this by daubing themselves with their own abdominal secretions. Crazy ants that were permitted to detoxify the fire ant venom had a 98% survival rate. That rate dropped by half when the crazy ants' own venom glands were sealed, preventing them from applying the antidote. With the arrival of crazy ants, it looks like the fire ants' days may be numbered. The news isn't all good. While crazy ants are likely to spread more slowly than fire ants did, they will eventually cause many of the same problems, devastating native populations of insects, and, in turn, the animals that depend on those insects.
Extracurricular activities and athletic programs are important features of American high schools. Little research has been done on how much instructional time is negatively affected or lost due to these activities. The emphasis on extracurricular activities increases the sense of school community in a diverse school, but it also takes away from the already small amount of time teachers have to present new material to their students. Educators and administrators alike need to look carefully at the impact that participation in extracurricular activities has on their classroom and school environments.
Proposed rule aims to reduce public housing residents' exposure to second-hand smoke, as well as reduce smoke-related maintenance costs and fire risk
New York, NY – Research conducted by the Centers for Disease Control and Prevention (CDC) demonstrates that secondhand smoke is an extremely prevalent problem in multi-unit housing. According to the CDC's July 2016 press release, tobacco smoke often travels from smokers’ units into those of non-smokers, as well as common areas such as hallways and lobbies. This puts all residents, especially children, at risk of exposure. According to U.S. Surgeon General Vivek H. Murthy, there is no safe level of exposure to tobacco smoke, and second-hand exposure to children can have devastating consequences, ranging from asthma to Sudden Infant Death Syndrome. In an effort to address this issue, approximately 600 of the nation’s 3,300 public housing authorities (PHAs) have made at least one of their buildings smoke-free. While this is a solid foundation to build upon for a completely smoke-free public housing environment, much of the country’s public housing remains largely unregulated. On November 30th, 2016, Secretary Julián Castro of the U.S. Department of Housing and Urban Development (HUD) announced a new rule requiring all PHAs to provide smoke-free environments for their residents within the following 18 months. Throughout the year, HUD has worked with PHAs, housing and health partners, and tenant advocates to create a final rule which prohibits combustible tobacco products such as cigarettes, cigars, or pipes in all public housing properties, as well as all outdoor areas within 25 feet of housing and administrative office buildings. This rule was created out of the need to provide safe, smoke-free homes for children living in public housing units. Of the 2 million residents living in public housing, more than 760,000 are children under the age of 18. 
HUD’s smoke-free rule is expected not only to dramatically reduce second-hand smoke exposure, but also to decrease smoke-related maintenance and repair costs for PHAs. According to the Centers for Disease Control and Prevention (CDC), this rule will save agencies up to $153 million every year, including $94 million in health care costs, $43 million in renovation of smoking-permitted units, and $16 million in smoke-related fire repairs. “Exposure to secondhand smoke can mean the difference between a healthy childhood and one spent taking recurrent trips to an emergency room to receive treatment for smoke-related health issues,” said Dawn Middleton, Project Director of the COE for HSI. “HUD’s new rule for smoke-free public housing would ensure that future generations can live healthier lives, free from the devastating consequences of secondhand smoke.” For more information about the work of the COE for HSI, visit www.tobaccofreeny.org
About CAI: CAI is a global nonprofit organization dedicated to improving the health and well-being of underserved populations worldwide. Since 1979, CAI has provided customized capacity building services to health and human service organizations in more than 27 countries and in all 50 states. Offering more than 1,500 training programs annually, CAI’s passionate staff works to fulfill its mission: to use the transformative power of education and research to foster a more aware, healthy, compassionate and equitable world. For more information about CAI, visit our website: www.caiglobal.org.
About the Center of Excellence for Health Systems Improvement: With funding from the New York State Department of Health, Bureau of Tobacco Control, CAI serves as the Center of Excellence for Health Systems Improvement (COE for HSI) for a Tobacco-Free New York. The COE for HSI promotes large-scale systems and policy changes to support the universal provision of evidence-based tobacco dependence treatment services. 
The COE for HSI aims to support 10 regional contractors throughout New York State working with health care systems and organizations that serve those populations for which tobacco use prevalence rates have not decreased in recent years - adults with low incomes, less than a high school diploma, and/or serious mental illness. Focused on providing capacity-building assistance services around topics like how to engage and obtain buy-in from leadership to implement the kinds of systems-level changes that will result in identification and intervention with every tobacco user who seeks care, the COE for HSI also will offer materials and resources to support contractors in their regional work. For more information, visit www.tobaccofreeny.org.
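The CDC's projected savings cited in this release break down into three components. As a quick sanity check (a minimal sketch using only the figures reported above, with hypothetical variable names), the components do sum to the stated $153 million annual total:

```python
# Sanity-check the CDC's projected annual savings for HUD's smoke-free rule,
# using the component figures reported above (all values in millions of USD).
savings_millions = {
    "health care costs": 94,
    "renovation of smoking-permitted units": 43,
    "smoke-related fire repairs": 16,
}

total = sum(savings_millions.values())
print(f"Total projected annual savings: ${total} million")  # $153 million
```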
A bicycle is an efficient machine, but only when it's a proper fit. The correct position of the seat, pedals and handlebars in relation to your height ensures that you enjoy an efficient and enjoyable ride. Because your foot is your body's direct connection for providing motion to the bicycle, proper foot positioning on the pedal is extremely important. You should position your foot over your bicycle pedal so that the ball of your foot, also known as the metatarsal area, is directly over the pivot arm of the pedal. The pedal's pivot arm is the axle that runs through the body of the pedal. Positioning the ball of your foot over this part of the pedal maximizes stability when bicycling. Practice proper positioning every time you ride, until you can feel when your foot is out of place on the pedal. Placing your foot properly over the body of the pedal will help to ensure maximum pedal power and efficiency. Positioning the pedal at the ball of your foot promotes a process called "ankling." Ankling refers to the natural rolling of your foot as it rotates through the crank arm's full range of motion. If your foot is farther back on the pedal, you will lose reach and power because your foot cannot rotate properly. If your foot is too far forward, it will place too much pressure on your toes, reducing power and, in some cases, causing pain or injury.
Clips And Cages
Cages and clips provide more efficient riding by attaching your foot to the pedal. Cages work by providing a metal or plastic form that fits over your foot and the bike pedal. Adjust the cage's forward position and the tension on your foot to ensure proper alignment. Specially designed biking shoes can clip or snap onto your bicycle pedals. You need to adjust them properly to align the ball of your foot with the pedal's axis. Proper foot positioning on the pedal is easier when other bicycle fit adjustments are correct. Your foot's position is dictated in large part by your bicycle seat's position. 
You should ensure that your seat's height, fore-and-aft position, and tilt are properly adjusted. When fit correctly, your feet should comfortably reach the pedals throughout your full range of motion. When adjusting the seat's position, make sure to wear your normal riding shoes, which can affect fit, especially seat height.
The TEACCH Autism Program started in 1972 as part of the University of North Carolina. TEACCH stands for Treatment and Education of Autistic and Communication Handicapped Children. It was developed as a system of university-based regional centers to serve children, adolescents and adults with Autism Spectrum Disorder and their families and consists of active clinical, teaching, and research programmes. Across the state of North Carolina, TEACCH operates 7 community regional clinics as well as a vocational/residential facility for adults with ASD. Each of the centres provides core services and unique demonstration programmes meeting the needs of individuals with ASD, their families, and professionals. TEACCH additionally supports student and professional training activities within the state, the US, and around the world. In the UK many aspects of the TEACCH approach are adapted and used in both specialist and mainstream schools, as well as by parents at home. One of the 4 major components of structured teaching widely used in the UK is the individual work system used for developing independence in school (The other 3 major components are: physical organisation; schedules; and learning task organisation – Schopler & Mesibov, 1995). Typically, young people using the work system to support their learning will have a work station set up within the classroom that allows their work activities to be structured using trays or folders which the young person can then work through with minimal adult support. The structured work station provides a prosthetic that allows the young person to make sense of how to understand and proceed with the activities, and reduces the need for adult direction and support. It’s important to remember to be creative with how these structured approaches are used – what works well for one young person may not work well for another, but the idea of making the task more easily understood and developing independence is an important one!
A general term meaning inflammation of the muscles, myositis includes the following diseases:
- polymyositis
- dermatomyositis
- inclusion body myositis
- juvenile myositis

The above diseases are also referred to as inflammatory myopathies. They cause inflammation within muscle and muscle damage. Polymyositis, dermatomyositis, and juvenile myositis are all autoimmune diseases, meaning the body’s immune system is attacking the muscle. While the immune system may also cause muscle damage in inclusion body myositis, this may not be the cause of the disease. Although myositis is often treatable, these diseases are poorly understood and do not always completely respond to current medications. Muscle inflammation and damage may also be caused by certain medications. These are called toxic myopathies. Perhaps the most common toxic myopathy is caused by statin medications, which are frequently prescribed to lower cholesterol levels. In most cases, the muscle can recover once the problem medication is identified and stopped. Symptoms of myositis may include:
- trouble rising from a chair
- difficulty climbing stairs or lifting arms
- tired feeling after standing or walking
- trouble swallowing or breathing
- muscle pain and soreness that does not resolve after a few weeks
- known elevations in muscle enzymes by blood tests (CPK or aldolase)

Although the inflammatory myopathies affect about 50,000 Americans, often they are not diagnosed correctly. In part, this is because patients with autoimmune myopathies have many of the same symptoms as those with inclusion body myositis, toxic myopathy, or muscular dystrophies, which are inherited forms of muscle disease. As a result, patients with the above symptoms should be tested and evaluated by physicians and medical staff who specialize in diseases of the muscle, such as those practicing at the Johns Hopkins Myositis Center. We use specific guidelines to evaluate and diagnose patients and all care is managed in one center by our caring and committed staff.
Economic growth is the over-arching policy objective of governments worldwide. Yet its long-term viability is increasingly questioned because of environmental impacts and impending and actual shortages of energy and material resources. Furthermore, rising incomes in rich countries bear little relation to gains in happiness and well-being. Growth has not eliminated poverty, brought full employment or protected the environment. Results from a simulation model of the Canadian economy suggest that it is possible to have full employment, eradicate poverty, reduce greenhouse gas emissions and maintain fiscal balance without economic growth. It's time to turn our attention away from pursuing growth and towards specific objectives more directly related to our well-being and that of the planet. For more information about the Centre for Inquiry, please visit http://www.cficanada.ca.
Back in 1887, Thomas Edison and his crew in West Orange, New Jersey, invented one of the first ways to view a motion picture: a kinetoscope. It was like a super early version of the film projector. A string of photographs would flash across a peephole, where folks could view a moving image. As National Geographic notes, Edison was tipped to the idea from the British photographer Eadweard Muybridge. Muybridge made images like the below gif, which clued Edison to the fact that motion could be conveyed through a series of photos. After seeing this, Edison — who'd already made recorded history with the phonograph — decided he needed to get into the motion picture game. The inventor wrote: "I am experimenting upon an instrument which does for the Eye what the phonograph does for the Ear, which is the recording and reproduction of things in motion, and in such a form as to be both cheap, practical and convenient." We can learn a couple of things from Edison's analogizing. First, cognitive scientists have confirmed that it's easier to learn new things when you already have an extensive base of knowledge. Second, the bridge that allows us to come up with new ideas is usually an analogy: a way of looking at two things in your memory or in the world and seeing the similarity in their underlying structures. Analogy may be the best way to brainstorm new ideas — and it's a ridiculously old technique. Frequently, the "answer" to a new question exists in a solution somewhere out there in the world. It's just a matter of finding the right fit. Nat Geo tips us to the first recorded invention by analogy. Some 2000 years ago, the Roman architect-engineer Vitruvius used an analogy to figure out how to build an excellent theatre. "As in the case of the waves formed in the water, so it is in the case of the voice," the architect wrote.
"The first wave, when there is no obstruction to interrupt it, does not break up the second or the following waves, but they all reach the ears of the lowest and highest spectators without an echo." Analogy helped Johannes Kepler untangle the laws of planetary motion. The German astronomer thought that gravity — though it didn't have a name yet — could act like light. Just like light could move from the sun to the planets, a force could keep them in orbit. Modern-day office folk can employ analogies, too.

How to analogize your way to better ideas

In a new paper, University of Pittsburgh researchers Joel Chan and Christian Schunn tracked the brainstorming sessions of a design firm trying to make a handheld printer for kids. The designers made new analogies every five minutes. Those analogies allowed the designers to incrementally improve on each other's conceptions. Let's look at the transcript. In this particular selection, they're trying to figure out how to cover the printer head so it doesn't get destroyed by kids when it's not in use. Notice the progression of the idea, analogy by analogy: Incrementally, the idea shifts, recomposes, and evolves. The solution is first like a video tape, then a garage door, and then a rolling garage door. You can get a feeling of what's going on in the designers' minds: They're looking for a new solution to an old problem, how to protect this valuable thing when it's not in use. So they fire off ideas of other types of protectors in a rapid evolution. In 10 seconds, analogies help the idea to evolve. So remember this the next time you're trying to dream up an answer: Look for other "solutions" that already exist out there in the world, and see if they might fit your question, just like Edison, Vitruvius, and Kepler.
Although coal is expected to be the backbone of energy sources in the future, the country has made no progress in limiting extraction so that the mineral can be used in the years to come. Amid a declining global price, the country's coal output keeps growing. The government failed to meet its commitment to limit coal output to the same level as last year. Early this year, the Energy and Mineral Resources Ministry's mineral and coal directorate general said it would cap total coal production for this year at 421 million tons — similar to the 2013 output — partly by controlling coal diggers' work plans. Through October, the policy went smoothly, as the 10-month output was in line with the full-year plan. In November, however, things changed. It was revealed that during the January-November period, as many as 427 million tons of coal had been extracted, with around 366 million tons sent overseas. Thus, the mineral and coal office adjusted its full-year coal output estimate to 458 million tons, almost a 9 percent increase from last year's figure. "The increase in output is partly caused by better documentation," the mineral and coal director general, R. Sukhyar, said in a recent interview. The government implemented on Oct. 1 a policy requiring coal miners to obtain export licenses. The licenses require them to have a "clean and clear status", a term referring to coal miners' compliance with royalty and tax obligations as well as being free from conflict due to overlapping claims of ownership. Moreover, the ministry is also cooperating with the Corruption Eradication Commission (KPK) to crack down on illegal mining activities. The government has been criticized for its poor coal management. There are reports that recorded coal exports are lower than the actual volume of coal shipped overseas. The country's 2013 coal output was also questionable. The official report repeatedly claimed the national output was 421 million tons in 2013.
However, the ministry's Energy Outlook 2014 report said that coal output in 2013 was 431 million tons. The report also said Indonesia's coal resources reached 28.97 billion tons. Assuming that coal output stays at the current level, the country's coal production could only last for the next 50 years, the report added. A domestic market obligation (DMO) for coal has been implemented for several years to ensure that domestic buyers have access to Indonesian coal. The DMO is also set to increase from year to year, with the government expecting producers to reduce their dependency on the overseas market. Last year, 85 million tons were directed to local buyers. Domestic absorption is expected at 95 million tons this year and 110 million tons next year. However, once again infrastructure hurdles made the policy unrealistic. State-owned electricity firm PT Perusahaan Listrik Negara, the biggest domestic user of coal, said that its planned coal usage was 55 million tons this year, meaning there was uncertainty over whether the domestic obligation of 95 million tons of coal would be fully absorbed by local users. The slow growth of power-plant development, caused mostly by land acquisition issues, has contributed to slower growth in domestic coal absorption compared to the pace of production increases set by miners, which are trying to offset the weakening price by selling more. Price pressures are expected to continue as global demand slows. The International Energy Agency's (IEA) World Energy Outlook report said that global coal demand would grow at an average of 0.5 percent per year between 2012 and 2014, a much lower rate than the 2.5 percent of the last 30 years. The growth is hampered not necessarily by a weakening economy but by new air pollution and climate policies in the main markets, particularly in the US, China and Europe.
Amid the bleak outlook and rising environmental concerns, the ministry once again expects next year's production level to be similar to this year's, at no more than 460 million tons. The public will see whether this policy fares just as poorly, as the government still treats coal as a source of state revenue amid declining prices rather than securing supplies for future use. – See more at: http://www.thejakartapost.com/news/2014/12/29/coal-sector-management-poor-amid-policy-changes.html
Many people would argue that the American political process is unfair, but they would say that for different reasons. Some people would say that the American political process does not accurately reflect the will of the people and that this is unfair. Other people would argue that it is not feasible for this to be the case and that certain people deserve more influence in American politics because of their greater contribution to society or because they are more qualified for the job. These two sides have been in conflict since the early days of the American political process. The representative democracy of the United States does render the opinions of individual voters relatively unimportant. While voters and their votes do matter and candidates spend millions of dollars trying to sway the opinions of voters, many individual voters are frustrated by the fact that it barely seems to make a difference whether they vote or not, so that voting seems to be a matter of principle. However, the fact that every voter is in the same situation does seem to make the process fair in its own way. People have been arguing since the beginning that American democracy has to be representative. Pure democracy with no representatives is very rare when the voting public numbers in the hundreds of millions. It usually only works in much smaller societies. While some people would argue that this does not mean the situation is fair, they might still make a case for the system in a pragmatic sense. Democracy requires an educated middle class to be sustainable, or people will often vote for the very same individuals that democracies seek to eliminate.
In Principle and In Practice

It should be noted that many Americans functionally never vote for reasons beyond their control. Even getting to the polling booths or getting absentee ballots is tough in some areas, which is genuine discrimination against poorer people and people who live in certain regions. Disabled individuals often find it difficult to vote for various reasons, so their voice gets excluded from American politics. Some wealthy people argue that since they pay most of the taxes, they deserve a bigger voice in American politics. However, wealthy people pay less in taxes in America than they do in other countries. Also, wealthy people have more control over American elections than almost anyone else, even though they each have one vote. Wealthy people can give campaign contributions to the candidates of their choice, so the candidates of their choice will have an advantage during the election. Elections are automatically slightly biased in favor of the wealthy on this basis alone. Wealthy people represent a small portion of the population, and policies that line their pockets further go directly against the interests of most of the country. More and more wealth has been directed to the wealthy over the past thirty years, and campaign contributions to certain candidates have had a huge impact on that. The situation involving wealthy people buying elections reflects faulty laws, in the sense that there could be laws limiting campaign contributions. However, this situation does not directly reflect a problem with the baseline American political process or democratic structure itself. If anything, this problem demonstrates that the American political process is not working as it was intended. Wealthy people who have no political experience and who are acting purely in their own self-interest have more political power than many politicians.
The overall system of American representative democracy isn't perfectly fair to voters, but a direct democracy that was perfectly fair would be too difficult to run. However, the fact that wealthy people are able to subvert the political process and control it so substantially taints the American political process, rendering it unfair even though there are no laws mandating that this should be the case. It is the disproportionate influence of the wealthy, not the representative democratic structure, that has made the American political process unfair.
Qualitative observation in science is the subjective gathering of information that focuses more on differences in quality than on differences in quantity, and it usually involves fewer participants. Qualitative observation is concerned with bringing out the intimate details about each participant and is conducted on a more personal level, so that the researcher can get the participants to confide in him or her. When participants feel comfortable with the researcher and confide in him or her, the researcher is able to get the information needed to make concrete observations. Most qualitative observational studies take place in a natural setting, such as a public place, and ask participants to answer questions in their own words. Qualitative observations and studies are usually done by social scientists, psychologists and sociologists with the goal of better understanding human and animal behavior. Quantitative observation, on the other hand, is an objective gathering of information. It focuses on things such as statistics, numeric analysis and measurements. Quantitative observation typically measures attributes such as shape, size, color, volume and number, looking for differences between the test subjects. It is the most commonly used observational method outside the social sciences, where qualitative observation predominates.
The main hormone produced by the thyroid gland, acting to increase metabolic rate and so regulating growth and development.
- ‘The thyroid releases too much of the hormone thyroxine, which increases the person's basal metabolic rate.’
- ‘Human beings require iodine for the production of the thyroid hormones, thyroxine and triiodothyronine.’
- ‘He tested for thyrotrophin releasing hormone in 67 women with menorrhagia who had normal concentrations of thyroxine and thyroid stimulating hormone.’
- ‘When your thyroid gland produces too much of the hormone thyroxine, you develop hyperthyroidism.’
- ‘Hormones that require amino acids for starting materials include thyroxine (the hormone produced by the thyroid gland), and auxin (a hormone produced by plants).’
Early 20th century: from thyroid + ox- ‘oxygen’ + in from indole (because of an early misunderstanding of its chemical structure), altered by substitution of -ine.
There are many different types of pipes for sewer lines and home drainage systems. Each type of piping system can have its own unique drainage problems, requiring different methods to maintain the lines and clear stoppages. Listed below are different types of piping materials and descriptions of what they are used for. Clay is one of the most ancient piping materials, with the earliest known example coming from Babylonia. Clay pipe was laid in 2-, 3- and 4-foot lengths for most residential applications. There is an expanded "bell" hub at one end. The regular end of a pipe fits snugly into the bell end of the next pipe, making a joint. These joints were typically packed with a mortar-type material, creating a seal. Clay piping is very strong but, like glass, it will crack or break under pressure. The most common issues we find in Clay are tree root intrusion and cracked or broken sections of pipe. Cast Iron is a metal pipe that has been manufactured and used in the United States since the early 1800s. A good quality Cast Iron pipe, installed under ideal conditions, has a life expectancy of about 50-100 years. As Cast Iron ages it begins to corrode and deteriorate. This deterioration is very slow but exponential, and it eventually affects the structural integrity of the pipe, requiring repair or replacement. In some cases, the beginning signs of deterioration will be evident through small cracks or breaks in the pipe. Tree roots growing into the Cast Iron are also a sign that the pipe has deteriorated to the point that a repair will likely be needed. In more severe cases, entire sections of the pipe may be missing or the pipe may have completely collapsed. Cast Iron was used extensively in single family homes until the late 1960s to the mid 1970s, when plastic became the material of choice. PVC (Polyvinyl Chloride) PVC is a plastic material that became popular in the 1960s as a cheaper and easier to install alternative to Cast Iron.
PVC is lightweight and very durable, so it became the main material used in sewer line applications by the early 1970s. Properly installed, PVC has a life expectancy of 100+ years. The common issues we see with improperly installed PVC relate to poorly glued connections that have separated or improperly backfilled lines that have been crushed. ABS is very similar to PVC in terms of cost and ease of installation, but is considered to be slightly less durable. ABS is widely used in some areas of the country but is not nearly as prevalent as PVC. The common issues we see with improperly installed ABS relate to poorly glued connections that have separated or improperly backfilled lines that have been crushed.
Reshaping the world

When you think of maps, what comes to mind? An informative document used by travelers? Demarcations of national borders and geographic features? At the very least, we might think of some factual representation of the world, not one that is fictitious or subjective. Imagine a map of the world. Now take a moment to examine the following map, created by members of the Surrealist movement and published in the Belgian journal Variétés in June 1929. This anonymous map might seem unsettling; it emphasizes certain areas while removing others, and dramatically changes the size of landmasses. How is this map different from the world we know? And, although it might seem like an odd place to begin, what can it tell us about the ideas and approaches of the Surrealists? Historians typically introduce Surrealism as an offshoot of Dada. In the early 1920s, writers such as André Breton and Louis Aragon became involved with Parisian Dada. Although they shared the group's interest in anarchy and revolution, they felt Dada lacked clear direction for political action. So in late 1922, this growing group of radicals left Dada and began looking to the mind as a source of social liberation. Influenced by French psychology and the work of Sigmund Freud, they experimented with practices that allowed them to explore subconscious thought and identity and to bypass restrictions placed on people by social convention. For example, societal norms dictate that suddenly screaming expletives at a group of strangers, unprovoked, is completely unacceptable. Surrealist practices included "waking dream" seances and automatism. During waking dream seances, group members placed themselves into a trance state and recited visions and poetic passages with an immediacy that denied any fakery. (The Surrealists insisted theirs was a scientific pursuit, and not like similar techniques used by Spiritualists claiming to communicate with the dead.)
The waking dream sessions allowed members to say and do things unburdened by societal expectations; however, this practice ended abruptly when one of the “dreamers” attempted to stab another group member with a kitchen knife. Automatic writing allowed highly trained poets to circumvent their own training, and create raw, fresh poetry. They used this technique to compose poems without forethought, and it resulted in beautiful and startling passages the writers would not have consciously conceived. Envisioning Surrealism: automatic drawing and the exquisite corpse In the autumn of 1924, Surrealism was announced to the public through the publication of André Breton’s first “Manifesto of Surrealism,” the founding of a journal (La Révolution surréaliste), and the formation of a Bureau of Surrealist Research. The literary focus of the movement soon expanded when Max Ernst and other visual artists joined and began applying Surrealist ideas to their work. These artists drew on many stylistic sources including scientific journals, found objects, mass media, and non-western visual traditions. (Early Surrealist exhibitions tended to pair an artist’s work with non-Western art objects). They also found inspiration in automatism and other activities designed to circumvent conscious intention. Surrealist artist André Masson began creating automatic drawings, essentially applying the same unfettered, unplanned process used by Surrealist writers, but to create visual images. In Automatic Drawing (left), the hands, torsos, and genitalia seen within the mass of swirling lines suggest that, as the artist dives deeper into his own subconscious, recognizable forms appear on the page. Another technique, the exquisite corpse, developed from a writing game the Surrealists created. First, a piece of paper is folded as many times as there are players. 
Each player takes one side of the folded sheet and, starting from the top, draws the head of a body, extending the lines at the bottom of their fold just past the crease, then hands the sheet, blank folded side up, to the next person to continue drawing the figure. Once everyone has drawn her or his "part" of the body, the last person unfolds the sheet to reveal a strange composite creature made of unrelated forms that are now merged. A Surrealist Frankenstein's monster, of sorts. Whereas automatic drawing often results in vague images emerging from a chaotic background of lines and shapes, exquisite corpse drawings show precisely rendered objects juxtaposed with others, often in strange combinations. These two distinct "styles" represent two contrasting approaches characteristic of Surrealist art, exemplified in the early work of Yves Tanguy and René Magritte. Tanguy began his painting Apparitions (left) using an automatic technique to apply unplanned areas of color. He then methodically clarified forms by defining biomorphic shapes populating a barren landscape. Magritte, however, employed carefully chosen, naturalistically presented objects in his haunting painting, The Central Story. The juxtaposition of seemingly unrelated objects suggests a cryptic meaning and otherworldliness, similar to the hybrid creatures common to exquisite corpse drawings. These two visual styles extend to other Surrealist media, including photography, sculpture, and film.

The Surrealist experience

Today, we tend to think of Surrealism primarily as a visual arts movement, but the group's activity stemmed from much larger aspirations. By teaching how to circumvent restrictions that society imposed, the Surrealists saw themselves as agents of social change. The desire for revolution was such a central tenet that through much of the late 1920s, the Surrealists attempted to ally their cause with the French Communist party, seeking to be its artistic and cultural arm.
Unsurprisingly, the incompatibility of the two groups prevented any alliance, but the Surrealists' effort speaks to their political goals. In its purest form, Surrealism was a way of life. Members advocated becoming flâneurs, urban explorers who traversed cities without plan or intent, and they sought moments of objective chance: seemingly random encounters actually fraught with import and meaning. They disrupted cultural norms with shocking actions, such as verbally assaulting priests in the street. They sought in their lives what Breton dubbed surreality, where one's internal reality merged with the external reality we all share. Such experiences, which could be represented by a painting, photograph, or sculpture, are the true core of Surrealism.

The "Nonnational boundaries of Surrealism"*

Returning to The Surrealist Map of the World, let's reconsider what it tells us about the movement. Shifts in scale are evident, as Russia dominates (likely a nod to the importance of the Russian Revolution). Africa and China are far too small, but Greenland is huge. The Americas consist of Alaska (perhaps another sly reference to Russia's former control of this territory), Labrador, and Mexico, with a very small South America attached beneath. The United States and the rest of Canada are removed entirely. Much of Europe is also gone. France is reduced to the city of Paris, and Ireland appears without the rest of the British Isles. The only other city clearly indicated is Constantinople, pointedly not called by its modern name Istanbul. An anti-colonial diatribe, the Surrealists' map removes colonial powers to create a world dominated by cultures untouched by western influence and by participants in the Communist experiment. It is part utopian vision, part promotion of their own agenda, and part homage to their influences. It also reminds us that Surrealism was an international movement.
Although it was founded in Paris, pockets of Surrealist activity emerged in Belgium, England, Czechoslovakia, Mexico, the United States, other parts of Latin America, and Japan. Though Surrealism's heyday was 1924 to the end of the 1940s, the group stayed active under Breton's efforts until his death in 1966. An important influence on later artists within Abstract Expressionism, Art Brut, and the Situationists, Surrealism continues to be relevant to art history today.

*André Breton, "Nonnational Boundaries of Surrealism," in Free Rein, trans. Michel Parmentier and Jacqueline d'Amboise (Lincoln: University of Nebraska Press, 1979), pp. 7-18.