1. Mosfiloti is a thoroughly Greek village of the Larnaca district, located at the “heart” of the three big cities, Nicosia, Limassol, and Larnaca. The community of Mosfiloti lies about 21 kilometres from Larnaca, 22 kilometres from Nicosia, and 44 kilometres from Limassol, distances that one can cover by car in about 25 minutes.

2. The name of the village Mosfiloti (like that of the village “Mosfileri”) derives from the “mosfilia” tree (“Crataegus azarolus”, loquat) and indicates an area full of such trees. It is worth noting that the name, still used in Cyprus today, has ancient Greek origins. Not many loquat trees survive today because, during the British rule, poverty and the scarcity of jobs drove many woodcutters to cut down various trees, including the loquat trees, and sell their timber to the British for their fireplaces. The timber was also used in the construction of their manors. The British had a good relationship with the inhabitants of Mosfiloti because they bought timber from them, and the Governor visited the village every weekend for hunting. Several inhabitants assisted him in the hunt and would host him for a meal in their houses.

3. Mosfiloti is served by a good road network. It is adjacent to the old Nicosia – Limassol road and lies 500 metres from the new highway. Through turnpike roads it connects to the village Sia in the west (about 2.5 km), to the village Psevdas in the east (about 3.5 km), to the village Lympia in the north-east (about 7 km), and to the village Pyrga in the south (about 3 km). The village of Mosfiloti existed in mediaeval times. According to Nearchos Clerides, the village was built around the pre-existing monastery of “Agia Thekla”. This monastery was established by Saint Helen in 330 AD, along with that of “Stavrovouni”.
The founding of the village around the monastery must be placed around the Byzantine times. During the Frankish and Venetian domination eras the village was a fief, but we do not know to which family of noblemen it belonged. In old maps it is marked under the name Mesfolot and/or Mesfelot. Below the monastery there was a spring with holy water, which the faithful used for the healing of skin diseases and cases of eczema. The Holy Monastery of Agia Thekla celebrates on the 24th of September, the day that our church honours the memory of the great martyr and equal to the apostles, Saint Thekla. It is one of the biggest fairs of the Larnaca district. Henry Lite, the British traveller and military man, who during his visit to the Monastery of Saint Thekla stayed there for a night on the day of the fair, mentions in a relevant text of his that he was profoundly impressed by the poverty and misery that prevailed amongst the rural population of the village, something indicative of the ordeals and hardships that the Greeks of Cyprus suffered during the Turkish domination (1571-1878).

4. Mosfiloti is built at an average altitude of 250 metres, with its north and west borders forming part of the administrative limits of the Nicosia and Larnaca districts. It is the village of the Larnaca district that is closest to Nicosia. It belongs to the administrative area of Larnaca but comes under Nicosia in terms of education and telecommunications. It borders three villages of the Nicosia district: in the west the village Sia, in the north the village Alampra, and in the north-east the village Lympia.

5. The village is surrounded by the Pipis, Vizakeri, Petromoutos, and Kalogeros mountains. The Pipis and Kalogeros mountains were reforested in 1981 with pine trees and today are considered the “lungs” of the village.
The green and the variety of the mountains’ vegetation provide the village’s natural beauty. From a geological point of view, what predominates is the lava of the Troodos igneous complex, upon which dark coloured soils developed. These are mild knolls of lava, upon which sparse pine trees, thyme, and cistuses (ladaniferous) grow. The village of Mosfiloti, like the villages Pyrga and Sia, lies in a mining area rich in minerals. A large mine for the processing of concrete and a work-site for the processing of asphalt operate at its boundaries. In previous times, excavations in search of copper and gold were made. Two of them, which survive to this day and are of a significant depth, were made by a Greek named Promponas, and ever since then the excavations have borne the name ‹‹Prompona’s hole››. One is located at the foot of the Pipis mountain and the other at the foot of the Vizakeri mountain. Promponas, who was a geologist, transferred on donkeys the material that he extracted from the two excavations to a venue close to Agia Thekla for processing. To this day there are two small ponds in this venue, which he used for the purpose mentioned above. The village is crossed by a tributary of the Tremithos river, which has water in its riverbed even during the months of spring. This justifies the presence of people here in older times, since in those days the various crops were irrigated directly from rivers. At various points of the river there were small dams. One of them still survives near the Monastery of Agia Thekla, along with part of a raceway that was formerly used to carry water for the needs of the village and the operation of a flourmill, which however no longer exists. The region where the flourmill operated took the name ‹‹Paliomylos›› (Old Mill).
South of the “Agia Thekla” Monastery there was also a small dam, serving the agricultural needs of a region named ‹‹Vasileies››. The water ran through the raceway and, at a point where crossing was made impossible by a small stream, an arch with a spout was constructed. Part of it survives today and is called ‹‹The Spout of the Vasileies››. The river Tremithos crosses north of the village and constitutes the village’s natural border with the village Lympia. There were small dams there too in previous times, serving the needs of agriculture and the operation of two flourmills. Of these two mills, of which small parts are still extant, one is also marked in the maps of the Land & Surveys Department; it is located within the limits of Mosfiloti, lending its name to the region called ‹‹Mylos›› (mill), while the other lies within the limits of the Lympia community. In 1945 a bigger, stone-made dam with a capacity of 18,000 m³ was built in the area of the small one; it was demolished in 1976 so that an even bigger dam (220,000 m³) could be constructed in its place. Most of it lies within the limits of the Mosfiloti community.

6. Ancient items, caves, and a common (group) grave were found from time to time in various venues of Mosfiloti. South of the Agia Thekla Monastery, at a distance of 1 kilometre, some caves survive to this day that are separated into “rooms”, and it is believed that they were inhabited up to mediaeval times. This region took the name “Spilioi” (Caves). Testimonies of local people report that the elders found clay pots in the caves, many of which they destroyed because they were unaware of their value. In one of those pots they also discovered the skeleton of a young child. It is believed that, during the first years of settlement and during the conquerors’ raids, the inhabitants took refuge in the caves for their protection.
They hid the young children and the infants, covering them with a cloth so that their cries would not be heard and reveal their presence. In the area where the offices of the Community Council are housed there is, to this day, a well. Until 1966, when a water supply system started operating in the community, all the inhabitants of the village pumped water out of this well for their daily needs, carrying it home in clay vessels. On the road leading from Mosfiloti to Lympia, a stone-made bridge was constructed in 1944 over the Tremithos river. At a distance of 1300 metres north of the bridge, towards the Lympia village, there is a small chapel dedicated to Saint Marina. Next to it there was a shepherds’ settlement. When the Turkish army arrived in these areas from Lefkara in the year 1570, it attacked the settlement with cannons, which the Turks were trying out for the first time. They started firing against the houses with stone-made shells and the settlement was destroyed along with the church. The inhabitants that survived took refuge in the Lympia village. The day that “Agia Marina” was destroyed was a Tuesday, and ever since then the region has been named ‹‹Kakotriti›› (Bad Tuesday). The church of Saint Marina was reconditioned, while signs of caves can be found in the area of the settlement to this day. The area of ‹‹Kakotriti›› was full of thick forests. According to tradition, a large fire broke out there and, with the aid of the winds, the flames rapidly moved on towards the village of Alampra, which is located about three kilometres west of ‹‹Kakotriti››. Frightened, the people of Alampra gathered in their village’s church, “Agia Marina”, and in tears they fell on their knees and begged the Saint to save them. And so the miracle happened. They saw the shadow of a female figure leaving the church, followed it, and saw Her guiding them to where the fire was raging. They saw her kneeling and praying to God.
In a little while the fire died down and their village remained untouched by the flames. Ever since then the village has borne the name “Alampra”, formed from the privative prefix “a” and the Cypriot word “lampro”, which means “fire”. In the year 1426, the Frankish ruler of Cyprus, Giannos, learned that the Mamluks had landed in Limassol and had started looting. He took his army from Potamia and proceeded toward Limassol. He crossed through Mosfiloti and made a stop further south, in Pyrga, where he gathered his entire army. From there he moved toward Choirokitia, where he was defeated and captured.

7. Mosfiloti receives an annual rainfall of about 410 millimetres. A great variety of seasonal vegetables such as potatoes, cabbages, watermelons, melons, tomatoes, cucumbers, onions, (Jerusalem) artichokes, lettuces, okras, peppers, collards, eggplants and many others are cultivated in its region. Olive and citrus trees, legumes (French peas, broad beans, and chickpeas), cereals (mainly barley), forage plants (tare and clover), locust trees, as well as a few fig, loquat, and apricot trees are also cultivated. Several olive trees from the Frankish domination era still survive and are therefore named ‹‹Frangkoelies›› (Frank Olives). In previous times there were several vineyards south of the village, and for that reason the area bears the name ‹‹paliampela›› (old vines). There is also a region named ‹‹Kaminia››, in which there used to be kilns for the processing of grapes, used for the production of wine and “zivania” (a strong, transparent alcoholic beverage). The village’s inhabitants were formerly occupied with embroidery, weaving, and knitting. These crafts slowly disappeared as industrial units developed. Because of the village’s extensive development, large-scale reforestation, and the limited land, a stockbreeding zone was created in a non-developed region next to the mine. This is where the few remaining stockbreeders were transferred.
They raise sheep, goats, and rabbits. There are also three large farms for the breeding of poultry. In 1985, 598 goats, 252 sheep, 270 pigs, 6 cows, and 677 poultry were being raised.

8. The village’s original core is densely built, the houses preserving elements of the traditional folkloric architecture to a great extent. Gradually the settlement abandoned its original core and spread along the main turnpike road that crosses it. From 1881 until 1921, Mosfiloti went through fluctuations of its population. In 1881 the inhabitants were 162, decreasing to 161 in 1891, increasing to 178 in 1901 and to 200 in 1911, yet decreasing to 169 in 1921. From 1931 onwards the population increased steadily. The large increase of population in the village occurred after 1976 and was due to the settlement here of a large number of Greek-Cypriot refugees after the 1974 Turkish invasion. Thus, according to the official population census, in 1982 the population had increased to 803 inhabitants. In the last official population census, conducted in 2002, the inhabitants numbered 1095. The village’s church is dedicated to Saint Marina. R. Gunnis (1935) reports that he had seen in it a beautiful 16th century icon of the Blessed Virgin Mary, as well as a 1684 icon of St. Tryfonas.
Get Your Calcium Without All The Fat

Calcium-enriched foods without the fats

The following recommendations are not intended for growing children, who benefit from milk for growing teeth and bones. This article applies to adults who are trying to follow a heart-healthy diet by getting the necessary calcium without all the fat. As you may know, whole-milk dairy products have a high amount of calcium, but they also have a good amount of fat, which is mostly saturated. The following tips will give you calcium in a much healthier way.

1. Always try to substitute skim or low-fat milk for water, for example when making soups, oatmeal, or muffins.

2. Eat collard, kale, turnip or mustard greens, and broccoli several times a week. I did not mention spinach because it contains substances called oxalates that interfere with calcium absorption. Spinach is still good, though, but better as a source of iron.

3. Keep nonfat dry milk on hand and add it to anything you can think of, such as soups, meat loaf, casseroles, and beverages. Nonfat dry milk added to coffee tastes just like half-and-half.

4. Indulge in low-fat yogurt and frozen yogurts in place of foods such as pies, cheesecakes, ice cream, and the list goes on. One thing to keep in mind, though, is that on an ounce-for-ounce basis, yogurt has at least as much calcium as milk, but it is generally not fortified with vitamin D as milk is.

5. When eating a salad or soup, use low-fat grated cheese. If a baked dish calls for cottage cheese (not a good source of calcium) or cream cheese (high fat content), also use low-fat grated cheese instead.

6. Cook tofu instead of meat or chicken in stir-fry dishes. Tofu added to salads and soups is also excellent. When you are purchasing tofu, remember to check the label to make sure it was made with calcium salt. Just four ounces of tofu can have about 300 milligrams of calcium.

7.
Make low-fat milk shakes at home by combining milk, ice cubes, flavoring such as vanilla extract or any type of berry, and a low-calorie sweetener. Mix in a blender and you will have a delicious shake.

8. Keep broccoli handy as a snack because it is a high-calcium vegetable. Enhance this snack with part-skim ricotta cheese or plain yogurt.
The golden section and planetary distances.

Are the distances of the planets from the Sun arbitrary? Could there be hidden geometry in the structure of the solar system, the golden section for example? I am not the first to speculate on this, but I might be the first to have discovered two intriguing facts about the Astronomical Unit values of Venus and Mars. The first is that successive Fibonacci powers of the A.U. values behave thus and result in Phi or golden section powers. Phi = 1.618033989. It is a property of Fibonacci numbers that, when used as powers, a sequence higher or lower than that above will also work approximately. The second intriguing fact is this equation, the square root of which is this. It was of course the great German astronomer and mathematician Johannes Kepler who used a music interval, the natural fifth of 1.5, as a power in his third law of planetary motion. It was also he who, in his five Platonic solids model of the solar system, placed the icosahedron between Venus and the Earth and the dodecahedron between the Earth and Mars. It was beautiful but completely wrong, although the golden section is inherently part of the structure of both the dodecahedron and the icosahedron. Inspired by Kepler, I have come up with the following model of four planets. Here, obtained by the process of iteration, are the A.U. values that fit all the equations above. Higher or lower sequences of the Fibonacci powers also work approximately, as does the Lucas sequence. NASA defines the Astronomical Unit values of the solar system for two different time periods. Unless otherwise stated, it is the 3000 years B.C. to 3000 years A.D. values that I will be using in my search for Phi or the golden section. It was quite by accident that I came across the following equation. Oh! that’s nice, I thought, three Fibonacci numbers in a row.
It was only years later that I realised that a Fibonacci sequence higher or lower than that above would also work approximately; this one is even better, and the Lucas sequence works as well. The discrepancy in the equations can best be corrected in the following manner. By adding the Fibonacci powers together, as in the second equation, 55+21=76, and finding the 76th root of the discrepancy 1.00000117%, it is by this factor that Mars and Venus must be altered, in this case divided, to get the correct result: Mars 1.523710684 and Venus .723320173. Of course this discrepancy could be loaded onto just one planet or spread between them in many different ways. The obvious question at this point, of course, is does this work with any other of the twenty-one possible planetary pairs (between Mercury and Neptune)? The short answer is no, except with Mercury and Jupiter, although that one, being a multiplication between an outer planet and an inner planet (as seen from the Earth), is not exactly the same as the Mars Venus relationship. This was most disappointing: only two such connections out of a possible twenty-one pairs! And why, when it works, does it work so well? Just as disappointing was the fact that I could not find a relationship between Jupiter and Mars or Venus, or Mercury and Mars or Venus. I could of course have divided or multiplied both pairs with one another, but eventually I gave up, and in taking some inspiration from Kepler I got lucky.
I had originally been experimenting with music intervals as powers and I came across the following (phi written with a small p rather than a capital P is the reciprocal of Phi). The music interval 1.33333, or the natural 4th, is the product of 60 divided by 45, and I had another equation, also the product of harmonious numbers. Now to cut a long story short, by use of iteration, that is by continually correcting the values so that both equations would be correct, I arrived at the following: the inverse of the fifty-second root of Phi power 35, and here an even more obscure approximation for Mars. Well, I thought, this has not turned out well; how could two such harmonious equations turn out so ugly? Eventually I found this, the square root of which is this. Regardless of how the discrepancy is distributed, it is this equation that results in this one, and there is a convoluted reason why this also works. But taking inspiration from my original Fibonacci power sequences, this works as well, and that equation squared is shown next. So it is clear to see that by multiplying or dividing the music interval powers by other music intervals, in the case above by 2, the octave, the results can change while sticking to the same values. Here is one more example just for fun. I have another 7 or 8 of these equations and there must be many more, but my point is that by using the 52nd root of Phi power 35 (inverse) as an approximation for the A.U. value of Venus and the 585th root of Phi power 512 for that of Mars, I have found two sharp tools that may prove useful in searching for Phi in the solar system. I am certainly not suggesting that these values are somehow magical, or that Venus and Mars ever had, or ever will have, such values, and I will abandon these artificial values in favour of iterated values as soon as I have built a model.
Astronomical Unit values are not constants: over millions of years, as the Sun loses mass and the planets gain a little, they all drift further away from the Sun. I initially feared that this fact would automatically undermine any patterns I might come up with, but it would only affect the absolute distances and not necessarily the proportional relationships between the planets. On the other hand, I cannot assume that proportions would be maintained in such a manner as to suit patterns that I have come up with. It is perhaps worth mentioning at this point that my manipulated Venus and Mars values can also be expressed as the 13th root of Phi power 7 to the power 1.25 (reciprocal) for Venus, and the 13th root of Phi power 8 to the power 1.42222 for Mars. The interval 1.25 is the 3rd most important music interval of all, after 2, the octave, and 1.5, the natural 5th. The interval 1.42222 is the product of 1.3333 times 1.06666, the natural 4th times the diatonic half tone. Well, I now have two ideas concerning Venus, Mars, and Phi. The first is the sequence of Fibonacci powers, the best of which is this, and the other idea is this. Luckily they are not contradictory, and by way of iteration I have made a list of equations which are all good for the equation stated and also for Mars power 1.875 divided by the 1.875 root of Venus equalling Phi squared. The most obvious music interval to go along with 1.875 would be its counterpart, that is, what is left over from the octave two, i.e. 2/1.875 = 1.0666666, and again I got lucky. It must be remembered that it will be the 144th root of the discrepancy 1.000022687% that will correct the first equation, and since the answer was too high, Jupiter must be divided by this factor and Mars multiplied by it: Jupiter = 5.202369543 = 1.000021269% and Mars = 1.523730866 = 1.000012099%. At this point I am going to introduce my manipulated values of Venus and Mars, because they are in a sort of lock-step relationship with one another.
My manipulated values for Venus and Mars can also be expressed in the following manner: the 39th root of Phi power 14 (inverse) gives me the 1.875 root of Venus = .841354062, and the 39th root of Phi power 64 gives me Mars power 1.875 = 2.202693506. By using these values as constants I can calculate a value for Jupiter depending on which Fibonacci power sequence I am using. The value for Jupiter in the Jupiter power 89 equation above is then 5.201602391, which is 1.000168755% away from the NASA value. Take the sequence down a step, to Jupiter power 55, and the value for Jupiter will be 5.202389147 = 1.0000175%. Using the Lucas number power 76 for Jupiter, its value is 5.203310632 = 1.000159624%. Now it is hardly surprising that those equations work fairly well, since I have manipulated Mars and Venus to make them work, but what about Saturn? I got lucky again in that the 1.0666666 root of the Saturn A.U. value fits in well enough. So here is a list of six planetary pairs that I will call Model A. Here I will abandon my manipulated Mars and Venus values, and by using iteration the values below will fit all of the equations above. As a next step I will try iteration at the lower step of Jupiter power 55 etc.; this model would then be called Model A -1. Here are the iterated values for a step higher in the sequence, Jupiter power 144 etc., which I will call Model A +1. Turning to Uranus and Neptune, I thought I had it with the following. But this time my luck ran out, because the lock-step relationship between Mars and Venus counts against me. By substituting Mars power 1.875 for the 1.875 root of Venus, as in the Jupiter and Saturn equations, this time with Uranus and Mars, I arrive at Phi power 432! This can be solved by finding the 3rd root of the equation, because 432/3 = 144.
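The various root-and-power forms above are all the same two Phi exponents in disguise. The following sketch (Python, my own check rather than anything from the original article) confirms the exponent bookkeeping:

```python
# The "manipulated" Venus and Mars values as pure powers of Phi.
# Venus: inverse 52nd root of Phi^35  ->  Phi^(-35/52)
# Mars:  585th root of Phi^512        ->  Phi^(512/585)

PHI = (1 + 5 ** 0.5) / 2  # 1.618033989...

venus = PHI ** (-35 / 52)
mars = PHI ** (512 / 585)

# The 39th-root forms quoted above simply fold the 1.875 power
# into the exponent:
print(venus ** (1 / 1.875))  # ~0.841354062 = inverse 39th root of Phi^14
print(mars ** 1.875)         # ~2.202693506 = 39th root of Phi^64

# exponent bookkeeping: (-35/52)/1.875 = -14/39 and (512/585)*1.875 = 64/39
assert abs((-35 / 52) / 1.875 + 14 / 39) < 1e-12
assert abs((512 / 585) * 1.875 - 64 / 39) < 1e-12
```

So the 1.875 root of Venus and Mars power 1.875 being 39th roots of Phi powers is not an extra coincidence; it is forced by the 52nd-root and 585th-root definitions.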
Now the best part of the patterns I have is not so much in the Phi powers but in the music interval powers, which are perfect for Venus through to Saturn, and here there is a multiplication instead of a division. Worse still, substituting Mars instead of Venus in the Neptune equation results in Phi power 1309! So I fear this is a dead end if I want a pattern along the lines of those for Jupiter and Saturn. There is, though, another approach: although Jupiter and Saturn work well with a Phi squared from Mars and Venus, perhaps Phi itself may work with Uranus and Neptune, and this works reasonably well. Using the real Venus value, it results in a Neptune value of 29.91427128, which is 1.005190039% away from the NASA value. The 1.875 root of Neptune is the square root of Neptune power 1.066666, so there is still a connection with Jupiter and Saturn, and Mars is still in lock step with Venus, as shown below. As for Uranus, the 1.0666666 root of it works, sort of, but 2 octaves down, and so it is the 4.2666666 root of Uranus that fits with Mars and Venus thus. And of course the following is also true; the above equation power 4 is shown next. So, leaving aside Mars and Venus, I can extend Model A using just the outer planets. The number 322 is not a Fibonacci but a Lucas number, which is sort of inevitable. The table below shows the A.U. values, obtained by iteration, that fit the equations above. The rather weak value for Neptune drags the others down somewhat; using Model A+3 just for Jupiter, Saturn, and Uranus comes out a lot better. 1364 is a Lucas number! By iteration, the values below fit the equations above. Given that Mars, Venus, and Phi worked with Uranus and Neptune, and that Mars, Venus, and Phi squared worked with Jupiter and Saturn, could Phi cubed work with Mercury? These equations result in inverse Phi values, or phi.
The lock-step relationship between Mars and Venus still works, but it will be the square root of the Mars and Mercury equation that gets phi 89 (the inverse value of Phi 89). Mercury fits in well with the overall pattern with its 1.06666 root, and that squared is power 1.875 and squared again is 3.75, but all the other planets have ended up with a dog’s dinner of music intervals. Most pocket calculators will need the Fibonacci powers to be divided by 10 to stay within range. Note that the Phi values with the small p, phi, are x to the minus one values. Venus looks alright with either the 1.875 root or the 3.75 root (the square root of the 1.875 root), but the 4.26666 root for Uranus seems to spoil the picture, although 4.26666 divided by two octaves is 1.06666. Mars power 1.875 and Neptune power 1.06666 would look a lot better, being similar to the Jupiter Mars equation, but the result would be Phi power 178 (178/2 = 89). The 1.0666666 root of Uranus together with Jupiter power 2.133333 may look better, but again the result would be Phi power 178. Here are the iterated values for Model A. Again it is the weak Neptune value that somewhat weakens the whole thing; here are the iteration values for all the planets without Neptune. The Fibonacci sequence starts like this: 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987, and goes on forever; each pair of successive numbers added together determines the next one. It is clear to see that the higher the pair of Fibonacci numbers, the closer the ratio is to Phi: 987/610 = 1.618032787 = 1.000000743%. The Lucas sequence is the same in principle, it just starts a bit later: 1 3 4 7 11 18 29 47 76 123 199 322 521 843, and 843/521 = 1.618042225 = 1.000005091%. Kepler’s third law of planetary motion is still valid today: the square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit. When I first heard this it went in one ear and out the other, but it really is easy to understand from the following.
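The convergence of the Fibonacci and Lucas ratios quoted above is easy to verify; this short sketch (Python, not part of the original article) reproduces both figures:

```python
# Ratios of successive Fibonacci and Lucas numbers converge on Phi.
# The sequences and the quoted ratios (987/610 and 843/521) are those
# given in the text.

def additive_sequence(a, b, n):
    """First n terms of a Fibonacci-like sequence: each term is the
    sum of the two before it."""
    terms = [a, b]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms

fib = additive_sequence(1, 1, 16)    # 1 1 2 3 5 ... 610 987
lucas = additive_sequence(1, 3, 14)  # 1 3 4 7 11 ... 521 843

print(fib[-1] / fib[-2])      # 987/610 = 1.618032787...
print(lucas[-1] / lucas[-2])  # 843/521 = 1.618042225...
```

The higher the pair, the closer the ratio sits to Phi = 1.618033989, alternating above and below it.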
The average distance of the Earth from the Sun is defined as the number one; that is then 1 Astronomical Unit, and NASA defines the A.U. value for Venus as .72332102. This to the power 1.5 is .615172097, and this multiplied by the number of days in the Earth year, 365.25, gives 224.692, or 225 days approximately. The same is of course true for all the other planets, and so, knowing the mean distance from the Sun of a planet (or of any other object that circles the Sun), by using the music interval of 1.5, the natural fifth, as a power, the orbital period of the object can be calculated (and of course vice versa). Kepler’s third law is also called his harmonic law, but this is strictly speaking not true, because music intervals work as factors, not as powers! A guitar or fiddle string whose length is multiplied or divided by 1.5 would indeed be in tune, but that string length to the power 1.5 would be musically meaningless. The only way that using music intervals as powers makes any sense is in the following example. Successive octaves, or powers of two, can be generated by using music intervals as powers in the following manner. This is true for all numbers, including Phi, but note that 2 power 7, 11, 13, and 14 are not achievable by using music intervals as powers. Music intervals wrong-foot the best musicians and the best mathematicians. There is, though, the simple and undisputed law of the octave, which states that by doubling the length of a string the frequency of the note will be halved, and that by halving the length the frequency will double. Our ears recognize these sounds as being the same note, much as our eyes recognize different shades of the same colour. All music intervals are products of three numbers, 2, 1.5, and 1.25, which is not the same as saying that all products of the three numbers are music intervals. These three numbers can best be seen, heard, and understood by playing the so-called harmonics on a guitar string; the 6th or lowest bass string works best.
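The Venus arithmetic just described can be put in a few lines; this is only a sketch in Python, using the NASA AU figures quoted in the text:

```python
# Kepler's third law as used here: the orbital period in Earth years is
# the semi-major axis in AU raised to the power 1.5 (since T^2 = a^3).

DAYS_PER_YEAR = 365.25

def orbital_period_days(a_au):
    """Orbital period in days from the mean Sun distance in AU."""
    return (a_au ** 1.5) * DAYS_PER_YEAR

print(orbital_period_days(0.72332102))  # Venus: ~224.7 days
print(orbital_period_days(1.52371034))  # Mars:  ~687 days
```

And of course vice versa: the semi-major axis in AU is the orbital period in years raised to the power 2/3.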
We guitar players often use the harmonics, played by just touching the string at the exact place and not pushing the string down to the fretboard as with normally played notes. Playing the open sixth string would normally produce a frequency of 82.5 Hertz (given that the fifth string is tuned to 110 Hertz, 2 octaves down from the industry standard of 440 Hertz). By playing the harmonic halfway along the string, the note would be exactly an octave higher at 165 Hertz. Somewhat counter-intuitively, playing the harmonic at point 4 would give a frequency doubled again, to 330 Hertz. Dividing the string by 8 gives an octave higher again at 660 Hertz, but it is barely audible and is never used by guitar players because it is so extremely weak. These harmonics occur along the string at so-called node points, and it is at these points that the sound waves cross from above to below the string. The point 2 on the guitar string is exactly above the 12th fret, the point 4 is above the 5th fret, but the point 8 is between the 1st and 2nd frets and is difficult to find. The octave is not the only indisputable music interval: by dividing the string by three, a harmonic of 1.5, the natural fifth, is generated above the seventh fret and also on the other side of the string, more or less above the sound hole, not used by guitar players because there is no fret to guide the fingers (dividing the string by 6 should also work, but I found this to be truly inaudible). There is also a series of harmonics, all exactly the same, when the string is divided by five. Starting from the left, the first occurs above the 4th fret, the second above the 9th fret, the third above the 16th fret, but there is no fret to guide the finger for the fourth one. If one looks very closely at the 4th fret, it is clear to see that the harmonic occurs slightly before it, and this illustrates the difference between 1.25, the natural 3rd, and the modern chromatic 3rd, which is sharp.
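All the frequencies quoted above follow from one rule: touching the string at 1/n of its length isolates a harmonic at n times the open-string frequency. A small sketch (Python; the 82.5 Hz low E is the tuning assumed in the text, with the A string at 110 Hz):

```python
# Natural harmonics of the low E guitar string as described in the text.

OPEN_E_HZ = 82.5  # low E as tuned above (a fourth below the 110 Hz A string)

harmonics = {
    2: "octave (above the 12th fret)",
    3: "octave + natural fifth (above the 7th fret)",
    4: "two octaves (above the 5th fret)",
    5: "two octaves + natural third (above the 4th fret)",
    8: "three octaves (between the 1st and 2nd frets)",
}

for divisor, position in harmonics.items():
    print(f"string / {divisor}: {OPEN_E_HZ * divisor} Hz - {position}")
```

Dividing the string by 2, 3, 4, 5, and 8 gives 165, 247.5, 330, 412.5, and 660 Hz respectively, which matches the figures in the text.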
The value for the chromatic 3rd is 1.25992105; divided by the natural 3rd of 1.25 this gives a ratio of 1.00793684, a discrepancy of about 0.8%, which is extremely annoying for anyone with good ears. The gradual introduction of the chromatic division of the octave more or less coincides with the industrial revolution, and it is as if someone had had enough of all the confusion surrounding music intervals and had issued a decree that from now on all music intervals would be equal and mathematically perfect. The new interval, the twelfth root of two, would divide the octave into twelve perfectly proportional steps, such that my 4th fret would be the twelfth root of two to the power 4, my 5th fret the power 5, and so on. It seems to be Johann Sebastian Bach whom we have to thank for this, but when playing so many notes so fast the dichotomy is not so noticeable. When playing long slow notes it most definitely is. It could be worse, of course: at least we get his well-tempered piano and not his bad-tempered piano, and it sure beats carrying twelve guitars around everywhere, since the guitar frets would be in slightly different positions for each of the twelve keys. All of this is not a matter of taste, or of cultural preference, or of historical development; it is a matter of physics, and has been known of, if not entirely understood, for thousands of years. Here is a breakdown of all the music intervals that I have been using up until now. Here is a mathematical breakdown of the major scale in western music, also known as the do-re-mi sequence. For those mathematicians who get constantly confused by terms such as the natural fifth, only to find that this is the seventh fret on the guitar, here is why: it is the fifth note in the do-re-mi scale! The second row of numbers fills in the gaps, that is to say, the intervals between the intervals.
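The chromatic figures quoted above can be reproduced directly from the twelfth root of two; a short sketch:

```python
# Equal-tempered (chromatic) major third versus the natural third of 1.25.
semitone = 2 ** (1 / 12)          # the twelfth root of two, one fret
chromatic_third = semitone ** 4   # four frets up from the open string
natural_third = 1.25

print(chromatic_third)                  # ≈ 1.25992105
print(chromatic_third / natural_third)  # ≈ 1.00793684, the ~0.8% discrepancy
```

Raising `semitone` to the power 12 returns exactly 2, the octave, which is the whole point of the equal division.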
To sum up, then, it is the string divided by 2, 3, and 5 that is the basis of musical harmony, and all music intervals are products of these three numbers (given that it is the division of the octave, the numbers are 2, 3/2, and 5/4, or 2, 1.5, and 1.25). It is somewhat of an accident that all the numbers between 1 and 10 are musical apart from 7, but 4 is 2 x 2, 6 is 2 x 3, 8 is 2 to the power 3, 9 is 3 squared, and 10 is 2 x 5. Dividing the string by 7, 11, and 13, etc., makes some sort of noise, but it is not harmonic or musical. There is no getting away from the fact that although Phi is geometrically beautiful it is not musically beautiful. The Fibonacci sequence of ratios starts so well with 1.5, 1.666666, 1.6, and the Lucas sequence gets us 4/3 (1.333333), but it is 13/8 that spoils everything. There is no mathematical definition of "out-of-tuneness", but if there were, then this 13/8 would be near the top of the list. The continuation of the sequence, 21/13, 34/21, etc., alternately sharpens or flattens the interval somewhat but does not improve upon it. The closest I could get to Phi by using music intervals is the following: 2 to the power 4, divided by 1.5 to the power 4, divided by 1.25 to the power 3, equals 1.61817284, a ratio of 1.000085814 to Phi. Good as this result is, it is just as dreadfully unmusical as 13/8. The best relationship to music intervals is in the square root of 1.25, which is 1.118033989, a number that falls exactly 0.5 short of Phi. It is also exactly equidistant, as a ratio of 1.00623059, between the natural or diatonic intervals for the small and large whole tones of 1.11111111 and 1.125. Another connection could be Phi to the power 24, which is very close to 2 to the power 14 times 1.5 to the power 4 times 1.25. The problem here is that I do not have a music keyboard that goes beyond 14 octaves, and it is way beyond the human hearing range anyway! Finally, just to show how "magical" Phi really is, try the following experiment.
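These Phi approximations are easy to verify; a sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # 1.6180339887...

# Closest approach to Phi using only the three musical numbers, as given above.
approx = 2 ** 4 / 1.5 ** 4 / 1.25 ** 3
print(approx)        # ≈ 1.61817284
print(approx / phi)  # ≈ 1.000085814

# The square root of 1.25 falls exactly 0.5 short of Phi, because
# phi = 1/2 + sqrt(5)/2 and sqrt(1.25) = sqrt(5)/2.
print(phi - math.sqrt(1.25))  # 0.5
```

The 0.5 shortfall is not an approximation at all: it follows algebraically from the definition of Phi, which is why it is exact.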
Enter your telephone number into a pocket calculator, find the reciprocal and add 1, repeat the process 20 or 30 times, and guess which number you end up with. The question remains, however, as to whose telephone number you get if you do it backwards! I learned this trick from Dr Ron Knott, who runs the best website on the net for all things Golden and Sectional: www.ronknott.com. Site still under construction.
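The calculator trick works because the step "take the reciprocal and add 1" is the continued-fraction recipe for Phi, and the iteration converges to Phi from any positive starting value. A sketch (the starting "telephone number" is arbitrary):

```python
import math

def iterate_to_phi(x, steps=30):
    """Repeatedly replace x with 1 + 1/x, the pocket-calculator trick."""
    for _ in range(steps):
        x = 1 + 1 / x
    return x

phi = (1 + math.sqrt(5)) / 2
print(iterate_to_phi(5551234.0))  # converges to ≈ 1.6180339887
```

Each step roughly multiplies the remaining error by 1/phi squared (about 0.38), so 30 repetitions are far more than enough for calculator precision.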
Decisions made back when Oregon became a state had a long-lasting impact, like the relative paucity of African-Americans in the state. "Exclusion laws" forbidding black people from living in all or part of the state existed from statehood's dawn into the 20th century. Oregon Humanities' "Conversation Project" offers a traveling presentation called "Why Aren't There More Black People In Oregon?" It is in Cave Junction Thursday (April 7) and Cottage Grove on Friday (April 8). Scholar/poet/writer Walidah Imarisha leads the discussion. She joins us by phone to talk about these and previous episodes in the project.
If one or more of your valves is diseased or damaged, it can affect how your blood flows through your heart in two ways: - If your valve does not open fully, it will obstruct the flow of blood. This is called valve stenosis, or narrowing. - If the valve does not close properly, it will allow blood to leak backwards. This is called valve incompetence, regurgitation, or a leaky valve. Many people with heart valve disease need little or no treatment. However, you may be advised to have surgery on your valve, which can greatly improve your symptoms and quality of life. What are the treatment options? There are two options for valve surgery: valve repair and valve replacement. - Valve repair is often used for mitral valves that become floppy and leak but are not seriously damaged. - Valve replacement is when the diseased valve is replaced with a new valve. The most common types of replacement valves are mechanical (artificial) valves or tissue (animal) valves. In some cases, a transcatheter aortic valve implantation (TAVI) procedure may be used if you are an adult and not well enough to have traditional heart surgery. Whether or not you have heart valve surgery, and whether the operation is a repair or a replacement, will depend on many factors, including the cause of the problem, which valve is affected, how badly the valve is affected, how many valves are affected, your symptoms, and your general health. What will happen during a heart valve repair or replacement? In most valve operations, your surgeon will: - reach your heart by making an incision down the middle of your breastbone - use a heart-lung machine to circulate blood around your body during the operation - open up your heart to reach the affected valve, and - perform the repair or replacement. In a small number of cases, one or more small incisions can be made in your chest and your breastbone may not even need to be cut.
Speak with your surgeon about the advantages and disadvantages of this type of surgery, as it is not suitable for everyone. How long will it take me to recover? If all goes well, you will be helped to sit out of bed the day after the procedure. You can expect some discomfort after your operation and you will be given pain relief medication. Your pain level will be monitored to make sure you are as comfortable as possible. Many people return home within about a week. On average, it takes between 2-3 months to fully recover, but this can vary greatly as it depends on your individual condition. What are the benefits and risks? For most people, the operation will greatly improve symptoms and quality of life. Like all operations, valve surgery isn't risk-free. Your own risk will depend on your age, your current state of health and the degree of valve disease. Before your procedure, your surgeon will discuss with you both the benefits and risks of the operation. Endocarditis is a rare but serious condition where the inner lining of the heart becomes infected. This most commonly takes place in one of the heart valves. If you have a heart valve problem or have had surgery on your valve, you are at risk of developing endocarditis. You are also at risk if you have had endocarditis before. Until recently, people at risk of endocarditis were advised to take antibiotics before having dental treatment and some other procedures. However, that is no longer recommended. You can find out more at NHS Choices. You can also check out our Endocarditis warning card. Life after a heart valve replacement - Sam's story Will I have to take any medication afterwards? If you have a mechanical valve replacement, you'll need to take anticoagulant medicine, such as warfarin, for the rest of your life. This is because a mechanical valve is made of artificial material, which increases the risk of a blood clot developing on the valve's surface.
If you have a tissue valve replacement, you may need to take anticoagulants for a shorter period. This may be from a few weeks to 2-3 months after surgery. The most commonly prescribed anticoagulant is called warfarin. For more information, please see our booklets on Heart Valve Disease or Medicines for your heart. It is important that you speak to your doctor or pharmacist before you take any medicines in addition to those you have been prescribed. A BHF-funded chemical engineer is perfecting a material to tackle heart valve disease. A heart valve needs to be strong yet flexible, capable of opening and closing 200 million times or more. It needs to be compatible with the human body and able to let blood flow easily through it. Dr Geoff Moggridge thinks he's found the solution. This short animation explains how this new discovery could transform replacement valves in the future. Help us fight against heart failure. Your donations help us fund more pioneering research into treatments that combat coronary heart disease. Find out how your support can make a difference. Want to know more? Order or download our publications:
The Land of Volcanoes El Salvador is the smallest yet most densely populated country in Central America. It has been greatly affected by a large number of natural disasters, including flooding, droughts, earthquakes and volcanic eruptions. The disasters have created serious problems related to infrastructure, food security and water quality, which all contribute to a number of health issues. While the government has been a leader in the region for health programs targeting some of the main problems, funding is a constant issue and threatens the progress that has been made in spite of the harsh environmental factors. The Micro Health Insurance Program In February 2008, we founded a clinic in the rural Las Delicias community, located 45 minutes outside of the capital city. With a population of nearly 3,000, Las Delicias sits in a beautiful valley and is divided into 6 sectors. With low employment rates in the formal sector, as well as excitement by community members to participate in health education, the world's first ever non-monetary Micro Health Insurance Program was established. Las Delicias is located 35km outside of San Salvador in the department of La Libertad. The community is nestled among coffee plantations, corn fields and sugar cane. The closest government clinic is located 10km from the community but can take nearly two hours to reach by public transportation. The clinic is saturated with patients and often lacks the resources necessary to adequately treat the population. We started with a health census. We met with the leaders and started working alongside the Ministry of Health. We sat and talked to community members. We developed programs and services based on the actual, current needs and have continued to expand them as we grow. Insufficient income has a serious adverse effect on the general health and vitality of the population.
During the civil war in the mid-1980s, El Salvador was among the countries of the Western Hemisphere most seriously affected by malnutrition. Today, the principal causes of death remain gastroenteritis, influenza, pneumonia, and bronchitis, caused or complicated by malnutrition, bad sanitation, and poor housing. Children are particularly vulnerable to health-related issues linked to poverty, including diarrhea, head lice, malnutrition, and bronchial infections. Community members, as well as patients from twelve outlying communities, receive services at the clinic, including pediatric attention, prenatal and postpartum care. FIMRC works in conjunction with the Ministry of Health, the local development association and other partners on preventative health education. The main focus of the community outreach is to prevent common problems in the area such as malnutrition, gastrointestinal illnesses and respiratory infections. A majority of the health issues are due to a lack of education, scarce resources and poor water quality. The innovative Micro Health Insurance Program (MHIP) was also launched in FIMRC's El Salvador clinic in June 2008. The MHIP is the world's first non-monetary model of health insurance that incorporates health incentives with micro health insurance. The program is a holistic approach to meet the needs of an underserved population. Through an extensive health education program, participants earn health credits to purchase health-related products that would otherwise be unattainable due to a lack of resources. At Project Las Delicias, we have always taken a community-based approach and continue to do so. We work closely with the families in Las Delicias, Las Brisas and the surrounding areas to understand their challenges and needs and build programs around their current health situation. With the support of our volunteers, we dedicate our resources to three main areas of focus: clinical activities, health education and special initiatives.
Below are a few examples of our work at Project Las Delicias. - Treat patients of all ages (0-96) - Support rural medical brigades in outlying communities - Conduct community based medical visits - Train local doctors on specialized equipment - Conduct testing and treatment campaigns for parasitic infections - Adolescent group workshops - Water sanitation programs - Prenatal programming and house visits - Dental hygiene education - Micro Health Insurance Program - Water filter distribution - Nutrition groups - School water system improvement At this time, we have suspended all volunteer travel to El Salvador. FIMRC is committed to the safety of our volunteers, and we do not take this commitment lightly. We evaluated all available information and decided to pause volunteer travel to El Salvador. We continuously assess the safety of the countries where we work, so we will continue to watch the safety situation and hope to send volunteers again! Stay tuned for additional updates on the volunteer program in El Salvador. We remain committed to the community and our patients. Our clinic in Las Delicias remains open to ensure that our promise of improving health within the community continues.
Blood pressure, heart rate, height and weight. You're probably used to getting these standard checks when you go to the doctor's office. But there's a relatively new vital sign on the nurse's go-to checklist: physical activity. Why start tracking physical activity? The best answer is that 150 minutes of moderate exercise each week can save your life. According to the American College of Sports Medicine (ACSM), regular exercise can: - Reduce heart disease risk by 40 percent - Lower stroke risk by 27 percent - Reduce diabetes risk by almost 50 percent - Reduce high blood pressure incidences by about 50 percent - Reduce mortality and recurrent breast cancer risk by nearly 50 percent - Lower colon cancer risk by more than 60 percent - Reduce Alzheimer's disease development risk by one-third - Decrease depression as effectively as certain medications and behavioral therapies Mayo Clinic Health System began screening patients for physical activity as a vital sign (PAVS) in October 2013. Kaiser Permanente and Intermountain Healthcare are currently measuring PAVS as well, but Mayo Clinic Health System is the first organization to incorporate strength training into the equation. Most people think weight training is just for athletes, bodybuilders and muscle tone. What they aren't aware of are the many health benefits of strength training. Meaningful strength training can help improve bone density, blood pressure, cholesterol, diabetes and self-confidence. Having strong muscles also can reduce the workload on your heart and lungs, which is helpful if you have heart or lung disease. These are the reasons why Mayo Clinic Health System includes strength training as part of PAVS. The PAVS initiative was started through a partnership between the ACSM and the American Medical Association to promote the importance of exercise as a way of treating and preventing disease. The long-term hope is to make measuring PAVS a standard medical practice throughout the nation.
Comment from Cherie Pettitt, Sunday, March 30, 2014: It is so great to see MCHS taking this essential next step in helping patients PREVENT diseases and improve their overall health. Yes, even when you have a cold, whether or not you exercise matters! Congrats Chip, and congrats to our community.
Online payments are becoming more and more important every day, but that doesn't mean the platforms we use are stepping up their security game. PayPal, one of the largest online payment processors in the world, recently fell victim to a bug in their account system, allowing users to send malicious code through confirmation emails. Luckily, the person who discovered this issue reported the exploit to PayPal through their bug bounty program rather than using it for malicious intent. Sending Malicious Code With PayPal Confirmation Emails Larger online payment processing platforms have a bigger chance of becoming vulnerable to some form of exploit sooner or later. Luckily for PayPal, German security researcher Benjamin Kunz Mejri discovered the flaw and reported it to the company immediately. If someone else had made this discovery, the company would have been far worse off. The exploit works by sending emails with malicious code through an existing PayPal account. Sending an email to a different PayPal user requires users to fill in a name – usually first and last name – but it turned out that entry field could be filled with arbitrary code, including malicious scripts. Doing so was not as straightforward as it sounds, though, as Mejri had to bypass a security filter, demonstrated in a video accompanying the original article. Once that step was completed, he used the PayPal feature to share an account with other users by adding multiple email addresses. This feature can be compared to a multisignature Bitcoin wallet, albeit with entirely different security precautions. All of the email addresses on the list to share this particular PayPal account with would receive a confirmation email to accept this invitation. Once a user opens this email, the malicious code is executed in the background, originating from PayPal's servers.
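PayPal's actual fix is not public, but the standard defence against this class of injection is to escape user-supplied text before embedding it in HTML email. A minimal illustrative sketch, not PayPal's code, using Python's standard library (`render_greeting` is a hypothetical helper name):

```python
import html

def render_greeting(display_name):
    """Escape a user-supplied name before placing it in an HTML email body,
    so a 'name' containing script markup is rendered as inert text."""
    return "<p>Hello, {}!</p>".format(html.escape(display_name))

# A malicious "name" comes out with its tags neutralised (&lt;script&gt;...)
print(render_greeting('<script>alert("pwned")</script>'))
```

Escaping at the point where untrusted text meets markup, rather than trying to filter "bad" input, is what closes the hole the filter bypass exposed.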
As most people have guessed by now, this method makes it rather easy to execute phishing attacks against other users, while ensuring the email sender is PayPal, rather than spoofing the header. Other exploits included session hijacking, and even redirecting the user to different web pages or websites. Luckily for all PayPal users, this exploit was patched in early March 2016, and Mejri received a US$1,000 bounty for reporting the security flaw. White hat hackers are of incredible value to financial service providers, which is why companies such as PayPal run bug bounty programs. Bitcoin Is an Answer to Centralized Services Although PayPal is one of the most popular online payment processors in the world, their entire business model is as centralized as it can get. Not only do they take a cut of every transaction – and quite a big one too – but they also hold on to customer funds when both depositing and withdrawing money. Relying on a service with a central point of failure puts consumers' funds at risk. Bitcoin, on the other hand, is entirely decentralized at its core, although there are centralized platforms in this ecosystem as well. Financial control is something very few consumers are accustomed to, and no longer relying on centralized services requires a major mind shift. However, for those willing to take financial matters into their own hands, Bitcoin is a viable option. What are your thoughts on this recent PayPal vulnerability? Let us know in the comments below! Source: Tweakers (Dutch) Images courtesy of PayPal, Shutterstock The post Recent PayPal Exploit Shows Benefits of Decentralized Payment Solutions appeared first on Bitcoinist.net.
Written by Lowell H. Zuck The evangelical roots of the United Church of Christ represent a unionist-pietist liberal approach to Christianity. Among most nineteenth-century immigrants on the Midwestern frontier, German Evangelicals stood in stark contrast to the doctrinal rigorism that was popular among Missouri Lutherans, Christian Reformed, and, to a lesser extent, Presbyterian, Congregational, Baptist, and Methodist revivalists.(1) German Evangelicals in Missouri, Illinois, and other Midwestern states traced their roots unofficially to the Prussian Union Church, founded in 1817. On arrival in the United States, in 1840, they organized themselves into a church association (Evangelische Kirchenverein des Westens), making use of German confessions from both Lutheran and Reformed traditions. They also displayed a pietistic ability to pray, sing, form congregations, and train ministers, following the ecumenically open but conservative Lutheran-Reformed tradition. They started a church journal (the Friedensbote) and reshaped a new catechism (Evangelical Catechism). But the most important institution for developing a new German Evangelical consciousness in America was a seminary, begun in 1850 at Marthasville, Missouri, and later moved to Webster Groves and renamed Eden Theological Seminary.(2) The fourth president of this Evangelical seminary served from 1872 to 1879. His name was Karl Emil Otto (1837-1916).(3) The story of this immigrant clergyman, who was educated at the German university of Halle, illustrates how an immigrant church, loyal to German traditions, was able to maintain faith commitments in the face of rationalist intellectualism. It is a story of the struggle between the latest German critical biblical scholarship and a healthy religious pietism on the American frontier.
Although Otto created a controversy involving parochial immigrant concerns, his life reveals the basically liberal characteristics of American German Evangelicals: a group that was unwilling to stay safely within narrow confessional limits, or to be restricted by fashionable theology and traditional institutionalism. CONCERN FOR ORTHODOXY In 1845 Philip Schaff, who had come to the German Reformed Seminary at Mercersburg, Pennsylvania, and who was also rooted in the same Prussian Union Church as Karl Emil Otto, had been unsuccessfully tried for heresy. He was accused of teaching a view of the Reformation that was too Catholic for his American Reformed audience.(4) The romantic German Mediating theology behind Schaff's teachings was also important to Otto. Karl Emil Otto, however, was more deeply involved with German historical critical scholarship than Schaff had been. Karl Emil Otto was one of the first biblical scholars using German methods to be tried for heresy. Although he was unfavorably judged by the Evangelical Synod in 1880, the judgment did not permanently alienate Otto from the German Evangelical community. His story shows that German Evangelicals had a greater tolerance for German biblical scholarship than any other non-Unitarian American denomination at the time. The only groups in America that advocated more radical doctrines than Otto's were the Free Religious Association, formed out of Unitarianism in 1867, and the Society for Ethical Culture, begun in 1876 by Felix Adler as a reaction to narrow Judaism.(5) Among liberal Protestants in the 1870s, conservatism dominated. Only the Congregationalists and Baptists, with their loose form of government, allowed liberal theology access to seminaries. Prof. Crawford H. Toy was forced to resign from the Southern Baptist Seminary at Louisville, Kentucky, in 1879 when his views seemed to impugn the plenary inspiration of scripture.
The Alabama Baptist wrote: "The fortunes of the Kingdom of Jesus Christ are not dependent upon German born vagaries."(6) Toy had studied in Germany. Andover Seminary clung to its Calvinistic creed until the 1890s. In 1891, Prof. Egbert G. Smyth, who had studied in Berlin and Halle in 1863, was able to have the Massachusetts Supreme Court overrule attempts to remove him for heresy from Andover in 1886.(7) Egbert's brother, Newman Smyth, who had also studied in Germany, was denied an Andover appointment in 1881 because of his opposition to eternal punishment. In 1893 the celebrated Presbyterian heresy trial of Charles A. Briggs (Union Seminary) took place, resulting in his dismissal by the General Assembly.(8) A. C. McGiffert, a Marburg Ph.D. and Union colleague of Briggs, resigned from the Presbyterian ministry in 1900 to become a Congregationalist. H. Preserved Smith of Lane Seminary was dismissed from the Presbyterian ministry in 1892. All three had studied in Germany. Fundamentalism continued its hold over Presbyterians for another quarter century. In 1904 a Methodist, Borden P. Bowne, was examined and acquitted of heresy at Boston University. And in 1906 Prof. Algernon S. Crapsey was deposed by the Episcopal Diocese of Western New York for not being creedally traditional.(9) In retrospect, it is remarkable that a heresy trial regarding biblical criticism took place in 1880 at a remote German Evangelical seminary in Missouri. This incident shows the sensitivity of the German Evangelicals to the latest scholarship and their capacity to handle controversy. Karl Emil Otto was born on January 7, 1837, in Mansfeld, Saxony, at the foot of the Harz Mountains.(10) His father, Karl Friedrich Otto, was headmaster of the school at Mansfeld where Martin Luther received his education. Soon after young Otto was confirmed at age fourteen, his father died. An older brother, who had already become a pastor, took charge of Otto's studies, preparing him for high school.
For nearly six years Otto concentrated on ancient languages at the Saxon territorial Pforta school, where scholars spoke Latin in the middle and upper classes. With the help of his brother, Otto enrolled in the University of Halle and studied there from 1857 to 1860. At Halle, Otto studied with the notable Mediating theologians: August Tholuck in systematics, Julius "Sin" Mueller in biblical theology, and Hermann Hupfeld in philology. Hupfeld's critical and philologically accurate method of studying Near Eastern languages was especially influential in forming young Otto's approach to biblical exegesis.(11) It is interesting to note that Prof. Heinrich Heppe, who prepared the way for the Wilhelm Herrmann-Rudolf Bultmann liberal tradition at the University of Marburg, had also studied (at Marburg) with Professors Mueller and Hupfeld.(12) After Karl Emil Otto completed his theological studies and passed his first examination, he spent some years as a private tutor in a pastor's family and taught Latin to gifted students at the famous Francke Orphans' Institute at Halle. With his final examination Otto appeared to have a bright future as a theologian and pastor in Germany. MISSIONARY TO AMERICA In September 1864, however, Otto attended the Altenburg Kirchentag assembly. There he heard addresses by two American pastors, one from the Lutheran Wisconsin Synod and one from the Evangelical Synod in Missouri. Both spoke of the desperate need for well-trained theologians to minister to German immigrants on the American frontier. On the spot, Otto decided that he would go to America, if he could find a way. Before long he received a five-year appointment to the Wisconsin Synod from the Berlin Missionary Society. He was sent to the Wisconsin Synod with the assurance that he could have a permanent position in Germany, should he decide to return. In February 1865 Otto was ordained to the Evangelical ministry at Magdeburg, Germany.
Karl Emil Otto arrived in Milwaukee on April 29, 1865, where he was kindly received by Pastor Muehlhaeuser of the Wisconsin Synod. He was assigned to two Lutheran and one Reformed rural congregations in Dodge County, Wisconsin. In spite of primitive frontier conditions, Otto endeared himself to his people. He found, however, that the Wisconsin Synod, with its increasingly strenuous Lutheran confessionalism, was in conflict with his commitment to Evangelical unionism and a critical approach to scripture. In a short time Otto became acquainted with the milder Evangelical Synod. The notable Evangelical traveling preacher Louis von Ragué persuaded him to travel to St. Louis in late 1865 to visit Pastor Louis Nollau, founder of the Evangelical Kirchenverein.(13) Nollau appreciated Otto's abilities and viewpoint and told him of a vacancy at St. Paul's Evangelical Church in Columbia, Illinois (across the river from St. Louis). Resigning from the Wisconsin Synod in 1866, Otto spent the next four years at Columbia, Illinois. He became an Evangelical minister in 1867 and married a relative from Germany, Amelia Otto, in the same year. They had seven children. SEMINARY PROFESSOR AND PRESIDENT By 1870 Otto's scholarship and pastoral gifts were well known, and he was called to a professorship at the Marthasville, Missouri, Evangelical Seminary. Otto had barely arrived at Marthasville in July when he learned of the sudden death of the school's forty-seven-year-old president, Andreas Irion.(14) When Irion's successor, Johann Bank, resigned in 1873, after little more than a year, because of ill health, Karl Emil Otto, at age thirty-eight, became president of the institution. Under Otto's leadership a new educational spirit was introduced. Irion had powerfully represented the practical and old orthodox spirit of the mission houses, whereas Otto, less comfortably for the synod, taught the critical theology of the German universities.
Irion had been a mission-institute Pietist, teaching theology as a deep-going mystic; Otto was a critical theologian. Irion represented Wuerttemberg pietism, combining childlike religious feeling with a speculative spirit; Otto, on the contrary, was a North German, a believing Christian but less pietistic. Through his schooling he had been trained in historical-critical research, leading to positive results. Otto's strength lay in exegesis. He taught dogmatics, but it was not his main field. His greatest love was Old and, especially, New Testament exegesis.(15) In 1873 the seminary and denomination started a new journal, the Theologische Zeitschrift. Already in March of that year Otto published the first of three installments on "The Exegesis of Romans 5:12-19." His intention was to acquaint members of the synod with what he was teaching. The articles were well received, and Otto was chosen editor in 1877. Otto's difficulties did not come from his students. He possessed outstanding teaching abilities that aroused enthusiasm. The students had not previously heard such deep-going exegesis. Irion had presented the deep thought contents of biblical concepts; but Otto commanded Greek as if it were his native language. He was able to contribute not only philological enlightenment, but also what his contemporaries called "the nutritious bread of living scriptural thought."(16) Moreover, Otto led his students to develop their own abilities to think and become earnest researchers themselves, thus reaching the highest goal of a teacher. In 1879 Otto's popularity with students resulted in a student strike against the other seminary professor, K. J. Zimmermann, who appeared inadequate by comparison. As a result, Zimmermann resigned and Otto gave up the presidency but remained as professor. Twenty-two of the twenty-six strikers later returned. Louis E. Haeberle became president, showing tact and firmness until his retirement in 1902.
Meanwhile the students began telling their home pastors about the theological viewpoint of their favorite professor. Many pastors were startled. The old beliefs were no longer being taught at the seminary. The leader of the opposition was retired seminary president Johann Bank. Bank and his friends wrote a formal letter to the seminary board demanding that Otto's teachings be investigated. They referred to Otto's 1873–74 articles and questioned whether his views on death as the wages of sin, original sin, and atonement and justification were biblical. The board was not convinced. Early in 1880 it passed a resolution of confidence in Otto, asking him to continue teaching. It examined his dogmatics notes regarding the meaning of Christ's death and atonement, the death of humanity, the miracles of Christ, and the sacrifice of Isaac and found no problems. In two resolutions the board fully supported Otto: (1) The Seminary Board has convinced itself that the doubts raised about Professor Otto's teaching have no basis in fact, and that therefore his further continuance at the seminary must be desired by the Board. (2) That Professor Otto shall be requested to forget what has happened and on the basis of strengthened confidence to continue his work with good cheer and courage.(17) However, when four articles on the temptation story in Genesis 3 appeared in the Theologische Zeitschrift from May to August 1880, it became necessary to consider Otto's case at the fall synodical General Conference. Otto had not hesitated to have the articles published, even while he was being examined by the seminary board. He felt no need to seek the approval of any authority higher than his conscience. Otto's symbolical method of scripture interpretation created a sensation among Evangelicals. 
At the September General Conference of the synod, the committee appointed to investigate his work declared that he had deviated from the synodical doctrinal position and demanded that he promise in the future to maintain true doctrine. Otto defended himself with dignity. He affirmed the basic confessions of the church and accepted the unconditional authority of scripture, but he insisted that a teacher be allowed latitude in interpretation. He questioned the competence of the synod to decide such matters. By a vote of 47 to 9, however, the General Conference declared its lack of confidence in Otto. It also added a "Neological Paragraph" to the synodical Constitution, stating: "We must decidedly repudiate any neological [new] method of teaching and explanation of the scriptures, and insist firmly that in our seminary the Christian doctrine is presented in the manner of the positive believing direction, as it is done in the Evangelical Church of Germany."(18) Otto had no alternative but to resign his professorship, as well as his membership in the synod. He became pastor of a non-synodical Evangelical church in Darmstadt, Illinois, where he served until 1887. By 1885, however, Otto had renewed his affiliation with the Evangelical Synod, bearing no malice. Already in 1883 the St. Louis Evangelical publishing house had issued Otto's 268-page Bibelstudien fuer die gebildete Gemeinde (Exposition of Romans for Educated Congregational Members). It included a twenty-nine-page appendix exegeting the Genesis 3 temptation passage.(19) In 1887 Mennonites from Kansas invited Otto to teach in their preparatory school at Halstead, Kansas. But after only a year, Otto accepted the pastorate of an Evangelical church at Eyota, Minnesota, serving from 1888 to 1890. In 1890 Otto accepted a call to become professor of ancient languages and history at Elmhurst College, Illinois. For fourteen years thereafter he prepared students at Elmhurst to enter the Marthasville Seminary. Samuel D. 
Press, later president of Eden Seminary, noted with pride that his immigrant father, Gottlob Press, had studied under Otto in 1874 and "stood by Otto after his dismissal, remaining loyal to him to the end." In turn, Sam Press studied under Otto at Elmhurst, saying of him: The only truly academically trained member of the Elmhurst faculty at that time was Prof. Emil Otto, an outstanding scholar, a man of unimpeachable character. . . . The mainstay of the curriculum at Elmhurst for me were the four years of Latin and the three years of Greek with Prof. Otto. His excellent lectures were too advanced for most of his students.(20) Otto's teaching at Elmhurst included deep-going lectures on world history and German literature. In 1898 he published a story for young people about an American lad who was kidnapped in Connecticut during the American Revolution, an unusual theme for a German immigrant theologian!(21) He also published a fictional history from ancient times, The Bride from Damascus, set in the Greek Orthodox Church of A.D. 633.(22) In 1897 he produced a 137-page German-language history of the life of George Washington, noting both Washington's success as a military commander and leader and his willingness to give up power and return to civilian life. This may have reflected Otto's own renunciations as a theologian.(23) Otto retired from teaching in 1904. For the rest of his life he struggled with defective hearing and with the illnesses and deaths of his wife and eldest son. He returned to Columbia, Illinois, where he died in 1916 at the age of nearly eighty. Those twelve years as emeritus professor were active years, filled with preaching duties and regular writing for the Theologische Zeitschrift. The announcement of his death in that journal was followed by one of his own articles, "The Meaning of the Old Testament for Christian Preaching," written shortly before his death. 
The previous issue contained two Otto articles on "American Idealism" and an exegesis of Colossians 1:24.(24) At the funeral Eden Seminary president William Becker used the text, "He that overcometh shall inherit all things" (Rev. 21:7, King James Version). Otto had overcome what he called his "catastrophe." Carl E. Schneider, Evangelical historian, wrote later: "In calmer moments it became apparent that the action [of the synod's excommunicating Otto] had been too hasty. Otto was vindicated not only by posterity but by many of his contemporaries, and never again was the question of confessional orthodoxy made the issue of serious discussion by any General Conference."(25) Otto's exegesis approached Paul critically. He believed that a great assignment had been given to proclaim the gospel, "to pave the way for a Christian unity of the faith between those who are influenced and those who are not influenced by the so-called modern view of the world." Paul needed to be brought nearer to the Christian church by a manner of interpretation that would "explain Paul purely out of himself, uninfluenced by the authority of doctrinal tradition."(26) Otto discussed the origins of sin and its consequences by examining Paul and the dogmatic traditions. 
In exegeting Romans, Otto wrote with Protestant fervor: If we now compare verse 3:22, "Through faith in Jesus Christ for all who believe," with 1:17, "Through faith for faith," and combine these, then we have Paul's trilogy: Out of faith (God's faith), through faith (Christ's faith), to faith (the new mankind's faith), and the doxology concerning the depths of the richness, the wisdom, and the knowledge of God (11:33), which doxology refers to the perfection of the work of redemption; then we have essentially a substantiation of our interpretation.(27) But Otto's lengthy discussion of the doctrine of the atonement bordered on heresy: Because God cannot forgive sins without a vicarious death, therefore he himself had to finally furnish the perfect offering, which was to bear vicariously the suffering of punishment for all mankind. That God did by presenting Christ as the atonement offering. . . . One can tell at once by this "orthodox" interpretation that it is not derived from exegesis, but from dogmatics. One would hardly have found this explanation in these two verses (Rom. 3:25–26) if one had not had this interpretation beforehand. And where did one get it? Not from the Bible, but from Scholasticism. . . . The theory of the atonement goes back in the first place to Anselm's attempt to construe the content of the revealed truth about faith by the means available to human reason. This background should serve as a warning not to identify the outcome of this theory immediately with revealed truth itself.(28) Even more sensational than his work on Paul were Otto's articles on the Genesis temptation story (Genesis 3). He reviewed different types of exegesis (allegorical, literal, dogmatical, and theosophical), showing reasons for rejecting them all. His was a symbolic interpretation: "The tree of life is not actually a fruit tree; the tree of knowledge is not actually a tree; then too no serpent actually appeared. 
The appearance of the serpent symbolizes the fact that just that made itself felt which is symbolized by this picture." Otto argued that the serpent was a natural being, created by God as every other creature. It was not a creature of Satan. There was no trace that the serpent had its cunning from anywhere else except from God. "The serpent symbolizes the power that resides in nature and entices to evil. This power is not yet morally bad in itself." It is a power that dare not gain influence over humanity, if we do not want to become morally bad.(29) Otto supported a feminist interpretation of the fall: "The fact that the serpent approaches the woman first is generally associated with the greater lack of self-control and the greater temptability of the woman. But actually it rather points to the connection of the sinful fall with the sex relationship only to the extent that the sex discretion occurs earlier with the woman than with the man."(30) Otto's basic point was that the story of the fall points to sin as grave disobedience to God. The tree, the serpent, and the conversations are merely a shell. Otto was concerned with practical teaching, recognizing two ways to grasp offered truths: either in the form of abstract truth or in the form of a graphic story. He did not believe that they needed to contend against each other as if they were enemies. Those who cannot yet free themselves from the story form to grasp the moral truths should stay with this form so that they will not lose the content of the same. But those who have the duty to impart religious truth to their times and companions should get clear in their minds concerning this truth.(31) Nor did Otto pit science and faith against each other as enemies: The Scriptures should not be interpreted according to the demands of the natural sciences, but according to the Scriptures themselves. Exegesis must simply seek to find that which the Scripture passage intends to say. 
If it should happen that the passage should represent conceptions which are impossible to reconcile with scientific findings, then there will still be time to decide in favor of which side of the respective collision one might choose to stand.(32)

THE EVANGELICAL MIX

The synodical case against Otto rejected his statement that he was not asking for recognition of a "liberal interpretation," but of the "neological" (modern) method of exegeting scripture. Such an argument, however, was in keeping with the confessional stance of the Evangelical Synod. Its 1848 confessional statement affirms the Scriptures as the Word of God and as the sole and infallible rule of faith and life, and accepts the interpretation of the Holy Scriptures as given in the symbolic books of the Lutheran and the Reformed Church, the most important being the Augsburg Confession, Luther's and the Heidelberg Catechism, in so far as they agree. But where they disagree, we adhere strictly to the passages of Holy Scriptures bearing on the subject, and avail ourselves of the liberty of conscience prevailing in the Evangelical Church.(33) The confessional statement revealed the unionist confessional spirit of the Evangelicals, which allowed choice between Lutheran and Reformed confessions, while at the same time affirming the priority of scripture and appealing to individual conscience on points of disagreement. Although it could be criticized as contradictory, the paragraph nicely combined liberal individualism with conservative scriptural authority. Highly trained scholars leaned toward autonomy of conscience, while ordinary pastors and church members, grounded in conservative pietistic views, favored traditional scriptural authority. Although Otto followed the confessional tradition of his predecessors, William Binner, Andreas Irion, and Adolph Baltzer, he did so with greater discernment, penetrating to more daring conclusions. 
Building on his excellent German theological training, Otto affirmed the authority of Holy Scripture without question. However, he also considered himself better qualified than many others to distinguish between favored interpretations of texts and their actual meanings. He insisted that the meaning "which according to my best knowledge is the meaning of Scripture constitutes for me the norm for my teaching."(34) The censure of Karl Emil Otto at the 1880 synodical conference centered on what constitutes Evangelical freedom. Maintaining that he espoused scientific, theological truth, Otto urged that the synod could accept his position without fear of drifting from its doctrinal moorings. Indeed, the contrast between an orthodox and a more liberal position, which he admitted he held, was wholesome for the church. On the basis of the confessional paragraph, he demanded recognition and equal rights for both. At the time the irenic spirit of the Evangelicals was overcome by fear of the dangers of "neology." When the vote went against Otto, he was dismissed from the synod. More orthodox Evangelicals tried to prevent others who shared his views from becoming seminary professors. Yet Otto was not repudiated as a person or as a teacher, although he was never invited back to teach at the seminary. "Americanization" was a crucial issue for non-English-speaking believers, and the wave of the future for Evangelicals was on the side of Americanization. As early as 1874, when Otto was president at Marthasville, he had proposed that students who completed their work with honors at "Eden" should be sent to an English-speaking college or theological seminary for further work.(35) Conservatives responded that it would be far better for them to attend German universities. Yet Otto's flexibility on language issues was consistent with his critical and forward-looking theological views. 
As the years went by, it was Otto's approach to scriptural authority, learning, and individual conscience, and his willingness to allow missionary-like accommodation to American life, that prevailed in the Evangelical Synod. By the time of Otto's death in 1916, one-time student Samuel D. Press said of Otto that "not only his theology was Christocentric, but also his life." Press expressed the prevailing Evangelical spirit when he went on to say: Professor Otto holds a distinctive place in our Synod. Through Otto's intellectual talents, God presented our Church with one of his richest gifts. . . . Otto was an untiring searcher for truth. . . . Completely unpartisan, Otto had the courage to present his theological positions freely and openly, without concern for personal consequences. . . . What a tragedy that our Church robbed itself of the services of such an outstanding theological servant! Nevertheless, this noble person continued to serve the Church faithfully until the end of his life.(36) The little-known heresy trial of Karl Emil Otto before the Evangelical Synod in 1880 presents a unique example of theological leadership and the struggle for denominational identity on the American scene. The small German Evangelical denomination made an initial mistake but went on to recover its identity and its ability to grow amid struggle. Karl Emil Otto and the Evangelical Synod show how sound biblical criticism and flexible churchly pietism learned to live together. 1. For Evangelical Synod history, see Carl E. Schneider, The German Church on the American Frontier (St. Louis: Eden Publishing House, 1939), and David Dunn, ed., A History of the Evangelical and Reformed Church (Philadelphia: The Christian Education Press, 1961). 2. Carl E. Schneider, History of the Theological Seminary of the Evangelical Church (St. Louis: Eden Publishing House, 1925), and Walter A. Brueggemann, Ethos and Ecumenism, An Ecumenical Blend: A History of Eden Theological Seminary, 1925–1975 (St. 
Louis: Eden Publishing House, 1975). 3. For Otto, see Schneider, German Church, p. 368, and Dunn, op. cit., pp. 223–29. 4. For Schaff, see James Hastings Nichols, Romanticism in American Theology: Nevin and Schaff at Mercersburg (Chicago: University of Chicago Press, 1961), and The Mercersburg Theology (New York: Oxford University Press, 1966), and John B. Payne, "Philip Schaff: Christian Scholar, Historian and Ecumenist," Historical Intelligencer 2 (1982): 17–23. 5. Winthrop S. Hudson, Religion in America (2d ed.; New York: Charles Scribner's Sons, 1981), p. 286, and William R. Hutchison, The Modernist Impulse in American Protestantism (New York: Oxford University Press, 1976), pp. 31–40. 6. Kenneth K. Bailey, Southern White Protestantism in the Twentieth Century (New York: Harper & Row, 1964), p. 12, and Pope A. Duncan, "Crawford Howell Toy: Heresy at Louisville," in American Religious Heretics: Formal and Informal Trials, ed. George H. Shriver (Nashville: Abingdon Press, 1966), pp. 56–88. 7. Egbert C. Smyth, Progressive Orthodoxy (Boston: Houghton, Mifflin, 1885); Newman Smyth, Dorner on the Future State (New York: Charles Scribner's Sons, 1883). See Daniel Day Williams, The Andover Liberals (New York: Octagon Books, 1970). 8. On Briggs, see Lefferts A. Loetscher, The Broadening Church: A Study of Theological Issues in the Presbyterian Church Since 1869 (Philadelphia: University of Pennsylvania Press, 1958), ch. 4, and H. Shelton Smith, Robert C. Handy, and Lefferts A. Loetscher, American Christianity, 1820–1960 (New York: Charles Scribner's Sons, 1963), pp. 275–79. 9. Noted in Hudson, op. cit., p. 280. The Disciples of Christ expelled their first modernist minister, Robert C. Cave, in 1889. See Lester G. McAllister and William E. Tucker, Journey into Faith: A History of the Christian Church (Disciples of Christ) (St. Louis: Bethany Press, 1975), pp. 363–64. 10. E. Otto obituary, Evangelical Herald (August 17, 1916), pp. 4–5, for a brief summary of his life. 11. 
For the influence of German Mediating theology on American theology and philosophy, see Bruce Kuklick, Churchmen and Philosophers: From Jonathan Edwards to John Dewey (New Haven, CT: Yale University Press, 1985), pp. 126–27; Ragnar Holte, Die Vermittlungstheologie (Uppsala: Almquist & Wiksells, 1965). 12. For Heppe, see Lowell H. Zuck, "Heinrich Heppe: A Melanchthonian Liberal in the Nineteenth-Century German Reformed Church," Church History 51 (1982): 419–33. 13. Sketches of Ragué and Nollau in Lowell H. Zuck, New Church Starts: American Backgrounds of the United Church of Christ (St. Louis: United Church Board for Homeland Ministries, 1982), pp. 12–14. 14. For Andreas Irion, see John W. Flucke, Evangelical Pioneers (St. Louis: Eden Publishing House, 1931), pp. 127–40, and Schneider, German Church, pp. 314–18, 416–17. 15. Walter Merzdorf translated Otto's 1873 Dogmatics (1967) in 149 typewritten pages from student notes. Copy in Eden Archives, Webster Groves, MO. 16. H. Kamphausen, Geschichte des Religioesen Lebens in der Deutschen Evangelischen Synode von Nord-Amerika (St. Louis: Eden Publishing House, 1924), p. 160. 17. Quoted in ibid., p. 165. 18. Protokoll der General-Conferenz (St. Louis, September 1880), p. 21. 19. Walter Merzdorf translated Otto's Romans in 1964–65. Copy available in Eden Archives, typewritten, 414 pages. 20. From Samuel D. Press, typewritten Autobiographical Reflections, in Eden Archives. See William G. Chrystal, "Samuel D. Press: Teacher of the Niebuhrs," Church History 53 (1984): 504–21. 21. Der Gestohlene Knabe: Eine Geschichte aus der Revolutionszeit (St. Louis: Eden Publishing House, 1898). 22. Die Braut von Damaskus (St. Louis: Eden Publishing House, 1895). 23. Das Leben George Washingtons (St. Louis: Eden Publishing House, 1897). 24. Magazin fuer Evang. Theologie und Kirche 18 (1916): 321–29, 329–39; 251–63, 287–97. 25. Carl E. Schneider, The Place of the Evangelical Synod in American Protestantism (St. Louis: Eden Publishing House, 1933), p. 25. 26. 
Otto, Romans, Merzdorf MSS, p. 4. 27. Ibid., pp. 100–101. 28. Ibid., pp. 114–15. 29. Ibid., pp. 395, 397–98. 30. Ibid., p. 402. 31. Ibid., p. 414. 32. Ibid., p. 411. 33. Schneider, German Church, p. 409. 34. Quoted in Schneider, Place of the Evangelical Synod, p. 25. 36. Samuel D. Press, Otto obituary, The Keryx, October 1916, pp. 26–27. Lowell H. Zuck is Professor of Church History at Eden Theological Seminary, St. Louis, Missouri.
No Roots--No Fruits

There is a short phrase in Greek, “Kalo Riziko!”, that is said to people who are moving into a new city or home. It means “may you put down good roots.” Being uprooted and moved to another place is often traumatic, especially for refugees, such as those fleeing war-torn Syria. So having a good root system is important. Some of us are old enough to remember the landmark television series titled “Roots,” about an African American named Kunta Kinte discovering the story of his slave ancestors all the way back to their home country. I felt like Kunta Kinte when we travelled to Greece in 1994 to visit for the first time the tiny villages where my mother, grandmother, and grandfather were born, and to meet relatives I had never seen or known before. Roots are important. That is why adopted children often seek out the birth parents who did not raise them. Jesus is saying “Kalo Riziko!” to us in today’s Gospel reading for the Fourth Sunday of Luke (8:5-15). “May we put down good roots!” Not in a new house, but in producing a crop. 5“A sower went out to sow his seed. And as he sowed, some fell by the wayside; and it was trampled down, and the birds of the air devoured it. 6Some fell on rock; and as soon as it sprang up, it withered away because it lacked moisture. 7And some fell among thorns, and the thorns sprang up with it and choked it. 8But others fell on good ground, sprang up, and yielded a crop a hundredfold.” When He had said these things He cried, “He who has ears to hear, let him hear!” Yet, Jesus is not even talking about a crop of food like wheat or barley or vegetables. Explaining the parable, Jesus says, 11“Now the parable is this: The seed is the word of God. 12Those by the wayside are the ones who hear; then the devil comes and takes away the word out of their hearts, lest they should believe and be saved. 
13But the ones on the rock are those who, when they hear, receive the word with joy; and these have no root, who believe for a while and in time of temptation fall away. 14Now the ones that fell among thorns are those who, when they have heard, go out and are choked with cares, riches, and pleasures of life, and bring no fruit to maturity. 15But the ones that fell on the good ground are those who, having heard the word with a noble and good heart, keep it and bear fruit with patience. Did you hear that one sentence? 13But the ones on the rock are those who, when they hear, receive the word with joy; and these have no root, who believe for a while and in time of temptation fall away. Roots are important, and the root we need to have, according to Jesus, is the root that grows from the seed of the word of God. Why? Because without the root the plant or tree cannot grow, and if it cannot grow, it cannot produce fruit. Our world needs a lot of good fruit, not just for physical nourishment but more importantly for spiritual sustenance. With good, deep roots we can produce the following: 22But the fruit of the Spirit is love, joy, peace, longsuffering, kindness, goodness, faithfulness, 23gentleness, self-control. Against such there is no law. (Galatians 5:22-23). Roots are important! In the Divine Liturgy we pray at the Great Litany for “favorable weather, temperate seasons and the fruits of the earth” (Great Litany). Certainly this refers to crops of fruit to feed people, but it also includes the spiritual fruits of the Holy Spirit. For Christians, bearing fruit is not optional, for Jesus said: 16You did not choose Me, but I chose you and appointed you that you should go and bear fruit, and that your fruit should remain, that whatever you ask the Father in My name He may give you. (John 15:16) The consequences for not bearing fruit are very serious. Again, listen to Jesus: 18Now in the morning, as He returned to the city, He was hungry. 
19And seeing a fig tree by the road, He came to it and found nothing on it but leaves, and said to it, "Let no fruit grow on you ever again." Immediately the fig tree withered away. (Mt. 21:18-19). The fig tree likely did not have any good roots. Roots are important! Fr. Anthony Coniaris (“It Withered Because It Had No Root,” Meet Jesus, vol. 2, p. 95) concludes, “No Roots—No Fruits!” But if a tree does have fruit, it will be known by what type, because Jesus said: 17Even so, every good tree bears good fruit, but a bad tree bears bad fruit. 18A good tree cannot bear bad fruit, nor can a bad tree bear good fruit. 19Every tree that does not bear good fruit is cut down and thrown into the fire. 20Therefore by their fruits you will know them. (Matthew 7:17-20). We are known by the fruits we bear. Good fruit can grow in our lives only if we are rooted in Christ and His Church through faith, prayer, and the Eucharist. Again, listen to Jesus teach on this subject. 1"I am the true vine, and My Father is the vinedresser. 2Every branch in Me that does not bear fruit He takes away; and every branch that bears fruit He prunes, that it may bear more fruit. 3You are already clean because of the word which I have spoken to you. 4Abide in Me, and I in you. As the branch cannot bear fruit of itself, unless it abides in the vine, neither can you, unless you abide in Me. 5I am the vine, you are the branches. He who abides in Me, and I in him, bears much fruit; for without Me you can do nothing. 6If anyone does not abide in Me, he is cast out as a branch and is withered; and they gather them and throw them into the fire, and they are burned. 7If you abide in Me, and My words abide in you, you will ask what you desire, and it shall be done for you. 8By this My Father is glorified, that you bear much fruit; so you will be My disciples. (John 15:1-8) Pruning sometimes feels like punishment, but we must remember that the trials of life are given by God to make us stronger, not weaker. 
The same is true of roots. Trees that are exposed to strong winds are forced to sink their roots deep into the earth to be able to resist the winds. Sometimes God uses the storms and the adversities of life to strengthen our spiritual root system. When some plants are deprived of water, their roots are forced to grow deeper and deeper in search of moisture. The whole plant is thus made stronger and able to resist serious drought later on. Being connected to Jesus is important because 6Jesus said to him, "I am the way, the truth, and the life. No one comes to the Father except through Me." (John 14:6) Often we hear people say, “What does it matter what we believe as long as we do what is right?” It matters very much what we believe, because what we do rises from what we believe. Belief is the root of action. Jesus is new life, and that requires a whole new root system. This life begins in baptism. The old infected root system of Adam is replaced by an entirely new one. The disease-infested roots of our sinful nature are amputated by God’s grace. A new, clean, disease-free, sin-free nature is implanted in us. 1There shall come forth a Rod from the stem of Jesse, And a Branch shall grow out of his roots. 2The Spirit of the LORD shall rest upon Him, The Spirit of wisdom and understanding, The Spirit of counsel and might, The Spirit of knowledge and of the fear of the LORD. (Isaiah 11:1-2) Fr. Coniaris shares a quote from D. Elton Trueblood: “The terrible danger of our time consists in the fact that ours is a cut-flower civilization. Beautiful as cut flowers may be…they will eventually die because they are severed from their sustaining roots. We are trying to maintain the dignity of the individual apart from the deep faith that every man is made in God’s image and is therefore precious.” Those in our own country who deny God yet cling to the concept of human rights are destroying the very roots which nourish and sustain these precious human values. 
Cut off from their Christian roots, human rights and human values wither and die. In conclusion, we must remember that the most important part of the Christian life is the part you cannot see: the root system, which enables us to draw upon the deep resources of God that help us prevail against the many evil pressures of life. It may be years before we realize how deeply our spiritual roots have grown as a result of the strong winds and dry periods in our lives. 7"Blessed is the man who trusts in the LORD, And whose hope is the LORD. 8For he shall be like a tree planted by the waters, Which spreads out its roots by the river, And will not fear when heat comes; But its leaf will be green, And will not be anxious in the year of drought, Nor will cease from yielding fruit. (Jeremiah 17:7-8) Kalo Riziko! Amen!
Babesia microti IgG Antibodies, Serum

Clinical Information
Discusses physiology, pathophysiology, and general clinical aspects, as they relate to a laboratory test.

Babesiosis is a zoonotic infection caused by the protozoan parasite Babesia microti. The infection is acquired by contact with Ixodes ticks carrying the parasite. The deer mouse is the animal reservoir and, overall, the epidemiology of this infection is much like that of Lyme disease. Babesiosis is most prevalent in the Northeast, Upper Midwest, and Pacific Coast of the United States. Infectious forms (sporozoites) are injected during tick bites, and the organism enters the vascular system, where it infects red blood cells (RBCs). In this intraerythrocytic stage it becomes disseminated throughout the reticuloendothelial system. Asexual reproduction occurs in RBCs, and the daughter cells (merozoites) that are formed are liberated on rupture (hemolysis) of the RBC. Most cases of babesiosis are probably subclinical or mild, but the infection can be severe and life threatening, especially in older or asplenic patients. Fever, fatigue, malaise, headache, and other flu-like symptoms occur most commonly. In the most severe cases, hemolysis, acute respiratory distress syndrome, and shock may develop. Patients may have hepatomegaly and splenomegaly. A serologic test can be used as an adjunct in the diagnosis of babesiosis or in seroepidemiologic surveys of the prevalence of the infection in certain populations. Babesiosis is usually diagnosed by observing the organisms in infected RBCs on Giemsa-stained thin films of peripheral blood. Serology may be useful if the parasitemia is too low to detect or if the infection has cleared naturally or following treatment. Serology may also be useful in the follow-up of documented cases of babesiosis or if chronic or persistent infection is suspected. 
A positive result of an indirect fluorescent antibody test (titer > or =1:64) suggests current or previous infection with Babesia microti. In general, the higher the titer, the more likely it is that the patient has an active infection. Patients with documented infections have usually had titers ranging from 1:320 to 1:2,560.

Cautions
Discusses conditions that may cause diagnostic confusion, including improper specimen collection and handling, inappropriate test selection, and interfering substances.

Previous episodes of babesiosis may produce a positive serologic result. In selected cases, documentation of infection may be attempted by animal inoculation or PCR methods (PBAB / Babesia microti, Molecular Detection, PCR, Blood). Performance characteristics have not been established for the following specimen characteristics:

Reference Values
Describes reference intervals and additional information for interpretation of test results. May include intervals based on age and sex when appropriate. Intervals are Mayo-derived, unless otherwise designated. If an interpretive report is provided, the reference value field will state this.

Clinical References
Provides recommendations for further in-depth reading of a clinical nature.

Spach DH, Liles WC, Campbell GL, et al: Tick-borne diseases in the United States. N Engl J Med 1993;329:936-947
New condition but for diagonal corner crease on rear cover. An Eighth Day View: The three books considered in this volume constitute the principal biblical witness to Israel's early history. According to A. Graeme Auld, "they tell the story of how under Joshua the land was first taken by Israel and then apportioned to her various tribes. They tell how after Joshua there was a long period of ups and downs; of religious apostasy within the community and repeated harassment from abroad answered by a series of divinely impelled 'Judges' or 'Deliverers.' They offer some samples of life in Israel, 'in the days when the Judges ruled' or 'when there was not yet a king in Israel.'" Carrying forward brilliantly the pattern established by Barclay's New Testament series, the Daily Study Bible has been extended to cover the entire Old Testament as well. Invaluable for individual devotional study, for group discussion, and for classroom use, the Daily Study Bible provides a useful, reliable, and eminently readable way to discover what the Scriptures were saying then and what God is saying today.
Q80. What do you know of Guru Ramdas? Guru Ramdas (1534-1581) was installed as Guru at the age of forty. He put missionary work on a sound basis and sent masands to different parts of north India to propagate the message of Sikhism. He himself was fond of serving his disciples. Sometimes, he would distribute water or pull the fan for the Sangat. Guru Ramdas was keen on giving a suitable centre of worship to the Sikhs. He developed the land purchased from local landowners and established a new township called Ramdaspur. Many Sikhs settled in the new town because it was situated on the trade routes. The city was subsequently called Amritsar. Guru Ramdas was a perfect example of humility and piety. Once Sri Chand - the son of Guru Nanak - visited him. He asked the Guru in a humorous way why he maintained a long and flowing beard. The Guru gave him an apt reply: "To wipe the dust of your holy feet." Sri Chand was deeply moved by this answer and expressed regret for his impertinence. The Guru's mission spread quickly among both the poor and the rich. Some aristocrats visited Amritsar and became his followers. The Guru turned his friendship with Emperor Akbar to good account by persuading him to relieve distress and to remove the oppressive taxes on non-Muslims. Guru Ramdas laid down a Sikh code of conduct and worship. He prescribed the daily routine of a Sikh in his hymn found on page 305 of Guru Granth Sahib. He composed the Lavan for the Sikh marriage ceremony and other hymns appropriate to certain other functions and festivities. Being a talented musician, he composed hymns in eleven new ragas. Arjan, the youngest son of Guru Ramdas, was devoted to his father. At the bidding of his father, he went to Lahore to attend a marriage. Feeling terribly depressed away from his father, he wrote two urgent poetic letters, full of longing and love for the Guru: "My soul yearns for the sight of the Guru. It bewails like the Chatrik crying for the rain." (A.G. p. 96) These letters were intercepted by his elder brother Prithi Chand. When the third letter reached Guru Ramdas, he immediately called Arjan back. Prithi Chand was keen on the succession, but the Guru tested his sons, and finally his choice fell on Arjan, who was installed as the Fifth Guru in 1581.
The Khthon, a race of small, hairless humanoids, were native to the planet of Trieste. They looked perpetually old and wrinkled. They were a psychic race: able to foretell the future as well as to read the minds of all but strong-willed individuals. The Khthon also knew about the Yssgaroth, whom they called the Old Ones. The Khthon had a primitive society until Trieste and the other planets in its system were colonised; the human settlers enslaved the weaker Khthons. When the Seven Planets became independent in 2396, the Khthons were given more rights but were still the lower class. They were presumably all killed when the Seven Planets were destroyed in 2400. (PROSE: The Pit)
The vascular endothelium is a monolayer of cells that covers the interior of blood vessels and serves both structural and functional roles. The endothelium acts as a barrier, preventing leukocyte adhesion and aggregation, as well as controlling permeability to plasma components. Functionally, the endothelium affects vessel tone. Endothelial dysfunction is an imbalance between the chemical species which regulate vessel tone, thromboresistance, cellular proliferation, and mitosis. It is the first step in atherosclerosis and is associated with coronary artery disease, peripheral artery disease, heart failure, hypertension, and hyperlipidemia. The first demonstration of endothelial dysfunction involved direct infusion of acetylcholine and quantitative coronary angiography. Acetylcholine binds to muscarinic receptors on the endothelial cell surface, leading to an increase of intracellular calcium and increased nitric oxide (NO) production. In subjects with an intact endothelium, vasodilation was observed, while subjects with endothelial damage experienced paradoxical vasoconstriction. There exists a non-invasive, in vivo method for measuring endothelial function in peripheral arteries using high-resolution B-mode ultrasound. The endothelial function of peripheral arteries is closely related to coronary artery function. This technique measures the percent diameter change in the brachial artery during a period of reactive hyperemia following limb ischemia. This technique, known as endothelium-dependent, flow-mediated vasodilation (FMD), has value in clinical research settings. However, a number of physiological and technical issues can affect the accuracy of the results, and appropriate guidelines for the technique have been published. Despite the guidelines, FMD remains heavily operator dependent and presents a steep learning curve.
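The percent diameter change that defines FMD is simple arithmetic over the baseline and peak hyperemic diameters. A minimal sketch (the function name and sample diameters are illustrative, not from the source):

```python
def flow_mediated_dilation(baseline_mm: float, peak_mm: float) -> float:
    """Percent diameter change of the brachial artery during reactive
    hyperemia, relative to the resting baseline diameter."""
    return (peak_mm - baseline_mm) / baseline_mm * 100.0

# e.g. 4.00 mm at rest, 4.28 mm at peak hyperemia -> FMD of about 7 percent
fmd_percent = flow_mediated_dilation(4.00, 4.28)
```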
This article presents a standardized method for measuring FMD in the brachial artery on the upper arm and offers suggestions to reduce intra-operator variability. Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases Institutions: University of Ulm. Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in a multifaceted fashion, i.e. voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS), in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis at an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels.
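The fractional anisotropy (FA) metric used throughout these analyses has a standard closed form in terms of the three eigenvalues of the diffusion tensor; a minimal sketch:

```python
import math

def fractional_anisotropy(l1: float, l2: float, l3: float) -> float:
    """FA from the three diffusion tensor eigenvalues (standard definition):
    0 = perfectly isotropic diffusion, values near 1 = strongly directional
    diffusion, as along coherent white matter tracts."""
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den if den > 0 else 0.0
```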
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis. Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques Matrix-assisted Laser Desorption/Ionization Time of Flight (MALDI-TOF) Mass Spectrometric Analysis of Intact Proteins Larger than 100 kDa Institutions: Université J. Fourier. Effectively determining the masses of proteins is critical to many biological studies (e.g. for structural biology investigations). Accurate mass determination allows one to evaluate the correctness of protein primary sequences, the presence of mutations and/or post-translational modifications, possible protein degradation, sample homogeneity, and the degree of isotope incorporation in case of labelling (e.g., 13C). Electrospray ionization (ESI) mass spectrometry (MS) is widely used for mass determination of denatured proteins, but its efficiency is affected by the composition of the sample buffer. In particular, the presence of salts, detergents, and contaminants severely undermines the effectiveness of protein analysis by ESI-MS. Matrix-assisted laser desorption/ionization (MALDI) MS is an attractive alternative, due to its salt tolerance and the simplicity of data acquisition and interpretation. Moreover, the mass determination of large heterogeneous proteins (bigger than 100 kDa) is easier by MALDI-MS due to the absence of the overlapping high charge state distributions which are present in ESI spectra. Here we present an accessible approach for analyzing proteins larger than 100 kDa by MALDI-time of flight (TOF). We illustrate the advantages of using a mixture of two matrices (i.e.
2,5-dihydroxybenzoic acid and α-cyano-4-hydroxycinnamic acid) and the utility of the thin layer method as an approach for sample deposition. We also discuss the critical role of the matrix and solvent purity, of the standards used for calibration, of the laser energy, and of the acquisition time. Overall, we provide the information necessary for a novice to analyze intact proteins larger than 100 kDa by MALDI-MS. Chemistry, Issue 79, Chemistry Techniques, Analytical, Mass Spectrometry, Analytic Sample Preparation Methods, biochemistry, Analysis of intact proteins, mass spectrometry, matrix-assisted laser desorption ionization, time of flight, sample preparation High Resolution Whole Mount In Situ Hybridization within Zebrafish Embryos to Study Gene Expression and Function Institutions: Royal Victoria Hospital, McGill University Health Centre Research Institute. This article focuses on whole-mount in situ hybridization (WISH) of zebrafish embryos. The WISH technology facilitates the assessment of gene expression both in terms of tissue distribution and developmental stage. Protocols are described for WISH of zebrafish embryos using antisense RNA probes labeled with digoxigenin. Probes are generated by incorporating digoxigenin-linked nucleotides through in vitro transcription of gene templates that have been cloned and linearized. The chorions of embryos harvested at defined developmental stages are removed before incubation with specific probes. Following a washing procedure to remove excess probe, embryos are incubated with anti-digoxigenin antibody conjugated with alkaline phosphatase. By employing a chromogenic substrate for alkaline phosphatase, specific gene expression can be assessed. Depending on the level of gene expression, the entire procedure can be completed within 2-3 days.
Neuroscience, Issue 80, Blood Cells, Endoderm, Motor Neurons, life sciences, animal models, in situ hybridization, morpholino knockdown, progranulin, neuromast, proprotein convertase, anti-sense transcripts, intermediate cell mass, pronephric duct, somites A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent Institutions: University of Maine, Orono. Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g. by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators. Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail with the use of rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells [3-5]. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells [6], can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies [1]. Originally published by Naal et al. [1], we have adapted this degranulation assay for the screening of drugs and toxicants and demonstrate its use here. Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease [7-11], although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function [2].
In an effort to avoid use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resulting concentration is confirmed using UV-Vis spectrophotometry (ε280 = 4,200 L/M/cm) [12]. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation, and more broadly, their allergic potential. Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado. Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen for its particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion.
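The UV-Vis concentration check quoted above for triclosan (ε280 = 4,200 L/M/cm) is an application of the Beer-Lambert law, A = ε·c·l. A minimal sketch using that extinction coefficient; the helper name and the 1 cm default path length are assumptions:

```python
def triclosan_molarity(a280: float, path_cm: float = 1.0,
                       epsilon: float = 4200.0) -> float:
    """Beer-Lambert estimate of triclosan concentration (mol/L) from
    absorbance at 280 nm: c = A / (epsilon * l).

    Uses the extinction coefficient quoted in the text (4,200 L/mol/cm);
    blank correction and the actual cuvette path length are the caller's
    responsibility.
    """
    return a280 / (epsilon * path_cm)

# e.g. a blank-corrected A280 of 0.42 in a 1 cm cuvette -> about 100 uM
conc_molar = triclosan_molarity(0.42)
```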
Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions; it thus detects potential enzyme activity rates as a function of differences in enzyme concentration between samples. Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and from the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics. Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil Isolation and Functional Characterization of Human Ventricular Cardiomyocytes from Fresh Surgical Samples Institutions: University of Florence. Cardiomyocytes from diseased hearts are subjected to complex remodeling processes involving changes in cell structure, excitation-contraction coupling, and membrane ion currents. Those changes are likely to be responsible for the increased arrhythmogenic risk and the contractile alterations leading to systolic and diastolic dysfunction in cardiac patients. However, most information on the alterations of myocyte function in cardiac diseases has come from animal models.
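Converting the fluorescence readings from the soil enzyme assay above into potential activities follows a common general scheme: net (blank- and quench-corrected) fluorescence is scaled by a standard-curve emission coefficient, the assay volumes, the incubation time, and the dry soil mass. The sketch below illustrates that scheme; the variable names and exact form are assumptions, not the protocol's published equation:

```python
def potential_enzyme_activity(net_fluorescence: float,
                              emission_coef: float,
                              buffer_vol_ml: float,
                              well_homogenate_ml: float,
                              incubation_h: float,
                              soil_dry_g: float) -> float:
    """Potential extracellular enzyme activity, nmol per g dry soil per hour.

    net_fluorescence:   blank- and quench-corrected well fluorescence
    emission_coef:      fluorescence units per nmol of free dye (MUB/MUC),
                        from a standard curve run in the same soil slurry
    buffer_vol_ml:      total slurry volume per sample
    well_homogenate_ml: slurry volume dispensed per well
    incubation_h:       incubation time in hours
    soil_dry_g:         dry-mass equivalent of soil in the slurry
    """
    return (net_fluorescence * buffer_vol_ml) / (
        emission_coef * well_homogenate_ml * incubation_h * soil_dry_g)
```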
Here we describe and validate a protocol to isolate viable myocytes from small surgical samples of ventricular myocardium from patients undergoing cardiac surgery operations. The protocol is described in detail. Electrophysiological and intracellular calcium measurements are reported to demonstrate the feasibility of a number of single cell measurements in human ventricular cardiomyocytes obtained with this method. The protocol reported here can be useful for future investigations of the cellular and molecular basis of functional alterations of the human heart in the presence of different cardiac diseases. Further, this method can be used to identify novel therapeutic targets at the cellular level and to test the effectiveness of new compounds on human cardiomyocytes, with direct translational value. Medicine, Issue 86, cardiology, cardiac cells, electrophysiology, excitation-contraction coupling, action potential, calcium, myocardium, hypertrophic cardiomyopathy, cardiac patients, cardiac disease Anti-Nuclear Antibody Screening Using HEp-2 Cells Institutions: INOVA Diagnostics, Inc. The American College of Rheumatology position statement on ANA testing stipulates the use of IIF as the gold standard method for ANA screening [1]. Although IIF is an excellent screening test in expert hands, the technical difficulties of processing and reading IIF slides – such as the labor-intensive slide processing, manual reading, the need for experienced, trained technologists, and the use of a dark room – make the IIF method difficult to fit into the workflow of modern, automated laboratories. The first and crucial step towards high quality ANA screening is careful slide processing. This procedure is labor intensive, and requires full understanding of the process, as well as attention to details and experience.
Slide reading is performed by fluorescent microscopy in dark rooms, and is done by trained technologists who are familiar with the various patterns, in the context of the cell cycle and the morphology of interphase and dividing cells. Given that IIF is the first-line screening tool for SARD, understanding the steps to correctly perform this technique is critical. Recently, digital imaging systems have been developed for the automated reading of IIF slides. These systems, such as the NOVA View Automated Fluorescent Microscope, are designed to streamline the routine IIF workflow. NOVA View acquires and stores high resolution digital images of the wells, thereby separating image acquisition from interpretation; images are viewed and interpreted on high resolution computer monitors. It stores images for future reference and supports the operator's interpretation by providing fluorescent light intensity data on the images. It also preliminarily categorizes results as positive or negative, and provides pattern recognition for positive samples. In summary, it eliminates the need for a darkroom, and automates and streamlines the IIF reading/interpretation workflow. Most importantly, it increases consistency between readers and readings. Moreover, with the use of barcoded slides, transcription errors are eliminated by providing sample traceability and positive patient identification. This results in increased patient data integrity and safety. The overall goal of this video is to demonstrate the IIF procedure, including slide processing, identification of common IIF patterns, and the introduction of new advancements to simplify and harmonize this technique. Bioengineering, Issue 88, Antinuclear antibody (ANA), HEp-2, indirect immunofluorescence (IIF), systemic autoimmune rheumatic disease (SARD), dense fine speckled (DFS70) Assessment of Vascular Function in Patients With Chronic Kidney Disease Institutions: University of Colorado, Denver, University of Colorado, Boulder.
Patients with chronic kidney disease (CKD) have significantly increased risk of cardiovascular disease (CVD) compared to the general population, and this is only partially explained by traditional CVD risk factors. Vascular dysfunction is an important non-traditional risk factor, characterized by vascular endothelial dysfunction (most commonly assessed as impaired endothelium-dependent dilation [EDD]) and stiffening of the large elastic arteries. While various techniques exist to assess EDD and large elastic artery stiffness, the most commonly used are brachial artery flow-mediated dilation (FMDBA) and aortic pulse-wave velocity (aPWV), respectively. Both of these noninvasive measures of vascular dysfunction are independent predictors of future cardiovascular events in patients with and without kidney disease. Patients with CKD demonstrate both impaired FMDBA and increased aPWV. While the exact mechanisms by which vascular dysfunction develops in CKD are incompletely understood, increased oxidative stress and a subsequent reduction in nitric oxide (NO) bioavailability are important contributors. Cellular changes in oxidative stress can be assessed by collecting vascular endothelial cells from the antecubital vein and measuring protein expression of markers of oxidative stress using immunofluorescence. We provide here a discussion of these methods to measure FMDBA, aPWV, and vascular endothelial cell protein expression. Medicine, Issue 88, chronic kidney disease, endothelial cells, flow-mediated dilation, immunofluorescence, oxidative stress, pulse-wave velocity An Engulfment Assay: A Protocol to Assess Interactions Between CNS Phagocytes and Neurons Institutions: Boston Children's Hospital, Harvard Medical School. Phagocytosis is a process in which a cell engulfs material (entire cells, parts of a cell, debris, etc.) in its surrounding extracellular environment and subsequently digests this material, commonly through lysosomal degradation.
Microglia are the resident immune cells of the central nervous system (CNS) whose phagocytic function has been described in a broad range of conditions, from neurodegenerative disease (e.g., beta-amyloid clearance in Alzheimer's disease) to development of the healthy brain. The following protocol is an engulfment assay developed to visualize and quantify microglia-mediated engulfment of presynaptic inputs in the developing mouse retinogeniculate system [7]. While this assay was used to assess microglia function in this particular context, a similar approach may be used to assess other phagocytes throughout the brain (e.g., astrocytes) and the rest of the body (e.g., peripheral macrophages), as well as other contexts in which synaptic remodeling occurs. Neuroscience, Issue 88, Central Nervous System (CNS), Engulfment, Phagocytosis, Microglia, Synapse, Anterograde Tracing, Presynaptic Input, Retinogeniculate System Voluntary Breath-hold Technique for Reducing Heart Dose in Left Breast Radiotherapy Institutions: Royal Marsden NHS Foundation Trust, University of Surrey, Institute of Cancer Research, Sutton, UK. Breath-holding techniques reduce the amount of radiation received by cardiac structures during tangential-field left breast radiotherapy. With these techniques, patients hold their breath while radiotherapy is delivered, pushing the heart down and away from the radiotherapy field. Despite clear dosimetric benefits, these techniques are not yet in widespread use. One reason for this is that commercially available solutions require specialist equipment, necessitating not only significant capital investment, but often also incurring ongoing costs such as a need for daily disposable mouthpieces. The voluntary breath-hold technique described here does not require any additional specialist equipment.
All breath-holding techniques require a surrogate to monitor breath-hold consistency and whether breath-hold is maintained. Voluntary breath-hold uses the distance moved by the anterior and lateral reference marks (tattoos) away from the treatment room lasers in breath-hold to monitor consistency at CT-planning and treatment setup. Light fields are then used to monitor breath-hold consistency prior to and during radiotherapy delivery. Medicine, Issue 89, breast, radiotherapy, heart, cardiac dose, breath-hold The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism Institutions: University of Montréal, McGill University, University of Minnesota. Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood [33]. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner [41]. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration [34]. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of primary motor cortices [27,30,31]. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke Fundus Photography as a Convenient Tool to Study Microvascular Responses to Cardiovascular Disease Risk Factors in Epidemiological Studies Institutions: Flemish Institute for Technological Research (VITO), Hasselt University, Leuven University. The microcirculation consists of blood vessels with diameters less than 150 µm. It makes up a large part of the circulatory system and plays an important role in maintaining cardiovascular health. The retina is a tissue that lines the interior of the eye, and it is the only tissue that allows for a non-invasive analysis of the microvasculature. Nowadays, high-quality fundus images can be acquired using digital cameras. Retinal images can be collected in 5 min or less, even without dilatation of the pupils. This unobtrusive and fast procedure for visualizing the microcirculation is attractive for epidemiological studies and for monitoring cardiovascular health from an early age up to old age. Systemic diseases that affect the circulation can result in progressive morphological changes in the retinal vasculature. For example, changes in the vessel calibers of retinal arteries and veins have been associated with hypertension, atherosclerosis, and increased risk of stroke and myocardial infarction. The vessel widths are derived using image analysis software, and the widths of the six largest arteries and veins are summarized in the Central Retinal Arteriolar Equivalent (CRAE) and the Central Retinal Venular Equivalent (CRVE). These features have been shown to be useful for studying the impact of modifiable lifestyle and environmental cardiovascular disease risk factors. The procedures to acquire fundus images and the analysis steps to obtain CRAE and CRVE are described.
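Summary calibers such as CRAE and CRVE are commonly computed with the revised Knudtson formulas, which iteratively pair the largest and smallest of the measured vessel widths and combine each pair as k·sqrt(w_a² + w_b²), with k about 0.88 for arterioles and about 0.95 for venules. The sketch below illustrates that pairing scheme only; it is a simplification under stated assumptions, and validated image analysis software should be used in practice:

```python
import math

def summarize_calibers(widths, k):
    """Iteratively combine vessel calibers into a single summary value
    (Knudtson-style pairing: k ~ 0.88 for arterioles -> CRAE,
    k ~ 0.95 for venules -> CRVE). Illustrative sketch, not the
    validated software procedure."""
    w = sorted(widths)
    while len(w) > 1:
        paired = []
        while len(w) > 1:
            a, b = w.pop(0), w.pop()  # pair smallest with largest
            paired.append(k * math.sqrt(a * a + b * b))
        if w:                          # odd count: carry the middle value
            paired.append(w.pop())
        w = sorted(paired)
    return w[0]

# e.g. the six largest arteriolar widths (in measurement units) -> CRAE-like value
crae_like = summarize_calibers([98.0, 101.0, 95.0, 110.0, 104.0, 99.0], 0.88)
```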
Coefficients of variation of repeated measures of CRAE and CRVE are less than 2% and within-rater reliability is very high. Using a panel study, the rapid response of the retinal vessel calibers to short-term changes in particulate air pollution, a known risk factor for cardiovascular mortality and morbidity, is reported. In conclusion, retinal imaging is proposed as a convenient and instrumental tool for epidemiological studies to study microvascular responses to cardiovascular disease risk factors. Medicine, Issue 92, retina, microvasculature, image analysis, Central Retinal Arteriolar Equivalent, Central Retinal Venular Equivalent, air pollution, particulate matter, black carbon Implantation of the Syncardia Total Artificial Heart Institutions: Virginia Commonwealth University, Virginia Commonwealth University. With advances in technology, the use of mechanical circulatory support devices for end stage heart failure has rapidly increased. The vast majority of such patients are generally well served by left ventricular assist devices (LVADs). However, a subset of patients with late stage biventricular failure or other significant anatomic lesions are not adequately treated by isolated left ventricular mechanical support. Examples of concomitant cardiac pathology that may be better treated by resection and TAH replacement includes: post infarction ventricular septal defect, aortic root aneurysm / dissection, cardiac allograft failure, massive ventricular thrombus, refractory malignant arrhythmias (independent of filling pressures), hypertrophic / restrictive cardiomyopathy, and complex congenital heart disease. Patients often present with cardiogenic shock and multi system organ dysfunction. Excision of both ventricles and orthotopic replacement with a total artificial heart (TAH) is an effective, albeit extreme, therapy for rapid restoration of blood flow and resuscitation. 
Perioperative management is focused on end organ resuscitation and physical rehabilitation. In addition to the usual concerns of infection, bleeding, and thromboembolism common to all mechanically supported patients, TAH patients face unique risks with regard to renal failure and anemia. Supplementation of the abrupt decrease in brain natriuretic peptide following ventriculectomy appears to have protective renal effects. Anemia following TAH implantation can be profound and persistent. Nonetheless, the anemia is generally well tolerated, and transfusions are limited to avoid HLA sensitization. Until recently, TAH patients were confined as inpatients tethered to a 500 lb pneumatic console driver. Recent introduction of a backpack-sized portable driver (currently under clinical trial) has enabled patients to be discharged home and even return to work. Despite the profound presentation of these sick patients, there is a 79-87% success rate in bridging to transplantation. Medicine, Issue 89, mechanical circulatory support, total artificial heart, biventricular failure, operative techniques Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking Institutions: Yale University, Massachusetts General Hospital, University of California, Irvine. We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters which cannot capture short-term fluctuations in neurotransmitter release.
Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique is comprised of five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Pre-processing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques

Protocol for Relative Hydrodynamic Assessment of Tri-leaflet Polymer Valves
Institutions: Florida International University, University of Florida, University of Florida, Jeddah, Saudi Arabia.
Limitations of currently available prosthetic valves, xenografts, and homografts have prompted a recent resurgence of developments in the area of tri-leaflet polymer valve prostheses.
However, identification of a protocol for initial assessment of polymer valve hydrodynamic functionality is paramount during the early stages of the design process. Traditional in vitro pulse duplicator systems are not configured to accommodate flexible tri-leaflet materials; in addition, assessment of polymer valve functionality needs to be made in a relative context to native and prosthetic heart valves under identical test conditions so that variability in measurements from different instruments can be avoided. Accordingly, we conducted hydrodynamic assessment of i) native (n = 4, mean diameter, D = 20 mm), ii) bi-leaflet mechanical (n = 2, D = 23 mm) and iii) polymer valves (n = 5, D = 22 mm) via the use of a commercially available pulse duplicator system (ViVitro Labs Inc, Victoria, BC) that was modified to accommodate tri-leaflet valve geometries. Tri-leaflet silicone valves developed at the University of Florida comprised the polymer valve group. A mixture in the ratio of 35:65 glycerin to water was used to mimic blood physical properties. Instantaneous flow rate was measured at the interface of the left ventricle and aortic units while pressure was recorded at the ventricular and aortic positions. Bi-leaflet and native valve data from the literature were used to validate flow and pressure readings. The following hydrodynamic metrics were reported: forward flow pressure drop, aortic root mean square forward flow rate, aortic closing, leakage and regurgitant volume, and transaortic closing, leakage, and total energy losses. Representative results indicated that hydrodynamic metrics from the three valve groups could be successfully obtained by incorporating a custom-built assembly into a commercially available pulse duplicator system and subsequently, objectively compared to provide insights on functional aspects of polymer valve design.
Bioengineering, Issue 80, Cardiovascular Diseases, Circulatory and Respiratory Physiological Phenomena, Fluid Mechanics and Thermodynamics, Mechanical Engineering, valve disease, valve replacement, polymer valves, pulse duplicator, modification, tri-leaflet geometries, hydrodynamic studies, relative assessment, medicine, bioengineering, physiology

Intravitreous Injection for Establishing Ocular Diseases Model
Institutions: The University of Hong Kong - HKU.
Intravitreous injection is a widely used technique in visual sciences research. It can be used to establish animal models of ocular diseases or for direct application of local treatment. This video introduces how to use simple and inexpensive tools to perform the intravitreous injection procedure. A 1 ml syringe is used instead of a Hamilton syringe. Practical tips are given for how to make appropriate injection needles using glass pipettes with perfect tips, and how to easily connect the syringe needle and the glass pipette tightly together. To conduct a good intravitreous injection, there are three aspects to be observed: 1) the injection site should not disrupt the retina structure; 2) bleeding should be avoided to reduce the risk of infection; 3) the lens should be untouched to avoid traumatic cataract. In brief, the most important point is to minimize the disruption of normal ocular structure. To avoid disrupting the retina, the superior nasal region of the rat eye was chosen. Also, the puncture point of the needle was at the pars plana, which is about 1.5 mm from the limbal region of the rat eye. A small amount of vitreous is gently pushed out through the puncture hole to reduce the intraocular pressure before injection. With the 45° injection angle, it is less likely to cause traumatic cataract in the rat eye, thus avoiding related complications and influence from lenticular factors. In this operation, there was no cutting of the conjunctiva or ocular muscle, and no bleeding.
With quick and minor injury, a successful intravitreous injection can be done in minutes. The injection set outlined in this particular protocol is specific for intravitreous injection. However, the methods and materials presented here can also be used for other injection procedures in drug delivery to the brain, spinal cord or other organs in small mammals.
Neuroscience, Issue 8, eye, injection, rat

Strategies for Study of Neuroprotection from Cold-preconditioning
Institutions: The University of Chicago Medical Center.
Neurological injury is a frequent cause of morbidity and mortality from general anesthesia and related surgical procedures that could be alleviated by development of effective, easy to administer and safe preconditioning treatments. We seek to define the neural immune signaling responsible for cold-preconditioning as a means to identify novel targets for therapeutics development to protect the brain before injury onset. Low-level pro-inflammatory mediator signaling changes over time are essential for cold-preconditioning neuroprotection. This signaling is consistent with the basic tenets of physiological conditioning hormesis, which require that irritative stimuli reach a threshold magnitude with sufficient time for adaptation to the stimuli for protection to become evident. Accordingly, delineation of the immune signaling involved in cold-preconditioning neuroprotection requires that biological systems and experimental manipulations plus technical capacities are highly reproducible and sensitive. Our approach is to use hippocampal slice cultures as an in vitro model that closely reflects their in vivo counterparts with multi-synaptic neural networks influenced by mature and quiescent macroglia / microglia. This glial state is particularly important for microglia since they are the principal source of cytokines, which are operative in the femtomolar range.
Also, slice cultures can be maintained in vitro for several weeks, which is sufficient time to evoke activating stimuli and assess adaptive responses. Finally, environmental conditions can be accurately controlled using slice cultures so that cytokine signaling of cold-preconditioning can be measured, mimicked, and modulated to dissect the critical node aspects. Cytokine signaling system analyses require the use of sensitive and reproducible multiplexed techniques. We use quantitative PCR for TNF-α to screen for microglial activation followed by real-time qPCR array screening to assess tissue-wide cytokine changes. The latter is the most sensitive and reproducible means to measure multiple cytokine system signaling changes simultaneously. Significant changes are confirmed with targeted qPCR and then protein detection. We probe for tissue-based cytokine protein changes using multiplexed microsphere flow cytometric assays using Luminex technology. Cell-specific cytokine production is determined with double-label immunohistochemistry. Taken together, this brain tissue preparation and style of use, coupled to the suggested investigative strategies, may be an optimal approach for identifying potential targets for the development of novel therapeutics that could mimic the advantages of cold-preconditioning.
Neuroscience, Issue 43, innate immunity, hormesis, microglia, hippocampus, slice culture, immunohistochemistry, neural-immune, gene expression, real-time PCR

Derivation of Thymic Lymphoma T-cell Lines from Atm-/- and p53-/- Mice
Institutions: Cornell University.
Established cell lines are a critical research tool that can reduce the use of laboratory animals in research. Certain strains of genetically modified mice, such as Atm-/-, consistently develop thymic lymphoma early in life 1,2, and thus can serve as a reliable source for derivation of murine T-cell lines.
Here we present a detailed protocol for the development of established murine thymic lymphoma T-cell lines without the need to add interleukins as described in previous protocols 1,3. Tumors were harvested from mice aged three to six months, at the earliest indication of visible tumors based on the observation of hunched posture, labored breathing, poor grooming and wasting in a susceptible strain 1,4. We have successfully established several T-cell lines using this protocol and inbred strains of Atm-/- mice. We further demonstrate that more than 90% of the established T-cell population expresses CD3, CD4 and CD8. Consistent with stably established cell lines, the T-cells generated by using the present protocol have been passaged for over a year.
Immunology, Issue 50, mouse, thymic lymphoma, Atm, p53, T-cell lines

Analyzing the Function of Small GTPases by Microinjection of Plasmids into Polarized Epithelial Cells
Institutions: Northwestern University.
Epithelial cells polarize their plasma membrane into biochemically and functionally distinct apical and basolateral domains, where the apical domain faces the 'free' surface and the basolateral membrane is in contact with the substrate and neighboring cells. Both membrane domains are separated by tight junctions, which form a diffusion barrier. Apical-basolateral polarization can be recapitulated successfully in culture when epithelial cells such as Madin-Darby Canine Kidney (MDCK) cells are seeded at high density on polycarbonate filters and cultured for several days 1,2. Establishment and maintenance of cell polarity is regulated by an array of small GTPases of the Ras superfamily such as RalA, Cdc42, Rab8, Rab10 and Rab13 3-7. Like all GTPases, these proteins cycle between an inactive GDP-bound state and an active GTP-bound state. Specific mutations in the nucleotide binding regions interfere with this cycling 8.
For example, Rab13T22N is permanently locked in the GDP-form and thus dubbed 'dominant negative', whereas Rab13Q67L can no longer hydrolyze GTP and is thus locked in a 'dominant active' state 7. To analyze their function in cells, both dominant negative and dominant active alleles of GTPases are typically expressed at high levels to interfere with the function of the endogenous proteins 9. An elegant way to achieve high levels of overexpression in a short amount of time is to introduce the plasmids encoding the relevant proteins directly into the nuclei of polarized cells grown on filter supports using the microinjection technique. This is often combined with the co-injection of reporter plasmids that encode plasma membrane receptors that are specifically sorted to the apical or basolateral domain. A cargo frequently used to analyze cargo sorting to the basolateral domain is a temperature-sensitive allele of the vesicular stomatitis virus glycoprotein (VSVGts045) 10. This protein cannot fold properly at 39°C and will thus be retained in the endoplasmic reticulum (ER) while the regulatory protein of interest is assembled in the cytosol. A shift to 31°C will then allow VSVGts045 to fold properly, leave the ER and travel to the plasma membrane 11. This chase is typically performed in the presence of cycloheximide to prevent further protein synthesis, leading to cleaner results. Here we describe in detail the procedure of microinjecting plasmids into polarized cells and subsequent incubations, including temperature shifts, that allow a comprehensive analysis of regulatory proteins involved in basolateral sorting.
Cellular Biology, Issue 51, Epithelial cells, cell polarity, microinjection, basolateral sorting, MDCK

Diagnosis of Ecto- and Endoparasites in Laboratory Rats and Mice
Institutions: Charles River, Charles River, University of Washington.
Internal and external parasites remain a significant concern in laboratory rodent facilities, and many research facilities harbor some parasitized animals. Before embarking on an examination of animals for parasites, two things should be considered. One: what use will be made of the information collected, and two: which test is the most appropriate. Knowing that animals are parasitized may be something that the facility accepts, but there is often a need to treat animals and then to determine the efficacy of treatment. Parasites may be detected in animals through various techniques, including samples taken from live or euthanized animals. Historically, the tests with the greatest diagnostic sensitivity required euthanasia of the animal, although PCR has allowed high-sensitivity testing for several types of parasite. This article demonstrates procedures for the detection of endo- and ectoparasites in mice and rats. The same procedures are applicable to other rodents, although the species of parasites found will differ.
Immunology, Issue 55, rat, mouse, endoparasite, ectoparasite, diagnostics, mites, pinworm, helminths, protozoa, health monitoring

Surgical Procedures for a Rat Model of Partial Orthotopic Liver Transplantation with Hepatic Arterial Reconstruction
Institutions: RWTH-Aachen University, Kyoto University.
Orthotopic liver transplantation (OLT) in rats using a whole or partial graft is an indispensable experimental model for transplantation research, such as studies on graft preservation and ischemia-reperfusion injury 1,2, immunological responses 3,4, hemodynamics 5,6, and small-for-size syndrome 7. The rat OLT is among the most difficult animal models in experimental surgery and demands advanced microsurgical skills that take a long time to learn. Consequently, the use of this model has been limited.
Since the reliability and reproducibility of results are key components of the experiments in which such complex animal models are used, it is essential for surgeons who are involved in rat OLT to be trained in well-standardized and sophisticated procedures for this model. While various techniques and modifications of OLT in rats have been reported 8 since the first model was described by Lee et al. 9 in 1973, the elimination of the hepatic arterial reconstruction 10 and the introduction of the cuff anastomosis technique by Kamada et al. 11 were major advancements in this model, because they simplified the reconstruction procedures to a great degree. In the model by Kamada et al., the hepatic rearterialization was also eliminated. Since rats could survive without hepatic arterial flow after liver transplantation, there was considerable controversy over the value of hepatic arterialization. However, the physiological superiority of the arterialized model has been increasingly acknowledged, especially in terms of preserving the bile duct system 8,12 and the liver integrity 8,13,14. In this article, we present detailed surgical procedures for a rat model of OLT with hepatic arterial reconstruction using a 50% partial graft after ex vivo liver resection. The reconstruction procedures for each vessel and the bile duct are performed by the following methods: a 7-0 polypropylene continuous suture for the supra- and infrahepatic vena cava; a cuff technique for the portal vein; and a stent technique for the hepatic artery and the bile duct.
Medicine, Issue 73, Biomedical Engineering, Anatomy, Physiology, Immunology, Surgery, liver transplantation, liver, hepatic, partial, orthotopic, split, rat, graft, transplantation, microsurgery, procedure, clinical, technique, artery, arterialization, arterialized, anastomosis, reperfusion, animal model

Revealing Dynamic Processes of Materials in Liquids Using Liquid Cell Transmission Electron Microscopy
Institutions: Lawrence Berkeley National Laboratory.
The recent development of in situ transmission electron microscopy, which allows imaging through liquids with high spatial resolution, has attracted significant interest across the research fields of materials science, physics, chemistry and biology. The key enabling technology is a liquid cell. We fabricate liquid cells with thin viewing windows through a sequential microfabrication process, including silicon nitride membrane deposition, photolithographic patterning, wafer etching, cell bonding, etc. A liquid cell with the dimensions of a regular TEM grid can fit in any standard TEM sample holder. About 100 nanoliters of reaction solution is loaded into the reservoirs and about 30 picoliters of liquid is drawn into the viewing windows by capillary force. Subsequently, the cell is sealed and loaded into a microscope for in situ imaging. Inside the TEM, the electron beam goes through the thin liquid layer sandwiched between two silicon nitride membranes. Dynamic processes of nanoparticles in liquids, such as nucleation and growth of nanocrystals, diffusion and assembly of nanoparticles, etc., have been imaged in real time with sub-nanometer resolution. We have also applied this method to other research areas, e.g., imaging proteins in water. Liquid cell TEM is poised to play a major role in revealing dynamic processes of materials in their working environments. It may also have high impact in the study of biological processes in their native environment.
Materials Science, Issue 70, Chemical Engineering, Chemistry, Physics, Engineering, Life sciences, Liquid cell, Transmission Electron Microscopy, TEM, In situ TEM, Single nanoparticle trajectory, dynamic imaging, nanocrystals

Phase Contrast and Differential Interference Contrast (DIC) Microscopy
Institutions: University of Texas Health Science Center at San Antonio (UTHSCSA).
Phase-contrast microscopy is often used to produce contrast for transparent, non-light-absorbing, biological specimens. The technique was discovered by Zernike, in 1942, who received the Nobel prize for his achievement. DIC microscopy, introduced in the late 1960s, has been popular in biomedical research because it highlights edges of specimen structural detail, provides high-resolution optical sections of thick specimens including tissue cells, eggs, and embryos, and does not suffer from the phase halos typical of phase-contrast images. This protocol highlights the principles and practical applications of these microscopy techniques.
Basic protocols, Issue 18, Current Protocols Wiley, Microscopy, Phase Contrast, Differential Interference Contrast

Reaggregate Thymus Cultures
Institutions: University of Birmingham.
Stromal cells within lymphoid tissues are organized into three-dimensional structures that provide a scaffold that is thought to control the migration and development of haemopoietic cells. Importantly, the maintenance of this three-dimensional organization appears to be critical for normal stromal cell function, with two-dimensional monolayer cultures often being shown to be capable of supporting only individual fragments of lymphoid tissue function. In the thymus, complex networks of cortical and medullary epithelial cells act as a framework that controls the recruitment, proliferation, differentiation and survival of lymphoid progenitors as they undergo the multi-stage process of intrathymic T-cell development.
Understanding the functional role of individual stromal compartments in the thymus is essential in determining how the thymus imposes self/non-self discrimination. Here we describe a technique in which we exploit the plasticity of fetal tissues to re-associate into intact three-dimensional structures in vitro, following their enzymatic disaggregation. The dissociation of fetal thymus lobes into heterogeneous cellular mixtures, followed by their separation into individual cellular components, is then combined with the in vitro re-association of these desired cell types into three-dimensional reaggregate structures at defined ratios, thereby providing an opportunity to investigate particular aspects of T-cell development under defined cellular conditions. (This article is based on work first reported in Methods in Molecular Biology 2007, Vol. 380, pages 185-196.)
Immunology, Issue 18, Springer Protocols, Thymus, 2-dGuo, Thymus Organ Cultures, Immune Tolerance, Positive and Negative Selection, Lymphoid Development

Preparation of 2-dGuo-Treated Thymus Organ Cultures
Institutions: University of Birmingham.
In the thymus, interactions between developing T-cell precursors and stromal cells that include cortical and medullary epithelial cells are known to play a key role in the development of a functionally competent T-cell pool. However, the complexity of T-cell development in the thymus in vivo can limit analysis of individual cellular components and particular stages of development. In vitro culture systems provide a readily accessible means to study multiple complex cellular processes. Thymus organ culture systems represent a widely used approach to study intrathymic development of T-cells under defined conditions in vitro. Here we describe a system in which mouse embryonic thymus lobes can be depleted of endogenous haemopoietic elements by prior organ culture in 2-deoxyguanosine, a compound that is selectively toxic to haemopoietic cells.
As well as providing a readily accessible source of thymic stromal cells to investigate the role of thymic microenvironments in the development and selection of T-cells, this technique also underpins further experimental approaches that include the reconstitution of alymphoid thymus lobes in vitro with defined haemopoietic elements, the transplantation of alymphoid thymuses into recipient mice, and the formation of reaggregate thymus organ cultures. (This article is based on work first reported in Methods in Molecular Biology 2007, Vol. 380, pages 185-196.)
Immunology, Issue 18, Springer Protocols, Thymus, 2-dGuo, Thymus Organ Cultures, Immune Tolerance, Positive and Negative Selection, Lymphoid Development

Monitoring Plant Hormones During Stress Responses
Institutions: University of Texas.
Plant hormones and related signaling compounds play an important role in the regulation of plant responses to various environmental stimuli and stresses. Among the most severe stresses are insect herbivory, pathogen infection, and drought stress. For each of these stresses a specific set of hormones and/or combinations thereof are known to fine-tune the responses, thereby ensuring the plant's survival. The major hormones involved in the regulation of these responses are jasmonic acid (JA), salicylic acid (SA), and abscisic acid (ABA). To better understand the role of individual hormones as well as their potential interaction during these responses it is necessary to monitor changes in their abundance in a temporal as well as in a spatial fashion. For the easy, sensitive, and reproducible quantification of these and other signaling compounds we developed a method based on vapor phase extraction and gas chromatography/mass spectrometry (GC/MS) analysis (1, 2, 3, 4).
After extracting these compounds from the plant tissue with acidic aqueous 1-propanol mixed with dichloromethane, the carboxylic acid-containing compounds are methylated, volatilized under heat, and collected on a polymeric absorbent. After elution into a sample vial, the analytes are separated by gas chromatography and detected by chemical ionization mass spectrometry. The use of appropriate internal standards then allows for simple quantification by relating the peak areas of analyte and internal standard.
Plant Biology, Issue 28, Jasmonic acid, salicylic acid, abscisic acid, plant hormones, GC/MS, vapor phase extraction

Heterotopic and Orthotopic Tracheal Transplantation in Mice Used as Models to Study the Development of Obliterative Airway Disease
Institutions: University Heart Center Hamburg, University Hospital Hamburg, Stanford University School of Medicine.
Obliterative airway disease (OAD) is the major complication after lung transplantation that limits long-term survival (1-7). To study the pathophysiology, treatment and prevention of OAD, different animal models of tracheal transplantation in rodents have been developed (1-7). Here, we use two established models of tracheal transplantation, the heterotopic and the orthotopic model, and demonstrate their advantages and limitations. For the heterotopic model, the donor trachea is wrapped into the greater omentum of the recipient, whereas the donor trachea is anastomosed by end-to-end anastomosis in the orthotopic model. In both models, the development of obliterative lesions histologically similar to clinical OAD has been demonstrated (1-7). This video shows how to perform both the heterotopic and the orthotopic tracheal transplantation technique in mice, and compares the time course of OAD development in both models using histology.
Immunology, Issue 35, orthotopic tracheal transplantation, heterotopic tracheal transplantation, obliterative airway disease, mice, luminal obliteration, histology

Preparing E18 Cortical Rat Neurons for Compartmentalization in a Microfluidic Device
Institutions: University of California, Irvine (UCI), University of California, Irvine (UCI), University of California, Irvine (UCI).
In this video, we demonstrate the preparation of E18 cortical rat neurons. E18 cortical rat neurons are obtained from E18 fetal rat cortex previously dissected and prepared. The E18 cortex is, upon dissection, immediately dissociated into individual neurons. It is possible to store E18 cortex in Hibernate E buffer containing B27 at 4°C for up to a week before the dissociation is performed; however, there will be a drop in cell viability. Typically we obtain our E18 cortex fresh. It is transported to the lab in ice-cold calcium-free, magnesium-free dissection buffer (CMFM). Upon arrival, trypsin is added to the cortex to a final concentration of 0.125%. The cortex is then incubated at 37°C for 8 minutes. DMEM containing 10% FBS is added to the cortex to stop the reaction. The cortex is then centrifuged at 2500 rpm for 2 minutes. The supernatant is removed and 2 ml of Neural Basal Media (NBM) containing 2% B27 (vol/vol) and 0.25% Glutamax (vol/vol) is added to the cortex, which is then re-suspended by pipetting up and down. Next, the cortex is triturated with previously fire-polished glass pipettes, each with a successively smaller opening. After triturating, the cortex is once again centrifuged at 2500 rpm for 2 minutes. The supernatant is then removed and the cortex pellet re-suspended with 2 ml of NBM containing B27 and Glutamax. The cell suspension is then passed through a 40 μm nylon cell strainer. Next the cells are counted. The neurons are now ready for loading into the neuron microfluidic device.
Neuroscience, Issue 8, Biomedical Engineering, Neurons, Axons, Axonal Regeneration, Neuronal Culture, Cell Culture
Clopidogrel bisulfate, also known as Plavix, is a drug used to retard the platelet aggregation of cells, which in turn prevents blood clotting inside the body. This sort of medication often needs to be administered around different kinds of surgeries and other medical practices. However, there are some side effects of the medication, and it can have an impact on certain pre-existing conditions, including pulmonary embolism. Pulmonary embolism is a medical condition in which a blockage forms in the artery that transports blood directly from the heart into the lungs (the pulmonary artery). Typically, this occurs when a blood clot inside another area of the body, usually the legs or arms, breaks free and becomes lodged in the pulmonary artery. It is very important for pulmonary embolism to be treated quickly and effectively, because if the blockage continues to move forward and lodges in the heart or the lungs, it can cause extremely serious health conditions, including death. Plavix is essentially a specialized blood thinner that utilizes specific medications to prevent the clotting of the blood. According to Vein Experts, there are some very specific side effects that can occur, especially in someone who suffers from pulmonary embolism. First, bleeding from different areas of the body is a common occurrence. This is because the blood is no longer able to clot, so instead of repairing damaged areas of the body, it simply flows through them. Any sort of cut or scratch can bleed profusely, and hemorrhages into the brain can occur, although this is not likely (only 1 percent of patients experience any sort of serious bleeding during the use of a blood thinner such as Plavix). As WebMD points out, Plavix is often used in the event of pulmonary embolism in order to break up the existing blood clot.
The blood thinner is designed to break down the clot so the blood is able to flow smoothly like the rest of the blood cells in the body. Generally, though, before Plavix or another blood thinner is used, the doctor is going to investigate to determine exactly how large the clot actually is. If the clot is large enough, it is possible the blood thinner will not break the entire clot up in time before it reaches the heart or the lungs, which in turn could cause a serious health condition. When the clot is excessively large, it might prove necessary to remove it through surgical means. However, most pulmonary embolisms can be corrected through the use of a blood thinner such as Plavix. Plavix should not have many negative side effects in individuals who suffer from blood clots or other issues such as pulmonary embolism. Typically, the only likely side effect is bleeding throughout the body (typically internal), although this is not always a problem, as only a very small percentage of individuals actually experience this sort of issue.
General Studies Self-Designed Major
A General Studies major allows you to build a broad-based education. It teaches you to think critically, communicate effectively, and pull together knowledge from many disciplines – skills you'll need to be successful in any career. It demonstrates to employers and peers alike that you have the self-discipline and intelligence to work through a university-level program in a variety of subject areas. It can serve as a prerequisite for a professional career, or as a stepping-stone to a college degree in another discipline. It can be the most personally rewarding major available, precisely because it is so broad, and it embodies the central philosophy of a liberal arts education: that learning to think critically and to read and write well should help anyone in any career.
This year is set to be the hottest year ever recorded globally, beating 2015's record temperatures, the World Meteorological Organisation has said. Global temperatures this year are approximately 1.2C (2.16F) above pre-industrial levels and 0.88C (1.58F) above the average for 1961-1990, which the WMO uses as a reference period, provisional figures show. As a result, 2016 is on track to be the hottest year in records dating back to the 19th century, and 16 of the 17 hottest years on record will have occurred in the 21st century. WMO secretary-general Petteri Taalas said: "Another year, another record. The high temperatures we saw in 2015 are set to be beaten in 2016." The provisional assessment by the WMO has been released to inform the latest round of UN climate talks in Morocco which are focusing on implementing the world's first comprehensive climate treaty, the Paris Agreement. It comes as a study suggests carbon emissions have seen "almost no growth" in the past three years, marking a break from rapidly rising output in the previous decade and raising hopes that emissions may have peaked. But the election of Donald Trump as the next US president has raised concerns about the international fight against climate change, which he has previously described as a hoax created by the Chinese to make American manufacturing uncompetitive. The WMO assessment, which uses several international datasets including one from the Met Office and the University of East Anglia's Climatic Research Unit, showed global temperatures for January to September 2016 were 0.88C above the 14C (57.2F) average for 1961-1990. A powerful climate phenomenon in the Pacific known as El Nino, which pushes up global temperatures, led to a spike in temperatures in the early months of the year. But preliminary data for October suggests temperatures remain high enough for 2016 to be on track for the title of hottest year on record, beating 2015. 
This year has also seen record-breaking concentrations of greenhouse gas carbon dioxide in the atmosphere, as well as melting ice, coral reefs bleaching in the face of hot oceans, above-average sea level rise and extreme weather. Prof Taalas said: "The extra heat from the powerful El Nino event has disappeared. The heat from global warming will continue. "In parts of Arctic Russia, temperatures were 6C to 7C above the long-term average. "Many other Arctic and sub-Arctic regions in Russia, Alaska and north-west Canada were at least 3C above average. "We are used to measuring temperature records in fractions of a degree, and so this is different." Professor Peter Stott, of the Met Office, said: "Three record-breaking years for global temperature would be remarkable. The year 2015 was exceptionally warm and, like 2016, was influenced by the warm El Nino circulation in the tropical Pacific. "As the El Nino wanes, we don't anticipate that 2017 will be another record-breaking year in the instrumental record." But 2017 was still likely to be warmer than any year prior to the last two decades because of the underlying extent of man-made global warming due to increasing levels of greenhouse gases in the atmosphere, he said. The warning that it is set to be a record warm year comes after an analysis by the WMO that the global climate had seen its hottest five-year period on record between 2011 and 2015, with temperatures 0.57C (1.03F) above the 1961-1990 average. Man-made climate change is driving extreme weather, with more than half of 79 studies published between 2011 and 2014 by the Bulletin of the American Meteorological Society finding global warming contributed to individual extreme events, the WMO said. 
Responding to the announcement on 2016's record temperatures, Friends of the Earth's head of campaigns Andrew Pendleton said: "This is an urgent memo from the planet to President-elect Trump, Theresa May and any other leaders that think tackling climate change isn't important. "While Trump denies the existence of climate change, and May approves fracking and Heathrow expansion, our planet is warming fast and time for action is ticking down. "It's still possible to stop the worst effects of climate change, but it requires us to stop using coal, oil and gas in less than a generation - and put increasingly affordable renewable energy in their place."
I came across a story on the low level of health literacy in Fierce Healthcare, which had summarized it from the Washington Post. It continues to trouble me. The Post reported on a 2006 study conducted by the U.S. Department of Education that found 36 percent of adults to have a basic or below-basic understanding of health material. According to the excerpt, "90 million Americans understand health information at a fifth-grade level or lower. And just over half have intermediate comprehension." What is health literacy? According to HHS (U.S. Dept. of Health and Human Services), "Health literacy is the ability to understand health information and to use that information to make good decisions about your health and medical care... Limited health literacy can affect your ability to: - Fill out complex forms - Locate providers and services - Share personal information such as health history - Take care of yourself - Manage a chronic disease - Understand how to take medicines." Yet we know that health information is among the most queried topics on the Web. We also know from recent studies that not all health information is good information. Recently, a team of physicians at Harvard Medical School evaluated the quality and safety of 10 diabetes social networks on 28 indicators and published the results in the Journal of the American Medical Informatics Association. In their paper, Social but Safe? Quality and Safety of Diabetes-related Online Social Networks, they found that the quality of information was variable on many levels. For example: - Only half were in sync with diabetes science/clinical practice recommendations - 70% lacked medical disclaimer use - Misinformation about a diabetes ‘cure’ was found on four moderated sites - Of nine sites with advertising, ads for unfounded ‘cures’ appeared on three Recall from the BUPA Health Pulse study I wrote about in March that only a small percentage (less than 25%) of health seekers bother to verify the information they find online. 
We live in a time where we have rapid, nearly unlimited, access to health information. Yet we also live in a time where over one-third of the population is considered health illiterate and even fewer check the veracity of information they find on the Web. Therefore, it seems to me, that we have a large number of people who might not understand how to manage their disease and a lot of poor/unsubstantiated information on the Internet waiting to prey on that. We have created yet one more problem to solve in our healthcare system. I don't have the answer, but would love to hear thoughts from my readers and others who care about this topic.
The United States is the world’s leading exporter of agricultural products, a sector important to the U.S. economy, according to a report issued this week by the Joint Economic Committee of Congress and a news release by the U.S. Meat Export Federation. The report, “The Economic Contribution of America’s Farmers and the Importance of Agricultural Exports,” notes that the United States exported a record $141.3 billion of agricultural products in 2012 and boasted a $38.5 billion trade surplus for the year for the agriculture sector. While those totals are impressive, the report also notes that although agriculture has accounted for less than 5% of the United States’ gross domestic product (GDP) from 2007 through 2011, agricultural products as a share of total exports hovered around 10%. “Exports are critical to the success of U.S. agriculture, and population and income growth in developing countries ensures that this will continue to be the case in the decades to come,” the report states. “Taking action to facilitate exports would help to strengthen the agricultural sector and promote overall economic growth.” The report goes on to say that agricultural exporters often encounter trade barriers. “Despite some progress, average agricultural tariffs remain substantially higher than those imposed on other products,” the report noted. “Moreover, unpredictable and unscientific applications of sanitary and phytosanitary (SPS) measures can create a significant burden for exporters, in particular for producers and processors of meat products.” According to the report, pressing for lower tariffs on agricultural products – as well as ensuring that SPS measures are not used inappropriately to keep U.S. goods out of overseas markets – would help exporters. The report recommends actions that Congress can take to facilitate export opportunities for America’s farmers, ranchers and agricultural producers, including: - Enacting a long-term farm bill to provide certainty for U.S. 
agriculture - Pushing for provisions that reduce barriers to agricultural exports - Promoting export opportunities for small and beginning farmers, ranchers and processors The report cites the changing landscape for American agricultural exports over the past 20 years. Two decades ago, just 1% of U.S. agricultural export sales went to China. This total increased to 4% by 2002, but by 2012 China was the top destination for U.S. agricultural products, purchasing more than $25 billion in products that accounted for more than 18% of total sales. In 2012, the China/Hong Kong region was the No. 3 market for U.S. pork exports, purchasing 431,145 metric tons (950 million pounds) valued at $886.2 million. China also has rapidly grown into one of the leading global markets for beef, but the country has remained closed to U.S. beef exports since 2003. “This report reinforces the importance of exports for the American agricultural sector,” says Philip Seng, president and CEO of the U.S. Meat Export Federation. “It also documents two areas that are critical for the success of agricultural exports: the enactment of a long-term farm bill to provide support for agricultural exports and provisions that reduce barriers to those exports. Both are equally important for an area of the economy that produces a much-needed budget surplus and supports an estimated one million jobs across the country.” In 2012, U.S. beef, pork and lamb exports amounted to more than 7.5 billion pounds of product valued at more than $11.8 billion. The export value per head processed amounted to $55.87 for pork and $216.73 for beef.
A heat engine is a device that repeatedly converts thermal energy into kinetic energy. It needs a difference of temperature and a working substance with a high rate of expansion to do useful work for us. This can happen in two ways: through the change of volume of the working substance, which the hot and cold reservoirs alternately expand and compress, and through convection, which the change of density resulting from the change of volume causes. A heat engine is not confined to artificial devices. The Earth also does useful work for living systems as a heat engine. The heat engine whose hot reservoir is solar radiation heat maintains the circulation of air and water, while the heat engine whose hot reservoir is geothermal heat maintains the circulation of the mantle and mineral nutrients. 1 : The history of heat engines and theories of their principles In thermodynamics, a heat engine is a system that does a certain amount of net positive work repeatedly by means of the conversion of thermal energy to kinetic energy. A heat engine was first devised by an engineer in Ancient Greece, Heron of Alexandria (Ήρων ο Αλεξανδρεύς), but this section takes up the modern history of the practical application of heat engines and the theoretical reflection upon their principles. 1.1 : The improvement of steam engines In 1712 Thomas Newcomen (1664 – 1729) first succeeded in commercializing a heat engine. The animation (Fig.NE) below shows how the Newcomen engine works. When the valve is opened, steam is let out of the boiler to fill the space in the cylinder and lift the piston upward. The valve is then closed and another valve sends a spray of cold water into the cylinder, creating a partial vacuum under the piston. The pressure difference drives the piston down, raising the pump gear. More than 100 Newcomen steam engines had been employed, principally to pump water out of mines, by 1733, when his patent expired. 
In spite of this practical success, people at that time, including Newcomen himself, did not rightly understand the cause of the work. The pump equipment was heavier than the steam piston, and they wondered what drove the piston down or raised the pump gear. Most of them believed the vacuum in the cylinder, rather than atmospheric pressure, did the work; much less did they realize that the difference of temperature was essential to heat engines. Then James Watt (1736 – 1819) improved the Newcomen steam engine and devised a double-acting engine, utilizing steam pressure alternately above and below the piston. His engine converted the vertical movement of the piston into circular motion, for which there were more industrial applications. Thus he became a leading figure in the British Industrial Revolution. The following illustration (Fig.WE) was drawn in the 19th century. Let me explain its mechanism using this illustration. The pipe (v) injects steam into the upper part of the cylinder (J), propelling the piston downward and evacuating the steam from under the cylinder to a separate condensation chamber (H) immersed in a cold water tank (R). The condensation thus keeps the pressure in the lower part of the cylinder low. When the piston reaches the bottom, steam is injected into the lower part of the cylinder and the steam in the upper part of the cylinder is evacuated to the condensation chamber. The low pressure of the upper part of the cylinder and the weight of the pump raise the piston. This cycle is repeated automatically. Watt’s steam engine was different from Newcomen’s in that a steam condenser was separated from the cylinder and steam pressure, instead of atmospheric pressure, pushed the piston down, increasing the efficiency of the engine. Watt also noticed that injecting steam up to the end of the stroke is unnecessary and that adopting adiabatic expansion or compression can increase the efficiency of the engine. Watt was not a mere engineer. 
He had an insight into the essential principle of a heat engine. But, generally speaking, because of their empiricist tradition the British engineers made technological improvements on heat engines by trial and error, without theoretical reflection. In contrast to Britain, the Continent had a rationalistic tradition and produced a theoretical genius, Nicolas Léonard Sadi Carnot (1796 – 1832). He published Reflections on the Motive Power of Fire and on Machines Fitted to Develop that Power in 1824 and established the general principles of a heat engine. 1.2 : Carnot’s theory of heat engines in general At the beginning of the Reflections, he wrote what motivated him to write this treatise: Notwithstanding the work of all kinds done by steam-engines, notwithstanding the satisfactory condition to which they have been brought to-day, their theory is very little understood, and the attempts to improve them are still directed almost by chance. Although he analyzed the steam-engines of his day, his theory can be applied to heat engines in general. According to Carnot, there are two conditions for a heat engine to produce motive power. The first one is the difference of temperature. The production of motive power in steam-engines is then due not to an actual consumption of caloric, but to its transportation from a warm body to a cold body, that is, to its re-establishment of equilibrium, an equilibrium considered as destroyed by any cause whatever, by chemical action such as combustion, or by any other. We shall see shortly that this principle is applicable to any machine set in motion by heat. According to this principle, the production of heat alone is not sufficient to give birth to the impelling power: it is necessary that there should also be cold; without it, the heat would be useless. Watt must also have noticed that not only a hot reservoir but also a cold reservoir is necessary for a heat engine, because he separated the cold condenser from the hot cylinder. 
Carnot compared the motive power of a heat engine to that of a water wheel. The motive power of a waterfall depends on its height and on the quantity of the liquid; the motive power of heat depends also on the quantity of caloric used, and on what may be termed, on what in fact we will call, the height of its fall, that is to say, the difference of temperature of the bodies between which the exchange of caloric is made. On this view, the quantity of caloric absorbed or relinquished is always the same, just as the quantity of water flowing into and out of a water wheel is always the same, while both of them are producing power. As the term “caloric” (calorique) he used suggests, he believed in the caloric theory that was in fashion at that time and assumed the conservation of heat. After he wrote the Reflections, however, inspired by the 1798 paper by Count Rumford (Sir Benjamin Thompson; 1753 – 1814), which reported that the frictional heat generated by boring cannon at the arsenal was seemingly inexhaustible, he abandoned the law of heat conservation and admitted that part of the heat could be converted into work. That is to say, he stated what is today called the first law of thermodynamics. The first law of thermodynamics is a topic I will come back to in the next subsection; let us proceed to the second condition for a heat engine to produce motive power. Carnot thought the difference of temperature was not sufficient. In fact the mere flow of heat from a hot reservoir to a cold reservoir does not result in the motive power that we expect a heat engine to produce [Note]. A heat engine needs working substances susceptible of changes in volume through the alternation of heat and cold, since Heat can evidently be a cause of motion only by virtue of the changes of volume or of form which it produces in bodies. 
[Note] Because the direct contact of a hot reservoir with a cold reservoir contracts the hot reservoir and expands the cold reservoir, however little the change might be, heat can do work without a third working substance. Of course it is not the work that we expect a heat engine to produce. In any case we can say that the second condition is not essential to a heat engine. The second condition indicates that in order to increase the efficiency of a heat engine we should avoid heat transfer that does not contribute to changes in volume. Since every re-establishment of equilibrium in the caloric may be the cause of the production of motive power, every re-establishment of equilibrium which shall be accomplished without production of this power should be considered as an actual loss. Now, very little reflection would show that all change of temperature which is not due to a change of volume of the bodies can be only a useless reestablishment of equilibrium in the caloric. The necessary condition of the maximum is, then, that in the bodies employed to realize the motive power of heat there should not occur any change of temperature which may not be due to a change of volume. Reciprocally, every time that this condition is fulfilled the maximum will be attained. The heat engine that fulfills the maximum is called an ideal engine. It is different from a real heat engine, which does not necessarily convert all of the change of temperature into a change of volume, but the ideal engine is still scientifically significant, just as the ideal gas is, though it differs from a real gas. The ideal engine expands the gas in a cylinder slowly and gradually, namely through a quasi-static process, so that the sequence of states stays infinitesimally close to equilibrium. A real engine operates its piston faster and does not maintain equilibrium, so that convection or vortex motion occurs in the gas of the cylinder. 
Because a real engine does this excessive work, its heat efficiency is lower than that of the ideal engine. While the dynamic process of a real engine is irreversible, the quasi-static process of an ideal engine is reversible; that is to say, you can transfer heat from a cold reservoir to a hot one using the work produced by the heat flow from a hot reservoir to a cold reservoir. Carnot proved by reductio ad absurdum that a heat engine cannot do more work than an ideal engine. If it could, it could produce work without changing the original difference of temperature, which is absurd. To use current terms, such a heat engine would be a perpetual motion machine of the first kind, which violates the first law of thermodynamics, the law of conservation of energy. The bottom line is that a heat engine needs two conditions: a difference of temperature and a working substance (especially a fluid with a high rate of expansion). Carnot, however, regarded the former as more important. Though all the heat engines at that time were steam engines, Carnot realized that the working substance for a heat engine did not need to be steam. His abstraction was free from historical constraints on this point. The following is the bold hypothesis that is today called Carnot’s theorem. The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of the caloric. This theorem holds true even today. The efficiency of a heat engine is a function of only the temperatures. 1.3 : Development of thermodynamics after Carnot On account of, rather than despite, its revolutionary theory, the Reflections was not appreciated at all. 
Some assume it was because his explanation was not mathematical, but the 1834 paper by Benoît Paul Émile Clapeyron (1799 – 1864), two years after Carnot’s death, which mathematically formulated Carnot’s theorem, elicited no response from the academic establishment except William Thomson (Lord Kelvin; 1824 – 1907), who read the paper during his study in France and developed the theory of Carnot. Another scientist who was unknown until Thomson discovered him was James Prescott Joule (1818 – 1889). He showed that mechanical work can be converted into heat by the well-known experiment in which the mechanical work of spinning a paddle-wheel in an insulated barrel of water increased the temperature. He then jumped to the reverse proposition that heat can be converted into mechanical work. You see, therefore, that living force [energy] may be converted into heat, and that heat may be converted into living force, or its equivalent attraction through space. All three, therefore – namely, heat, living force, and attraction through space (to which I might also add light, were it consistent with the scope of the present lecture) – are mutually convertible into one another. In these conversions nothing is ever lost. The same quantity of heat will always be converted into the same quantity of living force. We can therefore express the equivalency in definite language applicable at all times and under all circumstances. This idea of Joule is incompatible with the law of heat conservation Carnot presumed. Joule tried to talk Thomson out of Carnot’s theory, but Thomson hesitated to decide which was right. It was Rudolf Julius Emmanuel Clausius (1822 – 1888) who solved this problem. Clausius concluded that the partial consumption of heat to generate the work of the piston is compatible with the heat flow from a hot reservoir to a cold reservoir. This was also the conclusion that Carnot had reached in the manuscript published posthumously. 
All work can be converted into heat, but not all heat can be converted into work. This is the current view, and Joule’s hypothesis turned out to be false. The irreversibility led Clausius and Thomson to discover the second law of thermodynamics. Clapeyron, Thomson and Clausius contributed to establishing Carnot’s theory as classical thermodynamics, but in the end it was a theory of an ideal engine, which neglects convection and vortex motion as valueless turbulence. These became the subjects of fluid mechanics and heat transfer physics. In 1858 Hermann Ludwig Ferdinand von Helmholtz (1821 – 1894) established three laws of vortex motion, and early in the 20th century Baron Rayleigh (John William Strutt; 1842 – 1919) and Henri Claude Bénard (1874 – 1939) studied the typical natural convection (Rayleigh-Bénard convection). This article pays attention to the similarity between the two sorts of work, produced by an ideal heat engine and by convection, that have been studied separately. 2 : The structure and operation of the Carnot heat engine First we analyze the Carnot heat engine as an ideal heat engine. In this section we will survey the four steps of the Carnot cycle, the work produced by the Carnot heat engine and its efficiency, and recognize how the current standard explanation of thermodynamics reflects the theory of Carnot. 2.1 : Four steps of the Carnot Cycle Carnot drew the following piston-and-cylinder diagram (0 of Fig.PM) to explain the operation of an ideal heat engine in the Reflections. As this diagram is overcrowded, I analyze it into four steps (1-4). The cycle of an ideal heat engine consists of the following four steps: - Isothermal Expansion: The gas in the cylinder absorbs heat from the high temperature reservoir, which increases the entropy of the gas, but, as the gas expands to push the piston, the temperature of the gas remains constant. 
- Adiabatic Expansion: The cylinder is thermally insulated from the high temperature reservoir, so that entropy remains constant. The gas continues to expand, which causes the gas to cool. - Isothermal Compression: Heat flows from the gas in the cylinder into the low temperature reservoir, which decreases the entropy of the gas, but, as the piston compresses the gas, the temperature of the gas remains constant. - Adiabatic Compression: The cylinder is again thermally insulated from the low temperature reservoir, so that entropy remains constant. The piston continues to compress the gas, which causes the gas to warm. At the last step, the gas comes back to the same state as at the start of the first step, thus forming a cycle, called the “Carnot cycle”. 2.2 : The work produced by the Carnot heat engine The figure below (Fig.PV) is the Carnot cycle plotted on a pressure-volume graph. The two isothermal stages follow the isotherm lines and the two adiabatic stages move between isotherms. Volume V [m³] on the horizontal axis multiplied by pressure P [Pa = N/m²] on the vertical axis gives work W [J = N·m = m³ × N/m²]. So the yellow area bounded by the cycle path represents the total work that can be done during one cycle. The total area is the integral of VA-VB-B-A plus VB-VC-C-B minus VA-VD-D-A minus VD-VC-C-D. The equation to integrate is the first law of thermodynamics. Heat energy Q [J] put into the gas system in the cylinder increases the temperature of the gas system, namely the internal energy (U), or does work (W), pushing the piston outward. Meanwhile the total amount of energy is preserved. As this is a reversible quasi-static process, the following differential equation can be applied to the infinitesimal change of internal energy: d′Q = nCv dT + P dV, where n is the amount of substance, Cv is the specific heat at constant volume per mole, T is the absolute temperature, P is the pressure and V is the volume of the gas system. 
Using the ideal gas law PV = nRT, we get the equation d′Q = nCv dT + nRT dV/V. Let us calculate the areas of the four steps of the Carnot cycle using this equation. 1. Isothermal Expansion (A→B): As this step is isothermal, dT = 0. Integrating gives WAB = nRT2 ln(VB/VA) = Q2 (04), the positive work done by the expansion. 2. Adiabatic Expansion (B→C): As this step has no heat transfer, d′Q = 0. If we use this equation and replace the increase in volume with the decrease in internal energy, we get its positive work WBC = nCv(T2 − T1). 3. Isothermal Compression (C→D): As this step is isothermal, dT = 0. Integrating gives WCD = nRT1 ln(VD/VC) = −Q1 (07), the negative work done by the compression. 4. Adiabatic Compression (D→A): As this step has no heat transfer, d′Q = 0, and WDA = −nCv(T2 − T1). The total net work is the aggregate of these: W = Q2 − Q1 = nRT2 ln(VB/VA) − nRT1 ln(VC/VD) (09). Because TV^(γ−1) is constant (γ = Cp/Cv) in a quasi-static adiabatic process, we can write T2VB^(γ−1) = T1VC^(γ−1) (10) for the adiabatic expansion and T2VA^(γ−1) = T1VD^(γ−1) (11) for the adiabatic compression. Dividing equation (10) by equation (11) gives (VB/VA)^(γ−1) = (VC/VD)^(γ−1) (12), that is to say, VB/VA = VC/VD (13). From equations (09) and (13), we get W = nR(T2 − T1) ln(VB/VA) (14). Now we can recognize the two conditions for a heat engine to do work in equation (14). The first condition was the difference of temperature between the two heat reservoirs: if T2 − T1 = 0, then W = 0 and no work can be done. The other condition was an expandable and compressible substance: if there is no substance (n = 0) or no change of volume (VA = VB), then W = 0. It follows that we must increase the difference of temperature between the two heat reservoirs and the amount of a substance with a high rate of expansion in order to increase the work of a heat engine. 2.3 : The thermal efficiency of the Carnot heat engine Even if the work done by a heat engine is large, it would not be desirable if the efficiency were low. 
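The two conditions for a heat engine to do work, a temperature gap and a change of volume, can be checked numerically against the net-work formula W = nR(T2 − T1)ln(VB/VA) derived above. The mole count, temperatures and volumes in this sketch are illustrative assumptions, not values from the text:

```python
# A minimal numeric check of the Carnot net work W = n*R*(T2 - T1)*ln(VB/VA).
# The mole count, temperatures and volumes below are illustrative assumptions.
import math

R = 8.314  # universal gas constant, J/(mol·K)

def carnot_net_work(n, T2, T1, VA, VB):
    """Net work [J] of one Carnot cycle: n moles of ideal gas expanding
    isothermally from VA to VB at the hot temperature T2, with T1 the
    cold-reservoir temperature (both in kelvin)."""
    return n * R * (T2 - T1) * math.log(VB / VA)

# Both conditions are necessary; removing either kills the output:
print(carnot_net_work(1.0, 500.0, 300.0, 1.0, 2.0))  # positive work
print(carnot_net_work(1.0, 300.0, 300.0, 1.0, 2.0))  # no temperature gap: 0.0
print(carnot_net_work(1.0, 500.0, 300.0, 1.0, 1.0))  # no volume change: 0.0
```

Note that only the temperature difference and the expansion ratio enter: the two adiabatic legs cancel each other, as in the derivation.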
Let’s consider what we should do to improve the thermal efficiency, defining the thermal efficiency as the ratio of the work to the heat taken from the hot reservoir, η = W/Q2 (15). From equations (04), (07) and (13), Q1/Q2 = T1/T2 (16). So, from equations (15) and (16), we get η = 1 − T1/T2 (17). It tells us that the thermal efficiency is entirely determined by the ratio of the low temperature to the high temperature. We can recognize that Carnot was right in considering the difference of temperature to be decisive. The formula also tells us that we must keep the process quasi-static and use a cold reservoir at absolute zero (T1 = 0) in order to make the thermal efficiency 1. As it is impossible to reach absolute zero, we cannot make the efficiency 1. Carnot knew that the efficiency of real steam-engines is far less than the theoretical maximum. The real heat engine whose thermal efficiency is closest to that of the Carnot engine is the Stirling engine, invented by Robert Stirling (1790 – 1878) in 1816. Many types of Stirling engines have been conceived, and the following (Fig.SE) is one of them. In spite of their high thermal efficiency, Stirling engines are employed only for limited purposes, because their equipment is large and heavy and their capital cost per unit power is high. The heat engine that is most widely used is the internal combustion engine, typically the Otto engine. The animation below (Fig.OE) shows its four-stroke cycle: air and vaporized fuel are drawn in at the 1st stroke, fuel vapor and air are compressed and ignited at the 2nd stroke, fuel combusts and the piston is pushed downwards at the 3rd stroke, and exhaust is driven out at the 4th stroke. Compared to a Stirling engine of the same power rating, an internal combustion engine currently has lower thermal efficiency but lower capital cost, and is usually smaller and lighter. So it is used for transportation and many other purposes. Thermal efficiency is not the only criterion for choosing a heat engine. 
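The conclusion that efficiency depends only on the two reservoir temperatures, η = 1 − T1/T2, takes one line of code. The example temperatures here are illustrative assumptions:

```python
# Carnot thermal efficiency eta = 1 - T1/T2: the ceiling set by the two
# reservoir temperatures alone. The example temperatures are illustrative.

def carnot_efficiency(T_cold, T_hot):
    """Maximum thermal efficiency between reservoirs at absolute
    temperatures T_cold and T_hot (kelvin)."""
    return 1.0 - T_cold / T_hot

# An engine running between 600 K and 300 K can never exceed 50%:
print(carnot_efficiency(300.0, 600.0))  # 0.5
# Efficiency 1 would need a cold reservoir at absolute zero:
print(carnot_efficiency(0.0, 600.0))  # 1.0
```

Nothing about the working substance appears in the function, which is exactly Carnot’s theorem.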
Carnot did not know of the internal combustion engine, but his principle can be applied to it. It is applicable also to what traditional thermodynamics does not treat as a heat engine. The next section focuses on heat engines that are more complex but more beneficial than the artificial ones. 3 : The structure and operation of global heat engines Not only humans but all living things have used two natural heat engines since long before artificial heat engines were invented. One is a heat engine whose hot reservoir is the thermal energy caused by solar radiation, whose cold reservoir is outer space, and whose working substance is the atmosphere. The other is a heat engine whose hot reservoir is geothermal energy, whose cold reservoir is the crust of the Earth and, in the end, outer space, and whose working substance is the mantle. The former causes the convection of the atmosphere and the latter causes that of the mantle. All of the work of convection is converted into heat in the end and emitted into outer space. So the amount of heat that the two global heat engines receive (Qin) is equal to the amount of heat that they emit (Qout). Seen from the outside, the thermal efficiency of the Earth as a heat engine is zero. Yet the work the two global heat engines do is so important as to decide our existence on the surface of the Earth. This section elucidates this mechanism. 3.1 : The atmospheric convection and the circulation of water In the artificial heat engines that we have developed since the 18th century, the expansion and compression of gas produce useful work, while convection or vortex motion of the gas is regarded as waste. In the heat engines of the atmosphere, on the other hand, the useful work is done not by the expansion and compression of the atmosphere but by convection. There are two kinds of convection: natural convection, where density differences in the fluid generate the fluid motion, and forced convection, where an external source generates the fluid motion. 
The atmospheric circulation is natural convection. The vertical temperature gradient of the troposphere is −6.5 K/km: the higher, the cooler. When the bottom of the fluid is heated and the surface is cooler than the bottom, the form of natural convection is decided by the Rayleigh number, Ra = gβ(T2 − T1)L³/(να), where L = scale of convection, g = acceleration due to gravity, T2 = surface temperature, T1 = fluid temperature far from the surface, ν = kinematic viscosity, α = thermal diffusivity, β = thermal expansion coefficient, k = thermal conductivity, ρ = density, Cp = specific heat capacity. When the Rayleigh number is below 1700, heat transfer is primarily in the form of conduction, and when it exceeds 1700, heat transfer is primarily in the form of convection (Fig.RB). When the Rayleigh number exceeds 5×10⁴, the regular pattern of convection cells fluctuates, and when it exceeds 10⁶, turbulence occurs. The factors gρβ(T2 − T1) in the Rayleigh number represent the buoyancy exerted on a fluid volume of L³. The difference of temperature plays no less important a role here than in equation (14), which defines the work of the Carnot engine. On the other hand there are some differences. While the Carnot engine does work just by expanding and compressing the working fluid, in natural convection the work is done by fluid rising against gravity. This is why the equation for the Rayleigh number depends on the properties of the working fluid, unlike equation (14). For all these differences, the atmospheric circulation by natural convection has a cycle with four steps similar to that of the Carnot cycle (Fig.AC). The atmospheric circulation consists of the following four steps: - Isothermal Expansion: Solar radiation heats the air on the Earth’s surface. The air expands, becomes less dense and ascends, with the temperature of the air constant. - Adiabatic Expansion: The air, thermally insulated from the surroundings, continues to expand, which causes the air to cool. 
- Isothermal Compression: When the air is lifted to the tropopause, it radiates heat to the stratosphere. The air is then compressed, becomes denser and descends, with its temperature constant.
- Adiabatic Compression: The air, again thermally insulated from its surroundings, continues to be compressed, which causes it to warm.

At the last step the air returns to the same state as at the start of the first step, thus forming a cycle. The troposphere has not only a vertical temperature gradient but also a horizontal one. Owing to the difference in the incidence angles of solar radiation, low latitudes are hot and high latitudes are cold. Another difference comes from the difference in specific heat between sea and land. Because of these many factors the actual atmosphere moves in complicated ways, but roughly speaking it consists of three representative cells: the Hadley cell, the Ferrel cell, and the Polar cell. The waste that living systems discharge can be converted to heat and, so long as that waste heat is carried through the atmospheric circulation to outer space, we do not have to worry about it as an environmental problem. The atmospheric circulation is also important to life in minimizing the temperature differences across the Earth's surface. Humans utilized this heat engine by means of windmills, windjammers and so on long before we began wind power generation. What is more important is that the circulation of air causes the circulation of water. Although there is much water on the surface of the Earth, about 97% of it is salt water. Desalination is difficult, but solar radiation vaporizes and thereby desalts seawater. As moist air rises, adiabatic expansion cools it and water vapor begins to condense, forming clouds and then precipitation, so that fresh water returns to the Earth's surface. The circulation not only desalts seawater but also distributes fresh water widely.
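Returning to the Rayleigh number defined earlier: its magnitude for the real atmosphere is easy to estimate. The following is a minimal Python sketch; all fluid properties are rough textbook values for near-surface air, assumed here for illustration and not taken from this text:

```python
# Rayleigh number: Ra = g * beta * (T2 - T1) * L^3 / (nu * alpha).
def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    return g * beta * delta_t * length**3 / (nu * alpha)

# Assumed order-of-magnitude values for a 1 km layer of air:
ra = rayleigh_number(
    g=9.81,         # acceleration due to gravity, m/s^2
    beta=1 / 288,   # thermal expansion coefficient of an ideal gas at ~288 K, 1/K
    delta_t=10.0,   # assumed surface-to-top temperature difference, K
    length=1000.0,  # assumed convection scale L, m
    nu=1.5e-5,      # kinematic viscosity of air, m^2/s
    alpha=2.0e-5,   # thermal diffusivity of air, m^2/s
)

# Ra comes out around 1e18, far beyond the 1700 conduction/convection
# threshold and the 1e6 turbulence threshold quoted above.
print(f"Ra = {ra:.2e}")
```

Even with generous changes to the assumed values, Ra stays many orders of magnitude above the turbulence threshold, which is consistent with the turbulent convection observed in the troposphere.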
The atmospheric circulation therefore redresses an imbalance in water as well as an imbalance in temperature. The ocean circulation is another important circulation of water. Besides the wind-driven horizontal circulation of surface water, there is a density-driven vertical circulation of ocean water, called the thermohaline circulation. It circulates this way: the wind-driven surface currents in the Atlantic Ocean head poleward, become cold, salty and dense, sink into the deep water basins at high latitudes, resurface in the Pacific Ocean and return to the Atlantic Ocean. The thermohaline circulation results from the temperature gap widened by solar radiation, but results in narrowing it. The wind-driven circulation and the thermohaline circulation are classified as forced convection. The troposphere undergoes natural convection because its bottom is hot and its top is cold. The ocean has no natural convection, because its surface is hot and its bottom is cold, unless a submarine volcano erupts. It is still possible to build a heat engine utilizing the vertical temperature gradient of the ocean. Ocean thermal energy conversion (OTEC) is such a heat engine. Closed-cycle OTEC adopts ammonia or R-134a as the working fluid. Because these have low boiling points, warm surface seawater can vaporize the fluid, and the expanding vapor turns a turbo-generator. Cold deep-ocean water condenses the vapor back into a liquid, which is then recycled through the closed-cycle system. Open-cycle OTEC vaporizes the warm surface seawater directly at low pressure. OTEC is one of the candidates for renewable energy, but it has not reached the stage of practical application because of its low energy efficiency: we must drive the circulation of the working fluid artificially. Why, then, don't we generate electricity using the existing natural circulation of water? In fact, we already do.
Hydroelectric power generation is exactly that: it generates electricity by consuming the potential energy of water, which the heat engine of solar radiation creates through the circulation of water.

3.2 : The mantle convection and the circulation of mineral nutrition

The other hot reservoir of the Earth as a heat engine is the geothermal energy beneath the Earth's crust. This energy has two origins: heat from the decay of radiogenic isotopes, in particular uranium, thorium and potassium, and heat left over from the original formation of the planet, especially the giant impact, the collision between the young, smaller Earth and a Mars-sized body about 4.5 billion years ago. According to measurements of the geoneutrino flux from the Kamioka Liquid-Scintillator Antineutrino Detector in Japan and from the Borexino detector in Italy, heat from the decay of uranium-238 and thorium-232 amounts to 21 TW (plus 4 TW from the decay of potassium-40), about half of the current total heat flux of 44 TW, while the other half is assumed to be the Earth's primordial heat supply. Heat of this sort from the interior of the Earth is the hot reservoir, and the Earth's crust and the space beyond it are the cold reservoir, of the heat engine under the ground. The figure below (Fig.MC) depicts the natural heat convection of the mantle. The Earth's tectonic plates, moved by the mantle convection, trigger volcanism and earthquakes. While the useful work the troposphere does for us is not the expansion and compression of the atmosphere as a whole but its internal convection, the useful work the mantle does for us is not its internal convection but the expansion and compression it causes: crustal upheaval and depression. You might think the work of the mantle rather harmful, because it accounts for volcanism and earthquakes, but in fact geothermal energy brings us more benefit than damage, more than the benefit of hot springs and geothermal power generation.
Were it not for the mantle convection, it would be difficult for living systems to obtain most of the essential elements, above all phosphorus, and they would be far poorer than they actually are. The bacterial isolate GFAJ-1 was once thought to incorporate arsenic into its basic biological structures in place of phosphorus, but it was later found that GFAJ-1 lacks the ability to grow in a phosphorus-depleted, arsenate-containing medium; that is to say, GFAJ-1 is an arsenate-resistant, but still phosphate-dependent, bacterium. Hence we can say that no living system can do without phosphorus. Of course there are other elements essential to life, nitrogen, potassium and so on, so I will use a comprehensive term: mineral nutrition. Because their ions are soluble in water, these nutrients easily flow from land to sea under gravity and settle at the bottom of the sea. The Earth's crust, however, repeatedly rises and subsides, so mineral nutrition at the bottom of the sea can be raised above sea level, or barren land can be depressed to the bottom of the sea. In this way the mantle convection has circulated mineral nutrition and prevented the land from being drained of it all through geological time. If mantle convection should stop, the circulation of air and water would continue to erode the land and fill up the submarine trenches until the Earth's surface became smooth. Suppose the Earth had ceased diastrophism long ago. In that case the entire surface would have sunk under the sea and no terrestrial living things could have evolved. Could we expect instead a rich evolution of aquatic life? The answer is no. The amount of water on the Earth is 1.4×10⁹ km³. Dividing it by the area of the Earth's surface, 5.1×10⁸ km², we get an average depth of 2.7 km. If the Earth's surface were smooth, the entire surface would lie below the compensation depth (200 m) of photosynthesis. Today about 90% of all marine life lives in the photic zone above a depth of 200 m.
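The arithmetic in the previous paragraph is easy to check. A short Python sketch using only the figures quoted above:

```python
# Average depth of water if the Earth's surface were perfectly smooth.
water_volume_km3 = 1.4e9   # total water on Earth, km^3 (figure from the text)
surface_area_km2 = 5.1e8   # Earth's surface area, km^2 (figure from the text)

average_depth_km = water_volume_km3 / surface_area_km2
print(f"average depth = {average_depth_km:.1f} km")  # ≈ 2.7 km

# A smoothed Earth would thus be flooded to a depth far below the 200 m
# compensation depth of photosynthesis quoted above.
photic_zone_m = 200
assert average_depth_km * 1000 > photic_zone_m
```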
Aquatic life in the euphotic zone can prosper only if sufficient mineral nutrition is available there. Without coastal upwelling or the eruption of submarine volcanoes, however, the euphotic zone would lack it. As a result the ocean would have two zones, both undesirable for life: a surface zone with sufficient sunlight but insufficient mineral nutrition, and a deep zone with sufficient mineral nutrition but insufficient sunlight. The number and variety of living things would thus be far poorer than they are today.

3.3 : Two global heat engines that enable living systems

Let me summarize my conclusion. A difference of temperature and a working substance are the conditions for a heat engine to do work. Work useful to us can be done in two ways. One is the change of volume of the working substance, which the hot and cold reservoirs alternately expand and compress. The other is convection, caused by the change of density that results from the change of volume. The circulation of mineral nutrition results from the former kind of work, and the circulation of water from the latter. The mantle convection and the circulation of mineral nutrition bring us more benefit than damage, even though the convection causes disasters such as earthquakes and volcanic eruptions, just as the atmospheric convection and the circulation of water bring us more benefit than damage, even though that convection causes disasters such as hurricanes and torrential rains. Heat inside the Earth and heat from solar radiation are the two important hot reservoirs of the Earth as a heat engine that enables the survival of living systems. Mars has not evolved life as rich as the Earth's, and one of the reasons is that Mars is a poorer heat engine than the Earth. Mars is a tenth as massive as the Earth, and because of its weak gravity the atmosphere of Mars is very thin. It has convection, but the scope and effects of that convection are limited.
Although Mars has a mantle, no plate movements due to mantle convection can be recognized. It used to have volcanic eruptions, but the small planet has cooled faster than the Earth has. A planet that does not sufficiently function as a heat engine is a dead planet.

4 : References
- Herons von Alexandria Druckwerke und Automatentheater, Pneumatika, Book II, Chapter XI (author) Hero of Alexandria (translator) Wilhelm Schmidt
- 熱学思想の史的展開―熱とエントロピー [The Historical Development of Thermal Thought: Heat and Entropy] (page) 262 (author) 山本義隆 (Yoshitaka Yamamoto)
- Letter from Boulton to Erasmus Darwin, 4 Jan. 1790 (media) The Selected Papers of Boulton and Watt, Vol. 1: The Engine Partnership, 1775-1825 (page) 72 (editor) Jennifer Tann
- 1782 Specification of Patent (media) James Watt and the steam revolution (page) 96ff (editor) Eric Robinson
- Réflexions sur la puissance motrice du feu (page) 6 (author) Nicolas Léonard Sadi Carnot
- Réflexions sur la puissance motrice du feu (page) 10-11 (author) Nicolas Léonard Sadi Carnot
- Réflexions sur la puissance motrice du feu (page) 28 (author) Nicolas Léonard Sadi Carnot
- Réflexions sur la puissance motrice du feu (page) 42 (author) Nicolas Léonard Sadi Carnot
- An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction (author) Benjamin Thompson (media) The Collected Works of Count Rumford, Volume I: The Nature of Heat (page) 22 (editor) Sanborn C. Brown
- Extrait de notes inédites de Sadi Carnot (media) Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (page) 92 (author) Nicolas Léonard Sadi Carnot
- Extrait de notes inédites de Sadi Carnot (media) Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (page) 93 (author) Nicolas Léonard Sadi Carnot
- Réflexions sur la puissance motrice du feu (page) 14 (author) Nicolas Léonard Sadi Carnot
- Réflexions sur la puissance motrice du feu (page) 23-24 (author) Nicolas Léonard Sadi Carnot
- Réflexions sur la puissance motrice du feu (page) 20-22 (author) Nicolas Léonard Sadi Carnot
- Réflexions sur la puissance motrice du feu (page) 38 (author) Nicolas Léonard Sadi Carnot
- Mémoire sur la puissance motrice de la chaleur (author) Émile Clapeyron
- On Matter, Living Force, and Heat (author) James Prescott Joule (media) The Scientific Papers of James Prescott Joule, Volume 1 (editor) William Scoresby, William Thomson (Baron Kelvin), Lyon Playfair (Baron Playfair)
- An Account of Carnot's Theory of the Motive Power of Heat - with Numerical Results Deduced from Regnault's Experiments on Steam (media) Mathematical and Physical Papers, Volume 1 (page) 119 (author) William Thomson (Baron Kelvin)
- Über die Anwendung der mechanischen Wärmetheorie auf die Dampfmaschine (page) 7 (author) Rudolf Clausius
- Über Integrale der hydrodynamischen Gleichungen, welche den Wirbelbewegungen entsprechen (media) Journal für die reine und angewandte Mathematik, Zeitschriftenband 1858 (page) 25-55 (author) Hermann Ludwig Ferdinand von Helmholtz
- Réflexions sur la puissance motrice du feu (page) 117 (author) Nicolas Léonard Sadi Carnot
- Partial radiogenic heat model for Earth revealed by geoneutrino measurements (media) Nature Geoscience (author) The KamLAND Collaboration
- Searching for Alien Life, on Earth (media) Astrobiology Magazine, NASA
- GFAJ-1 Is an Arsenate-Resistant, Phosphate-Dependent Organism (author) TJ Erb, P Kiefer, B Hattendorf, D Günther, JA Vorholt
Pandas are vegetarians, right? Well, new findings by Conservancy scientists suggest the issue isn't as black and white (or, er, as green and blood red) as once thought. Motion sensor cameras were set up this summer in the soon-to-be established Motianling County Land Trust Reserve in northern Sichuan by The Nature Conservancy, Peking University and local government partners. In November they captured images of a giant panda consuming the carcass of a takin, a Himalayan goat-antelope. These photos provide visual confirmation that pandas at least occasionally eat meat in addition to their customary staple of bamboo leaves. (See the amazing images captured by remote camera.) While this isn't news to scientists — evidence in feces has shown that pandas do sometimes eat meat — very few photos exist of a panda actually consuming it. But the panda's no killer; scientists confirmed that the takin had died of natural causes several days before it was discovered by the panda. "These images show that there is still so much we don't know about their behavior," says Zhao Peng, the Motianling project lead for the Conservancy. "They really are an incredible species." But the question remains: Is the panda portrayed in Kung Fu Panda closer to real life than the cuddly ball of fur we all love and adore? To get to the bottom of this question we reached out to Matt Durnin, the Conservancy's Asia-Pacific Conservation Science Director. Matt has been studying pandas for more than a decade and conducted his Ph.D. research on the wild giant pandas of the Wolong Nature Reserve in Sichuan province. Q: Where did the researchers find this panda, and what were they investigating with their camera traps? Matt Durnin: The work is being conducted in the Motianling Land Trust Reserve, a 110 km² area in Pingwu County, Sichuan Province.
The reserve is a vital, healthy habitat for conservation target species including the endangered giant panda, and is adjacent to two existing giant panda reserves – Baishuijiang and Tangjiahe National Nature Reserves. We use remote cameras as a "non-invasive" way to monitor species presence in the reserve. Remote cameras are now a widely used and well-proven methodology for gathering information not only on the presence of species but — as is evidenced in these photos — on behavior (e.g. feeding or scent marking). In previous work done on giant pandas in the wild, I was able to capture photos of pandas at "scent trees," sniffing the scent of other pandas as well as leaving their own scent on trees. Remote cameras have also been used to identify the presence of previously unknown species or species that have been considered extirpated from an area. Q: Was this news a surprise to you? Matt Durnin: This news is not so much a surprise because researchers have previously found the remains of animals in panda feces. But it is exciting and significant. I've only ever heard of one other incident of a panda being photographed on a carcass, and those photos have never been published and were taken with a mobile phone. What makes these photos significant is the number and quality of them, as well as the systematic way in which they were obtained. Researchers came across the carcass (so were able to estimate how long since its death) and placed the camera there to photograph any animals that might come along and feed on it. I don't think anyone expected that it would be a panda but rather some other carnivore. This is a panda feeding on a carcass over a 6-hour period; it's the most extensive photographic footage of a panda in the wild doing so. Q: What can you learn from these photos? How common is meat-eating in pandas, and is something new happening here that might be increasing the behavior? Matt Durnin: Pandas are technically carnivores.
And we know from finding feces in the field that contained animal remains — as well as anecdotally from conversations with locals living and spending time in panda habitat — that they do eat meat from time to time. So this is not a "new" behavior — but it is, we believe, very uncommon. So documenting it with such a large number of high-quality photos is an important result of this research. From the photographs we have what appears to be a very healthy panda (it's not possible to say if it's a male or female) feeding for approximately 6 hours on the remains of a takin. There is plenty of its primary food, bamboo, in the area. So we can assume it was not starving from lack of access to bamboo but rather it was hungry, found a "fresh" carcass and so did what carnivores do and ate the meat. However, there is only a small amount of data supporting that wild pandas do eat meat, so we still consider this a rare behavior. I've collected hundreds — perhaps thousands — of fecal samples for DNA and bite-fragment analysis in my research over the years and have found only a single sample with anything other than bamboo in it. These 600 photos tell us that, despite decades of research on pandas in the wild, we're still learning. Q: When most people think of panda bears, they think of sweet, seemingly cuddly creatures. Is that popular image true to the facts, or do pandas have a not-so-cuddly side as well? Matt Durnin: The famous field biologist George Schaller was once chased and climbed a tree to get away from a female panda that he was observing. Just like any large carnivore, they have very powerful jaws, sharp teeth and claws — if they were to get a hold of a person or other animal, they could do a lot of harm. Zookeepers and zoo guests (that have jumped into enclosures or stuck their arms through cage bars) have been injured and mauled to death by captive pandas.
Whlie there’s no evidence that anyone has ever been killed by a panda in the wild, they are wild animals — they are extremely strong — and if threatened can be as lethal as any other large carnivore out there. They are also, despite most people’s image, very fast albeit in short bursts. However, like most wild animals they do whatever they can to avoid humans and the chances of someone being mauled by a panda are infinitesimally small. Another little known fact is that their fur is quite bristly and not at all soft as many imagine. Q: This takin was already dead when the panda started eating it. Given the photos, do you think it’s possible a panda would kill an animal for meat if it were hungry enough? Matt Durnin: I believe if a panda had to it could catch and kill prey, but it’s not designed for long-distance running. So pandas would need to do any “hunting” by waiting and ambushing a passing animal. However, in the early 1980s, there was a huge bamboo die-off in a large area of Sichuan inhabited by pandas. If there was ever a time for pandas to resort to hunting prey, it would have been then. But as far as I know, there is no evidence any did so, even though in theory they are capable. So there’s no evidence that they hunt and kill their own prey. The evidence we have shows that when they do eat meat, it’s carrion. Q: This panda was photographed on a land trust reserve — the first organized form of private land conservation in China. The United States has certainly had a long history of private land conservation, but not so in China. What can these private initiatives add to China’s nature reserve system? Matt Durnin: The nature reserve system in China, again as in many places around the world, is sorely underfunded. So tapping into the potential that “private initiatives” have is critical to successful conservation in China. 
The overall goal of this project is to overcome existing barriers (lack of funding being one of the biggest) to effective conservation in China by introducing the land trust model, which enables participation by all sectors of society (non-government as well as government) in protecting critical lands while also incorporating sustainable development opportunities for struggling local communities. The land trust model is a "new" concept not only to China but to much of Asia. Research findings like this help us gain support for piloting this new model. (Image: Panda eating meat captured by motion sensor cameras stationed on the Motianling County Land Trust Reserve in northern Sichuan. Image credit: TNC.)
Well, it's been a while, and I have come to some trivial but useful conclusions lately. Once I thought functional programming was fundamentally different from OO-style programming, but I've actually realized how well many aspects of FP map to elements of OOP: functions are simply objects, closures are anonymous classes, and certain design patterns resemble monads (decorator, chain of responsibility, ...). OO-style programming can be seen as a restricted variation of functional programming. That matters because the key aspects of OOP are encapsulation and information hiding. These can easily be achieved in FP through the use of closures and the fact that functions can be treated like any other data. In OOP, very explicit notations usually exist which couple certain functions to certain data through the notion of objects. Both to increase the readability of complex programs and to lighten the restrictions that come with encapsulation, an explicit notion of inheritance is used. While all this seems pretty obvious to most, frankly, I did not see it until I recently reworked some of my own code. The code was using many closures, which I removed by placing the variables that the closures captured into properties of objects that also contained the functions corresponding to the closures' code. When I was finished I had not only increased speed by 40-60% but was also surprised by the fact that my code looked a lot like good OO-style code. So in my opinion every programmer should learn to think functionally in order to get a hold of the importance of encapsulation and code reuse. It is rumored that modern Lisp systems and Erlang (using HiPE) interpreters/compilers come very close to compiled C code in performance: see e.g. http://www.sics.se/~joe/apachevsyaws.html But to emphasize my point about OOP vs. FP, regard this page: That's it for today.
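Postscript: the closure-to-object refactoring described above can be sketched in miniature. The original language isn't stated, so this is a hypothetical Python example: a closure that captures a variable, and the equivalent object in which the captured variable becomes an attribute and the closure's body becomes a method:

```python
# Closure style: state lives in a captured variable.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

# Object style: the captured variable becomes an attribute,
# and the closure's body becomes a method.
class Counter:
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1
        return self.count

closure_counter = make_counter()
object_counter = Counter()
assert closure_counter() == object_counter.increment() == 1
```

Both versions hide `count` from callers, which is the encapsulation point made above; whether the object version is actually faster depends entirely on the language implementation.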
Dill is a plant native to the Mediterranean and Black Sea regions. It is now cultivated all over the world, and its name comes from the Saxon word meaning "to lull". It has a calming effect on the digestive system and provides the active ingredient in gripe water. It is mainly the seeds that are used in herbal remedies.

What it does

The herb contains a variety of compounds including coumarins, volatile oil, flavonoids and xanthone derivatives. The oil is 30-60% carvone. The herb is carminative (wind-relieving), anti-spasmodic and a stomach tonic.

Uses: indigestion; improving the flow of breast milk.

Dill seeds can be infused to make a cup of tea. Gripe water is available for babies suffering from colic. The herb is often added to fish dishes and salads to add flavour and aid digestion.

Generally, there are no side effects or contra-indications from using dill. At the time of writing there were no well-known negative drug interactions with dill.
August 18, 2009 Posted: 08:57 AM ET

The Big Crunch may sound like a slogan for crackers or potato chips, but it's actually an astronomical theory with a gloomy twist. We've all heard of the Big Bang, the widely accepted theory that the entire universe began from a single point about 13.7 billion years ago and has been expanding ever since. But will it expand forever? Or could it stop and reverse that process? One possible fate of the universe is the Big Crunch, the idea that the cosmos could one day begin contracting and eventually collapse back on itself, returning to a single point. If it ever happens, this anti-Big Bang would take place so far in the future that Earth might not even exist anymore, according to experts writing for Cornell University's Curious About Astronomy Web site. But the experts also took a stab at what a contracting universe could look like to an observer billions of years into the future: "As the present-day observable universe started to get really small, the observer would most likely see some of the things that happened in the early universe happen in reverse. Most notably, the temperature of the universe would eventually get so high that you could no longer have stable atoms, in which case the hypothetical observer wouldn't be able to hold himself together." Yikes. But fear not. It turns out the expansion of the universe has been accelerating rather than slowing. Astronomers believe that's caused by a mysterious dark energy pulling galaxies apart, according to NASA. "Dark energy is this idea that not only is the universe expanding, dark energy is actually making that expansion happen even faster," said Marla Geha, an assistant professor of astronomy at Yale University. "The dark energy will actually continue the expansion of the universe forever, so there probably will not be a Big Crunch if we have the numbers right." But the continuous expansion would have other consequences.
Over tens of billions of years, the galaxies that we see around us would get farther and farther away, making the universe more of a lonely place, Geha said.
Mount Revelstoke National Park and Glacier National Park
Blanket Creek Provincial Park and the 40-foot-high Sutherland Falls
Mountain Arts Festival in September
Powder Springs Ski Area on Mount MacKenzie
Snowmobile season runs from late November into June

The First Nations Interior Salish were semi-nomadic tribes in the region known as Revelstoke today. The cold winters required the preparation of food and supplies for winter storage; hence, they would journey up the rivers for fish and collect berries throughout the year. The tribes had inhabited these regions for over 5,000 years prior to the arrival of European explorers. The gold rush of the 1860s in British Columbia created a stir in Europe as prospectors from all over were drawn to the new world. By 1883, the transcontinental railway had reached the outskirts of the province / Crown Colony, and the next river to cross was the Columbia. The area close to the river was prepared as a development site and named in honour of Lord Revelstoke, head of the London bank which had rescued the railway from bankruptcy. The small town became a supply and transportation center for the mining industry. Later, with the growth of the railway, Revelstoke took on the timber industry to supply more railway ties. As travelers have ventured through the area over the last hundred years, the waters for fishing, the great landscape scenery and the winter snowfall have allowed Revelstoke to build a new industry out of tourism.

Summer average 20 degrees Celsius
Winter average -2 degrees Celsius
Combination of temperate and alpine climates, receiving approximately 320 centimeters of snow during the winter.
Of the great composers of the nineteenth century, few were British. Indeed, it was one of those running jokes on the continent that British music wasn't British at all: it was German. Think of Handel, for instance, whose Baroque masterpieces still convey a certain sense of the eighteenth-century dominance of British power. We know, of course, that this joke was a harsh one: British music was very much alive, whether in the form of brass bands, folk songs and tunes played in pubs, eisteddfodau, wakes, or ceilidhs, or in the choral tradition of the Welsh chapels. Composers such as Parry, Parry and Stanford stamped an identifiably British voice on their music and ensured the vitality of that voice in an era of European dominance. But then we ask the simplest of questions. Just who was the most popular composer in Victorian Britain? Answer: Felix Mendelssohn. Behind him came the three giants of concert music: Bach, Beethoven and Brahms. And then? Well, the European romantics, Dvorak and Tchaikovsky, and Wagner of course. The musical culture of Wales, particularly, was dominated by this continental idiom. Those who search the National Library of Wales' fine new digitised newspaper database will quickly discern that. There were, though, early attempts at distilling Welshness into orchestral form, but as the Evening Express pointed out in 1892, 'few Welshman are aware that there is a Welsh symphony'. So who wrote it, and when? The honour of the first symphony to be called 'Welsh' lies with Sir Frederic Cowen's fourth symphony in B-flat minor, which received its premiere at St James's Hall in London on 28 May 1884. The Western Mail was particularly enthusiastic in its praise for Cowen's borrowings from Welsh music, declaring that the work would 'give delight to every Cymric heart'. It hasn't been played much since, and Cowen's current reputation isn't all that great.
Compared to Stanford’s Irish symphony produced a few years later or Mendelssohn’s popular ‘Scottish’ symphony, it’s a minor work but one that prompted some consideration of where to go next. Here’s the Evening Express again: ‘But why, oh, why did our Welsh musicians allow Mr Cowen to anticipate them in producing a national Welsh symphony?’ It was a long wait for the next one. Indeed, it was not until the maturing careers of Grace Williams, David Wynne, Alun Hoddinott, and William Mathias, after the Second World War, that Wales broke out of it is choral tradition and into the orchestral programmes of the concert hall. My favourite piece of all this is Grace Williams’ ever popular Fantasia on Welsh Nursery Tunes, a work that I first played with the Rhondda Symphony Orchestra as a sixth former, and a violinist. Here it is performed by the National Youth Orchestra of Wales. What difference did this make to the musical tradition of Wales in the nineteenth and early twentieth centuries? It’s a difficult question to answer in a quantifiable way since it depends on the emphasis you place on orchestral music as a manner of musical performance. Certainly, Wales was relatively slow in developing community-based orchestral societies, particularly outside of the seaboard towns of the South Wales and North Wales coasts; many places in the South Wales Coalfield, for instance, didn’t form orchestras until the inter-war years and they did not survive for very long. Brass bands and choral societies, of course, thrived in that part of the country where working-class traditions held sway. Where they did form, in towns such as Newport, which had its own Royal Albert Hall, it was choral repertoire such as Handel’s The Messiah, Haydn’s Creation, or Mendelssohn’s Elijah that held sway. 
Perhaps the finest orchestra in Wales in the late nineteenth century was that of the Cardiff Orchestral Society, and it provides us with a very clear and tangible idea of just what sort of musical identity, in the concert hall at least, existed in coalopolis in that period. It’s worth pointing out that the society’s conductor from 1889 to 1892 was Joseph Parry of Merthyr Tydfil, the noted Welsh composer. So: the repertoire. The obvious thing to note, as we go along here, is how similar nineteenth-century concerts are to today’s mainstays. In one concert given by the Cardiff Orchestral Society in the early 1890s, for instance, there were the overture to Weber’s Oberon and excerpts from Gounod’s Faust and Bizet’s Carmen. Their 1890 season featured equally standard fare such as Beethoven’s Egmont overture, Liszt’s Hungarian Rhapsody, Mendelssohn’s Violin Concerto, and, for the first time, the overture to Rossini’s William Tell. But Cardiff, I hear you cry as you read this, was the main city – it would present that kind of repertoire, wouldn’t it? Well, here are the favoured composers of the Neath Orchestral Society, which was founded in 1904: Mozart, Mendelssohn, Wagner, Tchaikovsky, Grieg, Borodin, Saint-Saens, Svendsen, and Strauss. The Europeanness of the concert hall in nineteenth-century Wales ensured that some of the great musicians of the day played there, including the Danish violinist Frida Scotta. Scotta played the Mendelssohn concerto at the Park Hall in Cardiff in February 1895. Here’s what Joseph Parry had to say about it: Miss Frida Scotta’s violin concerto (Mendelssohn) is worthy of criticism. In the first movement the time was satisfactory, but occasionally the wood wind wanted a little more suppressing. The strings in their piano passages were very good, and the soloist’s octave passages were very well rendered, while the intonation was praiseworthy. In the cadenza the violinist’s shakes were meritorious; her upper notes were at all times most pure,
and the arpeggios at the close and the harmonies were invariably clear. In the transitional bridge connecting the first with the violinist’s beautiful solo movement, the intonation of the wind instruments was not as pure as could be desired. The fairy-like last movement, with its jetty wood wind and pizzicati, was at times lacking in steadiness of tempi, but in each case the shortcoming soon disappeared. The celli contrapuntal passages against the solo were very effective. At the close the audience showed an enthusiastic appreciation of an artist who is certainly an acquisition to the concert-room. Not the most effusive review in the world, but one that hints at the charming character of Scotta’s playing. That Mendelssohn Concerto again, though. Gets into your head a bit, doesn’t it? Historians of other forms of popular music, notably brass bands and choirs, have remarked on their essential national character and linked them very carefully to community identities and to facets of social relations such as class and gender. Clearly orchestral music is never going to yield that kind of localised narrative; the Western canon is much too powerful. But I think it tells us something else about Victorian Wales which is often overlooked as we strive to find the elements that mark the Welsh out as distinctive and different from the other peoples of the British Isles. That is, in short, that Wales was a European nation attuned – to Concert A, if I can make that bad pun – to the cultural life of the rest of Europe. It was not cut off from ballet or opera because it happened to be on the margins of British power, what the London media (and the historians who know no better) happily call the ‘Celtic Fringe’ or the ‘Provinces’. What remains to be seen is just who was turning up to Cardiff Orchestral Society concerts, for example. Filling the Park Hall with 2,000 people was no mean feat, and clearly not all of those sitting in the stalls or in the upper circle were middle class.
But that’s a line of thought for another day.
Common people in Pakistan are often unaware of the science behind the occurrence of extreme weather events such as heavy floods or heat waves. In developed nations, people now demand that their governments take action on global warming; our people in Pakistan, by contrast, usually consider such extreme events to be their fate rather than the impacts of climate change. Interestingly, after two massive floods in less than one year in Pakistan, the thinking of people here is now changing. Their thoughts are going beyond the belief that this is merely their fate or the anger of God. "Global warming" is a term that is becoming common in our discussions. However, it is very important that a common person fully understands what global warming is and how we can find solutions to fight its impacts. It is also important to educate our people, particularly women, on the science of global warming and its solutions, as we are among the most vulnerable nations in the world, suffering from climate catastrophes more than ever before. Our earth is surrounded by layers of gases, which we call the atmosphere. The molecules of greenhouse gases such as carbon dioxide have the ability to trap heat that comes from the sun. As our sun is the main source of energy, most of its heat is absorbed by our earth. The heat radiated back by the earth is then trapped by the molecules of greenhouse gases in the atmosphere. This natural system in fact helps maintain the amount of heat on earth that is required for the survival of life. However, due to the increase in human activities such as the burning of fossil fuels, industrialization, transportation and deforestation, we are now producing more greenhouse gases and emitting them into the atmosphere. With a higher concentration of greenhouse gases in the atmosphere, its ability to trap heat has increased, particularly since the industrial revolution, and this is raising the temperature of our earth.
This human-induced change in the earth's temperature is called global warming, and it continues to change our climate abnormally, e.g. an increase in the earth's average temperature, an increase in the number of precipitation days, or a decrease in average rainfall. According to scientists, the safe limit for the concentration of greenhouse gases in the atmosphere is 350 parts per million (ppm); however, due to factors like rapid industrialization and massive deforestation, this concentration has already reached 390 ppm. Scientists also believe that it is important to bring the concentration of greenhouse gases back to 350 ppm to avert an increase in average global temperature of more than 2 degrees Celsius. However, at the rate the world is emitting greenhouse gases like CO2, scientists forecast that the earth's temperature may rise by more than 4 degrees Celsius, which would dramatically change the earth's ecosystems and increase both the frequency and intensity of extreme weather events like the Pakistan floods of 2010 and 2011 or the Russian heat wave of 2010. Due to global warming, our oceans are getting warmer, which has accelerated the formation of water vapour over them. Science shows that warmer air can hold more water vapour; thus a greater concentration of water vapour over the oceans is now resulting in heavy downpours and flooding in different parts of the world. The increase in temperature also results in heat waves and wind storms. In northern Pakistan, our precious glacier resources are rapidly melting due to global warming. Global warming has also raised the sea level and swept away a number of villages in the coastal areas of Sindh and Baluchistan. Scientists also believe that human-induced climate change is shifting our monsoon season, which gives us nearly 60% of our rain water. We are clearly observing that the rains either do not come on time or fall all at once, creating flash floods like those we faced in 2010 and 2011.
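The heat-trapping mechanism described above can be put in rough context with a standard back-of-the-envelope energy-balance estimate. This is a common textbook sketch, not a calculation from this article; the solar constant and albedo figures below are standard reference values. It shows that without its greenhouse gases, the earth would settle at roughly -18 degrees Celsius instead of the observed average of about +15 degrees Celsius:

```python
# Radiative-equilibrium temperature of an Earth with no greenhouse effect.
# Textbook reference values (assumptions, not taken from the article):
SOLAR_CONSTANT = 1361.0  # W/m^2, sunlight arriving at the top of the atmosphere
ALBEDO = 0.30            # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, averaged over the whole spherical surface (divide by 4).
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# Stefan-Boltzmann law: emitted power = SIGMA * T^4, so solve for T.
t_no_greenhouse = (absorbed / SIGMA) ** 0.25

print(round(t_no_greenhouse))        # ~255 K, i.e. about -18 degrees Celsius
print(round(288 - t_no_greenhouse))  # natural greenhouse effect: roughly 33 K
```

The extra warming from rising CO2 concentrations, the subject of this article, comes on top of that natural 33 K of greenhouse warming.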
We also observe that winter days are getting shorter and summer days longer. It is troubling to learn that, according to scientists, Pakistan will face severe water shortages and a decrease in crop production in the coming 30 years due to global warming. It means we have to get ready for the big challenge of food insecurity in the coming decades. Heat waves and flooding have also increased water-borne and other environment-related diseases. Although Pakistan's contribution to global environmental pollution is negligible, we are still among the countries most vulnerable to global warming. It is a global issue; however, alongside effective advocacy by our government in international forums, particularly the forthcoming UN conference in Durban, South Africa, we should start exploring a range of local and regional solutions to fight global warming. Here are some of the solutions we can adopt in our rural and urban communities in Pakistan. The first important step is to save every drop of water. We can use innovative water conservation methods. One of these is roof water harvesting, collecting rainwater rather than letting it go to waste. People can easily construct ponds to store rainwater. Similarly, by repairing water taps and joints, we can save a lot of water from leakage. People, particularly in cities, can save a good quantity of water through a simple behavioural change: using water pans rather than showers. Fitting the toilet flush with a simple flush-water reducer can also prevent waste. We should educate our people to plant trees on any free land or space in and around homes, schools and fields. Due to massive deforestation, we faced greater catastrophes during the 2010 flooding. We have less than 5% forest cover in Pakistan, and our rate of forest depletion is among the highest in the world.
Planting trees can help in many ways: protecting against floods and landslides, improving air quality, providing shade against the heat, and improving the underground water table. In addition to planting trees, we can avoid using chemicals and instead use organic fertilizers to improve the fertility of our land and increase food production. We can grow fruit trees and vegetables along with seasonal crops. Techniques like the construction of check dams and gabion walls, together with tree planting, can protect our agricultural land from erosion. We can learn and encourage multi-cropping techniques and promote ideas like floating gardens in flood-affected areas to cope with weather extremes and improve livelihoods. By using alternative and renewable energy resources, we will not only contribute to clean energy targets but also help overcome problems like the electricity crisis and dependency on forests for fuel and heating. Community-based micro-hydel power plants, fuel-efficient stoves, wind farms and solar energy are all good options available to us in Pakistan. We can promote eco-friendly and safe construction: avoid building in locations exposed to flooding or landslides, and build in a way that makes maximum use of daylight instead of electricity. Roofs with white-coated sheets can help reduce heat inside the home during hot summers. We can also reduce our electricity bills at home or work in a number of simple ways, particularly in cities. Use air-conditioners as little as possible. By installing the AC unit on the north side of the home, people can protect it from direct sunlight, and planting trees and shrubs around the unit helps keep it cool. Keep the thermostat at 26 degrees C. Open drapes, curtains and blinds in the evening to let heat escape from the home. Use daylight during work and wear weather-appropriate clothes rather than relying on air conditioners or heaters. Turn off electric appliances when they are not in use.
Use fans in rooms and exhaust fans in the kitchen to expel hot air. Equipment left on standby can still draw a significant share of its running power, so switch appliances off completely, turn off unnecessary lights, and use energy savers. We can also adopt simple practices like buying items in large packages rather than many small packs. Before buying or using anything, first ask whether you really need it. Avoid plastic bags and use reusable canvas bags for shopping. Use kitchen waste to make compost, and keep waste items such as plastic, paper and metal separate to make recycling easier, rather than throwing them all away together. Use recycled paper, which takes about half the water needed to make new paper. Find and contact recycling companies and send old or used equipment for recycling. Make less use of paper: work online, print on both sides, and print drafts on scrap paper. Be creative in reusing old things for household purposes, e.g. use plastic cans for food storage or roof gardening. We can also contribute to a clean environment by using public transport or even a bicycle rather than driving. If a car is necessary, keep it properly tuned so that it runs fuel-efficiently.
It goes without saying that severe depression is a serious condition that should be treated by a trained professional. While this may seem obvious to a person not in the throes of depression, for a person struggling with severe depression the decision may not be so clear. Regrettably, more than half of the people who need help for depression don't seek it. In one large study conducted in the United States, only about 4 out of 10 people sought help within the first year of their depressed state. Perhaps even more unsettling, people struggling with depression waited, on average, 8 years before finally seeking treatment. Basic depression is best described as an overly sad mood that lasts for a long period of time, together with a lack of pleasure in doing things that usually make you happy. Someone who is depressed will likely experience sleep irregularities, fatigue, weight fluctuations, problems with concentration, and feelings of nervousness and agitation. Those experiencing severe depression likely have many of the same symptoms as those with mild to moderate depression, but the symptoms tend to be much more intense and can evolve into a frightening and often dangerous problem that must be treated by a skilled professional immediately. In very severe cases, depressed people develop psychotic symptoms, such as hearing voices that tell them to kill themselves or others. This advanced form of depression impedes one's ability to hold a job, maintain a healthy relationship, or be a productive student. Depression may also alternate with periods of high euphoria and energy (mania); in this case, the disease is bipolar disorder. The most serious complication of severe depression is, without a doubt, suicide.
It is worth repeating that severe depression must be taken seriously and medical attention should be a top priority if any of the following symptoms are present: *Suicidal or homicidal thoughts *Psychotic symptoms, such as hearing voices, delusions, or hallucinations *Extreme lethargy or fatigue that affects one's ability to complete simple tasks, such as eating, getting out of bed, or showering. While antidepressants are normally the treatment of choice for most cases of depression, for those suffering from severe chronic depression that does not respond to medication, electroconvulsive therapy (ECT) is used. This procedure consists of inducing a seizure by running a small electrical current through the brain. ECT has been shown to be only a temporary fix with a high rate of recurrence. In the end there are no easy answers. For this reason many people are considering natural remedies for depression as one tool in their arsenal of depression-fighting solutions. Herbal remedies for depression are generally free from the side effects that are so prevalent with many well-known antidepressant medications. As always, check with your doctor to make sure that natural remedies are compatible with any prescription medications you are already taking.
The observatory was founded by George Ellery Hale as the Mt. Wilson Solar Observatory. Often overlooked, Hale was a major figure in 20th-century science. The photo above is a bust of Hale that was in the observing room for the 150-foot Solar Tower telescope. As a comparison, here's the bronze bust of Hale that resides in the dome of the 200" Hale Telescope at Palomar: Hale looks happier at Mt. Wilson, wouldn't you agree? There is a great webcam on the tower, which you should look at to take in the great view from the mountain. There are two other, older solar telescopes on the mountain - the Snow Telescope and the 60-foot Solar Tower telescope. For my money though, the real gems on the mountain are the 60 & 100-inch telescopes. The 60-inch telescope was finished in 1908. For a while it was the largest telescope on Earth. Many have called it the first "modern" reflecting telescope. Astronomer Harlow Shapley used this instrument to help discover our place in the Milky Way Galaxy. I've been to Mt. Wilson a few times but I still remember how special I felt just to be able to stand in its presence and touch it. The 60-inch is available for eyepiece viewing. I have not had the pleasure of looking through this instrument, but I have looked through others of the same size. On a good moonless night the views should be amazing. More than a century ago, as the 60-inch telescope was nearing completion, George Ellery Hale was already planning for an even larger instrument. The 100-inch Hooker Telescope was completed in 1917 and stood as the world's largest until Palomar's 200-inch telescope was completed in the late 1940s. It was with this instrument that Edwin Hubble tackled the distance to the Andromeda "nebula," determining that it was akin to the Milky Way, a distinct galaxy in its own right. Hubble also famously discovered the expansion of the universe. Here is the Hooker Telescope and what is reputed to be Edwin Hubble's chair: The Observatory offers guided tours, but not during winter. Check their website for details.
I took these photos on a tour that was given by Mike Simmons back in 2005. Mike is the founder and executive director of Astronomers Without Borders. I haven't recently been on one of the public tours given at the observatory so I can't say if the photos I am sharing here are representative of what you might see if you take one.
practicing sorcery in beguiling his mind, Socrates had the right idea. In challenging Meno to think in unfamiliar ways, he was pushing Meno's mind to its limits and further. Meno's mind was expanding in ways that perplexed him because he was not used to thinking in such unconventional methods. Socrates was doing Meno a favor, for, in the long run, these exercises would one day prove useful to Meno as he continues his journey and develops responses to other philosophical questions, such as virtue and what it is. This note was uploaded on 07/15/2008 for the course GP 100 taught by Professor Mekios during the Spring '08 term at Stonehill.
55 Synonyms for “Courage” Courage comes in many varieties, often identified by distinct synonyms. Some terms refer to determination more than bravery, but the two qualities are intertwined. Here’s a roster of the valiant vocabulary: 1-2. Adventuresomeness: Like many words on this list, this one is encumbered by the suffix -ness, but it and its nearly identical-looking and somewhat less clumsy synonym adventurousness convey a connotation of a flair for undertaking risky or dangerous enterprises. 3. Audacity: This term’s meaning as a synonym for courage is tainted by another sense, that of shamelessness. 4. Backbone: This word, one of several on this list that figuratively refer to body parts, implies that a courageous person is unyielding or indestructible. 5. Balls: This vulgar slang for testicles suggests that a person said, in a figurative sense, to possess them is endowed with an anatomical feature equated with virility and thus with courage. 6. Boldness: This word means “daring, fearless” but can also mean “adventurous” as well as “presumptuous.” 7. Bottle: This British English slang term derives from the word for a container for liquid; whether it alludes to the receptacle’s sturdiness or to the false courage inspired by imbibing alcohol from it is unclear. 8. Bravery: This word, like courage itself, is an all-purpose term, though it also can mean “finery” or “ostentatious display,” perhaps from the idea of a triumphant hero’s trappings. Brave, too, has an alternate meaning of “excellent,” and as a noun used to refer to an American Indian warrior. 9. Chivalry: This term, from the French word chevaler (whence chevalier as a synonym for knight; the Latin ancestor is caballarius, “horseman”), originally referred to the courage of a knight but later came to encompass other ideal but often unrealized qualities such as courtesy and devoutness. 10. 
Cojones: This frequently misspelled slang word, from the Spanish word meaning “testicles,” is often used as a (slightly) less offensive alternative to its counterpart in English slang. 11. Courageousness: This is an oddly superfluous term, considering that courage is more compact and means exactly the same thing, but courageous is a useful adjective. 12-13. Daring: This word has a connotation of reckless disregard for personal safety. Daringness is an unnecessarily extended (and therefore unnecessary) variant. 14. Dash: This term suggests ostentatious courage but can also imply the pretense of that quality, and might be confused with other senses of the word. Dashing, however, is a vivid adjective. 15. Dauntlessness: Among the words here saddled with a suffix, dauntlessness is nevertheless an expressive term. Its root, daunt, means “to tame or subdue.” 16. Determination: This word connotes resolve more than courage but is a useful associate for synonyms of the latter term. 17. Doughtiness: This word itself is somewhat clumsy, but the root word, doughty, is one of the most evocative synonyms for brave. 18. Elan: This borrowing from French, best (at least in print) with an acute accent over the first letter, comes from a word meaning “rush” and implies vigor rather than courage but has a swashbuckling flair. 19. Enterprise: This is a synonym for initiative more than for courage but has a similar sense. 20. Fearlessness: This pedestrian word pales by comparison with some of its synonyms but might be useful in a pinch. 21-22. Fortitude: The original sense of this word was “strength,” but now it connotes the determination that enables courage to prevail over fear. The variant “intestinal fortitude” implies that one will not succumb to an abdominal ailment when confronted with adversity. 23. Gallantry: This word, like some others on the list, can easily suggest a pretense of courage rather than the quality itself. 24. 
Greatheartedness: This word also means “generosity,” so although it can imply both qualities in one person, when it is employed, the context should make the intended sense clear. 25. Grit: This term, memorably employed in the book and film title True Grit, connotes coarse but uncompromising courage. 26-27. Guts: This slang term for the abdominal organs, traditionally thought of as the seat of emotions, applies to a combination of courage and indefatigability. A more verbose variant is gutsiness. 28. Hardihood: This term, combining the adjective hardy (which can mean “brave” as well as “tough” and “audacious”) and the suffix -hood (“state of being”), implies combined courage and robustness. 29. Heart: This word’s use as a synonym for courage stems from the idea that the heart is the source of courage. The root of the latter word, indeed, comes from coeur, the French term for the heart (and ultimately from the Latin word cor). 30. Heroism: The root word, hero, has evolved to have a broad range of senses, and the word for the quality is similarly generic. 31-32. Intrepidity: This word and its close variant intrepidness are based on intrepid, meaning “fearless” (the root word is also the basis of trepidation). 33. Lionheartedness: This term is based on the association of the animal with courage; England’s King Richard I, a medieval model of chivalry, earned the epithet “the Lionhearted.” 34. Mettle: This word, adapted from metal, means “stamina” but is also employed to refer to courage. 35. Moxie: This word, taken from the brand name for a carbonated beverage that, like its better-known and longer-lived competitors Pepsi and Coca-Cola, was originally touted as a source of pep, initially meant “energy” but came to be associated with expertise as well as courage. 36. 
Nerve: Because of this word’s additional sense of presumptuousness, the connotation of courage might not be clear; both meanings stem from the outdated idea that boldness is conveyed through the body’s nerves. 37. Panache: This word, derived from a Latin term for “small wing,” implies flamboyance as much as courage, perhaps from the ostentatious display of feathers on knights’ helmets. 38. Pecker: This British English slang term doesn’t translate to American English so well; the association of the word as an irregular synonym for courage as well as with the male genitalia is discussed in the entry for balls. 39. Pluck: This word, converted to noun form from the verb, implies determined courage despite overwhelming odds or in the face of significant adversity. 40. Prowess: This word refers to remarkable skill as well as outstanding courage. 41-43. Resoluteness: This term, more gracefully rendered as resolution or even resolve, implies purposefulness rather than courage per se. 44. Spirit: This word carries the connotation of assertiveness or firmness as opposed to courage; it can also mean a display of energy or animation. 45. Spunk: This word, originally referring to materials suitable as tinder, is akin to mettle and pluck in meaning. 46. Stalwartness: The root word of this term, stalwart, is an alteration of stalworth, from an Old English word meaning “serviceable,” and refers more to strength and vigor than courage but is easily associated with the latter virtue. 47-48. Stoutheartedness: This word alludes to the idea that a large, vigorous heart imbues one with courage. A more concise variant is stoutness; someone who is of reliable courage is sometimes referred to as stout. 49. Temerity: This word implies a rash, contemptuous disregard for danger. 50-51. Tenacity: This term and its longer variant tenaciousness suggest persistence. 52. Valor: This word (and the related adjective valiant) implies a romantic ideal of courage. 53.
Venturesomeness: The meaning of this word is virtually identical to its virtually identical synonym adventuresomeness (see above). 54. Verve: This term, which shares the same origin as verb, refers to a boldness of expression, whether verbal or artistic. 55. Virtue: In addition to senses of morality or another beneficial quality, this term has acquired status as a synonym for courage. 7 Responses to “55 Synonyms for “Courage”” “Balls” is not intended to be exclusive or derogatory towards women. It has more to do with the effects of castration of a man, taking away his aggressiveness. Please stop looking for a reason to be offended. As an antidote for all of those balls I suggest we include “ovaries.” It can also work as an alternative to “seminal” – use “ovarian.” Adrienne Rich’s ovarian later work influenced a generation of women writers and thinkers. “Adventuresomeness” sounds so weird…like a made-up word 🙂 How disgusting it is to imply that courage is the sole preserve of testicle-bearers! (And how demeaning when we women are *reduced* to that status!) 15. Dauntlessness – I’m reminded of the musical Once Upon a Mattress; the male lead’s name is Prince Dauntless. Contrary to what his name implies, Dauntless is rather fearful of his overbearing mother, and must learn to stand up to her. 17. Doughtiness – I recently saw a post where someone misspelled “doubt” as “dought” (the phrase: “I dought it”). They were attempting to criticize a prominent political figure. I am sure they didn’t realize that “I dought it” is not only incorrect and doesn’t make a lick of sense, it sends a completely different message than what they intended!
The 55 synonyms for courage — you missed one! The adjective WIGHT. The OED says “1.1 Strong and courageous, esp. in warfare; having or showing prowess; valiant, doughty, brave, bold, ‘stout’. a.1.a of a person, esp. a warrior.” That’s how the term “wight knight” got misnomered into its present spelling, “white knight”. What about gumption?
Refugee and migrant students entering Australian schools bring with them a range of complex experiences. These may include experiences of trauma, violence or displacement. Some of these young people are entering a formal schooling environment for the first time. Often they are in a classroom where no-one else speaks their language or shares their cultural background. Supportive and inclusive school settings are important in helping them settle in to Australia and feel at home. School is often one of the first places where refugee and migrant students and their families begin to form connections with their local communities. In South Australia, refugee and migrant students enter an Intensive English Language Program. These are typically stand-alone classes in mainstream primary schools. Students remain in the program for about 12 months before making the transition to a mainstream class, often at a different school. Refugee children are particularly likely to change school due to things such as insecure housing and changing work settings. Our research suggests that the South Australian Intensive English Language Program offers a “soft landing” for children. At the same time, the children in our longitudinal study showed anxiety about their English language competency. In particular, they expressed concern that English would be an issue for them as they entered mainstream classes. There was a sense among many children that they would be left behind and thus find it difficult to make friends in their new setting. Given this anxiety, we found that class topics that don’t require English language skills – such as art and sport - help this diverse group of children to make friends and adjust to mainstream schooling. Both of these factors are important for increasing the well-being of refugee and migrant students. 
Students want to share their experiences Spending time on areas that are not directly related to English language acquisition also allows refugee and migrant children to share their experiences before they came to Australia. We found that creating opportunities for students to share information about themselves not only assisted them in making friends, but also helped them feel a sense of belonging in the school environment. Many children in our study expressed a strong desire to discuss aspects of their background. This included celebrating cultural and religious festivals, sharing food and language, and talking about the countries in which they had lived. The ability to share their background provided students with a sense of self-esteem and well-being that went beyond that provided by their ability to learn English or immediately “fit in” to their new school environment after transition. Developing English language skills obviously remains a priority for the education of migrant and refugee children in Australia. However, our research suggests that ensuring the previous experiences of these students are truly heard (rather than just treated as hurdles to English language acquisition) is critically important to their continuing development and school engagement. We would also note that in a context of standardised education – including NAPLAN testing – it is important to ensure that refugee and migrant students have the opportunity to participate in subjects that allow them to showcase their strengths. Feeling a sense of belonging in the early years of school is vitally important to ensure that students stay engaged with their education.
IB Chemistry/Atomic Theory

Contents
1 Atomic Theory Revision Notes
2 Wave nature of electrons
3 HL Material
4 Material for new syllabus
- 4.1 ATOMIC STRUCTURE
- 4.2 SL TOPIC 2.1 THE ATOM (1H).
- 4.2.1 The position of protons, neutrons and electrons in the atom.
- 4.2.2 The relative mass and relative charge of protons, electrons and neutrons.
- 4.2.3 The terms mass number (A), atomic number (Z) and isotopes of an element.
- 4.2.4 The symbols for isotopes
- 4.2.5 The properties of the isotopes of an element.
- 4.2.6 The uses of radioisotopes
- 4.2.7 The number of protons, electrons and neutrons in atoms and ions
- 4.3 2.2 THE MASS SPECTROMETER
- 4.4 2.3 ELECTRON ARRANGEMENT (2H)
- 4.5 TOPIC 12: ATOMIC STRUCTURE (3 HOURS)
- 4.6 12.1 ELECTRON CONFIGURATION
- 4.6.1 How ionization energy data is related to the electron configuration.
- 4.6.2 Evidence from first ionization energies and sub-levels.
- 4.6.3 The relative energies of s, p, d and f orbitals.
- 4.6.4 The maximum number of orbitals in a given energy level.
- 4.6.5 The shapes of s, px, py and pz orbitals.
- 4.6.6 The Aufbau principle, Hund’s rule and the Pauli exclusion principle

Atomic Theory Revision Notes

2.1 The Nuclear Atom
2.1.1 : Protons and Neutrons form the nucleus of the atom; electrons orbit the nucleus in electron shells.
2.1.2 : Protons -- Mass = 1 amu, charge = +1 .. Neutrons -- Mass = 1 amu, charge = 0 .. Electrons -- Mass = 1/1840 amu (usually insignificant), charge = -1

[Image: A simple model of a lithium atom. Not to scale!]

Atoms are made up of a nucleus and electrons that orbit the nucleus. The nucleus is made up of positively charged protons, and neutrons, which have no charge but about the same mass as a proton. Electrons are negatively charged and fly around the nucleus of an atom very quickly. So far, they do not appear to be made up of anything smaller: they are fundamental particles. They are extremely tiny, so small in fact that no one has managed to detect any size whatsoever. 
They are also very light, much much lighter than either a proton or a neutron. Hence, the mass of the electrons is not included in the mass number. An atom in its natural, uncharged state has the same number of electrons as protons. If it gains or loses electrons, the atom acquires a charge and is then referred to as an ion. The number of protons in an atom defines its chemical identity (e.g. hydrogen, gold, argon, etc.). Protons are not gained or lost through chemical reactions, but only through high-energy nuclear processes.
2.1.3 : Mass number (A) -- number of protons + neutrons. Atomic number (Z) -- number of protons. Isotope -- atoms with the same atomic number but different mass number (i.e. different numbers of neutrons)
2.1.4 : Isotopes are written with the mass number A as a superscript and the atomic number Z as a subscript before the atomic symbol X.
2.1.5 : Isotopes may differ in physical properties (mass/density) and radioactivity but not generally in chemical properties.
2.1.6 : Atomic masses are the average of the atomic mass of each isotope (isotopic mass) times the isotope's relative abundance; this results in non-integer atomic masses.
2.1.7 : Atomic number = number of protons (or number of electrons + ionic charge); mass number - atomic number = number of neutrons.
2.2 Electron Arrangement
2.2.1 : A continuous spectrum goes continuously through red, orange, yellow, green, blue, indigo, violet. A line spectrum contains only some individual lines from this spectrum.
2.2.2 : Electrons are excited (usually by passing an electric current through the gas). This causes electrons to 'jump' into higher electron shells ( X -> X* ). This state is only temporary, however, and the electron falls back to its ground state. This change (when the electron falls back from the higher shell to a lower one) decreases the energy of the electron, and this energy is emitted in the form of a photon. If this photon falls into the visible spectrum of light, then it produces a visible spectrum. 
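The particle bookkeeping of points 2.1.3 and 2.1.7 can be sketched in a few lines of Python. This is an illustrative helper of our own (the function name is not from the syllabus):

```python
# Count subatomic particles from mass number (A), atomic number (Z) and charge.

def particle_counts(A, Z, charge=0):
    """Return (protons, neutrons, electrons) for an atom or monatomic ion."""
    protons = Z
    neutrons = A - Z          # mass number minus atomic number
    electrons = Z - charge    # a positive ion has lost electrons
    return protons, neutrons, electrons

# Carbon-12 atom: 6 protons, 6 neutrons, 6 electrons
print(particle_counts(12, 6))        # (6, 6, 6)
# Mg2+ formed from magnesium-24: 12 protons, 12 neutrons, 10 electrons
print(particle_counts(24, 12, +2))   # (12, 12, 10)
```

The same subtraction works for negative ions: a chloride ion from chlorine-35 gives `particle_counts(35, 17, -1)`, i.e. 17 protons, 18 neutrons and 18 electrons.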
As electrons move further away from the nucleus, the electron shells become closer together in terms of space and energy, and so lines converge towards the end of the spectrum.

Wave nature of electrons

Electrons behave as particles but also as waves. One of the results of this observation is that electrons cannot orbit with any energy they like. Think of a standing wave on a guitar string. Only a whole number of half wavelengths will fit in the string to form a standing wave, and likewise for an atomic shell. Since the energy is dependent on the wavelength, this means that the energy of an electron in an atom (a bound electron) is quantized. This means that the energy is limited to certain distinct values, one for each shell, with no middle values allowed.
2.2.3 : The main electron levels hold 2, 8, 18 etc. electrons -- 2n² for n = 1, 2, 3...
2.2.4 : Electrons are added from the left... after each shell is filled, move to the next. Only up to Z = 20 is required; note that up to Z = 20 the arrangement fills 2, 8, 8, 2 (the third shell takes only 8 electrons before the fourth shell begins).
Topic 12 is the additional HL material for Topic 2. It's not just the energy that is quantized; other properties that an electron can possess are also split into distinct units with no in-betweens. The angular momentum is quantised, the spin is quantised, and the component of the angular momentum in any direction that you care to choose is quantised. There are in fact a whole host of rules determining the values that each of these properties can take. Each different shell is subdivided into one or more orbitals, each of which has a different angular momentum. Each orbital has a characteristic shape and is named by a letter: s, p, d, and f. In a one-electron atom (e.g. H, He+, Li++ etc.) the energies of the orbitals within a particular shell are all identical. However, when there is more than one electron, the electrons interact with one another and split the orbitals into slightly different energies. 
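The simple shell-filling rule used at SL (valid only up to Z = 20, where the arrangement goes 2, 8, 8, 2) can be sketched as follows; the function name is our own:

```python
# Electron arrangement for Z <= 20 using the simple shell model.
# The model breaks down beyond calcium, where 3d filling begins.

def electron_arrangement(Z):
    capacities = [2, 8, 8, 2]   # shells as filled up to Z = 20
    arrangement = []
    for cap in capacities:
        if Z <= 0:
            break
        filled = min(Z, cap)
        arrangement.append(filled)
        Z -= filled
    return arrangement

print(electron_arrangement(17))  # chlorine: [2, 8, 7]
print(electron_arrangement(20))  # calcium: [2, 8, 8, 2]
```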
Within any particular shell, the energy of the orbitals depends on the angular momentum, with the s orbital having the lowest energy, then p, then d, etc.

The s Orbital
The simplest orbital in the atom is the 1s orbital. The 1s orbital is simply a sphere of electron density. There is only one s orbital per shell. The s orbital can hold two electrons, as long as they have different spin quantum numbers.

The "P" Orbitals
[Image: Stylised image of all the 2p atomic orbitals.]
Starting from the 2nd shell, there is a set of p orbitals. The angular momentum quantum number of the electrons confined to p orbitals is 1, so each orbital has one angular node. There are 3 choices for the magnetic quantum number, which indicates 3 differently orientated p orbitals. Finally, each orbital can accommodate two electrons (with opposite spins), giving the p orbitals a total capacity of 6 electrons. The p orbitals all have two lobes of electron density pointing along each of the axes. Each one is symmetrical along its axis. The notation for the p orbitals indicates which axis each points along: px points along the x axis, py along the y axis and pz along the z axis. The p orbitals are degenerate: they all have the same energy. P orbitals are very often involved in bonding.

The "D" Orbitals
The first set of d orbitals is the 3d set. There are 5 choices for the magnetic quantum number, which gives rise to 5 different d orbitals. Each orbital can hold two electrons (with opposite spins), giving the d orbitals a total capacity of 10 electrons. Note that you are only required to know the shapes of s and p orbitals for the IB. In most cases, the d orbitals are degenerate, but sometimes they can split, with the eg and t2g subsets having different energies. Crystal Field Theory predicts and accounts for this. D orbitals are sometimes involved in bonding, especially in inorganic chemistry.

Material for new syllabus

SL TOPIC 2.1 THE ATOM (1H). 
SEE NEUSS, P6-7
TOK: What is the significance of the model of the atom in the different areas of knowledge? Are the models and theories that scientists create accurate descriptions of the natural world, or are they primarily useful interpretations for prediction, explanation and control of the natural world?

The position of protons, neutrons and electrons in the atom.
Here is a typical atom, helium:
TOK: None of these particles can be (or will be) directly observed. Which ways of knowing do we use to interpret indirect evidence gained through the use of technology? Do we believe or know of their existence?

The relative mass and relative charge of protons, electrons and neutrons.
The accepted values are:
The mass of the atom is due to its nucleons (that is, its protons and neutrons). An atom is electrically neutral because it has equal numbers of protons and electrons. Chemistry (essentially every single chemical reaction) is due solely to the behaviour of electrons.

The terms mass number (A), atomic number (Z) and isotopes of an element.
The atomic number of an atom is the number of protons. The mass number of an atom is the number of nucleons (the number of protons + the number of neutrons). The atomic number defines which element we are talking about. The element with 16 protons would be ‘sulfur’.

The symbols for isotopes
The following notation should be used: the mass number A as a superscript and the atomic number Z as a subscript before the symbol X, for example ¹²₆C for carbon-12. Give the symbols for the following isotopes:

The properties of the isotopes of an element.
Isotopes have the same chemical properties but different physical properties. ‘Heavy water’ (²₁H₂O, or D₂O) has the following properties: Boiling point: 101.42 °C at standard pressure. Freezing point: 3.81 °C at standard pressure. Density: 1107 g dm-3 at STP. Hydrogen-3 (‘Tritium’, T) is radioactive with a half-life of 12.32 years. Carbon-14 is radioactive with a half-life of 5730 years. 
CO2 normally has a density of 1.83 g dm-3 but if made with carbon-14 it would have a density of 1.92 g dm-3 The density of chlorine-35 gas is 2.92 g dm-3 under standard conditions, but chlorine-37 gas is 3.08 g dm-3. The uses of radioisotopes 14C in radiocarbon dating Living things constantly accumulate carbon-14 but the isotope decays with a half-life of 5730 y. After death, accumulation stops but the decay continues, so the ratio of carbon-14 to carbon-12 can be used to calculate how long ago death occurred. This can be used to date organic material from archaeological sites. 60Co in radiotherapy Cobalt-60 emits gamma radiation which can be directed onto tumours in an attempt to kill their cancerous cells. Whole-body irradiation can be used to destroy bone marrow before a transplant is attempted. 131I and 125I as medical tracers The thyroid is the only organ of the body which accumulates iodine, so isotopes of iodine can be used to study thyroid disorders. Iodine-131 or iodine-125 are given to patients in very low doses, and the pattern of radiation can reveal tumours or other abnormal growths. Larger doses of iodine radioisotopes can be used as therapy for thyroid cancer. The number of protons, electrons and neutrons in atoms and ions Complete the following table: 2.2 THE MASS SPECTROMETER The operation of a mass spectrometer. Schematic diagram of a mass spectrometer. How the mass spectrometer may be used to determine relative atomic mass By varying the strength of the magnetic field, ions of different masses can be brought to focus on the detector. In this way the relative abundances of ions of different masses produced from the sample can be determined. This is known as a mass spectrum. Usually the electron bombardment is adjusted to produce ions with only a single charge. Any doubly charged ions will be deflected more than the singly charged ions and will in fact behave in the same way as a singly charged ion of half the mass. 
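Determining a relative atomic mass from the relative abundances read off a mass spectrum is a weighted average. A minimal sketch (the function name is ours; intensities can be on any scale, since we divide by their total):

```python
# Weighted-average relative atomic mass from a mass spectrum.
# `spectrum` maps isotope mass -> relative intensity (% abundance or
# % of the tallest peak both work, because we divide by the total).

def relative_atomic_mass(spectrum):
    total = sum(spectrum.values())
    return sum(mass * intensity for mass, intensity in spectrum.items()) / total

# Chlorine: 76 % chlorine-35 and 24 % chlorine-37
print(round(relative_atomic_mass({35: 76, 37: 24}), 2))      # 35.48
# Copper: 69.1 % copper-63 and 30.9 % copper-65
print(round(relative_atomic_mass({63: 69.1, 65: 30.9}), 2))  # 63.62
```

Because the total is divided out, the same function handles abundances that sum to 100 % and peak heights quoted relative to the tallest signal (as in a mercury spectrum, where the "percentages" can total more than 100).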
That is why the x-axis is labeled m/z, where m is the relative mass of the species and z its relative charge. For example, sulfur-32 (2+) will be observed at m/z = 16.

Calculation of non-integer relative atomic masses and abundance of isotopes
The relative atomic masses of many elements are not whole numbers. This is because they are mixtures of isotopes. Each isotope has a molar mass which is (almost) an integer, e.g. the molar mass of chlorine-35 is 35.0 g mol-1 and chlorine-37 is 37.0 g mol-1. Chlorine is a mixture of 24 % chlorine-37 and 76 % chlorine-35. The molar mass is therefore: 37 x 0.24 + 35 x 0.76 = 35.48 g mol-1.
Secret information: Carbon-12 is the only isotope with an exact integer for its molar mass. Other isotopes have molar masses which are almost, but not quite, whole numbers. The IB don’t require you to know this – but it explains why the values you calculate don’t always match the values in your data booklet.
This is the mass spectrum of mercury. There are two types of information we can find from this spectrum:
Mercury has six isotopes
Mercury-202 is the most common isotope
The molar mass of mercury is actually the average of the molar masses of its isotopes. From the table we can read these values. Note that ‘%’ does not mean ‘percent of the total mercury ions’; it means ‘percent of the most intense signal’. The total ‘percentage’ is 335 %. To find the average, we treat ‘%’ as moles: 33.8 moles of mercury-198 have a mass of 6692.4 g.
|Molar mass of isotope|Amount (mol)|Mass (g)|
|total|335.0 mol|67212.3 g|
So we have a molar mass for the average mercury atom: 67212.3 g ÷ 335.0 mol = 200.6 g mol-1.
Classwork
1. Copper has two stable isotopes:
|Isotope|Abundance (%)|
|63Cu|69.1|
|65Cu|30.9|
Calculate the molar mass of copper.
2. Bromine has two isotopes, 79Br and 81Br. Look up the molar mass of bromine. What is the proportion of the two isotopes?
3. Boron has two isotopes, 10B and 11B. What is the proportion?
4. 
Lead has four stable isotopes:
|Isotope|Abundance (%)|
|204Pb|1.5|
|206Pb|23.6|
|207Pb|22.6|
|208Pb|52.3|
Calculate the molar mass of lead.

Homework – due next week.
1 An experiment was performed to determine the density of gold. The following measurements were recorded. Mass of sample of gold = 30.923 g (to 5 sig fig), Volume of sample of gold = 1.6 cm3 (to 2 sig fig). Which of the following is the most accurate value for the density of gold (in g cm-3) which can be justified by these measurements? A. 19.327 (to 5 sig fig) B. 19.33 (to 4 sig fig) C. 19.3 (to 3 sig fig) D. 19 (to 2 sig fig)
2 The nucleus of a radon atom, 22286Rn, contains A. 222 protons and 86 neutrons. B. 86 protons and 136 neutrons. C. 86 protons and 222 neutrons. D. 86 protons, 136 neutrons and 86 electrons.
3 Which of the following statements is/are true according to our current picture of the atom? I More than 90 % of the mass of a given atom is found in its nucleus. II Different atoms of an element may have different masses. III The chemical properties of an element are due mainly to its electrons. A. I only B. I and II only C. II and III only D. I, II and III
4 In which pair do the species contain the same number of neutrons? A. 10846Pd and 11048Cd B. 11850Sn and 12050Sn C. 19678Pt and 19878Pt+2 D. 22688Ra+2 and 22286Rn
5 Which pairing of electrons and protons could represent a Sr+2 ion? Protons Electrons A. 38 36 B. 38 38 C. 38 40 D. 40 38
6 All isotopes of tin have the same I. number of protons; II. number of neutrons; III. mass number. A. I only B. II only C. III only D. I and III only

2.3 ELECTRON ARRANGEMENT (2H)

The electromagnetic spectrum.
The electromagnetic spectrum unifies a vast range of ‘waves’, ‘rays’ and ‘radiation’. All electromagnetic radiation travels at about 3.00 x 10^8 m s-1 in a vacuum. Parts of the spectrum can be specified by their wavelength, frequency, or energy. The electromagnetic spectrum. The red line indicates the room temperature thermal energy. 
(Opensource Handbook of Nanoscience and Nanotechnology). The energies are quoted in eV: 1 eV is 96.5 kJ mol-1. The frequency is directly proportional to the energy, and inversely proportional to the wavelength. When transferring energy, electromagnetic radiation behaves as ‘packets’ of energy known as ‘photons’. Visible light has wavelengths of 400 (blue) to 700 (red) nm, which corresponds to energies of 299 (blue) to 171 (red) kJ mol-1. This is too small to interact with electrons in most bonds, but too large to set bonds resonating. Infrared light has wavelengths of 0.7 to 1000 um, and carries energies (less than 171 kJ mol-1) which can set bonds vibrating. In this way, infrared efficiently carries thermal energy. Ultraviolet light has wavelengths of 1-400 nm, which gives the waves enough energy (300+ kJ mol-1) to disrupt most chemical bonds. A continuous spectrum and a line spectrum. A continuous emission spectrum shows emission over a wide range of wavelengths of electromagnetic radiation: A line emission spectrum only shows emissions at certain wavelengths, with no emission at intermediate wavelengths. The diagram below shows the line emission spectrum of hydrogen, and the continuous emission spectrum of a black body at 10000 K. When hydrogen gas is stimulated, it emits a characteristic set of spectral lines. The gas is usually stimulated by passing a current through a sample of the gas at low pressure, but the same effect occurs if hydrogen gas is heated strongly. The spectral lines have the following characteristics: There are several series of lines, which become more closely packed at higher frequencies (lower wavelengths) until finally the series ends. The highest frequency series, discovered by Lyman, is in the ultraviolet. Lower frequency series are in the visible and infrared: |Lyman||91 nm (UV)| |Balmer||365 nm (Visible)| |Paschen||821 nm (IR)| |Brackett||1.46 μm (IR)| |Pfund||2.28 μm (IR)| The lines are emitted by electrons in the hydrogen atoms. 
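The series wavelengths quoted above follow from the Rydberg formula, and the photon-energy figures for visible light follow from E = hcNA/λ. A sketch with rounded constants (function names are our own):

```python
# Hydrogen emission lines from the Rydberg formula, plus the molar energy
# of photons of a given wavelength in kJ mol-1.
R = 1.097e7        # Rydberg constant / m-1
H = 6.626e-34      # Planck constant / J s
C = 2.998e8        # speed of light / m s-1
NA = 6.022e23      # Avogadro constant / mol-1

def emission_wavelength_nm(n_upper, n_lower):
    """Wavelength emitted when the electron relaxes from n_upper to n_lower."""
    inv_wavelength = R * (1 / n_lower**2 - 1 / n_upper**2)  # in m-1
    return 1e9 / inv_wavelength

def molar_photon_energy_kj(wavelength_nm):
    """Energy of one mole of photons of this wavelength, in kJ mol-1."""
    return H * C * NA / (wavelength_nm * 1e-9) / 1000

print(round(emission_wavelength_nm(3, 2)))      # 656 nm: the red Balmer line
print(round(emission_wavelength_nm(2, 1)))      # 122 nm: ultraviolet (Lyman)
print(round(emission_wavelength_nm(10**6, 1)))  # ~91 nm: the Lyman series limit
print(round(molar_photon_energy_kj(400)))       # ~299 kJ/mol (blue light)
```

Taking n_upper very large reproduces the convergence limit of each series, e.g. the 91 nm Lyman limit in the table above; the same limit corresponds to the ionisation energy of hydrogen.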
The electrons are ‘excited’ by the energy input to the hydrogen sample (usually electrical current). ‘Excitation’ means that the electron leaves its usual low energy orbit (its ‘ground state’) and enters a higher-energy orbit. The orbits of electrons are often called ‘shells’. Excited electrons eventually ‘relax’ to lower-energy orbits. They emit the excess energy in the form of photons. From the emission spectrum patterns we can deduce two things: a) Because we observe a line spectrum, we know that there are only a limited number of higher-energy orbits for excited electrons. b) The energy of each successive orbit/shell converges to a maximum value, because each series of lines converges at higher energy. The Lyman series is caused by electrons relaxing directly to the ground state. The first ionisation energy is the energy required to remove an electron from each atom in a mole of gaseous atoms. e.g. Ca (g) → Ca+(g) + e- The convergence limit of the Lyman series is related to the ionisation energy of the hydrogen atom: The maximum energy photon emitted in the Lyman series is the highest energy an electron can have while still remaining part of the hydrogen atom. The lower-energy series are caused by electrons relaxing, but not to the ground state. The Balmer series, for example, is due to photons emitted as electrons relax to the orbit with the second-lowest energy (to ‘the second shell’). 1. The spectral line that corresponds to the electronic transition n = 3 → n = 2 in the hydrogen atom is red in colour. What type of radiation is released during the transition n = 2 → n = 1 ? B. Red light 2. The electron transition between which two levels releases the most energy? A. First to third B. Fourth to ninth C. Sixth to third D. Second to first 3. (a) The diagram below (not to scale) represents some of the electron energy levels in the hydrogen atom. (i) Draw an arrow on this diagram to represent the electron transition for the ionisation of hydrogen. 
Label this arrow A (ii) Draw an arrow on this diagram to represent the lowest energy transition in the visible emission spectrum. Label this arrow B The electron arrangement up to Z = 20. Give the ground state electron arrangements for: Sulphur atom: Potassium atom: Chloride ion: Magnesium ion: TOPIC 12: ATOMIC STRUCTURE (3 HOURS) 12.1 ELECTRON CONFIGURATION Studying the ionisation energies of an element such as calcium allows us to count the number of electrons which can occupy each shell. |Ionisation||Ionisation energy (kJ mol-1)||log10(IE)| |1st||6.00 x 102||2.78| |2nd||1.15 x 103||3.06| |3rd||4.91 x 103||3.69| |4th||6.47 x 103||3.81| |5th||8.14 x 103||3.91| |6th||1.05 x 104||4.02| |7th||1.23 x 104||4.09| |8th||1.42 x 104||4.15| |9th||1.82 x 104||4.26| |10th||2.04 x 104||4.31| |11th||5.70 x 104||4.76| |12th||6.33 x 104||4.80| |13th||7.01 x 104||4.85| |14th||7.88 x 104||4.90| |15th||8.64 x 104||4.94| |16th||9.40 x 104||4.97| |17th||1.05 x 105||5.02| |18th||1.12 x 105||5.05| |19th||4.95 x 105||5.69| |20th||5.28 x 105||5.72| Plot a graph of the log10(IE) against the ionisation. Why does the ionisation energy always increase? What causes the sudden jumps between the 2nd and 3rd, the 10th and 11th, and the 18th and 19th ionisations? What is the electronic configuration of calcium? Draw your prediction for the equivalent graph for sodium: Which element would produce a graph like this? Explain the shape of the graph. Evidence from first ionization energies and sub-levels. Use your data booklet to plot a graph of the first ionisation energies of the elements Li to Ne. Why do the ionisation energies generally increase from Li to Ne? The charge on the nucleus increases from Li to Ne. If we subtract the charge of the inner shell of electrons we can calculate the charge exerted on the outer electron shell: The effective nuclear charge. 
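The shell-counting argument behind the calcium ionisation-energy table can be sketched numerically: look for large jumps in log10(IE) between successive ionisations. The 0.4 threshold and the function name are our choices:

```python
import math

# Successive ionisation energies of calcium (kJ mol-1), from the table above.
IE = [6.00e2, 1.15e3, 4.91e3, 6.47e3, 8.14e3, 1.05e4, 1.23e4, 1.42e4,
      1.82e4, 2.04e4, 5.70e4, 6.33e4, 7.01e4, 7.88e4, 8.64e4, 9.40e4,
      1.05e5, 1.12e5, 4.95e5, 5.28e5]

def shell_sizes(energies, jump=0.4):
    """Split electrons into shells wherever log10(IE) jumps by more than `jump`."""
    sizes, count = [], 1
    for prev, curr in zip(energies, energies[1:]):
        if math.log10(curr) - math.log10(prev) > jump:
            sizes.append(count)   # a jump marks the start of a new, inner shell
            count = 0
        count += 1
    sizes.append(count)
    return sizes

print(shell_sizes(IE))  # [2, 8, 8, 2]: outermost shell first
```

The jumps after the 2nd, 10th and 18th ionisations give shells of 2, 8, 8 and 2 electrons, i.e. the electron configuration 2,8,8,2 for calcium.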
Because each element in the period has the same number of inner-shell electrons, the effective nuclear charge increases from 1 (Li) to 8 (Ne). The increased charge holding the outer electrons in place increases the energy required to remove one of these electrons. (It also reduces the size of the atom: Li is larger than Ne.)
Why do the ionisation energies of boron and oxygen break the general trend?
The s2 arrangement is stable (like 'noble gas configurations' are stable). Boron has a [He] 2s2 2px1 arrangement. Losing the px1 electron returns boron to this stable state, so losing this electron is surprisingly easy. Similarly, the s2 px1 py1 pz1 arrangement is stable. Oxygen has a [He] 2s2 2px2 2py1 2pz1 arrangement. Losing a px electron returns oxygen to this stable state, so losing this electron is surprisingly easy.

The relative energies of s, p, d and f orbitals.
The maximum number of orbitals in a given energy level.
Each energy level ('shell') is made up of sub-shells, and each sub-shell of one or more orbitals. Each orbital can hold two electrons. The number of sub-shell types is equal to the shell number, e.g. shell 3 has three types of sub-shell: s, p and d.
Shell 1: 1s
Shell 2: 2s 2p
Shell 3: 3s 3p 3d
Shell 4: 4s 4p 4d 4f
Shell 5 has, in theory, five types of sub-shell. No known element uses its g orbitals, however. The sub-shells in each shell have increasing energy: s is least energetic, then p, d, f, etc. There is only one s orbital per shell. There are three p orbitals, five d orbitals, etc.

The shapes of s, px, py and pz orbitals.
s orbitals are simple spheres: The three p orbitals are aligned along the x, y and z axes:
[Image: The orbitals of shells 1 and 2 shown as (top) a cloud of possible electron positions and (bottom) surfaces containing most of the electron character.] 
The Aufbau principle, Hund’s rule and the Pauli exclusion principle
The aufbau principle: To find the electron configuration of an element, we build up the electrons one by one, putting each electron into the orbital with the lowest available energy. An easy way to remember which is the lowest available orbital is to use the following diagram (fill along the diagonal arrows, starting from the top):
1s
2s 2p
3s 3p 3d
4s 4p 4d 4f
5s 5p 5d 5f …
6s 6p 6d …
7s 7p …
Hund’s rule: If there is more than one orbital to choose from, e.g. the 2p orbitals, then the orbitals are filled with one electron each, and then with pairs of electrons. The electron configuration of nitrogen is:
1s 2s 2px 2py 2pz
↑↓ ↑↓ ↑ ↑ ↑
and not:
1s 2s 2px 2py 2pz
↑↓ ↑↓ ↑↓ ↑
The simplest way to write the full electronic configuration is to note the last noble gas and then to add any extra electrons like so:
Vanadium: 1s2 2s2 2p6 3s2 3p6 4s2 3d3
Vanadium: [Ar] 4s2 3d3
It does not matter if you write the orbitals in the order they are filled (as in the example above) or in order of their shells:
Vanadium: [Ar] 3d3 4s2
Elements 24 and 29 are special cases. The 3d5 and 3d10 configurations are so stable that an electron is taken from the 4s orbital to create the 3d5 and 3d10 configurations. e.g.
Chromium is not: [Ar] 4s2 3d4
Chromium is: [Ar] 4s1 3d5
Complete the following table:
1s: H … He
2s 2p: Li [He] … Ne [He]
3s 3p: Na [Ne] … Ar [Ne]
3d 4s 4p: K [Ar] … Kr [Ar]
4d 5s 5p: Rb [Kr] …
The four blocks of the periodic table are named after the highest-energy occupied orbital:
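The diagonal diagram is equivalent to the n + l rule: fill sub-shells in order of increasing n + l, breaking ties by lower n. A minimal sketch (function name is ours; it does not reproduce the chromium/copper exceptions described above):

```python
# Build an electron configuration by the aufbau principle using the n + l rule.

def electron_configuration(Z, max_n=7):
    letters = "spdf"
    # All sub-shells (n, l) up to max_n, ignoring g and beyond.
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # n + l, then n
    config = []
    for n, l in subshells:
        if Z <= 0:
            break
        electrons = min(Z, 4 * l + 2)   # a sub-shell holds 2(2l + 1) electrons
        config.append(f"{n}{letters[l]}{electrons}")
        Z -= electrons
    return " ".join(config)

print(electron_configuration(23))  # vanadium: 1s2 2s2 2p6 3s2 3p6 4s2 3d3
print(electron_configuration(11))  # sodium: 1s2 2s2 2p6 3s1
```

The sort key puts 4s (n + l = 4) before 3d (n + l = 5), matching the order in which the orbitals are filled; chromium and copper would need their single-electron 4s corrections applied by hand.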
Definition - What does Application Sharing mean? Application sharing is the process of permitting a moderator to share applications or a desktop with other remote members and grant those members control of shared applications. Application sharing relies on screen-sharing technology, which uses the Internet to allow users to remotely view and control software applications on a central host computer. The best part of using application sharing technology is that remote users can easily run software not installed on their systems, and even software that's not compatible with their operating system or requires more processing power than their computer has. This is because remote users are literally viewing and controlling applications installed on the host computer. Application sharing is the foundation of online training and demonstrations and is frequently used by businesses to reduce the amount of travel for employees.
THERE ARE MANY MORE CANADA PAGES on this site. Click on the various links in the left-frame.
PARKS AND WILDLIFE
Canada commemorates persons and events for their national historic significance as well as places. So far, over 1,500 places, persons and events have been commemorated by the Government of Canada. There are 40 National Parks and over 150 National Historic Sites in Canada. Additional National Parks are planned for the coming years. At present, there are 2 National Marine Conservation Areas of Canada, with more planned for future designation. In the summer of 1999 Canada added another 80,000 sq. km to the National Park system when it established 3 new parks in Nunavut Territory. These parks are described as "northern gems" and are named Sirmilik, Auyuittuq and Quttinirpaaq. The Inuktitut names chosen for the parks illustrate the frozen terrain's major features: "place of glaciers," "land that never melts" and "top of the world." There are 21 wildlife refuges and wild bird sanctuaries in all parts of Canada. Canada is home to an estimated 140,000+ native species. Provincial Parks number 1,400+, with more added each year. The smallest park is St. Lawrence Islands National Park, ON at 40 hectares. Canada's oldest park is historic Banff National Park, AB in the Canadian Rockies. Clayoquot Sound, in the Pacific Rim National Park, BC has been declared a United Nations Biosphere Reserve by UNESCO.
MOUNTAINS, LAKES AND RIVERS
Mountain Ranges include: Torngats, Appalachians, Laurentians, Rockies, Coastal, Mackenzie, St. Elias and the Pelly Mountains. At 5,959 metres (19,550 ft), Mount Logan is Canada's tallest peak. Mt. Logan is the highest point in Canada and second in North America only to Mount McKinley. Located in the St. Elias Mountains of southwestern Yukon Territory, the peak towers about 4,300 m (14,000 ft) above the Seward Glacier at the Alaska border to the south and is a focal point of Kluane National Park, a 22,000-sq-km (8,500-sq-mi) rugged wilderness. 
The actual ridge crest of the mountain is about 16 km (10 mi) across, while the entire mass is more than 32 km (20 mi) long. The longest river is the Mackenzie River, flowing 4,241 km through the Northwest Territories. There are 20 heritage rivers in Canada. The deepest lake in Canada is Great Slave Lake, NT, which has a depth of 614 metres. Great Bear Lake is the largest fresh water lake entirely within Canada, with an area of 31,326 sq. km. Four of the five Great Lakes straddle the U.S.-Canada border; the fifth, Lake Michigan, is entirely within the United States. The largest of the Great Lakes is Superior, ON with an in-Canada area of 127,700 sq. km (49,305 sq mi). The highest lake in Canada is Chilko Lake, BC at 1,171 metres. Della Falls is the highest waterfall in Canada and tenth highest in the world. It is situated close to the southern boundary of Strathcona Provincial Park on Vancouver Island. Water flows from Della Lake into Della Falls, which has a 440 m (1,440 ft) drop down Drinkwater Creek to Great Central Lake. The greatest waterfall by volume is Niagara Falls, ON - the Canadian Horseshoe Falls dumps 5,365 cubic metres of water per second into the Niagara River. The largest island is Baffin Island, NT. It has an area of 507,452 sq. km. Canada encompasses six time zones. In Newfoundland the time zone is 3 hours + 30 minutes behind Greenwich Mean Time (GMT). The other time zones are whole hours behind GMT, from East to West. The Pacific region is 8 hours behind GMT. Note: In Canada, Time Zones and Daylight Saving Time are usually regulated by provincial and territorial governments. Map is from National Research Council Canada.
THE CANADIAN ARMED FORCES
There are about 67,000 active members and 26,000 Canadian Forces reservists in Canada. The Canadian Armed Forces comprise Land Forces Command (ARMY), Maritime Command East and West (NAVY) and Air Command (AIR FORCE), in addition to Communications Command and Training Command. 
They were unified in 1968 (although each branch again has its own distinctive uniform and badges) and service is voluntary. Canadian Forces members have served with distinction in World Wars I and II, the Korean War, Vietnam, the Gulf War, Somalia, Yugoslavia and several others. Members of the Canadian army's anti-terrorist unit (JTF-2) and several thousand fighting troops and support units are assisting the world effort in the war on terrorism in Afghanistan. In June 2011, on his 90th birthday, Canada named HRH Prince Philip an Admiral in Maritime Command and a General in Land Forces Command and Air Command.
UNKNOWN SOLDIER RETURNS TO CANADA
Canada now has its own "Tomb of The Unknown Soldier," after the remains of a soldier killed in the battle of Vimy Ridge in France during the First World War were returned to Canada. The soldier was one of 27,000 Canadian soldiers killed in conflicts abroad whose remains could not be identified or were lost at sea. His remains were interred on May 28, 2000 at the National War Memorial in Ottawa, after a three-day vigil on Parliament Hill. The Royal Canadian Legion began the endeavour to bring the soldier home as a millennium project. They wanted to create a Canadian "Tomb of The Unknown Soldier" to pay tribute to those military and merchant navy members whose remains have not been recovered, or whose graves are marked with the words "Here lies a Canadian Soldier Known Only Unto God". Canada has pledged never to identify the remains using modern technologies such as DNA testing.
CANADA'S NATIONAL MILITARY CEMETERY
Canada's first National Military Cemetery of the Canadian Forces was dedicated in June 2001 by Governor General Adrienne Clarkson. Ottawa's Beechwood Cemetery, established in 1873, is already the resting place for numerous Canadian war heroes and dates back to the Northwest Rebellion (which took place in what is now Saskatchewan). 
The sprawling, rolling area set aside for Canada's military personnel can still accommodate as many as 6,000 graves. The largest military cemetery is in Pointe-Claire, QC, where 18,000 war veterans are buried. Other sites in Vancouver, BC and Winnipeg, MB also contain large numbers of war veterans. The Pointe-Claire site, which is operated by the Last Post Fund, has recently been opened for Canadian Forces veterans, but only those who have served in special duty areas (such as Cyprus and Bosnia) can be buried there. The National Military Cemetery will serve as a national focal point to demonstrate Canada's commitment to peace and security both internationally and at home. The Cemetery will honour the sacrifices made by all current and former Canadian Forces (Regular and Reserve) members who have been honourably discharged, and any Peacekeepers and Canadian Veterans of the two World Wars and Korea (including Merchant Seamen). In addition, an immediate family member may also be interred in the same plot as the service member. Canada is well known for its peacekeeping role with the United Nations and NATO in many areas throughout the world. These include: the Golan Heights, Cyprus, Croatia, Haiti, Cambodia, Bosnia, Kosovo, East Timor, Afghanistan and several others. Armed forces members have served and are serving in every U.N.-controlled location since 1956. Sadly, more than 185 brave Canadians have lost their lives during Canadian Peacekeeping duties in various countries around the world. The names of all these Peacekeepers can be found on the following page on our site: Canadian Peacekeeping Roll of Honour. Additional information about Canada's Peacekeeping Forces will also be found on that page.
CANADA AND THE UNITED NATIONS
Canada's Ambassador to the United Nations is Allan Rock. Canada is a full member of the United Nations General Assembly (joining the U.N. on Nov. 9th, 1945) and Canadian delegates serve on many U.N. 
committees, groups and organizations throughout the world. On December 31st, 2000 Canada completed a two-year term as a member of the 15-seat United Nations Security Council. The United Nations Universal Declaration of Human Rights was developed and shaped (in part) by Canadian John Peters Humphrey. Louise Arbour was appointed U.N. High Commissioner for Human Rights in July 2004. She served 16 years as a Justice, including several years on the bench of the Supreme Court of Canada.

CANADA IN SPACE

Canada has sent astronauts into space as part of the U.S. space program on several Space Shuttle flights. Canada has its own astronaut training program (through the Canadian Space Agency) and also trains with NASA. To learn more about Canada's Astronauts, the Space Agency and what Canada is doing in space, click on the links in the left frame or use the Site Map link at the end of this page.

THE CANADIAN SECURITY INTELLIGENCE SERVICE

CSIS is a government agency with a federal mandate to collect, analyze and retain information or intelligence on activities that may, on reasonable grounds, be suspected of constituting threats to the security of Canada and, in relation thereto, to report to and advise the Government of Canada. CSIS also provides security screening and assessments, on request, to all Federal Departments and Agencies (including Immigration and Citizenship), with the exception of the RCMP and the Department of National Defence. These threats to the security of Canada include: espionage, sabotage, foreign influence or activities, political violence and terrorism or subversion.

HEALTH AND WELFARE

Canada has one of the world's highest living standards. All Canadians have free access to a national health care system. For most people over 65, medical costs, hospital stays and all associated expenses are covered at no cost. Canada has an extensive social safety network with seniors' pensions, monthly family allowances, unemployment insurance and a welfare system.
The Canadian Citizenship Oath was introduced into legislation by the Trudeau government in 1976 and came into effect in 1977.

"I [name of person] swear" [or "affirm"] "that I will be faithful and bear true allegiance to Her Majesty Queen Elizabeth the Second, Queen of Canada, Her Heirs and Successors, according to law and that I will faithfully observe the laws of Canada and fulfill my duties as a Canadian citizen".

"Je jure" [ou "déclare solennellement"] "que je serai fidèle et que je porterai sincère allégeance à Sa Majesté la Reine Elizabeth Deux, Reine du Canada, à ses héritiers et à ses successeurs en conformité de la loi et que j'observerai fidèlement les lois du Canada et remplirai mes devoirs de citoyen canadien".

HONORARY CANADIAN CITIZENSHIP

The sixth Honorary Citizenship was awarded to Malala Yousafzai, women's rights and education activist and recipient of the 2014 Nobel Peace Prize. In May 2010 Prime Minister Harper conferred Canada's fifth Honorary Canadian Citizenship on the Aga Khan. Followers know him as Mawlana Hazar Imam. In October 2007 the Prime Minister conferred Honorary Canadian Citizenship on Aung San Suu Kyi, the world-renowned advocate of freedom and democracy in Burma. In 2006 Prime Minister Stephen Harper bestowed Honorary Canadian Citizenship on His Holiness the Dalai Lama. On November 19, 2001, Prime Minister Jean Chretien conferred an Honorary Canadian Citizenship on Nelson Mandela in recognition of Mandela's leadership in defeating apartheid in South Africa. The first Honorary Canadian Citizen was Raoul Wallenberg, who was posthumously awarded Honorary Canadian Citizenship in 1985 for his efforts in saving thousands of Jewish people during WWII.

CANADA'S PARLIAMENTARY POET LAUREATE

Canada's fourth and current Poet Laureate is Pierre DesRuisseaux. He was appointed in 2009. George Bowering was named Canada's first Poet Laureate (2002-2004) and Pauline Michel was the second (2004-2006).
In November 2006 Canada's third Parliamentary Poet Laureate was named for the period 2006-2008. He is John Steffler of Montreal.

THE PEOPLES OF CANADA

The population of Canada as of July 2010 is estimated by Stats Canada at 33,476,688. The most populous city in Canada is Toronto (ON), followed by Montreal (QC), Vancouver (BC), Ottawa-Gatineau (ON-QC), Calgary (AB) and Edmonton (AB). Some of the ethnic groups (in no particular order) in Canada are: English, French, Scottish, Irish, German, Italian, Chinese, North American Indian, Ukrainian, Dutch, Polish, East Indian, Russian, Welsh, Filipino, Norwegian, Portuguese, Métis, Swedish, Spanish, Hungarian (Magyar), Jamaican, Danish. 76.6 per cent of Canadians live in cities and towns while 23.4 per cent live in rural/farm areas. Over 31 per cent of the population live in the largest cities of Toronto, Montreal and Vancouver. The life expectancy of a Canadian woman is 84 years and of a Canadian man, 76 years. The size of the average family is 3.1 people (including 1.3 children).

THE ABORIGINAL POPULATION

There are over 550,000 status or non-status Aboriginal People and another one million or more who claim to be of First Nations descent. Of that million, 790,000 are Natives, 220,000 are Métis and 50,000 are Inuit. Over 300,000 Aboriginals live on reserves throughout Canada. The only true indigenous culture in Canada is that of the Aboriginal Peoples, since all other Canadians were originally immigrants. The majority of Canadians are Christian: 55 per cent of Canadians are Roman Catholic; other religions in Canada include Protestantism, Judaism, Islam, Hinduism, Sikhism and Buddhism. Over 20.4 million Canadians have a mother tongue of English and 6.6 million have a mother tongue of French. 5.6 million Canadians have another mother tongue but speak one or both official languages. Chinese is Canada's third most common language.
NATURAL RESOURCES AND INDUSTRIAL CANADA

Principal natural resources are: fish, wildlife, natural gas, petroleum, gold, coal, copper, nickel, lead, molybdenum, silver, iron ore, potash, uranium and zinc, along with many timber-related industries and water and hydro-electric power. Leading industries: automobile manufacturing, pulp and paper, iron, steel work, machinery and equipment manufacturing, mining, extraction of fossil fuels, forestry and agriculture. Leading exports are: automobiles, other vehicles and parts, machinery and equipment, high technology products, oil, natural gas, metals and forestry and farm products, including large wheat exports. Imports are: machinery and industrial equipment, communications and electronic equipment, vehicles and automobile parts, industrial materials (i.e. metal ores, iron, steel, precious metals, chemicals, plastics, cotton, wool and other textiles) along with manufactured products and food.

Member: Canada's National Historic Society
It’s relatively easy to name some of the earliest 20th century music pioneers, but observe any go-to list of the top five and it’s usually a total sausage fest. Again and again, recorded history would have us believe that it’s only men who laid our cultural foundations. Though there were a tremendous number of talented, amazing males making great music, it’s an outright lie that they were alone at the forefront of creation. There are a number of women whose contributions have been largely overlooked or obscured for too long. If you love music, you need to scroll down and get to know these 20 artists who not only shaped modern music but also helped change society — and never got the credit they deserved. Who was she? A black blues singer and guitarist about whom Don Kent wrote in the liner notes to Mississippi Masters: Early American Blues Classics 1927-35, “her scope and creativity dwarfs most blues artists." But little is known about the remarkable woman who made just three records in the early 1930s. There are no photos of her and nobody knows her legal name or what happened to her after she stopped recording music. Key song: “Last Kind Words” Who was she? A child violin prodigy, Rockmore was forced to give up her instrument in her teenage years due to bone problems from malnutrition, but, according to FlavorWire, this didn’t prevent her from making a remarkable contribution to music. When Léon Theremin brought his new instrument from Russia in the 1920s, Rockmore took to it immediately and worked with Theremin himself to craft one perfectly suited to her specifications, and she became the first — and arguably sole — Theremin virtuoso. Key song: Concerto for Theremin by Anis Fuleihan Lucille Bogan (also recorded as Bessie Jackson) Who was she? A Birmingham-based blues singer-songwriter whose bio, both personal and professional, reads like that of a total badass decades ahead of her time. 
Bogan wrote and recorded more than 100 songs with her collaborator, pianist Walter Roland, between 1923 and 1935. She began writing slyly funny songs about drinking, sex and prostitution in the '30s, including the one below, which even eight decades later is one of the dirtiest tunes I’ve ever heard. Key song: “Shave 'Em Dry” [Warning: NSFW] Who was she? Hall was a Juilliard-trained vocalist who, according to Black Past, spent the better part of her early career involved in choral direction and singing. She made her Broadway debut in 1943 and went on to become the first African-American to win a Tony Award for her work in South Pacific. In the 1950s she turned her attention to blues and jazz, taking up residency in a series of Greenwich Village nightclubs and releasing Juanita Hall Sings the Blues in 1957. Key song: “I Don’t Want It Second Hand” Who was she? A blues-folk singer-songwriter and guitarist who primarily recorded in the '20s, but there’s almost no information available about her online. It’s easy to find her music online and it’s been collected and anthologized plenty, but there are no photos or interviews or anything. In part, there’s something poetic and lovely in the fact that our only way to know Jackson is through her music. But it also speaks to the fact that as a black woman in the '20s, her personhood was of little consequence to the cultural curators of the time. Key song: “Careless Love Blues” Who was she? An American swing and jazz vocalist in the '30s who was a stylish, talented, plus-size woman decades before any size-positive movement existed. She also had great taste in music. According to Gary Giddins’s book Bing Crosby: A Pocketful of Dreams — The Early Years, 1903-1940, Bailey introduced Crosby to African-American jazz greats like Louis Armstrong via her own record collection. Key song: “Please Be Kind” (Aunt) Samantha Bumgarner (pictured) and Eva Davis Who was she? 
Early Appalachian fiddlers and vocalists who, in 1924, according to Women in Early Country Music, became the first women ever to sing on a country music recording. Bumgarner went on to become the more famous of the two, playing, touring and recording well into her late 70s. Key song: “Cindy in the Meadow” Rosa Lee Carson, a.k.a. Moonshine Kate Who was she? The banjo-playing daughter of famed “Fiddlin’” John Carson, Rosa Lee made her country recording debut in 1925 at age 15, accompanying her father. They formed a musical comedy act and toured internationally, before she branched out as a solo artist under the stage name Moonshine Kate. Key song: “The Drunkard’s Child” Sister Rosetta Tharpe Who was she? A gospel singer/guitarist who’s often credited with basically inventing rock 'n' roll, so you know, no big deal — except it totally was, since Tharpe rose to prominence in the '30s and '40s and continued performing a fusion of gospel, blues, jazz and big band music throughout the late '60s. Key song: “Didn’t It Rain” La Bolduc (a.k.a. Mary Travers Bolduc) Who was she? The French-Canadian singer is considered Quebec’s first singer-songwriter and grew up mastering the fiddle, accordion, harmonica and the other instruments at home that were required to play traditional folk tunes of the region. With her husband chronically unemployed, she made her first recording in 1928 and began to perform throughout the province, despite the fact that women were rarely touring musicians, never mind successful ones. She became the primary breadwinner and folded her husband and children into her touring troupe. Key song: “Ça va venir découragez-vous pas” Who was she? According to the Queer Cultural Centre, Bentley moved to New York when she was 16 and became a key part of the 1920s Harlem Renaissance, finding a fellow community of gay black artists. 
Bentley was pretty open about being a lesbian and often both dressed and performed in a tux and other “male” garments, which didn’t diminish her escalating fame as she made a name for herself on the nightclub circuit. Later in life, under threat from the McCarthy-era witch hunt, she claimed to have been “cured” of her homosexuality and married numerous men, but historians believe this was to avoid persecution. Key song: “Wild Geese Blues” Dame Ethel Smyth Who was she? An inspiration to women everywhere, Smyth was fighting for women’s rights as early as 1877 when, according to Oxford Music Online, she defied her father and enrolled in the Leipzig Conservatory to study music and composition. She refused to be confined by traditional limitations: she published under E.M. Smyth so her orchestral debut got fair reviews (and confounded the press when it was revealed Smyth was a woman); she openly engaged in same-sex affairs (including, allegedly, one with Virginia Woolf); and she began writing operas in 1892. A tireless, early champion of feminism and equality, Smyth gave the suffrage movement its own anthem in 1910 with The March of the Women, served two months in prison for her activism and crafted the feminist opera The Boatswain’s Mate a few years later. Key song: The March of the Women Who was she? The South African musician and civil rights activist helped popularize African music around the world. Exiled from her homeland in 1960 after campaigning against apartheid, Makeba found success in America, winning a Grammy with Harry Belafonte for best folk recording in 1966 and scoring a hit in 1967 with her song “Pata Pata.” But when she married known Black Panther member Stokely Carmichael in 1968, Americans turned on her, and it remained that way until she joined Paul Simon on his Graceland tour in 1985. Key song: “Pata Pata” Who was she? 
Smith was the ultimate blues trailblazer: in 1920 she became the first African-American to make a vocal blues recording, which, according to Gunther Schuller’s 1986 book Early Jazz: Its Roots and Musical Development, went on to sell a million copies within the year and clued music industry execs into the power of the black community. Key song: “Crazy Blues” Who was she? She got her early start in vaudeville, but Austin became most famous in the 1920s for her skills as a jazz and blues pianist and for her relatively unprecedented role as the bandleader of her Blues Serenaders. She was an in-demand collaborator as well, accompanying other famous blues women such as Ma Rainey, Ida Cox and Alberta Hunter. Key song: “Charleston Mad” Who was she? Cox proved a daring and boundary-pushing songwriter and performer. She flexed the early muscles of feminism with her song “Wild Women Don’t Have the Blues,” started her own vaudeville troupe and made numerous recordings throughout the '20s. Key song: “Wild Women Don’t Have the Blues” Who was she? Smith, the Empress of the Blues, was arguably the most famous blues and jazz vocalist of the '20s. According to PBS, she was also the most successful black recording artist of her time thanks to her cover of “Downhearted Blues,” which was co-written by Lovie Austin and Alberta Hunter. Key song: “Downhearted Blues” Gertrude 'Ma' Rainey Who was she? Revered as the Mother of the Blues, Rainey was among the earliest African-American women to record music in the '20s and distinguished herself thanks to her “moaning,” soulful singing. According to the Georgia Encyclopedia, from 1923-28, Rainey made more than 100 recordings of her own compositions. Key song: “Deep Moaning Blues” Who was she? The Fort Calgary-born violin prodigy left Canada at age four and became an international sensation. 
A North American tour in 1910 and subsequent tours of Europe, the Far East and Japan cemented her status as the “world’s greatest woman violinist.” After decades as a soloist, Parlow began teaching and performing in ensembles and chamber quartets. Upon returning to Canada during the Second World War, Parlow took over Toronto in the best way possible, teaching at the Royal Conservatory, playing with the TSO and forming new ensembles, including the Canadian Trio. Key selection: Arensky: Serenade for Violin and Piano Who was she? Hailing from Halifax, White was an African-Canadian contralto classical and gospel singer who made her national debut in Toronto in 1941 and went on to international acclaim despite difficulties obtaining bookings because of racism. Hang out with me on Twitter: @_AndreaWarner
Friday 9th March, 2012 3:30pm to 4:30pm Discover the rules of thumb for finger-friendly design. Touch gestures are sweeping away buttons, menus and windows from mobile devices—and even from the next version of Windows. Find out why those familiar desktop widgets are weak replacements for manipulating content directly, and learn to craft touchscreen interfaces that effortlessly teach users new gesture vocabularies. The challenge: gestures are invisible, without the visual cues offered by buttons and menus. As your touchscreen app sheds buttons, how do people figure out how to use the damn thing? Learn to lead your audience by the hand (and fingers) with practical techniques that make invisible gestures obvious. Designer Josh Clark (author of O'Reilly books "Tapworthy" and "Best iPhone Apps") mines a variety of surprising sources for interface inspiration and design patterns. Along the way, discover the subtle power of animation, why you should be playing lots more video games, and why a toddler is your best beta tester. 1. How should UI layouts evolve to accommodate the ergonomics of fingers and thumbs? 2. Why are buttons a hack? Why aren't they as effective as more direct touch gestures? 3. How can users understand how to use apps that have no labeled menus or buttons? 4. What's the proper role of skeuomorphic design (realistic 3D metaphors) in teaching touch? 5. How can animation provide contextual help to teach gestures effortlessly? How does game design point the way here? Principal, Global Moxie I'm a designer specializing in mobile design strategy and user experience. I'm author of the O'Reilly books "Tapworthy: Designing Great iPhone Apps" and "Best iPhone Apps." My outfit Global Moxie offers consulting services and training to help media companies, design agencies, and creative organizations build tapworthy mobile apps and effective websites. Before the interwebs swallowed me up, I worked on a slew of national PBS programs at Boston's WGBH. 
I shared my three words of Russian with Mikhail Gorbachev, strolled the ranch with Nancy Reagan, hobnobbed with Rockefellers, and wrote trivia questions for a primetime game show. In 1996, I created the uberpopular "Couch-to-5K" (C25K) running program, which has helped millions of skeptical would-be exercisers take up jogging. (My motto for fitness is the same as for user experience: no pain, no pain.)
The practice of segregating waste (biodegradable or non-biodegradable, wet or dry, reusable or recyclable) by households, schools, private and public offices, eating places, malls, institutions and other organizations in the community is necessary both for environmental reasons and from an economic point of view. The people involved in this campaign therefore need a good selection of recycling bins to implement waste sorting in their respective locations. In fact, the government encourages this practice and has implemented the collection and segregation of waste across the entire city. Proper management of waste also matters for ecological and health concerns in the environment. Segregation that starts in the household can really help reduce waste in the advocacy campaign for a green environment. The recyclable and reusable waste collected can be sold to scrap buyers, which I used to do during my childhood days. Even company personnel practice proper disposal of waste. This can also create jobs for scrap collectors, which addresses the economic side. During my high school days, I was assigned to and participated in a campaign for proper waste management in our school. Planning, strategy, teamwork and the cooperation of every member and student of the school are factors in the successful implementation of such an advocacy campaign. Of course, we needed a good selection of recycling bins for segregating waste in every designated area of the school building. Information, policies and regulations were communicated to the whole student body with regard to the campaign for a clean and green environment. Every individual in the community can help keep the environment clean and green by practicing proper segregation of waste, which ideally starts in the household. 
As for local governments, they should implement strict policies, regulations and programs in their respective localities.
A tranquil pool containing fish and plants is projected onto the floor. When participants discover the pool, they see mostly fish but few plants, because the fish are the plants' predator. When the participants enter, the fish, mysteriously attracted to the participants, playfully swarm around their feet. Because the fish are distracted from eating, the plants are then able to bloom. Most people imagine that ecosystem disruption means the destruction of a species. However, as this project illustrates, a counter-intuitive effect of ecosystem disruption is often that one species' population suddenly increases when its predator is displaced. (c) 2015 Mine-Control, Inc.
I blinked in surprise when my Philosophy professor posed this question in our first ever class. I looked around and saw that most students were equally puzzled. I mean, everyone knows what education is, right? What kind of a question is that to start a Master’s program?! But, by the end of those 2 hours, I realised I was mistaken. That question was the perfect start to the whole program: What really is education? Is it a product, measured by the student achievement level? A service? A process? Charlotte Mason (1842–1923) said, “Education is an atmosphere, a discipline, a life”. This definition rings true for most parents. Not many would dispute that every parent, whether deliberately or unintentionally, creates an environment where a child can learn and thrive. In the 21st century, education has come to mean different things to different people. I’ve compiled a very brief list of the many definitions: “Education involves essentially processes which intentionally transmit what is valuable in an intelligible and voluntary manner and which create in the learner a desire to achieve it, this being seen to have its place along with other things in life” - R.S. Peters. “Education is the kindling of a flame, not the filling of a vessel” - Socrates. “There is no end to education. It is not that you read a book, pass an exam and finish with education. From the moment you were born to the moment you die, it is a process of learning” - Jiddu Krishnamurti. On the one hand, I am still in the process of defining what education is to me. On the other, I do know what it isn’t, and know enough to resist succumbing to common misconceptions. 
The first among them is… A pattern of reasoning which dominates the current educational system is: my child gets 90% (or more) in annual exams every year, and because marks are a good indicator of learning, s/he is getting a superlative education. Let me attempt to debunk this reasoning. Firstly, although a test measures what one knows at a certain point in time, what that knowledge looks like is highly dependent on how the test is constructed. In other words, what is the test actually testing? Secondly, high marks need not necessarily mean a person is educated, because performing excellently on a test requires two things: test-taking skills and study skills. If either is missing, performance tanks. Focussing excessively on high marks will possibly lead to academic burnout in your child. I say possibly because I accept that there are children who enjoy a competitive environment. I also acknowledge that there are just as many (if not more) students who have test anxiety and underperform. Most children seek validation of their learning, and sometimes of their personal worth, from teachers, parents and their test marks. You can probably see how this external validation seeking can be dangerous in the long run. An assessment is meant to be a snapshot of a performance on a particular day. As a parent, you are in an enviable position of influence over your child. (Don’t believe parents who claim to have no influence over their children!) What do you focus on? What do you not focus on? Do you scold when your child gets low marks? Do you praise them when they get good marks? Do you ask “Who got the highest?” Do you expect your child to study every day? Do you answer all your child’s questions? How do you encourage curiosity? Do you question (or not) their teachers during a PTM? 
Are you, the parent, through everyday micro-interactions and with the child’s best interests at heart, telling your child(ren) that a good or great performance in school (and college) is education? That school and college determine his/her fate? Almost all Indian parents from the so-called middle and upper classes are having similar conversations with their children. Is it any surprise that our current society is intensely competitive and performance-focussed, and that students commit suicide because they have failed or disappointed their parents? What about the situation where the parent is not performance-focussed but the school is? When my daughter comes home with 10/10, I say, “Looks like the test was easy for you”; when she comes with 5/10, “I suppose the test content was not something you were familiar with”. The premise behind those statements is to remind children (as well as ourselves) that an assessment is meant to be a snapshot of a performance on a particular day. The purpose of a test is to inform the teacher as well as the student of how well the child is learning what she/he is supposed to learn, and to revise teaching strategies and change study habits accordingly. Knowing what you know now, and assuming that most parents want to inculcate a lifelong love of learning in their children, how are you going to approach the all-consuming ‘performance’ and ‘marks’ focus in society today? Maybe it begins by recognizing that education does not end at a poor performance on a test in school. This brings me to my second point… That’s right. Most schools with a certain student demographic base will do a satisfactory job of teaching your children. So why do parents place undue importance on where the child goes to school? As long as the child is in a safe environment, the teachers are more or less friendly, and the curriculum is engaging, one would think that schooling would not take up too much of our mind space, right? Yet most parents agonise over choosing a school. 
Firstly, because of the enormity of school choice faced by our generation of parents. There are elite private schools charging over 14 lakhs a year; the next layer is upper-mid-range schools charging anywhere from Rs 3 to 10 lakhs. The mid-range will set you back by Rs 1-3 lakhs. Then come the chain schools or so-called public schools with fees in the Rs 80,000-1 lakh range. The lower income group also has a choice of schools costing anywhere between Rs 1,000 and Rs 30,000 annually. (My domestic help’s children go to one such low-fee private school.) Secondly, because parents are misinformed about the actual effect of schooling on a child’s future. These parents are under the assumption that a good school leads to a better ‘quality’ education. (Recognise that both the italicised words are highly subjective and have multiple interpretations.) Do schools make a difference? Most schools, assuming some uniformity in resources, have negligible impact on student achievement or learning outcomes. I’ll explain why… “Schools make no difference; families make the difference.” (Adam Gamoran, Daniel A. Long, 2006) In the mid-1960s, about the time when Dr. Kothari and his team of eminent academicians, scientists and economists (the Kothari Commission) were busy drawing up a report on the education system in India at the behest of the then Education Minister, M.C. Chagla (the report was based on democratic principles of social justice, equality and opportunity and is most famous for recommending a ‘common school system’ to ensure a more egalitarian society), the then U.S. Commissioner of Education, Harold Howe, asked professor James Coleman of Johns Hopkins University to do the same. The aim of the study was to answer a question: which strategy was more likely to equalize educational opportunities for poor minority students, compensatory education or racial integration? Family background is more powerful than which school the child goes to. 
The report was titled Equality of Educational Opportunity, and the findings of the almost 800-page report have been summed up insightfully in the single line quoted above. The implications of the controversial Coleman Report (controversial because, up until then, the widely accepted way to equalise educational opportunity was to pour resources into schools) crushed the long-held belief that school quality is tied to achievement. It demonstrated a strong correlation between family background and student achievement. Putting aside student achievement, the point I’m trying to make is that family background is more powerful than which school the child goes to. What did the report mean by family background? It could be interpreted as a complex system of values, beliefs, habits and practices. In short, the family ‘culture’. Culture is a fluid concept; it is constantly in flux, reinforced and moulded every time humans interact. In a family unit, the parents determine, reinforce or shape the family culture. What is the dominant culture in society and schooling at present? An excessive focus on ‘marks’ and ‘performance’. The aim of becoming a ‘rank holder’ encourages super-achievement. Children get categorised into winners and losers. I propose that it is time to reculture these binary notions. Reculturation, as the prefix implies, is a process of re-establishing the culture in a unit, be it a family, school, community or an organisation. Fullan (2001) calls it ‘transforming the culture…changing the way we do things..’ And this reculturation can only come through changing my own belief system. It begins with me challenging those taken-for-granted, hegemonic norms in society about ‘achievement’ and ‘success’. When what I believe goes through a paradigm shift, it has a ripple effect. Every single conversation with daughter, husband, family and friends undergoes a change. 
It is easy to underestimate the effect of changing one’s beliefs because it seems insignificant in the larger scheme of things. Yet, Gandhi had it right. Be the change you wish to see. Preeti Konaje is an inquirer at heart. In her past avatars, Preeti has been a copywriter, baker, event manager, homeschooler and tutor. Now, we can add educator and teacher to that mix. She is currently pursuing the ‘Master of Education’ program from Azim Premji University.
Many sites, such as Facebook or a blog, will allow a user to upload or download files for a myriad of reasons, such as pictures for a website or files for forum or blog software. In either case, there are two ways to transfer a file to or from a server or website: using HTTP or using FTP. Here are some of the differences:

1. Purpose: HTTP is used to view websites, while FTP is used to access and transfer files. FTP's file-transfer role is more or less for website maintenance and batch uploads, while HTTP is for client-end work and for end users uploading things such as movies, pictures and other files to the server.

2. Clients: the common HTTP client is the browser, while FTP can be accessed via the command line or a graphical client of its own.

3. Headers: HTTP headers carry metadata such as the last-modified date, character encoding, and server name and version, which is absent in FTP.

4. Age: FTP predates HTTP by roughly two decades.

5. Data formats: FTP can send data in both ASCII and binary format, but HTTP always transfers the raw bytes (binary).

6. Pipelining: HTTP supports pipelining, meaning a client can ask for the next transfer before the previous one has ended, which allows multiple documents to be sent without a round-trip delay between them. FTP has no such feature.

7. Connections: one of the biggest hurdles with FTP in real life is its use of two connections. It uses a primary connection to send control commands, and when it sends or receives data, it opens a second TCP stream for that purpose. HTTP, by contrast, uses a single connection over which data can flow in either direction.

8. Persistent connections: an HTTP client can maintain a single connection to a server and keep using it for any number of transfers. FTP must create a new data connection for each transfer.
Repeatedly making new connections is bad for performance, because of the new handshakes required each time.

9. Compression: HTTP provides a way for the client and server to negotiate and choose among several compression algorithms, gzip being perhaps the most common; no such sophisticated mechanism is present in FTP.

10. Proxies: one of the biggest selling points of HTTP over FTP is its support for proxies, built into the protocol.

11. File-level operations: one area in which FTP stands out somewhat is that it is a protocol that operates directly at the file level. FTP has, for example, commands for listing the directory contents of the remote server, while HTTP has no such concept.

12. Speed: possibly the most common question is which is faster for transfers.

What makes FTP faster:
1. No added metadata in the sent files, just the raw binary.
2. No chunked-encoding "overhead".

What makes HTTP faster:
1. Reusing existing persistent connections gives better TCP performance.
2. Pipelining makes asking for multiple files from the same server faster.
3. Automatic compression means less data is sent.
4. No command/response flow minimises extra round trips.

Conclusion: ultimately the net outcome differs depending on the specific details, but for single-shot static files you won't be able to measure a difference. For a single small file, you might get it faster with FTP (unless the server is a long round trip away). When getting multiple files, HTTP should be the faster one.
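Two of the differences above can be sketched in code: FTP's dual transfer modes (point 5) and the connection overhead behind points 7, 8 and 12. The following is a toy Python illustration under simplifying assumptions (one round trip per handshake and one per transfer; real timings depend on latency, TCP slow start and server behaviour), not a measurement of either protocol:

```python
def ftp_ascii_encode(data: bytes) -> bytes:
    """Simulate FTP ASCII (TYPE A) mode: line endings are normalised
    to CRLF on the wire. Binary (TYPE I) mode would send bytes untouched."""
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

def round_trips_ftp(n_files: int) -> int:
    """Toy cost model for FTP: one control-connection handshake, then a
    fresh data-connection handshake plus one transfer per file."""
    return 1 + 2 * n_files

def round_trips_http(n_files: int, pipelined: bool = False) -> int:
    """Toy cost model for HTTP keep-alive: one handshake for the
    persistent connection; pipelining overlaps the request/response
    pairs into roughly a single round trip."""
    return 1 + (1 if pipelined else n_files)

# ASCII mode rewrites line endings; binary mode would not.
assert ftp_ascii_encode(b"a\nb\n") == b"a\r\nb\r\n"

# For 10 files the FTP cost grows twice as fast as plain HTTP keep-alive.
assert round_trips_ftp(10) == 21
assert round_trips_http(10) == 11
assert round_trips_http(10, pipelined=True) == 2
```

The round-trip counts are illustrative only, but they capture why FTP's two-connection design hurts when fetching many small files, while mattering little for a single large one.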
In one-point perspective, parallel lines converge to a single point somewhere in the distance. This point is called the vanishing point (VP), and it gives objects an impression of depth. When drawing in one-point perspective, all objects vanish to one common point on the horizon: the sides of an object diminish towards the vanishing point, while all vertical and horizontal lines are drawn with no perspective, i.e. face on. One-point perspective is of only limited use, though, the main problem being that the perspective is too pronounced for small products, making them look bigger than they actually are. So when would you use one-point perspective? One area where it can be quite useful is sketching room layouts. Although it is possible to sketch products in one-point perspective, the effect is too aggressive on the eye for most product work.
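The geometry behind this is simple: every receding edge lies on the straight line joining a front-face point to the vanishing point, so the back face is found by moving each corner some fraction of the way towards the VP. A minimal Python sketch (the coordinates and the 25% depth factor are arbitrary choices for illustration, not drawing rules):

```python
def toward_vp(point, vp, depth):
    """Slide a 2-D point a fraction `depth` of the way towards the
    vanishing point: depth = 0 leaves the front face unchanged,
    depth = 1 would collapse the point onto the VP itself."""
    x, y = point
    vx, vy = vp
    return (x + depth * (vx - x), y + depth * (vy - y))

vp = (100.0, 50.0)  # vanishing point on the horizon line
front = [(0.0, 0.0), (40.0, 0.0), (40.0, 30.0), (0.0, 30.0)]  # face-on rectangle
back = [toward_vp(p, vp, 0.25) for p in front]  # receded back face of the box

assert toward_vp((0.0, 0.0), vp, 0.25) == (25.0, 12.5)
```

Joining each `front` corner to its `back` counterpart draws the receding edges, while the front face keeps its true vertical and horizontal lines, exactly as described above.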
Bogacz, Renee, and Miguel Gómez Gordillo (2011). Point/Counterpoint: Should Schools Be Held Responsible for Cyberbullying? Learning and Leading with Technology, 38(6). Bogacz and Gordillo's article goes well beyond addressing anything specifically digital. It explores the prevalent societal problem of bullying in American culture generally, including schoolyard bullying. In addition to emphasizing the importance of schoolteachers, administrators, and students themselves who witness or are victims of cyberbullying, the first half of the article, namely Bogacz's portion, stipulates that parents closely monitor and check everything their children do in the online world. Bogacz places the onus of responsibility on the schools to deal with cyberbullying, while Gordillo's response in the second half of the article places it on parents, because children who bully, whether in the digital world or the real world, tend to do so because they have not been taught how to respect others. The root of that problem is how their parents, or whoever raised them, taught them to behave. Schools can only do so much; parents and students themselves have to meet them halfway. Question 1: What can someone like me do about cyberbullying in my future classroom? When my students use the Internet to carry out class assignments, such as engaging in online discussions or writing responses to prompts, I may deduct points if they are disrespectful to me or their fellow students. Or I may simply ask them to keep redoing the assignment until they behave.
And whether I am dealing with cyberbullying or live bullying, one thing I have had in the back of my mind for a long time now is that when I catch my students being rotten, I will simply ask them to write letters of apology, as well as ask the two parties, the bully and the victim, to honestly express their feelings to each other: the one who is bullied tells the bully how the latter made him or her feel, while the bully writes perhaps a semi-essay on why they acted the way they did and why they will never do it again. As an English teacher, getting my students to embrace written and spoken language as a means to improve human relationships is what it's all about, for me. Question 2: What should parents do about cyberbullying? I am of the opinion that Bogacz's stipulation that parents closely monitor and check everything their children do in the online world is indeed overkill. If parents cannot even trust their own children with phones and computers, then they shouldn't let their offspring have them at all. Before letting their children have access to electronic communications, however, parents should emphasize being careful and watchful of people they run into in the online world, just as parents should caution children about dealing with people in the real world. Parents can't wrap their children in wool forever; they need to prepare them for dealing with unpleasant and potentially dangerous people in life, and teaching their sons and daughters how to behave responsibly and how to avoid undesirables is the key to giving adolescents the tools to function on their own.
Twitter is geared to real-time use more than any other form of social media. Certainly not a medium for expressing nuance, 'tweets' do deliver the punch of immediacy. The OPTIMUM project, through one of its three pilot studies — "Proactive improvement of transport systems' quality and efficiency" — has funded a study to analyse short text messages in order to determine the levels of stress that commuters experience while travelling. If it proves possible to interpret such messages with reliable accuracy, transport authorities will be able to respond quickly to incidences of high stress in order to improve traffic management and quality of service. Mike Thelwall is the author of the new study "TensiStrength: Stress and relaxation magnitude detection for social media texts". "TensiStrength," the study begins, "is a system to detect the strength of stress and relaxation expressed in social media text messages. It uses a lexical approach and a set of rules to detect direct and indirect expressions of stress and relaxation, particularly in the context of transportation." Intelligent transport systems (ITS) already use traffic sensors, road monitoring cameras, mobile phone GPS signals and number plate recognition technology to harness information, but a wealth of text information available to computing systems can now be mined to further improve the predictive power of ITS and other systems. As outlined in the description of the OPTIMUM pilot study: "Since changes in traffic conditions and relevant incidents (such as accidents) can occur unexpectedly at any time, it is necessary to inform travellers and to suggest alternative routes. The integration of various real-time traffic data sources will provide the required information to realise traffic-state-aware routing that can guide travellers along routes to their destinations" — and, hopefully, with a minimum of stress, we can now add.
According to the study, "TensiStrength is an adaptation of the sentiment strength detection software SentiStrength." The tasks of the two programs are related but not equivalent. TensiStrength borrows its emotion terms from SentiStrength, but includes manually added stress terms and indicators "derived from a range of academic and non-academic sources that describe stress in general, or stressors associated with travel." Examples of commute-related stressors include: heavy traffic, frequent braking, traffic jams, congestion, slow average speeds, transport signals, and unpredictability of journey time. The TensiStrength research involved assigning scores on a 1-to-5 scale to more than 3,000 stress-related tweets. The tweets were collected "using the keywords from a variety of sources over a one-month period in July 2015 and then randomly sampled", the study explains. While the human coders did not reach full consensus, the results do show that "TensiStrength is able to detect expressions of stress and relaxation in tweets with a reasonable level of accuracy compared to human coders, more accurately than a similar sentiment analysis program … TensiStrength can therefore be used as an off-the-shelf solution for stress and relaxation detection." Magnitudes of stress and relaxation can be measured at a personalised level to determine the emotional state of travellers. A user under stress is less likely to select a mode of travel with which he or she is unfamiliar, while a relaxed user may be more accepting of less habitually familiar modal options. The OPTIMUM platform can provide proactive recommendations to match an individual user's behavioural profile. On a system level, spikes of high-stress tweets within a given area of a transport network may point to driving behaviours that increase the probability of accidents and contribute to the build-up of traffic volume — through, for example, erratic lane changes.
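The lexical approach the study describes can be illustrated with a deliberately tiny sketch. The lexicon, weights and scoring rule below are invented for illustration only; the real TensiStrength uses a far larger lexicon plus rules for negation, boosters and indirect expressions of stress:

```python
# Toy stress lexicon: term -> added stress strength (illustrative values).
STRESS_TERMS = {
    "jam": 3, "congestion": 3, "delayed": 2, "stuck": 3, "gridlock": 4,
}

def stress_score(tweet: str) -> int:
    """Return a 1-5 stress magnitude for a tweet: 1 = no stress,
    5 = extreme stress. Like typical strength-detection lexicons,
    the score is driven by the strongest matching term."""
    words = tweet.lower().split()
    hits = [strength for term, strength in STRESS_TERMS.items() if term in words]
    return min(5, 1 + max(hits, default=0))

print(stress_score("stuck in a jam again"))  # 4
print(stress_score("all clear today"))       # 1
```

A companion relaxation score would work the same way over a lexicon of relaxation terms, giving the paired stress/relaxation magnitudes the study assigns to each tweet.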
Such information will be beneficial for generating more accurate forecasts and other improvements that OPTIMUM hopes to introduce to its event-detection mechanisms. Of course, much work remains to be done in this new field of research, but the initial results are encouraging from OPTIMUM's ITS-related vantage point.
The Welsh alphabet: note that there is no J, K, Q, V, X or Z. However, the most common surname in Wales is Jones, and there is a little village called Vivod. And if you find this interesting, just do a search for the definition of a Welsh Mile. A map of Wales shows the old counties before the modern restructure. The most famous symbol of Wales is the Red Dragon, or as it is known in Welsh, "Y Ddraig Goch" (pronounced uh thraig go-k). A recent study found that Britain can be divided into 17 distinct genetic 'clans'. St David's Day, the Welsh national day, is celebrated on 1st March.
How to make a smooth transition from home to camp:
- Encourage your child's independence throughout the year with sleepovers.
- Discuss what camp will be like before your child leaves, including role-playing anticipated situations, such as using a flashlight to find the bathroom.
- Reach an agreement ahead of time on calling each other. If your child's camp has a no-phone-calls policy, honor it.
- Send a note or care package ahead of time to arrive the first day of camp.
- Acknowledge, in a positive way, that you will miss your child. For example, say: "I am going to miss you, but I know that you will have a good time at camp."
- Don't bribe a child to stay. Linking a successful stay at camp to a material object sends the wrong message. The reward should be your child's newfound confidence and independence.
- Pack a personal item from home, such as a stuffed animal.
- When a "rescue call" comes from the child, offer calm reassurance and put the time frame into perspective. Avoid the temptation to take the child home early.
- Talk candidly with the camp director to obtain his/her perspective on your child's adjustment.
- Don't feel guilty about encouraging your child to stay at camp. For many children, camp is a first step toward independence and plays an important role in their growth and development.
- Trust your instincts. Severe homesickness is rare. However, a child who is not adjusting to camp after a reasonable amount of time should be allowed to return home.

For more advice on sending a child off to camp, see the website www.acacamps.org. (c) Copyright 2001. The Christian Science Monitor
Joe Slovo, Signaller, WW2. Slovo was born in Obeliai, Lithuania, to a Jewish family which emigrated to the Union of South Africa when he was eight, escaping the persecution of Jews in Europe. Slovo first encountered socialism in South Africa through his school-leaving job as a clerk for a pharmaceutical wholesaler. He joined the National Union of Distributive Workers and soon worked his way up to the position of shop steward, where he was responsible for organizing at least one mass action. The Communist Party in South Africa had an interesting start, and it is not the "Black revolutionary" movement most people perceive it to be now; originally it was started by white South Africans, and in fact it initially concerned itself only with "whites only" workers' rights. The Communist Party of South Africa was founded in 1921 under the leadership of William H Andrews, a Briton who came to Johannesburg to work on the mines. The party first came to prominence during the armed insurrection by white mineworkers in 1922, so brutally suppressed by Jan Smuts' government. The large mining concerns, facing labour shortages and wage pressures, had announced their intention of liberalising the rigid colour bar within the mines and elevating some blacks to minor supervisory positions. (The vast majority of white miners held supervisory positions over the labouring black miners.) Despite having opposed racialism from its inception, the Communist Party of South Africa (CPSA) supported the white miners in their call to preserve wages and the colour bar, with the slogan "Workers of the world, unite and fight for a white South Africa!". With the failure of the rising, in part due to black workers declining to strike, the Communist Party was forced to adopt the "Native Republic" thesis, which stipulated that South Africa was a country belonging to the Blacks. The Party thus reoriented itself at its 1924 Party Congress towards organising black workers and "Africanising" the party.
This was not quite the vision William Andrews, the CPSA founder, had in mind for a white workers' party, and he promptly resigned as the party's National Secretary. During World War 2, the attitude of moderate white South Africans to Communism was a little different. Communist Russia was an ally of South Africa during the war, and all over the country South Africans rallied to support Russia's war effort against Nazi Germany by donating food, medicine and blood in very successful national "Aid for Russia" collection programs. Joe Slovo joined the Communist Party of South Africa in 1942 and served on its central committee from 1953 (the same year its name was changed to the South African Communist Party, SACP). He avidly watched the news of the Allied fronts, especially the way in which Britain was working with Russia to aid her war effort against Hitler, so he volunteered for active duty and served with South African forces in Egypt and Italy. After the war he joined the Springbok Legion, a multiracial radical ex-servicemen's organization which was essentially run by a group of white war veterans who embraced Communist values. The Springbok Legion should not be confused with the South African Legion; it was a separate and very politically motivated veterans' association, whereas the South African Legion was an apolitical veterans' charity. Being politically driven, the Springbok Legion became one of the key driving forces behind Sailor Malan's "Torch Commando", which was the first mass protest movement against Apartheid legislation, made up to a smaller degree of this political veterans' association and to a far bigger degree of members of the apolitical war veterans' associations — ironically, almost all "white" South Africans (the distinction of being the country's first mass protest movement against Apartheid does not belong to the ANC).
However, it was the smallest of the war veterans' associations, the Springbok Legion, that took a direct "political" role. The Springbok Legion was founded in part by a senior South African Legion member, General van der Spuy (a pioneer of the South African Air Force), and its role took over from what he referred to as the South African Legion's "painfully correct whisper of polite protest", becoming a "shout" of protest instead. The history of the Springbok Legion as a political entity is fascinating. It was initially formed in 1941 by members of the 9th Recce Battalion of the South African Tank Corps, along with the Soldiers Interests Committee formed by members of the First South African Brigade in Addis Ababa, and the Union of Soldiers formed by the same brigade in Egypt. The aims and objectives of the Springbok Legion were enunciated in its 'Soldiers Manifesto'. The Legion was open to all servicemen regardless of race or gender and was avowedly anti-fascist and anti-racist. In collaborating with Sailor Malan's Torch Commando (and by default Jan Smuts' old United Party, with which the Torch Commando was linked), the Springbok Legion had by now become a fully blown political entity, and the inevitable happened: as with any political party, it gradually became politically radicalized. This was spearheaded by veterans who were also members of the Communist Party of South Africa (CPSA) and who joined the Springbok Legion and served in its upper and lower structures. The targeting of the Springbok Legion by the Communist Party resulted from the Party's belief that it could use the veterans to re-order "white" political thinking in South Africa along communist lines. This eventually fractured the Springbok Legion as a whole, as the moderate "white" members who made up the majority of its supporters became disenchanted with its increasingly militant leftist rhetoric.
Notable SACP veterans to join the Springbok Legion in a leading capacity were ex-servicemen such as Joe Slovo, but also Lionel Bernstein, Wolfie Kodesh, Jock Isacowitz, Jack Hodgson and Fred Carneso. Aside from the Communists, key members included future political and anti-apartheid leaders, such as Peter Kaya Selepe, an organiser of the African National Congress (ANC) in Orlando (he also served in WW2). Harry Heinz Schwarz, also a WW2 veteran, eventually became a statesman and long-time political opposition leader against apartheid in South Africa, and served as the South African ambassador to the United States during South Africa's "transition" in the 90s. The National Party, which even in its pre-war make-up had a fierce anti-communist stance, was becoming increasingly alarmed by the rise of "white" war veterans against its policies (Sailor Malan's Torch Commando at its peak attracted 250,000 followers), so it began seeking ways of suppressing the movement. One of the mechanisms was to pass the Suppression of Communism Act. The combined effect of the Act and the broadening and deepening Communist rhetoric and politics, which was alienating the majority of Springbok Legion members, rang the death knell for the Springbok Legion, and the inevitable happened: the organisation folded as thousands of its "moderate" members left, returning to either the apolitical MOTH (Memorable Order of Tin Hats), a combat-veterans-only order, or the broader South African Legion, which accommodated all veterans (or both). The Communist Party members of the Springbok Legion who had played a pivotal role in its rise and its demise, i.e. Joe Slovo, Lionel Bernstein, Wolfie Kodesh, Jack Hodgson and Fred Carneso, were now banned. Left with little other option, they all joined the African National Congress (ANC) and, given their experience as combat veterans, its military wing Umkhonto we Sizwe, under the command of Nelson Mandela.
The story of Joe Slovo as the National Party's arch communist enemy, and the story of the East/West divide over communism and the resultant Cold War, of which the South African Border War along Angola and the internal armed insurrection (the "struggle") were both part, is well known. Joe Slovo was eventually identified as a military target, alongside his wife Ruth First (the daughter of a well-known pre-war Communist supporter; Joe had met Ruth at Wits University), and the assassination of Ruth First is also well known. The irony for the National Party is that it was this "public enemy number one", "Rooigevaar" (as the National Party labelled communists and liberals) Communist who extended the olive branch to the National Party: it was Joe Slovo who in 1992 proposed the breakthrough in the negotiations to end apartheid in South Africa, the "sunset clause". Slovo's "sunset clause" allowed for a coalition government for the five years following a democratic election, including guarantees and concessions to all sides. After the elections of 1994, Slovo became Minister for Housing in the coalition government he had proposed, serving alongside the National Party as they saw out their "sunset" until his death in 1995. His funeral was attended by Nelson Mandela and Thabo Mbeki. In a further twist of history, by 2005 the National Party had closed shop and merged with the ANC, and by default its members also joined the party which still remains in alliance with the ANC as a political dependent, none other than the South African Communist Party. Such is the cycle of history. Go figure! Story by Peter Dickens. Joe Slovo (left) is seen in his South African Army uniform (and Signaller insignia) in the feature image with fellow South African soldiers Mike Feldman and Barney Fehler (image courtesy of Mike Feldman). References: Lazerson, Whites in the Struggle Against Apartheid; Neil Roos, Ordinary Springboks: White Servicemen and Social Justice in South Africa, 1939-1961.
Wikipedia and “Not for ourselves” – a history of the South African Legion by Arthur Blake
We live in the world of credit. Most banking institutions offer different forms of credit, from credit cards to signature loans. The majority of people often find themselves in bad credit situations, like court judgments, bankruptcy, repossession, foreclosure and loan default, due to a lack of financial knowledge and discipline, which often makes it difficult for them to get any credit at all in future.

So, what exactly is credit? Credit means that you are getting a service or cash grant to use for your own purposes. You are bound by a contract or agreement to repay in future, as agreed with the lender or service provider. Credit exists in different forms, like a loan, mortgage, signature loan, or credit card. Every financial institution or lending agency will first check your credit history before it considers giving you credit. If you have defaulted on credit or a loan before, or have a bad credit history, you will find it almost impossible to get credit any time you apply for it. However, it is possible to improve your bad credit history, or build a new good credit history, by repairing your bad credit, thus re-establishing your credit-worthiness. This process is called credit repair. It is the process by which consumers with unfavorable credit histories attempt to re-establish their credit-worthiness. Though there are lots of credit repair companies nowadays that promise to repair your bad credit for you, if you follow a simple guide it is very possible to do it yourself; after all, it is your credit. If you repair your bad credit, it will be easier for you to get low-interest credit, car loans or home loans. With a poor credit rating, by contrast, you may not be able to get a loan at all, or you may be subjected to high interest rates and several other unnecessary conditions. So it is very important that you repair your credit if you have bad credit.
You will get lots of tips on how to do this easily in this book.

Your credit score, and how you can improve it. Your credit score is very important in any financial transaction that you make or intend to make in future. So it is good that you know exactly what your score is, understand its meaning, and learn how you can improve it if it is not good enough. Many factors can contribute to a negative rating from the credit reporting agencies. Factors like non-payment of an account, or late payments over an extended length of time, can contribute to someone getting a "bad credit" rating or poor score. Whether non-payment of an account is willful or due to financial hardship, the result can be the same: a negative rating. But there is hope of getting credit cards for people with bad credit, poor credit or a lower credit score.

Credit reports and their effect on your personal credit. A credit report is a compilation of your credit history, past financial transactions and personal information. This report is usually compiled by accredited agencies known as credit reporting agencies. Credit reporting agencies are organizations that help credit card companies, loan companies, banks, and department stores to ascertain the credit-worthiness of their would-be clients. Once they have detailed information from these sources, they give it to any organization that requests it. Though they keep on file information concerning you and your credit, they do not make final judgments as to your credit-worthiness. The decision is up to the credit card company or lender with which you are dealing.

Credit cards: types and what you need to know about them. Nowadays, everybody wants to have at least one credit card. Everywhere you go you see adverts from various banks and other financial institutions offering you a credit card. However, before you apply for a credit card, there are several factors you need to consider.
So it is very important that you know more about the types of cards available, and which one will work best for you.

Secured credit cards: a secured credit card for people with bad credit requires a security deposit as collateral before you can get approval. It is the type of card that best suits the needs of people with no credit, or poor credit, who are trying to build their credit history. Your collateral must be equal to or greater in value than the credit amount you are applying for. With a secured card you put up your own money (into a savings account with the bank you are applying to for the credit card), and that amount (or part of it) becomes the credit line for your card. Put in $500 and you could have up to a $500 credit line. You can deposit anywhere from two hundred to two thousand dollars into an account, and that will be your spending limit. This gives you the flexibility of using a credit card, and if you pay off every statement you are letting creditors know that you can handle credit (again), so your bank may soon begin extending your credit line beyond what you have put in. You will then be on your way back to healthier credit, to a status where you will no longer need a secured card.

Business credit cards: these are cards available to business owners, directors and business executives. They come with several features, just like any traditional credit card. You have to consider the terms and conditions of these cards too before applying.

Student credit cards are another type of credit card, made specifically for students because of their lack of credit history; given the chance, students can build a credit history with such a card.

Prepaid credit cards: these are cards that are accepted almost anywhere traditional credit cards are accepted, but they are not credit cards.
You will always have to transfer money onto the card before you can use it, and you may not be able to spend more than you prepaid. Presently this is almost the best card for people who want to avoid the interest and other fees charged on traditional credit cards, and also for people with bad credit. However, other small charges, like monthly, application, over-the-limit and ATM fees, still apply, but these get offset if you pay your bills via money order. Whichever card you decide to choose, make sure that you go over the applicable terms very carefully to avoid putting yourself in financial bondage. In the second part of this article we will continue looking at other types of credit card.

Balance transfer credit cards are unsecured standard cards designed to allow consumers to save money in interest charges by transferring a higher-interest credit card balance onto a lower-interest-rate credit card.

Low-interest credit cards are another type of unsecured standard credit card. They offer either a low introductory APR that changes to a higher rate after a certain period of time, or a low fixed rate. You can take advantage of the low introductory APR to make larger purchases now and pay them off several months later. It won't be possible for people with bad credit to get this card.

Air mile credit cards are good for people who travel frequently or are planning to go on vacation. This is a form of reward card that gives you the opportunity of obtaining a free airline ticket. You will need to accumulate a specified number of air miles before you are entitled to a free ticket.
All accumulated mile points will be based on the dollar amount of your credit card purchases over a period of time, measured against a predetermined point level.

Specialty credit cards are another set of standard unsecured cards, designed specifically for individual business users and students with unique and special needs. Make sure that you study the terms of any card you pick very carefully, to avoid risking your credit rating. Also, when you pick any of the reward cards, make sure you study the forms and offers very carefully, because credit card issuing companies offer different reward programs and their promotional offers often change. So make sure you thoroughly look over the terms and conditions of each specific card before applying.
Cornwall is a Duchy in the far south-west of Great Britain. Its border with Devon, the neighbouring county, consists mostly of the ancient division between Cornwall and England: the River Tamar. The land border between north Cornwall and north Devon is 28 km long. The native language is Cornish, although with the influx of English people, English is now more prevalent. Cornwall is renowned in the present day as a popular tourist destination, thanks to its beautiful beaches, mild climate and stunning inland countryside. In the past it relied on three main industries: fishing, farming and mining. The mining industry was particularly prominent, and led to several famous inventions, for example the Davy lamp and Trevithick's steam engine. Most of the mining was for copper and tin, quantities of which are still present underground today. In the modern day there is an independence movement in Cornwall, seeking to break away from being wrongly labelled as part of England. Cornwall has a history different from that of England: as an ancient Celtic nation, Cornwall was never invaded by the Romans, Normans, Saxons and so on. The Cornish people nowadays feel that their Celtic heritage, culture, government (the 'Stannaries'), language and rights as a national minority have been disregarded by the domineering English, and there is a campaign for the 'Senedh Kernow', the Cornish Assembly, to be established. The declaration for this states: "Cornwall is a distinct region of the British Isles and Europe with a unique culture, language, and history. Today, however, Cornwall is recognised as one of the poorest regions of the European Union. It has persistent economic problems, which have resulted in it being granted Objective 1 status by the European Commission.
There is now an urgent need for a fundamental change in the way that Cornwall is governed, the means by which policies are implemented, and the institutions accountable for delivery. With this in mind, the Cornish Constitutional Convention is leading the campaign for a Cornish Assembly, which has won the support of over 50,000 people who have signed individual declarations calling for a new democratic settlement for Cornwall. We, the People of Cornwall, must have a greater say in how we are governed. We need a Cornish Assembly that can set the right democratic priorities for Cornwall and provide a stronger voice for our communities in Britain, in Europe and throughout the wider world." (Quoted from www.senedhkernow.com.) Cornwall is renowned for the quality of its beautiful surf and the number of its surfers. Beaches such as Fistral in Newquay are world famous, so locals tend to head for Constantine or Watergate Bay. There is tension between the locals and the 'Emmets', the tourists, as the population of the Duchy (~400,000) doubles in summer, with obvious implications for the environment: overcrowding and water shortage. Traditional Cornish foods have become popular, in a bastardised way, in the rest of Britain: foods such as the Cornish pasty, Starry-gazey pie, Cornish Yarg, Cornish Pepper, and cream teas. The 'real thing' is available in any cafe, bakery or local shop in Cornwall. Much of the Cornish cry for independence is based upon the dire economic situation, caused by the decline of mining and an increasing reliance on tourism, with attractions such as the Eden Project effectively controlling the local economy. Poverty is high, especially since the decline of traditional industries. Teenage pregnancies, drug problems (both smuggling and abuse), community breakdowns and unemployment are all effects of this. This is the Cornwall the tourists won't see.
BACKGROUNDER ON THE COURT OPINION ON THE MULLER V. OREGON CASE

One main goal of the Progressive movement, which lasted from the late 1890s until World War I, was to ameliorate the worst aspects of industrialization -- fouling of the environment, abuse of workers, exploitation of consumers and corruption of the political process. Starting in the state legislatures, reformers passed a variety of statutes, including factory safety laws, workmen's compensation, minimum wages and maximum hours. But conservatives were able to block some of these programs in the courts, where they appealed to a judiciary imbued with the notions that private property was sacrosanct and that legislatures should not be able to tell people how to use their property. Courts also sustained the notion of "liberty to contract," claiming that employers and employees should be able to negotiate without state interference. The courts did acknowledge that the state had an inherent police power, by which it could interfere with property and labor contracts in order to protect the health and safety of citizens. But in the 1905 case of Lochner v. New York, a bare majority of the Supreme Court had ruled that a law limiting bakery workers to a ten-hour day was unconstitutional, because such a measure bore no relation to the workers' health or safety. The Court conceded, however, that such measures might be permissible if it could be shown that the law did in fact serve to protect health or safety. When the state of Oregon established a ten-hour workday for women in laundries and factories, business owners attacked it on the grounds that, like the New York law, it bore no relation to the women's health or safety. To defend the law, Oregon turned to the noted Boston attorney Louis D. Brandeis, who had already won a reputation for defending the public interest.
Brandeis seized upon the opening in Lochner, namely, that if he could show how the Oregon law related to worker health and safety, then the Court would have to sustain it. He devised a highly unusual brief. He covered the traditional legal precedents in just two pages, and then filled over 100 pages with sociological, economic and physiological data on the effect of long working hours on the health of women. Justice Brewer's opinion not only acknowledged the brief, a highly unusual step, but conceded that women were in fact different from men, and thus needed this type of factory protection. Brandeis's strategy had worked, but it was a strategy for the times; he himself did not consider women inferior or subservient to men. The most important result of the Brandeis brief and of the decision in this case is that it set the model for all future reformers attempting to use the law to affect social and political conditions. Muller democratized the law, in that it made it more open to the everyday facts of life; it called upon justices to take into account the effect of their decisions on the real world and on the lives of real people. For further reading: Philippa Strum, Louis D. Brandeis: Justice for the People (1984); Alpheus T. Mason, "The Case of the Overworked Laundress," in Quarrels That Have Shaped the Constitution (1975).

MULLER V. OREGON (1908)

Justice Brewer delivered the opinion of the Court. We held in Lochner v. New York (1905) that a law providing that no laborer shall be required or permitted to work in a bakery more than sixty hours in a week or ten hours in a day was not as to men a legitimate exercise of the police power of the State, but an unreasonable, unnecessary and arbitrary interference with the right and liberty of the individual to contract in relation to his labor, and as such was in conflict with, and void under, the Federal Constitution. That decision is invoked by plaintiff in error as decisive of the question before us.
But this assumes that the difference between the sexes does not justify a different rule respecting a restriction of the hours of labor. In patent cases counsel are apt to open the argument with a discussion of the state of the art. It may not be amiss, in the present case, before examining the constitutional question, to notice the course of legislation as well as expressions of opinion from other than judicial sources. In the brief filed by Mr. Louis D. Brandeis, for the defendant in error, is a very copious collection of all these matters... The legislation and opinions referred to... may not be, technically speaking, authorities, and in them is little or no discussion of the constitutional question presented to us for determination, yet they are significant of a widespread belief that woman's physical structure, and the functions she performs in consequence thereof, justify special legislation restricting or qualifying the conditions under which she should be permitted to toil. Constitutional questions, it is true, are not settled by even a consensus of present public opinion, for it is the peculiar value of a written constitution that it places in unchanging form limitations upon legislative action, and thus gives a permanence and stability to popular government which otherwise would be lacking. At the same time, when a question of fact is debated and debatable, and the extent to which a special constitutional limitation goes is affected by the truth in respect to that fact, a widespread and long continued belief concerning it is worthy of consideration. We take judicial cognizance of all matters of general knowledge. 
It is undoubtedly true, as more than once declared by this court, that the general right to contract in relation to one's business is part of the liberty of the individual, protected by the Fourteenth Amendment to the Federal Constitution; yet it is equally well settled that this liberty is not absolute and extending to all contracts, and that a State may, without conflicting with the provisions of the Fourteenth Amendment, restrict in many respects the individual's power of contract... That woman's physical structure and the performance of maternal functions place her at a disadvantage in the struggle for subsistence is obvious. This is especially true when the burdens of motherhood are upon her. Even when they are not, by abundant testimony of the medical fraternity continuance for a long time on her feet at work, repeating this from day to day, tends to injurious effects upon the body, and as healthy mothers are essential to vigorous offspring, the physical well-being of woman becomes an object of public interest and care in order to preserve the strength and vigor of the race... The two sexes differ in structure of body, in the functions to be performed by each, in the amount of physical strength, in the capacity for long-continued labor, particularly when done standing, the influence of vigorous health upon the future well-being of the race, the self-reliance which enables one to assert full rights, and in the capacity to maintain the struggle for subsistence. This difference justifies a difference in legislation and upholds that which is designed to compensate for some of the burdens which rest upon her... For these reasons, and without questioning in any respect the decision in Lochner v. New York, we are of the opinion that it cannot be adjudged that the act in question is in conflict with the Federal Constitution, so far as it respects the work of a female in a laundry. Source: 208 U.S. 412 (1908).
There have been many debates regarding the causes of Autism. It is a question that has haunted many parents, who wonder if they have done something wrong to cause their child to have Autism. Although some types of Autism have known causes, most are found to be idiopathic, or without a known cause. There are many theories as to what causes Autism, including vaccinations, immune deficiency, food allergies, genetics and many others. However, none of these theories has been proven. You would think that with so much information available, someone would have figured out the cause of Autism by now; yet it remains seemingly a medical mystery. However, researchers from Boston Children's Hospital may have brought us one step closer to discovering the cause of Autism. The researchers have found that recent tests measuring the electrical activity in the brain can distinguish children with Autism from children with typical brains as early as 2 years of age. Their study was published last week in the online journal BMC Medicine. Researchers compared raw data from the electroencephalogram tests, or EEGs, of 430 children with Autism and 554 other children from 2 to 12 years of age. Children with Asperger Syndrome did not participate in the testing. The researchers found that children with Autism had consistent EEG patterns showing altered connectivity between different parts of the brain. In general, they showed reduced connectivity compared with the other children's brains. As we get closer to pinpointing the brain's functions and their effects on human behavior, we grow closer to solving the mystery of the causes of Autism. There are other ways to identify whether children have any type of autism, but many of these signs go unnoticed. Early detection, however, can have a huge effect on how students progress and develop if they get early-intervention services to match their needs and support their development.
There are also some known causes, including Depakote (also named Valproate), an anti-seizure medication taken during pregnancy; Fragile X Syndrome, a genetic disorder; Rett Syndrome, a genetic disorder affecting only females; Tuberous Sclerosis, a rare genetic disorder; and Prader-Willi Syndrome, also a rare genetic disorder. According to The National Association of School Psychologists, students with Autism are most often diagnosed by school staff. It may be possible, however, that the EEG patterns could change the way children are diagnosed. The researchers believe that their findings could lead to a diagnostic test for Autism, particularly at younger ages, when behavior-based measures are less reliable. The researchers next plan to study the EEG patterns of children with Asperger Syndrome and children with Autism. This promises to reveal why the condition affects some children in one way and others in another. Regardless of the cause, getting a good support system in place for your child is vital. If you are struggling with your child's school to diagnose and/or provide appropriate support to your child with Autism, we can help. Please visit www.hzlegal.com for more information.
The Barrette Project
Honoring Native Women Survivors of Sexual Violence and Creating a Safe Place for our Voices to be Heard and our Stories to be Told
Sexual violence is one of the most undisclosed and unreported crimes in today's society, especially when committed against Native women and girls. It is painful to acknowledge that as Native women each of us either has experienced or knows someone who has experienced some form of sexual violence such as rape, date rape, incest, harassment, being used in prostitution, trafficking, or pornography, and many other sexually degrading experiences. Historically, our people have endured generations of racism and trauma due to the conquest of our tribal Nations through assimilation policies and rape used as a tool of war. As Native women, we carry this collective experience in our hearts and the weight is heavy. Because these crimes are often undisclosed and unreported, the documented numbers of assaults taking place in our communities are misleading and needed resources are not allocated. Our stories are not told, and our voices are not heard. The invisibility we feel is not just our own, but that of our entire Nation. We will not be invisible any longer. A few years ago the staff and membership of the Minnesota Indian Women's Sexual Assault Coalition (MIWSAC) talked about creating a way for Native women to share their stories in a beautiful, powerful way that could also raise awareness about the high rates of sexual violence committed against Native women and girls; it was from this that the Barrette Project was born. We have also created a Barrette Project book, where we share the stories of our mothers, grandmothers, sisters, daughters and aunties to raise awareness, honor survivors, and share our powerful stories of healing. The Barrette Project is a living memorial. It is evidence that we are not invisible. We are still here, we have survived, and we honor each other!
Our vision is to continue adding barrettes and stories to the Barrette Project for years to come as we raise awareness, reduce occurrences, and work to end sexual violence in our communities. We also encourage women and girls who are not yet ready to disclose their story to anonymously make or donate barrettes to the project. We honor the courage it takes to make this step. We use beaded barrettes because they represent so much to us as Native women: pride and beauty, a piece of our dance regalia, the love we feel when clipping a barrette in our daughter's hair, or fear and helplessness, knowing that the same barrette may have been jerked from her hair as she was being assaulted. It is because beaded barrettes carry such strong symbolism that we wanted to use them as a physical representation of the stories we share on our traveling memorial: red velvet-covered boards with the stories and barrettes displayed. We encourage you to submit your story, or donate a barrette to honor a loved one, at any time to our ongoing living memorial honoring all of our Native sisters who have experienced sexual violence throughout history. There are no age limits, time limits, or submission limits on the survivors' stories. Many of us have been victimized as children, and multiple times throughout our lives, and many of us may want to honor several people. We may also want to honor those who have passed on. We have a link to our submission forms and guidelines here. We only ask that you be mindful of the guidelines regarding permission and requests for anonymity. Please join us in our efforts to restore the sacredness and visibility of women in our communities. Help honor the stories and the lives of survivors as we stand together to break the silence.
GUIDELINES FOR SUBMITTING A STORY TO THE MIWSAC BARRETTE PROJECT:
Sexual violence in any form is a shaming and degrading experience.
For that reason, it is imperative that we respect every individual's right to privacy, anonymity and choice to disclose personal information. The following guidelines have been established to protect every sexual assault victim and to avoid their exploitation.
1. Whenever possible, if you would like to honor a victim of sexual violence, talk to the individual, explain the purpose of the Barrette Project and ask permission to make or donate a barrette to share and honor their experience. Reassure the individual that no identifying information will be used in the display. Stress that their story will be displayed alongside a barrette in order to raise awareness about the high incidence of sexual violence.
2. Ask permission to use a name. It could be their first name only or, if they prefer, a made-up name.
3. Ask permission to include their tribal affiliation.
4. Ask permission about what details they feel comfortable sharing.
5. Ask permission to use the year the assault occurred.
6. If the person you wish to honor doesn't want any personal information disclosed, ask if you can simply donate a barrette in their honor so our communities will recognize the high rates of sexual violence. Perhaps you could write a free-style poem or, for example, ask if you can simply state, "my sister, friend, etc. is a survivor of rape."
7. Tell the story with respect for the victim, family members, and friends.
8. Those donating barrettes and sharing stories should determine if they too would like to remain anonymous or if they would like to be recognized for their contribution. They should carefully consider whether the stories they share will jeopardize the safety or anonymity of the person they are honoring.
9. If you are a survivor and would like to share your story, carefully consider how much personal information you would like to share. The Barrette Project will be displayed in many public places.
10.
Each person who donates will be asked to submit a form, which will be filed and held confidential at the MIWSAC office in order to maintain the integrity of the research while accounting for the actual number of acts of sexual violence perpetrated against women and girls in our communities.
11. The Barrette Project will assist MIWSAC in providing information to tribal leadership and state and federal policy makers as we work for social change to protect Native women and girls.
12. Barrettes and stories will become the property of MIWSAC to be used as part of a public awareness campaign.
*Guidelines for displaying barrettes with accompanying stories:
A Guardian Barrette will be placed with the display to represent the thousands of sexual assaults against Native women and children throughout time for which we don't have a story. Current statistics of sexual assault against Native women and children will be on display to raise public awareness. All stories will be typewritten by MIWSAC staff and then laminated. A barrette will be attached to each story. An archive book will accompany the display and, with permission from the donors, will include the reasons the donor chose to honor this individual and the impact it has had on their life. An archive book will also be provided where people who see the display can share their thoughts or comments.
Amoebic dysentery is unpleasant, to say the least. Some fall prey to it while in a foreign country trying to enjoy a well-deserved vacation. Eating amoeba-contaminated raw vegetables is a common way to get amoebic dysentery. When dysentery is a concern, you must disinfect any raw vegetables and fruit before consuming them, whether you intend to peel them or not. Disinfect the vegetables as soon as they come home from the store and you'll prevent contamination and make sure your kitchen remains a healthy place to eat. Wash the vegetables with running tap water. This is OK even if the water may be a source of amoebas; the disinfectant will handle those, and it will work much more effectively once visible surface soil is removed. Soak the raw vegetables in a clean bowl filled with a commercial vegetable disinfectant listed as an amebicide (such as Microdyn or Bacdyn). Follow the manufacturer's instructions for dilution rates and soaking times. If you do not have access to a disinfectant, soak the vegetables in a bleach solution for five seconds. For a more natural option, use a grape seed extract based disinfectant and soak for the manufacturer-recommended amount of time. Or, soak in a solution of one part water and three parts vinegar for one minute. Wash and dry your hands. Remove the vegetables from the soak and place them in a clean colander to drip completely dry. If you used chlorine bleach, rinse them first with bottled water.
Tips & Warnings
- Cooking vegetables will kill an amoeba.
- The temperature of the rinse and soak water should be slightly higher than the temperature of the vegetables; otherwise, the flesh of the vegetables may take up surface microorganisms.
- You can reuse a solution of commercial vegetable disinfectant, as long as there is no visible soil or sediment in it.
Cayuse village in the distance seen from the Whitman Mission grist mill; sketch by a National Park Service artist.
Before the first Euro-Americans settled in the Walla Walla Valley, a winter village of the Cayuse Indians was located for many years in the vicinity of what later became the site of the Whitman Mission. When Marcus and Narcissa Whitman established their Presbyterian Mission in the valley in 1836, the Cayuse Pásxa winter village was located about a quarter mile east of it, clearly visible and an easy walk from the Mission grounds. The current Whitman monument, a granite obelisk on the hill above the Mission, overlooks the site of what was once the Pásxa Village to the southeast. This was a village of equestrians who welcomed the Whitmans and made room for them on their ancestral grounds, rich and productive grazing lands of native perennial grasses. Nearby, a thermal spring that never froze, and which still remains active, was a popular spot for watering horses, especially during hard winters. A Cayuse winter village was a place of tule mat longhouses and temporary dwellings, where equipment was constructed and repaired and oral traditions were shared during the long winter months. In other seasons of the year, most of the village's inhabitants migrated to traditional upland hunting and gathering places: to fish and dig roots in the spring, hunt and pick berries in the summer and fall, and occasionally cross the Rocky Mountains to hunt bison. The trek to the plains was recognized as an opportunity to expand trade with more interior peoples, and the Cayuse and Walla Walla became adept middlemen. Historian Verne Ray has identified seventy-six traditional Cayuse village sites, most of them temporary, seasonal sites. The extent of the Cayuse territory was vast.
The Cayuse maintained villages on the Tucannon, Snake, Touchet, and Walla Walla Rivers in Washington, and on the Umatilla, Grande Ronde, Burnt, Powder, and John Day Rivers in Oregon, as well as on several Washington and Oregon creeks. Ray identified five separate villages in the Walla Walla Valley and seven Cayuse Bands scattered throughout Eastern Oregon and Washington. The Walla Walla River Cayuse Band was called the Pa’cxapu.
The Prince’s Cabin
The Prince’s Cabin, 2008, before moving and restoration at Frenchtown.
When Robin and Kriss Peterson purchased the Smith farmstead just east of the Whitman Mission in 1990, they became interested in the old cabin sitting amongst the farm buildings. Robin was the minister of the College Place Presbyterian Church, taught French at Whitman College, and was also an active farmer. His passionate interest led him to trace the history and origins of the cabin back to the Prince, the younger brother of the headman at the Cayuse village just east of the Whitman Mission, all of which he described in a paper he provided to interested people and organizations, including the Frenchtown Historical Foundation. Robin’s goal for the cabin was to move it to a more prominent location on their property and to restore it for use as a center of reconciliation. Following Robin’s death, in 2013 Kriss Peterson donated the cabin to the Frenchtown foundation on the condition that it be moved to the Frenchtown site, restored, and made available for public display.
Moving & Restoring the Cabin
The cabin starts to take shape again as the lower walls are reinstalled, 2014.
During the more than 100 years the property was owned by members of the Smith family, shed roofs were added on both the cabin’s north and south sides, protecting it from deterioration. The east wall was also opened to allow farm equipment storage, and electrical wiring was added for its use as a playhouse and machine shop.
Beginning in 2008, the Petersons began efforts to prevent deterioration of the cabin, and to prepare it for moving. To move and restore the cabin at the Frenchtown site, the Frenchtown Historical Foundation put together a team of contractors and volunteers. Although the upper floor of the cabin was intact, the partially dismantled and deteriorating lower walls were in six pieces. The first tasks were to prepare a new site for the cabin, to document the condition of the cabin and its contents before the move, and to decide how to move it without further damage. In consultation with archaeologists, contractors, craftsmen, historians, tribal representatives, and movers, a decision was made to remove the remaining shed on the south side of the cabin, to move the upper story intact, to temporarily stabilize the lower wall segments instead of further dismantling them, and to move them separately—which proved to be a successful strategy. For a detailed history of the Prince, the cabin, its design, and its moving and restoration, click here to visit the Frenchtown Historical Foundation website. An interpretive sign has been installed by Walla Walla 2020 on Last Chance Road just north of the Walla Walla River overlooking the original site of the village and the cabin. The cabin itself is now available for viewing at the Frenchtown Historic Site two miles west of the Whitman Mission seven days a week from dawn to dusk with no admission charge, and features additional signage. More information can also be found at http://www.frenchtownwa.org/ and at http://www.ctuir.org/. Historic Sites & Markers Project Thanks to Greg Cleveland, Jennifer Karson Engum, Robin Peterson, Mahlon Kreibel, Sam Pambrun, and others who contributed information shown on this page. Information on Walla Walla 2020’s Historic Research & Plaque Project honoring individual buildings and properties is also available at www.ww2020.net/historic-building-research.
We can’t deny the fact that we need money. We need money to pay our bills. We also need it to advance in our lives (e.g. for education). But what is the right way to treat money? How can we live a meaningful life without money worries? I believe this quote by Jonathan Swift gives us a good answer: “A wise man should have money in his head, but not in his heart.” I like this quote. In my opinion, it captures the essence of what our attitude toward money should be. Money should be in your head, but not in your heart. What does it mean? First, it means that you should think about money and treat it in a rational way. You should think about how to make money. More importantly, however, you should think about how to manage it. There are many people who make a lot of money but live in a financial mess because they can’t manage it. Second, it means that money shouldn’t be your main motivation. It shouldn’t be the main thing that drives your decisions. If it is, you might be wealthy on the outside, but feel empty on the inside. There is only one way to live a meaningful life: by contributing to a cause that you care about. Here are four tips to help you apply the principle: 1. Find your cause. What dissatisfaction do you have about the world that you can do something about? Where do you want to, and where can you, make a difference? In my case, it bothers me to see people (including myself) live below their full potential. That’s my cause in starting this blog. I want to help people reach their full potential. This gives me a sense of purpose. You can apply this to your work. Ask yourself: what can I contribute? How can I make a difference? 2. Make your cause your motivation. After finding your cause, make it your motivation. Make it your main reason for doing things. I can assure you: this will make you excited because you have a sense of purpose. 3. Educate yourself in personal finance.
When it comes to dealing with money, the first thing you should do is educate yourself. A good resource for this is Investopedia. Read the articles there and you will have a good foundation. 4. Build a system for your personal finance. Based on what you learn in personal finance, you should then build a system to manage your money. The system will help you manage your money in a rational way. Your system should include tracking your expenses. This is important to help you make rational decisions. Do you know how much money you spent last month? And do you know exactly what you spent it on? Start tracking your expenses if you haven’t. You should put money in your head, but not in your heart. By applying the tips above, you will get the best of both worlds: you will live a meaningful life without money worries. If you liked this post, please share it on Facebook. Thanks!
It’s winter. You’re stuck inside the house, school is out, and you don’t know what you’re going to do to entertain the little ones? We feel you. Don’t despair. We’ve put together a list of easy activities you can do at home (with stuff you probably already have around the house). Check out our five holiday-inspired activities that address all five senses: 1. Holiday Sensory Bin Here’s what My Small Potatoes says you need to gather: All you need to do is cut the ribbons to 9 inches (or less), twirl the pipe cleaner into a spring, and throw it all into a bowl. You’ll be amazed at how long it will keep your little one entertained. Filling up and dumping out? That’s a toddler’s jam. Hence the cups and spoons. Watch baby explore. (And try not to let your OCD take over when the cooked rice goes everywhere.) Small Potatoes says this sensory play allows your baby “to practice cause and effect, to test hypothesis, to learn about the properties of weight and volume, to discover the wonders of gravity, and to lose himself in the world of experimenting.” If you’re headed to a relative’s house, put everything in a Tupperware container, and you have a sensory bin on-the-go. 2. Colored Ice Sensory Play Make ice cubes in several different colors (either with food coloring or food puree). (Because we all know the cubes will end up in the baby’s mouth.) Make sure the cubes are large enough that your child won’t choke on them. Learn Play Imagine recommends putting the cubes near a tub of water. Your child will be amazed when the cubes start to melt and change the color of the water. Talk to your child about what she is experiencing. You’re building her vocabulary with words like cold, shiny, sweet and smooth (and, perhaps, “wet mess”). 3. Bring Snow Inside If you’re over bundling up and heading outside to play in the snow, bring it inside! Fill up a container with snow, and the possibilities are endless. 
Growing a Jeweled Rose suggests: - Using kitchen gadgets (measuring cups and cookie cutters) - Sand toys (buckets and shovels) - Paint brushes and water color - Spray bottles with colored water - Playing with vehicles (like, construction trucks with scoops) No snow? No problem. You can make your own with two ingredients: rice cereal and coconut oil. Lemon Lime Adventures has a recipe here for cloud dough. Give it a holiday flavor with a sprinkle of cinnamon! 4. Holiday Sensory Board Grab some glue, a foam board and your favorite Christmas items. Get things that have lots of different textures, like you see on this sensory board from Toddler Approved. The bows, the hat, the pine cone: your tot is not going to be able to keep his hands away. Introduce each object to your child and describe them in detail. This will teach your child a bunch of new words. Play a game with your child by describing an item and let him or her guess which one you are talking about. 5. Light Sensory Play Unless you have your Christmas tree barricaded in a playpen, your baby is probably trying to put all of the lights and ornaments in her mouth. Lemon Lime Adventures has a solution that lets your baby explore while staying safe! Stuff some lights into a plastic container, and screw on the lid. Your baby gets to play with the container, watch the lights, and even try to put it in her mouth without you worrying she’ll get hurt. Do you have any holiday sensory play ideas? Share your favorite ways to play and learn.
As a seventeenth-century marriage counselor, William Gouge wasn't bad. Here's his seven points for maintaining peace at home. Notice that the husband and the wife are equally responsible for maintaining domestic harmony and that they have the same duties. Now, this sort of egalitarianism was not true of Gouge's counsel overall; at times he laid out specific roles for the husband and the wife based solely on their gender (like the importance of obedience in wives). But it's clear that a successful marriage was not simply the assertion of power by the husband over the wife. As you read this, don't miss the preacher's use of catchy phrases. "The second blow makes the fray," "Wrath must not lie in bed with two such bed-fellows." I'd bet any number of groats that he used those in his sermons on many a Sunday. Here is Mr. Gouge: 1. All offences so much as possibly may be must be avoided. The husband must be watchful over himself that he give no offence to his wife: and so the wife on the other side. Offences cause contentions. 2. When an offence is given by the one party, it must not be taken by the other; but rather passed by: and then will not peace be broken. The second blow makes the fray. 3. If both be incensed together, the fire is like to be the greater: with the greater speed therefore must they both labour to put it out. Wrath must not lie in bed with two such bed-fellows: neither may they part beds for wrath sake. That this fire may be the sooner quenched, they must both strive first to offer reconciliation. Theirs is the glory who do first begin, for they are most properly the blessed peacemakers. Not to accept peace when it is offered is more than heathenish: but when wrath is incensed, to seek atonement is the duty of a Christian, and a grace that cometh from above. 4. Children, servants, nor any other in the family must be bolstered up by the one against the other. 
The man's partaking with any of the house against his wife, or the wife against her husband, is an usual cause of contention betwixt man and wife. 5. They must forbear to twit one another in the teeth with the husbands or wives of other persons or with their own former husbands or wives [in case they have had any before]. Comparisons in this kind are very odious. They stir up much passion, and cause great contentions. 6. Above all they must take heed of rash and unjust jealousy, which is the bane of marriage, and greatest cause of discontent that can be given betwixt man and wife. Jealous persons are ready to pick quarrels, and to seek occasions of discord: they will take every word, look, action, and motion, in the worse part, and so take offence where none is given. When jealousy is once kindled, it is as a flaming fire that can hardly be put out. It maketh the party whom it possesseth implacable. 7. In all things that may stand with a good conscience they must endeavour to please one another: and either of them suffer their own will to be crossed, rather than discontent to be given to the other. S. Paul noteth this as a common mutual duty belonging to them both, and expresseth their care thereof under a word that signifieth more than ordinary care, and implieth a dividing of the mind into divers thoughts, casting this way, and that way, and every way how to give best content.
Babies born during water births are at risk of contracting Legionnaires' disease, a severe and potentially life-threatening form of pneumonia that infected two infants in Arizona last year. Both infants survived after receiving antibiotics. Infections among infants born in heated birthing pools are rare. So public health officials in Maricopa County were alarmed when they learned of two cases that had taken place just months apart. They identified "numerous gaps in infection prevention" during the water births, according to a report released Thursday by the Centers for Disease Control and Prevention about the Arizona investigation. In the United States, there...
Batik is a two-dimensional decorative art on cloth. This hand-made textile art begins with wax: molten wax is painted onto the cloth to cover the parts that are to resist the dye. Batik was primarily used for the kebaya and the sarung (traditional women's and men's garments). Nowadays its uses vary: shirts, accessories such as women's handbags and scarves, and even decorative items such as tablecloths, paintings, and lampshades. The center of batik craft is in Central Java, in the cities of Solo, Pekalongan, and Yogyakarta, although batik can also be found in other areas such as Lasem and Cirebon.

The Indonesian painter who was able to gain an international reputation is the realist Raden Saleh, who lived in the 19th century; the 20th-century painter who reached the same stature is Affandi. Traditional painting flourishes in Klungkung, Bali, in the form of cloth paintings. Then, in the thirties, two European painters, Walter Spies (Germany) and Rudolf Bonnet (Netherlands), came to Bali, bringing the influence of European styles and concepts of art. Since then, Balinese painting has developed far more widely in theme and style. The painting above is one of Spies' works, Morning in Iseh, which renders a beautiful morning landscape at Iseh. Its romantic-decorative shapes show the influence of traditional Balinese painting, while the influence of European art can also be seen.

Wayang is a form of traditional Javanese theatrical performance. Known as the world of shadows, wayang, which here refers to wayang kulit, is a theater performed by a dalang (puppeteer) who plays puppets behind a wide white screen illuminated by an oil lamp. The flickering shadows the puppets cast on the screen create a lively atmosphere for the performance. The puppets (themselves also called wayang) are made from carved cow or goat skin. Wayang kulit is performed at important ceremonies that mark a stepping stone in one's life, such as a wedding or a selamatan.
The plays performed are stories from the Mahabharata and the Ramayana. Wayang itself carries a great deal of philosophy touching all aspects of Javanese life. Other kinds of wayang are wayang golek, wayang in three-dimensional form; wayang orang, performed by people; and wayang topeng, performed by people wearing masks.

Madura Island lies to the north-east of Java. Owing to its harsh geographical conditions, Madura has one unique tradition, Kerapan Sapi. Kerapan Sapi, or bull racing, is a blend of traditional art and sport. The race is held between two pairs of human-driven bulls, which run a kind of drag-race along a 120-meter track. For a week, traditional games and a fair are held alongside it.

Almost every part of Indonesia has traditional dancing; more than 200 types of dance are known. Their functions range from performance in the palace, paying homage to God, entertainment, and ritual ceremonies to dances for the dead at funeral ceremonies. The gestures of Javanese dancing are generally slow and calm, emphasizing graceful movement. In Central Java, dance was at first a sacred part of palace life, performed only for the king; Javanese dance can still be seen at performances in the kraton. In Bali, by contrast, dance movements tend to be jerky and faster, stressing their expressive nature. There, dance functions in a wider sense and touches every pore of society's life. The best-known Balinese dance is the Legong, usually performed by Balinese girls (picture above).

The mask is an ethnic art that was previously used in ritual as a disguise for ordinary people facing the spiritual world (the world between heaven and earth). Its use continues in performance, still keeping its magical and mystical value. Masks are spread widely across Indonesia; the masks shown here are examples of Javanese masks. Masks are also used intensively in Bali, for example in the Barong performance.
The Javanese mask is worn in a theatrical show called wayang topeng, which performs plays such as the Panji stories and the Mahabharata and Ramayana epics.
A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time. No big deal, you might think. After all, computers have been beating humans at western chess for years, and when IBM’s Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity. That hasn’t happened yet, but then western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, offering about 10^224 possible games. The Mainichi Daily News reports that top women’s shogi player Ichiyo Shimizu took part in a match staged at the University of Tokyo, playing against a computer called Akara 2010. Akara is apparently a Buddhist term meaning 10^224, the newspaper reports, and the system beat Shimizu in six hours, over the course of 86 moves. Japan’s national broadcaster, NHK, reported that Akara “aggressively pursued Shimizu from the beginning”. It’s the first time a computer has beaten a professional human player. The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu’s defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007. Perhaps the association doesn’t mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player. Meanwhile, humans will have to face up to more flexible computers, capable of playing more than just one kind of game. And IBM has now developed Watson, a computer designed to beat humans at the game show Jeopardy.
Watson, says IBM, is “designed to rival the human mind’s ability to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately, demonstrate confidence to deliver precise final answers”. IBM say they have improved artificial intelligence enough that Watson will be able to challenge Jeopardy champions, and they’ll put their boast to the test soon, says The New York Times. I’ll leave you with these wise and telling words from the defeated Shimizu: “It made no eccentric moves, and from partway through it felt like I was playing against a human,” Shimizu told the Mainichi Daily News. “I hope humans and computers will become stronger in the future through friendly competition.” Via New Scientist
It's a popular opinion that youngest children are often less resourceful and more bratty than their older siblings. Parents tend to treat their youngest children like babies for years, say the haters, which can shape them into less capable, more entitled adults. Though the science of youngest-child-syndrome is very much up for debate, new research from Jordy Kaufman, a senior research fellow at Swinburne University of Technology in Australia, might help explain why some parents baby their youngest children long past babyhood. In a survey of 747 mothers, 70 percent recalled that when returning from the hospital with a newborn, their former youngest child seemed noticeably larger than they remembered. But rather than suddenly overestimating the size of their older children, Kaufman believes the mothers may have been suffering from the "baby illusion," where they underestimate the size of their newborns. Asking 77 mothers to estimate the height of one of their children between two and six years old, he found the moms consistently guessed low for their youngest child, or only child, by an average of three inches. Meanwhile, they guessed their oldest child's height pretty accurately. Kaufman says the baby illusion may have developed as an adaptive mechanism to protect the most vulnerable offspring. "A perception of baby-like features, such as cuteness or smaller size, helps parents prioritize care for the child who most needs it," says Joy Jernigan at Today. This may play into a mother's desire to baby her youngest for years, while treating her oldest children like "big kids," even if they're only two or three. "Our research potentially explains why the 'baby of the family' never outgrows that label. To the parents, the baby of the family may always be 'the baby'," Kaufman told the BBC. 
All this seems to fit nicely into the "birth order theory," a kind of controversial idea that says the sequence of births impacts how smart, outgoing, and successful people become as adults. Though researchers point out that this is a nearly impossible theory to test — how does one control for variables like socioeconomic status, sex, parental birth order, and other environmental circumstances, for example — hundreds of studies have identified certain characteristics that only oldest, middle, and youngest children tend to share with their kind. Here's how Susan Krauss Whitbourne, a Professor of Psychology at the University of Massachusetts Amherst, describes the profile of the youngest-borns in Psychology Today: The youngest child may feel less capable and experienced, and perhaps is a bit pampered by parents and even older sibs. As a result, the youngest may develop social skills that will get other people to do things for them, thus contributing to their image as charming and popular. [Psychology Today] Burn. Older children, who may suddenly look big to their moms when a new baby comes home, may not grow up as charming and popular as the little ones. But, according to the birth order theory, they tend to come out on top in other ways. A 2010 study that compiled data from 200 birth order trials found that first-borns tended to be higher achievers, more motivated, and more successful in academic and intellectual pursuits.
Source: The Wall Street Journal AMBERG, Germany—The next front in Germany’s effort to keep up with the digital revolution lies in a factory in this sleepy industrial town. At stake isn’t what the Siemens AG plant produces—in this case, automated machines to be used in other industrial factories—but how its 1,000 manufacturing units communicate through the Web. As a result, most units in this 100,000-plus square-foot factory are able to fetch and assemble components without further human input. The Amberg plant is an early-stage example of a concerted effort by the German government, companies, universities and research institutions to develop fully automated, Internet-based “smart” factories. Such factories would make products fully customizable while on the shop floor: An incomplete product on the assembly line would tell “the machine itself what services it needs” and the final product would immediately be put together, said Wolfgang Wahlster, a co-chairman of Industrie 4.0, as the collective project is known. The initiative seeks to help German industrial manufacturing—the backbone of Europe’s largest economy—keep its competitive edge against the labor-cost advantages of developing countries and a resurgence in U.S. manufacturing. Source: Foreign Affairs The Revolution in DNA Science — And What To Do About It The revolution in genetic engineering that will make it possible for humans to actively manage our evolutionary process for the first time in our species’ history is already under way. In laboratories and clinics around the world, gene therapies are being successfully deployed to treat a range of diseases, including certain types of immune deficiency, retinal amaurosis, leukemia, myeloma, hemophilia, and Parkinson’s. This miraculous progress is only the beginning.
The same already existing technologies that will soon eliminate many diseases that have victimized humans for thousands of years will almost certainly be used eventually to make our species smarter, stronger, and more robust. The prospect of genetic engineering will be exciting to some, frightening to others, and challenging for all. If not adequately addressed, it will also likely lead to major conflict both within societies and globally. But although the science of human genetic engineering is charging forward at an exponential rate, the global policy framework for ensuring this scientific progress does not lead to destabilizing conflict barely exists at all. The time has come for a meaningful dialogue on the national security implications of the human genetic revolution that can lay the conceptual foundation for a future global policy structure seeking to prevent dangerous future conflict and abuse. The rate of recent progress in human genetics has been astounding. It was only 61 short years ago that the DNA helix was uncovered and a mere 50 years later, in 2003, when the human genome was fully sequenced. The cost of sequencing a full human genome was roughly $100 million in 2001 and is under $10,000 today. If even a fraction of this rate of decrease is maintained, as is highly likely, the cost will approach negligibility in under a decade, ushering in a new era of personalized medicine where many treatments will be customized based on each person’s genetic predisposition. Processes like these will only widen and deepen in the future, just at an exponentially accelerated pace. Source: The Economist Which MBA?, 2014 The Chicago boys, and girls, come top again in our business-school ranking For the fourth time in five years, the University of Chicago’s Booth School of Business tops The Economist’s ranking of full-time MBA programmes.
Even as banking jobs have become scarcer, Chicago, famed for its prowess in finance, has maintained a strong record of placing students in work. Last year 94% of graduates were employed within three months of leaving. Fifteen of the top 20 schools are American. However, HEC Paris, the top European school, has climbed four places to fourth, mostly because of the impressive salaries its graduates get. The University of Queensland is the top-ranked school outside America and Europe. This is the 12th time we have published the ranking. Each year we ask students why they decided to take an MBA. Our ranking weights data according to what they say is important. The four categories covered are: opening new career opportunities (35%); personal development/educational experience (35%); increasing salary (20%); and the potential to network (10%). The figures we collate are a mixture of hard data and subjective marks given by the students. Source: Project Syndicate Nathan Eagle is the CEO of Jana, a World Economic Forum Technology Pioneer. BOSTON – Nearly everyone has a digital footprint – the trail of so-called “passive data” that is produced when you engage in any online interaction, such as with branded content on social media, or perform any digital transaction, like purchasing something with a credit card. A few seconds ago, you may have generated passive data by clicking on a link to read this article. Passive data, as the name suggests, are not generated consciously; they are by-products of our everyday technological existence. As a result, this information – and its intrinsic monetary value – often goes unnoticed by Internet users. But the potential of passive data is not lost on companies. They recognize that such information, like a raw material, can be mined and used in many different ways. For example, by analyzing users’ browser history, firms can predict what kinds of advertisements they might respond to or what kinds of products they are likely to purchase. 
Even health-care organizations are getting in on the action, using a community’s purchasing patterns to predict, say, an influenza outbreak. Indeed, an entire industry of businesses – which operate rather euphemistically as “data-management platforms” – now captures individual users’ passive data and extracts hundreds of billions of dollars from it. According to the Data-Driven Marketing Institute, the data-mining industry generated $156 billion in revenue in 2012 – roughly $60 for each of the world’s 2.5 billion Internet users. Source: Technology Review One of the characteristics of our increasingly information-driven lives is the huge amounts of data being generated about everything from sporting activities and Twitter comments to genetic patterns and disease predictions. These information firehoses are generally known as “big data,” and with them come the grand challenge of making sense of the material they produce. That’s no small task. The Twitter stream alone produces some 500 million tweets a day. This has to be filtered, analyzed for interesting trends, and then displayed in a way that humans can make sense of quickly. It is this last task of data display that Zachary Weber and Vijay Gadepally have taken on at MIT’s Lincoln Laboratory in Lexington, Massachusetts. They say that combining big data with 3-D printing can dramatically improve the way people consume and understand data on a massive scale. They make their argument using the example of a 3-D printed model of the MIT campus, which they created using a laser ranging device to measure the buildings. They used this data to build a 3-D model of the campus which they printed out in translucent plastic using standard 3-D printing techniques. One advantage of the translucent plastic is that it can be illuminated from beneath with different colors. Indeed, the team used a projector connected to a laptop computer to beam an image on the model from below.
The image above shows the campus colored according to the height of the buildings. But that’s only the beginning of what they say is possible. To demonstrate, Weber and Gadepally filtered a portion of the Twitter stream to pick out tweets that were geolocated at the MIT campus. They can then use their model to show what kind of content is being generated in different locations on the campus and allow users to cut and dice the data using an interactive screen. “Other demonstrations may include animating twitter traffic volume as a function of time and space to provide insight into campus patterns or life,” they say.
Partitioning enhances the performance, manageability, and availability of a wide variety of applications and helps reduce the total cost of ownership for storing large amounts of data. Partitioning allows tables, indexes, and index-organized tables to be subdivided into smaller pieces, enabling these database objects to be managed and accessed at a finer level of granularity. Oracle provides a rich variety of partitioning strategies and extensions to address every business requirement. Moreover, since it is entirely transparent, partitioning can be applied to almost any application without the need for potentially expensive and time-consuming application changes.

This chapter contains the following topics:

Partitioning allows a table, index, or index-organized table to be subdivided into smaller pieces, where each piece of such a database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics. From the perspective of a database administrator, a partitioned object has multiple pieces that can be managed either collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects. However, from the perspective of the application, a partitioned table is identical to a non-partitioned table; no modifications are necessary when accessing a partitioned table using SQL queries and DML statements. Figure 2-1 offers a graphical view of how partitioned tables differ from non-partitioned tables.

Note: All partitions of a partitioned object must reside in tablespaces of a single block size.

See Also: Oracle Database Concepts for more information about multiple block sizes

Each row in a partitioned table is unambiguously assigned to a single partition. The partitioning key consists of one or more columns that determine the partition where each row will be stored.
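As a hedged sketch of how a partitioning key is declared (the table and column names here are illustrative, not taken from this chapter), a range-partitioned table might look like this:

```sql
-- Hypothetical sales table partitioned by the time_id column (the partitioning key).
-- Each row is assigned to exactly one partition based on its time_id value.
CREATE TABLE sales
( prod_id     NUMBER(6)
, cust_id     NUMBER
, time_id     DATE
, amount_sold NUMBER(10,2)
)
PARTITION BY RANGE (time_id)
( PARTITION sales_q1_2008 VALUES LESS THAN (TO_DATE('01-APR-2008','DD-MON-YYYY'))
, PARTITION sales_q2_2008 VALUES LESS THAN (TO_DATE('01-JUL-2008','DD-MON-YYYY'))
, PARTITION sales_q3_2008 VALUES LESS THAN (TO_DATE('01-OCT-2008','DD-MON-YYYY'))
, PARTITION sales_q4_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'))
);
```

A row with a time_id of 15-MAY-2008 would be stored in sales_q2_2008; the application simply inserts into sales and the database routes the row.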
Oracle automatically directs insert, update, and delete operations to the appropriate partition through the use of the partitioning key. Any table can be partitioned into a million separate partitions except those tables containing columns with LONG or LONG RAW datatypes. You can, however, use tables containing columns with CLOB or BLOB datatypes.

Note: To reduce disk usage and memory usage (specifically, the buffer cache), you can store tables and partitions of a partitioned table in a compressed format inside the database. This often leads to a better scaleup for read-only operations. Table compression can also speed up query execution. There is, however, a slight cost in CPU overhead.

See Also: Oracle Database Concepts for more information about table compression

Here are some suggestions for when to partition a table:

- Tables greater than 2 GB should always be considered as candidates for partitioning.
- Tables containing historical data, in which new data is added into the newest partition. A typical example is a historical table where only the current month's data is updatable and the other 11 months are read only.
- When the contents of a table need to be distributed across different types of storage devices.

Here are some suggestions for when to consider partitioning an index:

- Avoid rebuilding the entire index when data is removed.
- Perform maintenance on parts of the data without invalidating the entire index.
- Reduce the impact of index skew caused by an index on a column with a monotonically increasing value.

Partitioned index-organized tables are very useful for providing improved performance, manageability, and availability for index-organized tables.
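The index suggestions above can be illustrated with a minimal sketch (all names are hypothetical): a LOCAL index is equi-partitioned with its table, so removing an old partition of data never invalidates the rest of the index:

```sql
-- Hypothetical: range-partitioned history table with a LOCAL index. Each
-- index partition covers exactly one table partition, so dropping an old
-- table partition drops only its index partition; no full index rebuild.
CREATE TABLE hist
( id      NUMBER
, hist_dt DATE
)
PARTITION BY RANGE (hist_dt)
( PARTITION h2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
, PARTITION h2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'))
);

CREATE INDEX hist_dt_ix ON hist (hist_dt) LOCAL;

-- Removing old data: one table partition and its index partition go together.
ALTER TABLE hist DROP PARTITION h2007;
```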
For partitioning an index-organized table:

- Partition columns must be a subset of the primary key columns.
- Secondary indexes can be partitioned (both locally and globally).
- OVERFLOW data segments are always equi-partitioned with the table partitions.

See Also: Oracle Database Concepts for more information about index-organized tables

System partitioning enables application-controlled partitioning without having the database control the data placement. The database simply provides the ability to break down a table into partitions without knowing what the individual partitions are going to be used for. All aspects of partitioning have to be controlled by the application. For example, an insertion into a system-partitioned table without the explicit specification of a partition will fail. System partitioning provides the well-known benefits of partitioning (scalability, availability, and manageability), but the partitioning and actual data placement are controlled by the application.

See Also: Oracle Database Data Cartridge Developer's Guide for more information about system partitioning

Information Lifecycle Management (ILM) is concerned with managing data during its lifetime. Partitioning plays a key role in ILM because it enables groups of data (that is, partitions) to be distributed across different types of storage devices and managed individually.

See Also: Chapter 5, "Using Partitioning for Information Lifecycle Management" for more information about Information Lifecycle Management

Unstructured data (such as images and documents) which is stored in a LOB column in the database can also be partitioned. When a table is partitioned, all the columns reside in the tablespace for that partition, with the exception of LOB columns, which can be stored in their own tablespace. This technique is very useful when a table is comprised of large LOBs because they can be stored separately from the main data.
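The system-partitioning behaviour described above can be sketched as follows (table, column, and partition names are hypothetical): the database defines the partitions but never maps rows to them, so the application must name the target partition itself:

```sql
-- Hypothetical system-partitioned table: no partitioning key is declared,
-- so the database cannot decide where a row belongs.
CREATE TABLE app_data
( key NUMBER
, val VARCHAR2(100)
)
PARTITION BY SYSTEM
( PARTITION p1
, PARTITION p2
);

-- Without an explicit partition, this insert would fail:
--   INSERT INTO app_data VALUES (1, 'x');
-- The application must direct the row to a partition itself:
INSERT INTO app_data PARTITION (p1) VALUES (1, 'x');
```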
This can be beneficial if the main data is being frequently updated but the LOB data isn't. For example, an employee record may contain a photo which is unlikely to change frequently. However, the employee personnel details (such as address, department, manager, and so on) could change. This approach also means that cheaper storage can be used for storing the LOB data and more expensive, faster storage used for the employee record. Partitioning can provide tremendous benefit to a wide variety of applications by improving performance, manageability, and availability. It is not unusual for partitioning to improve the performance of certain queries or maintenance operations by an order of magnitude. Moreover, partitioning can greatly simplify common administration tasks. Partitioning also enables database designers and administrators to tackle some of the toughest problems posed by cutting-edge applications. Partitioning is a key tool for building multi-terabyte systems or systems with extremely high availability requirements. By limiting the amount of data to be examined or operated on, and by providing data distribution for parallel execution, partitioning provides a number of performance benefits. These features include: Partition pruning is the simplest and also the most substantial means to improve performance using partitioning. Partition pruning can often improve query performance by several orders of magnitude. For example, suppose an application contains an Orders table containing a historical record of orders, and that this table has been partitioned by week. A query requesting orders for a single week would only access a single partition of the Orders table. If the Orders table had 2 years of historical data, then this query would access one partition instead of 104 partitions. This query could potentially execute 100 times faster simply because of partition pruning. Partition pruning works with all of Oracle's other performance features. 
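A minimal sketch of the partition pruning scenario above, assuming the weekly-partitioned Orders table (column names are illustrative):

```sql
-- Because the predicate is on the partitioning key (order_date), the
-- optimizer scans only the single weekly partition covering this range;
-- the other ~103 partitions are pruned and never touched.
SELECT SUM(order_total)
FROM   orders
WHERE  order_date >= TO_DATE('05-JAN-2009','DD-MON-YYYY')
AND    order_date <  TO_DATE('12-JAN-2009','DD-MON-YYYY');
```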
Oracle will utilize partition pruning in conjunction with any indexing technique, join technique, or parallel access method. Partitioning can also improve the performance of multi-table joins by using a technique known as partition-wise joins. Partition-wise joins can be applied when two tables are being joined together and both tables are partitioned on the join key, or when a reference partitioned table is joined with its parent table. Partition-wise joins break a large join into smaller joins that occur between each of the partitions, completing the overall join in less time. This offers significant performance benefits both for serial and parallel execution. Partitioning allows tables and indexes to be partitioned into smaller, more manageable units, providing database administrators with the ability to pursue a "divide and conquer" approach to data management. With partitioning, maintenance operations can be focused on particular portions of tables. For example, a database administrator could back up a single partition of a table, rather than backing up the entire table. For maintenance operations across an entire database object, it is possible to perform these operations on a per-partition basis, thus dividing the maintenance process into more manageable chunks. A typical usage of partitioning for manageability is to support a "rolling window" load process in a data warehouse. Suppose that a DBA loads new data into a table on a weekly basis. That table could be partitioned so that each partition contains one week of data. The load process is simply the addition of a new partition using a partition exchange load. Adding a single partition is much more efficient than modifying the entire table, since the DBA does not need to modify any other partitions. Partitioned database objects provide partition independence. This characteristic of partition independence can be an important part of a high-availability strategy. 
For example, if one partition of a partitioned table is unavailable, then all of the other partitions of the table remain online and available. The application can continue to execute queries and transactions against the available partitions for the table, and these database operations will run successfully, provided they do not need to access the unavailable partition. The database administrator can specify that each partition be stored in a separate tablespace; the most common scenario is having these tablespaces stored on different storage tiers. Storing different partitions in different tablespaces allows the database administrator to perform backup and recovery operations on each individual partition, independent of the other partitions in the table. This allows the active parts of the database to be made available sooner, so access to the system can continue while the inactive data is still being restored. Moreover, partitioning can reduce scheduled downtime. The performance gains provided by partitioning may enable database administrators to complete maintenance operations on large database objects in relatively small batch windows. Oracle Partitioning offers three fundamental data distribution methods as basic partitioning strategies that control how data is placed into individual partitions: range, hash, and list. Using these data distribution methods, a table can be partitioned either as a single-level or as a composite partitioned table. Each partitioning strategy has different advantages and design considerations. Thus, each strategy is more appropriate for a particular situation.
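The per-partition tablespace and rolling window techniques described above can be sketched together as follows; all table, partition, and tablespace names are illustrative assumptions:

```sql
-- Each partition in its own tablespace, so it can be backed up,
-- recovered, or placed on a storage tier independently.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE )
PARTITION BY RANGE (sale_date)
( PARTITION p_2006 VALUES LESS THAN (DATE '2007-01-01') TABLESPACE ts_archive,
  PARTITION p_2007 VALUES LESS THAN (DATE '2008-01-01') TABLESPACE ts_active );

-- Rolling window: add an empty partition, then swap in a staging
-- table that already holds the newly loaded data.
ALTER TABLE sales ADD PARTITION p_2008
  VALUES LESS THAN (DATE '2009-01-01') TABLESPACE ts_active;

ALTER TABLE sales EXCHANGE PARTITION p_2008 WITH TABLE sales_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```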
A table is defined by specifying one of the following data distribution methodologies (range, hash, or list), using one or more columns as the partitioning key. For example, consider a table with a column of type NUMBER as the partitioning key and two partitions, less_than_five_hundred and less_than_one_thousand. The less_than_one_thousand partition contains rows where the following condition is true: 500 <= partitioning key < 1000. Figure 2-2 offers a graphical view of the basic partitioning strategies for a single-level partitioned table. Range partitioning maps data to partitions based on ranges of values of the partitioning key that you establish for each partition. It is the most common type of partitioning and is often used with dates. For a table with a date column as the partitioning key, the January-2005 partition would contain rows with partitioning key values from 01-Jan-2005 to 31-Jan-2005. Each partition has a VALUES LESS THAN clause, which specifies a non-inclusive upper bound for the partition. Any values of the partitioning key equal to or higher than this literal are added to the next higher partition. All partitions, except the first, have an implicit lower bound specified by the VALUES LESS THAN clause of the previous partition. A MAXVALUE literal can be defined for the highest partition. MAXVALUE represents a virtual infinite value that sorts higher than any other possible value for the partitioning key, including the NULL value. Hash partitioning maps data to partitions based on a hashing algorithm that Oracle applies to the partitioning key that you identify. The hashing algorithm evenly distributes rows among partitions, giving partitions approximately the same size. Hash partitioning is the ideal method for distributing data evenly across devices. Hash partitioning is also an easy-to-use alternative to range partitioning, especially when the data to be partitioned is not historical or has no obvious partitioning key. Note: You cannot change the hashing algorithms used by partitioning.
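Minimal sketches of the range and hash strategies just described; the table and partition names are illustrative assumptions:

```sql
-- Range: partition bounds are set by VALUES LESS THAN clauses;
-- MAXVALUE catches everything above the last finite bound.
CREATE TABLE measurements (
  id  NUMBER,
  val NUMBER )
PARTITION BY RANGE (val)
( PARTITION less_than_five_hundred VALUES LESS THAN (500),
  PARTITION less_than_one_thousand VALUES LESS THAN (1000),
  PARTITION catch_all              VALUES LESS THAN (MAXVALUE) );

-- Hash: rows are spread evenly across four partitions by a hash
-- of the partitioning key.
CREATE TABLE customers (
  cust_id NUMBER,
  name    VARCHAR2(100) )
PARTITION BY HASH (cust_id) PARTITIONS 4;
```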
List partitioning enables you to explicitly control how rows map to partitions by specifying a list of discrete values for the partitioning key in the description for each partition. The advantage of list partitioning is that you can group and organize unordered and unrelated sets of data in a natural way. For a table with a region column as the partitioning key, the North America partition might contain the values for Canada, Mexico, and the United States. A DEFAULT partition enables you to avoid specifying all possible values for a list-partitioned table; all rows that do not map to any other partition are placed in the default partition rather than generating an error. Composite partitioning is a combination of the basic data distribution methods; a table is partitioned by one data distribution method and then each partition is further subdivided into subpartitions using a second data distribution method. All subpartitions for a given partition together represent a logical subset of the data. Composite partitioning supports historical operations, such as adding new range partitions, but also provides higher degrees of potential partition pruning and finer granularity of data placement through subpartitioning. Figure 2-3 offers a graphical view of range-hash and range-list composite partitioning, as an example. Composite range-hash partitioning partitions data using the range method, and within each partition, subpartitions it using the hash method. Composite range-hash partitioning provides the improved manageability of range partitioning and the data placement, striping, and parallelism advantages of hash partitioning. Composite range-list partitioning partitions data using the range method, and within each partition, subpartitions it using the list method. Composite range-list partitioning provides the manageability of range partitioning and the explicit control of list partitioning for the subpartitions.
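The list and composite range-list strategies can be sketched as follows; the region values and all names are illustrative assumptions:

```sql
-- List: explicit value-to-partition mapping, with DEFAULT as the
-- catch-all so unmapped rows do not raise an error.
CREATE TABLE regional_accounts (
  acct_id NUMBER,
  region  VARCHAR2(20) )
PARTITION BY LIST (region)
( PARTITION p_north_america VALUES ('US', 'CANADA', 'MEXICO'),
  PARTITION p_other         VALUES (DEFAULT) );

-- Composite range-list: range on date, list subpartitions by region.
CREATE TABLE region_sales (
  sale_date DATE,
  region    VARCHAR2(20),
  amount    NUMBER )
PARTITION BY RANGE (sale_date)
SUBPARTITION BY LIST (region)
( PARTITION p_2006 VALUES LESS THAN (DATE '2007-01-01')
  ( SUBPARTITION p_2006_na   VALUES ('US', 'CANADA', 'MEXICO'),
    SUBPARTITION p_2006_rest VALUES (DEFAULT) ) );
```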
In addition to the basic partitioning strategies, Oracle Database provides partitioning extensions that significantly enhance the manageability of partitioned tables. Interval partitioning is an extension of range partitioning which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions. You must specify at least one range partition. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database creates interval partitions for data beyond that transition point. The lower boundary of every interval partition is the non-inclusive upper boundary of the previous range or interval partition. For example, if you create an interval partitioned table with monthly intervals and the transition point at January 1, 2007, then the lower boundary for the January 2007 interval is January 1, 2007. The lower boundary for the July 2007 interval is July 1, 2007, regardless of whether the June 2007 partition was already created. When using interval partitioning, consider the following restrictions:
- You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
- Interval partitioning is not supported for index-organized tables.
- You cannot create a domain index on an interval-partitioned table.

You can create single-level interval partitioned tables as well as interval-* composite partitioned tables. The Partition Advisor is part of the SQL Access Advisor. The Partition Advisor can recommend a partitioning strategy for a table based on a supplied workload of SQL statements, which can be supplied by the SQL cache, a SQL Tuning Set, or be defined by the user. Other extensions extend the flexibility in defining partitioning keys: reference partitioning allows the partitioning of two tables related to one another by referential constraints.
The partitioning key is resolved through an existing parent-child relationship, enforced by enabled and active primary key and foreign key constraints. The benefit of this extension is that tables with a parent-child relationship can be logically equi-partitioned by inheriting the partitioning key from the parent table without duplicating the key columns. The logical dependency also automatically cascades partition maintenance operations, making application development easier and less error-prone. An example of reference partitioning is the Orders and OrderItems tables, related to each other by a referential constraint, where the Orders table is range partitioned on OrderDate. Reference partitioning on OrderItems leads to the creation of a partitioned table which is equi-partitioned with respect to the Orders table, as shown in Figure 2-4 and Figure 2-5. All basic partitioning strategies are available for reference partitioning. Interval partitioning cannot be used with reference partitioning. In previous releases of Oracle Database, a table could only be partitioned if the partitioning key physically existed in the table. In Oracle Database 11g, virtual columns remove that restriction and allow the partitioning key to be defined by an expression, using one or more existing columns of a table. The expression is stored as metadata only. Oracle Partitioning has been enhanced to allow a partitioning strategy to be defined on virtual columns. For example, a 10-digit account ID can include account branch information as the leading 3 digits. With the extension of virtual column-based partitioning, an ACCOUNTS table containing an ACCOUNT_ID column can be extended with a virtual (derived) column ACCOUNT_BRANCH that is derived from the first three digits of the ACCOUNT_ID column, which becomes the partitioning key for this table. Virtual column-based partitioning is supported with all basic partitioning strategies, including interval and interval-* composite partitioning.
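Sketches of the three extensions discussed above; all table, column, and constraint names are assumptions for illustration (the reference example assumes a range-partitioned orders parent table):

```sql
-- Interval: monthly partitions are created automatically beyond
-- the transition point of January 1, 2007.
CREATE TABLE events (
  event_id   NUMBER,
  event_date DATE )
PARTITION BY RANGE (event_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_before_2007 VALUES LESS THAN (DATE '2007-01-01') );

-- Reference: order_items inherits the orders partitioning through
-- its foreign key, without duplicating the order date column.
CREATE TABLE order_items (
  order_id NUMBER NOT NULL,
  item_no  NUMBER,
  CONSTRAINT order_items_fk FOREIGN KEY (order_id)
    REFERENCES orders (order_id) )
PARTITION BY REFERENCE (order_items_fk);

-- Virtual column: the branch is derived from the leading three
-- digits of a 10-digit account ID and used as the partitioning key.
CREATE TABLE accounts (
  account_id     NUMBER(10) NOT NULL,
  balance        NUMBER,
  account_branch NUMBER GENERATED ALWAYS AS
                 (TRUNC(account_id / 10000000)) VIRTUAL )
PARTITION BY LIST (account_branch)
( PARTITION p_branch_101 VALUES (101),
  PARTITION p_other      VALUES (DEFAULT) );
```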
Just like partitioned tables, partitioned indexes improve manageability, availability, performance, and scalability. They can either be partitioned independently (global indexes) or automatically linked to a table's partitioning method (local indexes). In general, you should use global indexes for OLTP applications and local indexes for data warehousing or DSS applications. Also, whenever possible, you should try to use local indexes because they are easier to manage. When deciding what kind of partitioned index to use, you should consider the following guidelines in order:
1. If the table partitioning column is a subset of the index keys, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 2.
2. If the index is unique and does not include the partitioning key columns, then use a global index. If this is the case, then you are finished. Otherwise, continue to guideline 3.
3. If your priority is manageability, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 4.
4. If the application is an OLTP one and users need quick response times, use a global index. If the application is a DSS one and users are more interested in throughput, use a local index.

See Also: Chapter 6, "Using Partitioning in a Data Warehouse Environment" and Chapter 7, "Using Partitioning in an Online Transaction Processing Environment" for more information about partitioned indexes and how to decide which type to use

Local partitioned indexes are easier to manage than other types of partitioned indexes. They also offer greater availability and are common in DSS environments. The reason for this is equipartitioning: each partition of a local index is associated with exactly one partition of the table. This enables Oracle to automatically keep the index partitions in sync with the table partitions, and makes each table-index pair independent.
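A sketch of local indexes, assuming a range-partitioned orders table like the one described earlier (the index names are illustrative):

```sql
-- Each index partition covers exactly one table partition and is
-- maintained automatically along with it.
CREATE INDEX orders_date_ix ON orders (order_date) LOCAL;

-- A unique local index must include the table partitioning key.
CREATE UNIQUE INDEX orders_uq_ix
  ON orders (order_date, order_id) LOCAL;
```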
Any actions that make one partition's data invalid or unavailable only affect a single partition. Local partitioned indexes support more availability when there are partition or subpartition maintenance operations on the table. A type of index called a local nonprefixed index is very useful for historical databases. In this type of index, the partitioning is not on the left prefix of the index columns.

See Also: Chapter 4 for more information about prefixed indexes

You cannot explicitly add a partition to a local index. Instead, new partitions are added to local indexes only when you add a partition to the underlying table. Likewise, you cannot explicitly drop a partition from a local index. Instead, local index partitions are dropped only when you drop a partition from the underlying table. A local index can be unique. However, in order for a local index to be unique, the partitioning key of the table must be part of the index's key columns. Figure 2-6 offers a graphical view of local partitioned indexes. Oracle offers two types of global partitioned indexes: range partitioned and hash partitioned. Global range partitioned indexes are flexible in that the degree of partitioning and the partitioning key are independent from the table's partitioning method. The highest partition of a global index must have a partition bound, all of whose values are MAXVALUE. This ensures that all rows in the underlying table can be represented in the index. Global prefixed indexes can be unique or nonunique. You cannot add a partition to a global index because the highest partition always has a partition bound of MAXVALUE. If you wish to add a new highest partition, use the ALTER INDEX SPLIT PARTITION statement. If a global index partition is empty, you can explicitly drop it by issuing the ALTER INDEX DROP PARTITION statement. If a global index partition contains data, dropping the partition causes the next highest partition to be marked unusable. You cannot drop the highest partition in a global index.
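A sketch of a global range partitioned index; the indexed column and partition bounds are illustrative assumptions:

```sql
-- The highest index partition must be bounded by MAXVALUE so that
-- every row in the underlying table can be represented.
CREATE INDEX orders_amount_gix ON orders (amount)
GLOBAL PARTITION BY RANGE (amount)
( PARTITION ip_low  VALUES LESS THAN (1000),
  PARTITION ip_high VALUES LESS THAN (MAXVALUE) );
```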
Global hash partitioned indexes improve performance by spreading out contention when the index is monotonically growing. In other words, most of the index insertions occur only on the right edge of an index. By default, the following operations on partitions of a heap-organized table mark all global indexes as unusable:
- ADD (HASH)
- COALESCE (HASH)
- DROP
- EXCHANGE
- MERGE
- MOVE
- SPLIT
- TRUNCATE

These indexes can be maintained by appending the clause UPDATE INDEXES to the SQL statements for the operation. There are two advantages to maintaining global indexes:
- The index remains available and online throughout the operation, so no other applications are affected by the operation.
- The index does not have to be rebuilt after the operation.

Note: This feature is supported only for heap-organized tables.

Figure 2-7 offers a graphical view of global partitioned indexes. Global non-partitioned indexes behave just like non-partitioned indexes. Figure 2-8 offers a graphical view of global non-partitioned indexes. You can create bitmap indexes on partitioned tables, with the restriction that the bitmap indexes must be local to the partitioned table. They cannot be global indexes. Global indexes can be unique. Local indexes can only be unique if the partitioning key is a part of the index key.
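The UPDATE INDEXES clause described above can be sketched as follows (the table and partition names are illustrative assumptions):

```sql
-- Drop a partition while keeping global indexes usable, at the
-- cost of maintaining them during the operation.
ALTER TABLE orders DROP PARTITION w1 UPDATE INDEXES;
```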
More Lessons for Calculus Help Welcome to our collection of free Calculus lessons and videos. Trigonometric Substitution - Example 1 Just a basic trigonometric substitution problem. I show the basic substitutions along with how to use the right triangle to get back to the original variable. Trigonometric Substitution - Example 2 A complete example integrating an indefinite integral using a trigonometric substitution involving tangent. Trigonometric Substitution - Example 3 / Part 1 A harder example of using a trig sub is shown! First, you have to complete the square! Trigonometric Substitution - Example 3 / Part 2
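As an illustration of the basic substitution the first example covers, the standard sine substitution works like this (assuming a > 0):

```latex
\int \frac{dx}{\sqrt{a^{2}-x^{2}}}
  = \int \frac{a\cos\theta \, d\theta}{\sqrt{a^{2}-a^{2}\sin^{2}\theta}}
  = \int d\theta
  = \theta + C
  = \arcsin\frac{x}{a} + C,
\qquad x = a\sin\theta,\; dx = a\cos\theta \, d\theta .
```

The right triangle with hypotenuse a and opposite side x recovers the original variable from θ.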
designs reflect past eras By Roger Lathe, Discover Magazine Knowing historical context allows you to see how popular house designs expressed the taste, the economy and the technology of their times. These are the Victorians you think of as "San Francisco style," usually with small porches featuring classical columns, always with multi-windowed bay fronts. Emphasis is on vertical lines; facades are narrow and tall under a low-pitched roof. The forms and ornament are derived from Renaissance villas and palaces. Look for wooden trim imitating carved stone, heavy cornices (roof edges) with ornate brackets and sometimes arched windows that are often topped with elaborate built up mouldings. Trim can be Classical, Tudor, lacy jig-sawn "Eastlake" or even Gothic, in endless and often lavish combinations. The details rule. Fancy shingles, spindlework, stained or leaded glass and intricate frieze decorations make a glorious visual fruit salad. Midtown has many smaller Queen Anne cottages, without the signature corner towers of larger models. Raised on "flood basements," most have long entry stairways with fancy details. Almost all have steep-pitched roofs and front bay windows. Elaborate "gingerbread" trim on porches, roof edges and around windows was always made of redwood, the ideal exterior wood. Simple box shapes are usually topped with low-hipped roofs, and most facades have full-width porches with wide stairs. Foursquares can be trimmed in a Craftsman motif, or Prairie style (a la Frank Lloyd Wright), or even Spanish Colonial, stuccoed with tiled roof edges. Most have California Revival detailing: Formal columns, wide (sometimes fluted) corner trim, window sashes with small panes and wide entry doors with sidelight windows. Midtown is full of these in every imaginable finish, trim and decor. Craftsman themes are most often seen on bungalows, houses with low profiles and prominent roof exposures. 
Look for exposed rafter ends (often "birds-mouth" shaped) under wide eaves, plain large beams and heavy brackets, shingle siding, "hand-crafted" light fixtures and hardware, and misshapen clinker bricks or river rocks in chimneys and porches. Early midtown bungalows usually have more built-up detailing, sometimes with Swiss or Japanese themes. Exteriors became very plain by the 1920s. Architects trained in the modern International style like to make fun of these pre-World War II escapist fantasies with labels like pseudo-Tudor, faux Chateau, elastic Hispanic, Cape Coddled, etc. But these are usually efficient and pleasant homes, sometimes cozy, sometimes imposing and elegant. Midtown also has many apartment buildings displaying the taste and fashion of the 1950s and '60s. Architecture buffs in the next century will possibly find these interesting, perhaps even attractive. That will take a while yet. source: The Sacramento Bee, Discover Magazine, May 4, 1998
Money, the thing on everybody's mind, takes center stage in practically every facet of daily life when we consider that just about everything we need, do, or desire is, in one form or another, monetized. Throughout history, money has taken a variety of forms — shells, strips of leather, even livestock were used as early systems of currency. The earliest form of trade, the act of purchasing something before the inception of our highly sophisticated monetary system, essentially meant swapping what you had of value for something else deemed to be of equivalent value. While that simple system still remains the basis of monetary exchange today, the invention of an entire system of symbolic value attests to the ingenuity of the human mind and our knack for shaping the world around us to cater to our innovative and creative natures, for better or worse. The development of a system of currency was in the past one of the key indicators of an empire's influence, power, and self-sustainability. Evidently, this remains true to this day, with some of the world's leading and most innovative nations not surprisingly being the wealthiest. Of course, poverty is one of the leading causes of death around the world, and this fact emphasizes the reality that financial disparity continues to give rise to a variety of social issues in every country, even the very richest. The fact that some of the richest countries also have notably high poverty rates and profound income inequality makes it clear that, on a national level, being rich doesn't necessarily mean being better off. According to the World Bank, the 12 largest economies in the world make up two-thirds of the world's economy — evidently, the problem of unequal distribution extends to the international playing field. So who has the fattest slice of the money pie? We've ranked the top 10 richest countries of 2014 based on the sheer girth of their economies, determined by their purchasing power in U.S. dollar sums.
The data featured here was collected by CNN from the IMF and the World Economic Outlook. For further insight we make note of each country's ranking on the CIA World Factbook country comparison of GDP per capita rates, while also emphasizing the income inequality in each of these nominally wealthy countries by specifying the percentage of individuals living below the poverty line in each given country.

10. India: $2.0 Trillion
India comes in as the 10th wealthiest economy in 2014 with a purchasing power of $2.0 trillion. However, the densely populated country has a considerably low GDP per capita at $4,000, ranking 169th place out of 228 countries listed in the World Factbook country comparison chart. Despite India's great economic wealth, a whopping 29.8 percent of Indian citizens live below the poverty line.

9. Russia: $2.1 Trillion
Russia's economy boasts $2.1 trillion but ranks 77th place in the World Factbook country comparison with a GDP per capita of $18,100. In Russia, 11 percent of citizens live below the poverty line.

8. Italy: $2.2 Trillion
Italy has an economy valued at $2.2 trillion. The GDP per capita in this country is $29,600, placing it as the 51st highest in the world. There is a relatively high poverty rate in this country, with 29.9 percent of Italians living below the line.

7. Brazil: $2.2 Trillion
Tying with Italy, Brazil has an economy of $2.2 trillion. Despite the recent attention drawn to the state of the country during the recent World Cup tournament, Brazil's economy makes it to this list. That's not to say its social and political problems aren't real; the GDP per capita in this country is a low $12,100, making it 105th place on the country comparison chart. 21.1 percent of Brazilians currently live below the poverty line.

6. United Kingdom: $2.8 Trillion
The United Kingdom has an economy worth $2.8 trillion in purchasing power in 2014.
With a relatively strong GDP per capita of $37,300, the United Kingdom doesn't seem so badly off, ranking 34th place in the world. 16.2 percent of people in this country live below the poverty line.

5. France: $2.9 Trillion
France has an economy worth $2.9 trillion. It ranks 39th place in the World Factbook's GDP per capita country comparison with a GDP per capita of $35,700. The country also has a low poverty rate with 7.9 percent of French people living below the poverty line.

4. Germany: $3.9 Trillion
Germany has a purchasing power of $3.9 trillion, marking it out as the wealthiest economy of all the European countries. Germany also ranks a strong 29th place in the GDP per capita country comparison with a GDP per capita of $39,500. With 15.5 percent of Germans living below the poverty line, the country does have some work to do in terms of encouraging the trickle-down effect.

3. Japan: $4.8 Trillion
With a purchasing power of $4.8 trillion, Japan takes third place on our list of most affluent economies. The GDP per capita in Japan is $37,100, making it 36th place in terms of GDP. As of 2010, 16 percent of Japanese citizens are living below the poverty line.

2. China: $10.0 Trillion
With a growing economy, China has a purchasing power of $10.0 trillion. But in terms of GDP per capita, it is not a world leader, coming in at number 121 on the World Factbook's GDP per capita country comparison with a GDP per capita of a low $9,800. Only 6.1 percent of Chinese citizens live below the poverty line in this country, but it bears mentioning that in 2013, China set a new poverty line at $3,630 per person.

1. United States of America: $17.5 Trillion
Boasting $17.5 trillion in purchasing power, the United States continues to reign as the number one richest economy in the world.
This level of financial fruition lends it a relatively high GDP per capita at $52,800, taking 14th place on the World Factbook's country comparison — the highest ranking of all the countries featured on this list. In 2010, 15.1 percent of Americans lived below the poverty line; not a shocking figure in comparison to the many countries where that number is so much greater, but certainly something to consider in light of the wealth of the nation overall.
Modifying WordArt Text Position

You can apply a number of text effects to your WordArt objects that determine alignment and direction. The effects of some of the adjustments you make are more pronounced for certain WordArt styles than others. Some of these effects make the text unreadable for certain styles, so apply these effects carefully. You can apply effects to a shape by using the Format Shape dialog box for custom results. You can also use the free rotate handle (green circle) at the top of the selected text box to rotate your WordArt text.

Change WordArt Text Direction
1. Right-click the WordArt object you want to change, and then click Format Shape or Format WordArt.
2. If necessary, click Text Box in the left pane.
3. Click the Vertical alignment or Horizontal alignment list arrow, and then select an option: Top, Middle, Bottom, Top Center, Middle Center, or Bottom Center.
4. Click the Text Direction list arrow, and then select an option: Horizontal, Rotate all text 90°, Rotate all text 270°, or Stacked.

Rotate WordArt Text
1. Click the WordArt object you want to change.
2. Drag the free rotate handle (green circle) to rotate the object in any direction you want.
3. When you’re done, release the mouse button.
4. Click outside the object to deselect it.
There is research indicating that air pollution is responsible for 2 million deaths around the world, according to The Huffington Post. In regards to the environment, Phys.org also notes that plastics are one of the prime polluters of the marine environment, as just about every product contains plastic. Furthermore, plastics from sewage plants, shipping facilities and oil and gas refineries dispense plastic byproducts into the ocean. Phys.org notes that 28 million tonnes of plastic ended up in the ocean in 2012. The Huffington Post draws a link between environmental pollution and the toll on human health, revealing that ozone pollution causes 470,000 deaths annually. Ozone pollution occurs when air pollution from facilities, like automobile plants, interacts with the atmosphere. WebMD also highlights the effects of indoor air pollution on humans and the risk of contracting respiratory ailments, such as asthma and allergies. Cigarette smoke is also classified as a pollutant, with experts believing that 90 percent of lung cancer cases can be traced to smoking. Smoking, along with second-hand smoke, increases a person's chances of succumbing to heart attacks and strokes. An article from USA Today mentions that air pollution can have an adverse effect on growing brains in children, causing such problems as autism and schizophrenia. Living in a polluted environment during pregnancy can also negatively affect a child's health.
Cartilage injury can occur with direct blows or impact and can be associated with other ligament injuries or fractures. Treating cartilage injury is difficult because the cells, like brain cells, do not divide and therefore have limited ability to heal on their own. All joints contain cartilage. The normal motion of joints is lubricated by fluid called synovial fluid, made by the cells that line each of the joints. The ends of the bones themselves are covered with a specific type of cartilage called hyaline cartilage. Motion between hyaline cartilage surfaces lubricated with synovial fluid is smoother than ice on ice. Defects in the cartilage decrease this smooth motion and can result in friction causing pain, swelling, and changes to the synovial fluid in the joint. As we go about our daily lives the cartilage in our joints can wear slowly and gradually, much like the tread on a tire. When the cartilage becomes thin, this process results in osteoarthritis (which is one type of arthritis). Unfortunately, any injury to the joint, the underlying bone, or its surrounding ligaments can speed this otherwise gradual process. Not all cartilage injuries cause symptoms, and if the remainder of the joint is healthy some cartilage injury can be very well tolerated. However, when injury interferes with activities of daily living or the ability to play sports, treatment is required. The goal is always to maximize non-operative treatments first. By optimizing strength, range of motion, and endurance, symptoms can be relieved in a majority of cases. Physical therapy, medications, and injections, as well as supplements and adjunctive therapies, are all viable options in the treatment of cartilage injury. Physical therapy can be performed as either a guided home program or in conjunction with a dedicated physical therapist. Therapists have various modalities that can decrease pain and improve function and can tailor a home exercise program to fit your needs.
Non-steroidal anti-inflammatory medications can be beneficial for some patients but carry risks; long-term use should be discussed with a doctor or health care provider. Some supplements have shown promise in the treatment of cartilage injury. Common over-the-counter formulations include glucosamine and chondroitin sulfate. Other herbal medications from Ayurvedic and Chinese medicine have shown effectiveness in large research trials, but more research is needed to determine which compounds provide the benefit. Acupuncture is an additional modality with proven effectiveness for some patients. It is low risk and often has additional benefits related to stress relief and relaxation, since stress is often associated with, and can worsen, pain from injury. In some cases, braces can be useful to provide proprioceptive feedback and compression to reduce swelling. Bracing can be highly customized based on patient anatomy and the nature of the injury. Steroid injections decrease the pain and inflammation associated with cartilage injury and can be very effective in eliminating swelling. This is often a first-line treatment because the procedure is simple, effective, and can be performed in the clinic with minimal discomfort and no restrictions after the injection. Steroid injections do not directly treat the injury itself but relieve the symptoms associated with it. They can be repeated at intervals of 4 to 6 months but sometimes lose efficacy over time. Hyaluronic Acid (HA) Injections: HA is a normal component of healthy joints and a primary part of the synovial fluid that promotes lubrication between cartilage surfaces. As stated above, the injured joint can lose these normal compounds. Platelet Rich Plasma (PRP): PRP is a relatively recent treatment for articular cartilage injury. Rather than a drug or compound, PRP is a portion of the patient's own blood that is concentrated and injected into the diseased area.
This is done in a procedural area rather than in the clinic and takes about five to ten minutes to perform. Blood is drawn and placed in a machine which separates the components of the blood and isolates the growth factors from the cells. The concentrated growth factors are then injected. There are no restrictions after PRP, although localized soreness is common for the first several days, and the full effect may take up to six weeks. While early research is promising, this procedure may not be covered by all insurance plans. At Mammoth Orthopedic Institute we are actively researching PRP, and you may be asked to participate in our research outcomes registry. Failure of non-surgical treatments is the primary reason for investigating surgical options. Given the current state of technology there is a wide array of procedures available, some of which are discussed below. Choosing an individual treatment plan is a complex process and involves a thorough understanding of the injury's severity, size, and location, and the risks involved. Microfracture was one of the first techniques developed for cartilage repair. The procedure can be performed entirely arthroscopically and involves the use of a pick to create small holes in the bone underneath the damaged cartilage, which releases stem cells and growth factors that can generate a scar cartilage. A small pick or a wire can be used to make the microfracture holes. Microfracture does not restore normal cartilage. Rehabilitation after microfracture requires six weeks of strict non-weight bearing and early, aggressive range of motion. Microfracture is most often used for lesions which are either very small or very large. Biocartilage is an adjunct to microfracture. It is developed from donor cartilage and contains the important building blocks for creating new cartilage, such as type II collagen, proteoglycans, and additional growth factors.
The theory is that the addition of these growth factors will improve the overall quality of the tissue generated after a microfracture. Because it can be inserted with a small needle, biocartilage can be useful in some hard-to-reach portions of damaged joints and on uneven surfaces within the joint. Rehabilitation is similar to microfracture. Osteochondral Autograft/Allograft Transplantation Surgery (OATS): OATS is currently the only procedure that provides mature, normal cartilage to replace the damaged cartilage. In autograft OATS, healthy cartilage is transferred from a part of the joint that sees little or no stress, and the diseased tissue is transferred to where the healthy cartilage was taken from. This is useful for smaller cartilage lesions, up to 1.5 cm. The advantage is that no allograft, or cadaver tissue, is required. The disadvantage is that in some cases problems can occur at the donor site where the healthy cartilage was harvested. In allograft OATS, a cadaver donor is found which has a similar size and shape to the patient's damaged tissue, and the corresponding area of healthy cadaver cartilage is transplanted into the patient's cartilage defect. This is commonly used for larger areas, greater than 2 cm. The transplanted tissue includes both the bone (osteo-) and cartilage (-chondral) from the donor. Rehabilitation after OATS can in many cases be accelerated compared to the previously mentioned techniques because the graft is more stable at the time of implantation. One disadvantage of allograft OATS is that it may take time to find a donor cadaver with anatomy similar to the patient's. Cartiform is a product that is likewise harvested from cadavers and is similar to the osteochondral allograft described above. The difference is that the bone portion of Cartiform is extremely thin, making the graft flexible and able to conform to different surfaces. Because of this feature, an exact size match is not required, as it is for allograft OATS.
The Cartiform can be trimmed to fit the defect and then secured with sutures or suture anchors. The disadvantage in this case is that the graft does not have immediate stability, so some time is required to allow it to heal before initiating weight bearing.
Debugging is always a large part of development, and most developers today have access to a JTAG/SWD debug probe, which is often sufficient for basic debugging tasks. However, the relatively low-cost trace probes available give the developer even further possibilities for understanding what the application is doing, and make debugging difficult situations a whole lot easier. If you have chosen to work with a device based on the ARM Cortex-M3 or -M4 core, you can gain a lot from its useful on-chip debug logic, which in combination with a capable debugger enables you to examine the application’s behavior from various angles. The debug architecture consists of five main units, as described by Figure 1. Some of these, like the Embedded Trace Macrocell (ETM), are optional, so you may need to check which have been implemented in your device.
Figure 1. The debug architecture of the ARM Cortex-M3 and M4 devices
The Instrumentation Trace Macrocell (ITM) is a lightweight trace unit that provides selected trace data over a low-speed access port. Instrumentation trace is available using a debug probe such as I-jet, a low-cost probe that every developer should have on his or her desk. The ITM provides 32 channels for software trace that can be used by the software to generate packets for distribution to the debugger over SWO. This can be used for instrumentation of the code with very little overhead, as the core does not need to stop to output the message or data. All that is needed is a single write operation to one of the 32 ITM stimulus registers. On the other hand, far from every instruction is traced. The ITM also takes care of trace events triggered by another unit, the Data Watchpoint and Trace (DWT) unit. The DWT provides a set of functions that collect information from the system buses and generate events to the ITM for packetizing, time stamping, and further distribution on the SWO channel.
There are four independent comparators, or watchpoints, in the DWT that can generate an event on an address match or a data match. They can be used for various purposes, including triggering the ETM, triggering an ITM packet, or breaking the code under certain conditions. Setting one of the four watchpoints to trigger the ETM is useful when debugging applications that run for a longer time, as it makes it possible to set not just plain trace start and stop breakpoints that start and stop the trace data collection at specified addresses, but also complex trace start and stop conditions that, for example, could be based on when a variable reaches a certain value. The DWT also provides an interrupt trace function and a PC sampler that samples the program counter register at regular intervals. As the sampling is likely to miss most instructions that are executed, it will not be able to give a complete view of the application's whereabouts, but it will be able to provide information about which functions the application has spent its time in. ITM and DWT are very useful and may be sufficient for most embedded projects, but sometimes you will need an even more powerful debug mode to get down to the most difficult problems. Tracing every single executed instruction, the Embedded Trace Macrocell (ETM) provides you with unmatched insight into the microcontroller’s activities and enables you to find those hard-to-find bugs that are difficult or even impossible to find any other way. To use ETM, you need a special trace debug probe that can store the instructions in its memory. The number of instructions, or rather samples, that can be stored is of course limited by the size of that memory. IAR Systems' trace probe I-jet Trace for ARM Cortex-M reads trace data 4 bits at a time; this is what is called a sample. 4 bits is the standard trace width on ARM Cortex-M devices.
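The statistical nature of the DWT's PC sampling can be illustrated with a toy simulation. This Python sketch is not real DWT output: the function names, cycle counts, and sampling interval are all invented for illustration. It shows why sampling misses most instructions yet still reveals where the application spends its time.

```python
from collections import Counter

# Hypothetical "execution history": which function the CPU was in at each cycle.
# A real target never exposes this directly; the DWT only samples the PC periodically.
execution = ["main"] * 100 + ["filter"] * 700 + ["log"] * 200

# Sample the program counter every 50 cycles, as the DWT PC sampler would.
samples = execution[::50]

# The per-function share of samples approximates the per-function share of time,
# even though the vast majority of executed instructions were never observed.
profile = Counter(samples)
total = sum(profile.values())
for func, count in sorted(profile.items()):
    print(f"{func}: {count / total:.0%}")
```

Out of 1,000 simulated cycles only 20 are sampled, yet the resulting profile (70% in `filter`, 20% in `log`, 10% in `main`) matches the true time distribution, which is exactly the trade-off the text describes.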
I-jet Trace has a large trace memory capacity of up to 32 Msamples, but since the ETM protocol is compressed, there is no one-to-one mapping of number of samples to number of instructions. The maximum number of instructions per sample, 7.5, occurs for completely linear execution (no jumps or branches in the code). The average is around 2 instructions per 4-bit sample. Unlike the event trace provided by the DWT and ITM, the ETM will let you know what your application was doing before it received an interrupt, what it was doing while the ISR was executing, and what happened after it left the interrupt. It will tell you where the application has been and exactly how it got there. In short, it will give you full insight into your application's behavior in real time without being intrusive. The most obvious way to use trace data in debugging situations is to look backwards in time, for instance when tracking a program whose execution goes askew. You can stop execution after the program has deviated, and examine the trace output to see where it went wrong. In addition to this most straightforward way of using trace, to go back and examine execution, there is other useful functionality enabled by ETM trace. Code coverage analysis supplies information on which parts of your code have been executed and which have not. This information is extremely helpful during testing, since it enables you to ensure all parts of your code have been tested. Function profiling shows you the amount of time being spent in each function, helping you decide where to put the most effort in code optimizations and improvements. Function profiling using the sampled SWO trace can also be useful, but is based on a statistical profile instead of the full trace data available with ETM trace. All of the above-mentioned functionality is available in IAR Embedded Workbench, and is enabled by using a trace probe equipped for ETM trace, such as I-jet Trace.
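The buffer-capacity figures quoted above can be turned into a quick back-of-the-envelope calculation. This Python sketch simply multiplies out the numbers from the text (32 Msamples, roughly 2 instructions per sample on average, 7.5 at best); the constants come from the article, not from any probe datasheet consulted here.

```python
SAMPLES = 32_000_000           # I-jet Trace buffer: up to 32 Msamples (4 bits each)
AVG_INSTR_PER_SAMPLE = 2       # typical ETM compression ratio quoted in the text
MAX_INSTR_PER_SAMPLE = 7.5     # best case: completely linear code, no branches

typical_instructions = SAMPLES * AVG_INSTR_PER_SAMPLE
best_case_instructions = SAMPLES * MAX_INSTR_PER_SAMPLE

print(f"Typical capacity:   {typical_instructions:,} instructions")
print(f"Best-case capacity: {best_case_instructions:,.0f} instructions")
```

In other words, a full buffer holds on the order of 64 million executed instructions for typical code, and up to about 240 million for completely linear code.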
What is Ext JS? What is Angular JS? Ext JS vs. Angular JS
Ext JS and Angular JS are both frameworks for developing rich user interfaces in industry, and both have powerful features. Developers often prefer Sencha Ext JS for building rich user interfaces because it offers plug-in-free charting, an advanced model-view-controller architecture, and modern widgets; Ext JS is a one-stop shop for a rich user interface. Angular JS, meanwhile, is popular among web developers because it provides two-way data binding and the ability to create quick custom templates. There are also a few similarities between Ext JS and Angular JS: both are suited to single-page applications, and both offer cross-browser compatibility.
When to use Ext JS and Angular JS?
Use Ext JS when:
- You want to save time by using the components delivered with Ext JS.
- You do not want to worry about cross-browser compatibility during programming and development.
- You can afford the commercial license as a paid service.
- Separate desktop and mobile applications are valuable to you.
Use Angular JS when:
- A small footprint is required.
- A responsive design is needed for the application.
- It is cheaper to integrate existing third-party components for free.
- You feel comfortable using CSS to solve cross-compatibility issues.
Architectural differences between Ext JS and Angular JS:
- Ext JS is component-based, with grids, trees, forms, and charts. It follows OOP concepts and the MVC pattern, supports both MVC and MVVM, and is modular.
- Angular JS is declarative programming that enhances HTML. It is module-based and supports MVC and MVVM.
- Ext JS follows a depth-first, bottom-up approach; Angular JS also traverses the DOM tree depth-first, bottom-up, with controllers linked in a top-down approach.
- Ext JS supports several third-party testing frameworks, such as Siesta, Jasmine, and Mocha.
Angular JS is supported by Batarang, a Chrome extension for debugging. Sencha Touch is a leading HTML5 framework for developing mobile web apps, and PhoneGap can be used to develop cross-platform applications; Angular JS can also be used to build cross-platform applications with Trigger.io, Cordova, and the Ionic framework. These are the main differences between Ext JS and Angular JS. Apart from these differences and similarities, other comparison factors are available on the frameworks' websites. Ext JS and Angular JS are among the most widely used web development frameworks around the world, in industry and among individual developers alike.
Did you know your skin absorbs what you put on it? That is even more reason to ensure your skin care ingredients are safe, and you should be examining skin care product labels much as you would food labels. There are thousands of chemicals in beauty products. Some are completely safe to use topically, while others are not. It is unfortunate but true that many of the ingredients in skin care products are literally skin irritants, and others are even endocrine disruptors and carcinogens. Listed below are some of the worst (and most commonly used) ingredients still on the market today. Many increase the risk of cancer, destroy your skin matrix, or cause allergies and asthma. You would be surprised what the FDA allows in our country. The main skincare ingredients to avoid include: Parabens are among the most controversial ingredients in the beauty world, often found in both skin and hair care products. They are used mainly as preservatives, preventing the growth of bacteria, fungus, and yeast. However, some research shows that parabens may increase the risk of breast cancer. Even so, it is necessary to understand that parabens occur naturally in fruits like blueberries; the synthetic kind may be linked to these risks more than the naturally occurring kind. Sodium lauryl sulfate and sodium laureth sulfate are known as surfactants, and they are found in more than 90% of all skin and hair care products. They are well known to be skin, lung, and eye irritants. So why do they keep showing up in our personal care products? SLS/SLES is responsible for the foaming action in your shampoos and soaps, and most people have become accustomed to this kind of cleaning; so much so that when they use a product without these surfactants, they believe they are not clean enough. These ingredients dry out your skin and may even cause cancer due to a chemical reaction. Oil, lotion, or gel cleansers work better, without the risk.
Formaldehyde – the stuff morticians use to preserve dead bodies. Yes, you read that right. As you can probably guess, formaldehyde is used as a preservative in beauty products. Although it has the benefit of preventing bacterial growth, it has been linked to cancer of the nose and throat, and it causes skin issues such as irritation and allergies. The ironic part? It is found in body washes, shampoos, conditioners, cleansers, and eye shadow. When you see the word “fragrance” in a list of ingredients, it can be perplexing to understand exactly what it means. For the most part, it serves to cover up a company's seemingly secret formula, so we are not entirely sure what this ingredient could contain. Simply put: it is a blend of possibly dangerous chemicals. Well-known fragrance concoctions have been linked to allergies, skin issues, respiratory diseases, and harmful effects on the reproductive system, and fragrance is found in almost every type of skin care product across the board. Diethanolamine (DEA) is often found in creamy skincare products such as creams, serums, and moisturizers. Research shows that it causes mild to moderate skin and eye irritation, and in some experiments high doses caused liver, skin, and thyroid pre-cancer or cancer. Europe has already banned this ingredient on the basis of these studies, and DEA is also classified as hazardous to the environment. Although it is an organic alcohol, propylene glycol is a known skin irritant and penetrator, and it is linked to dermatitis and hives in humans. Research shows that it does not take much to cause these effects: a concentration of only about 2% is enough. This ingredient is often found in skin care products such as moisturizers, make-up, and sunscreen, as well as shampoos and conditioners.
“I don’t make songs for free, I make them for freedom.”1 – Chance the Rapper In order to overcome hunger and fear, students from low-income households must ultimately achieve liberation from the people and institutions that restrict their freedoms. This goal can be achieved only through authentic participation in critical discourses related to education policy, curriculum, and learning goals, which influence equity as well as edtech access and use. In order to achieve such freedom, students must be empowered through esteem and self-actualization. In support of these efforts, educators and technology developers can support students' development of epistemic agency as well as the capacity for knowledge building that is relevant to their own lived experiences. 1. Chance the Rapper, 2016
New England fishing trade reels after Gulf of Maine temperature rises faster than 99 percent of world’s ocean Unusual warming in the waters off the northeastern US has killed off vast numbers of Atlantic cod, further endangering a valuable and iconic fishery despite years of fishing restrictions, researchers said Thursday. New England cod stocks are on the verge of collapse, numbering three to four percent of what scientists say are sustainable levels. The problem has been fueled by overfishing and exacerbated by a stark warming trend in the Gulf of Maine that is unparalleled on Earth, researchers said in the journal Science. From 2004 to 2013, the rate of warming in that area was almost a quarter of a degree Celsius (0.41 Fahrenheit) per year. “The Gulf of Maine had warmed faster than 99.9 percent of the global ocean over that period,” said lead author Andrew Pershing, chief scientific officer of the Gulf of Maine Research Institute, who studied records back through the 1900s for comparison. “It was a rate that few large marine ecosystems had ever encountered,” he told reporters. The reasons for the spike include global warming and a shift in the Gulf Stream. For the fish, these warmer temperatures meant fewer offspring and fewer juveniles surviving to adulthood. Even a series of restrictions on cod fishing put in place to try to save the population was too slow to keep up with the fast-rising temperatures. “The rate of change outpaced the ability of people to make decisions about the ecosystem,” Pershing said. Quotas kept falling, meaning that fishermen were allowed to take fewer fish, but the models that helped managers make these decisions “consistently overestimated the abundance of cod,” Pershing added.
“Warming waters were making the Gulf of Maine less hospitable for cod, and the management response was too slow to keep up with the changes.” – Climate shifts – Experts know that the warming climate is already forcing many species to shift from their traditional habitats toward temperatures more suited to their survival. But rather than move north, the struggling Gulf of Maine cod population, a species that prefers cold water, has actually shifted southward over the past 45 years, researchers say. A combination of overfishing and reduced reproduction in warming waters is to blame for this unfortunate migration, experts say. “We often wonder if it is fishing or climate, but it is both,” said Janet Nye from the School of Marine and Atmospheric Sciences at Stony Brook University in New York. “It is almost always environmental factors combined with fishing that cause stocks to collapse or fail to recover.” But not all cod are in the same boat. A study out earlier this week found that cod to the north, off the coast of Canada, are rebounding and have made a comeback in recent years. Researchers said their findings on the Gulf of Maine cod were likely to stoke controversy among fishermen, whose livelihoods are already limited by the fishing restrictions. More adaptable approaches to the management of fisheries could help resolve the problem in the future, said Katherine Mills from the Gulf of Maine Research Institute. “The Gulf of Maine cod, I think, is a wake-up call that we need to bridge the disconnect that currently exists between oceanography, fisheries ecology and stock assessment science,” she said. “There is important science being done in all three of these fields, but perhaps their greatest value will be realized when they are brought together.”
Venda Community, South Africa Venda is situated in the Limpopo province of South Africa and is famous for its outstanding natural beauty, abundant biodiversity, and cultural richness1. The forests of Venda help to maintain the climate of the region; they are also the source of springs and tributaries which feed into the local river system and provide water for the surrounding land. Venda is one of 19 centres of endemic flora in South Africa, playing host to over 594 different species2. The area is home to the vhaVenda people, who stand as one of the last indigenous communities in the northern part of South Africa3. They believe the forest is sacred, and thus there are a number of sacred sites in the area, each protected by a different tribe1. Theirs is a matriarchal community; sacred sites are watched over by the elder women, who act as environmental custodians and spiritual leaders known as the Makhadzi. These women are called the “Rainmakers” due to their practice of cultural rituals to invite rain to the area. For the vhaVenda people, such practices play a vital role in maintaining the health of the ecosystem and the local community3. The Venda people have a unique philosophy called Mupo, which encompasses all natural creation and describes a great order of things of which humans are just one small part2. Traditional community life sees the vhaVenda people living closely with the natural environment, depending on ancestral indigenous knowledge and traditional farming practices to maintain ecosystem resources. Today, the traditions and practices of the Venda community are under threat due to the destruction of sacred sites by new developments.
The community is actively engaged in a battle with developers to protect their sacred areas, which have become tourist hotspots and attract visitors with a limited understanding of the wider cultural significance of the sites2. Of particular importance is Phiphidi Falls, a sacred waterfall which plays a vital role in the Makhadzi rituals to bring rain to the area. During 2007-2008 a road was built across a river near Phiphidi Falls. The road passed over a sacred rock, which was broken up and paved over4. Most recently, a series of chalets for tourists has been constructed behind the falls. The Makhadzis have been deeply affected by this destruction of their sacred sites, which is causing considerable disorder within Mupo. As a vhaVenda elder farmer from Tshidzivhe states: “Mupo, or the universe, is the oxygen; if we cut down the trees we can't breathe the fresh air. Even the grasses can die, and our animals cannot find where they can graze. If we cut down the trees, even we are going to die. Mupo is our life.” In response to growing concerns heard through discussions with elders, the Mupo Foundation was created in 2007 by Mphatheleni Makaulule, a community leader from the Limpopo province who was driven by concern over the erosion of indigenous knowledge systems and cultural values as well as the mass destruction of the forest. The Mupo Foundation is now an internationally recognized organization affiliated with the Gaia Foundation, and advised by the African Biodiversity Network and the GRAIN programme. The foundation works to preserve and revive cultural diversity in South Africa by strengthening local governance, reviving indigenous seed, facilitating and encouraging international learning, and rebuilding confidence in indigenous knowledge systems5. In direct response to the Phiphidi Falls developments, the Makhadzis formed a committee called Dzomo la Mupo (meaning Voices of Earth) to defend and protect the falls.
In 2010 Dzomo la Mupo took the developers to court for violating the vhaVenda people's traditional, constitutional, cultural, and spiritual rights, and also for breaching planning regulations. Supported by the Mupo Foundation and the Gaia Foundation, the application to the South African High Court was successful and required the developers to stop building the tourism complex at the Phiphidi sacred waterfall and forest, pending a full hearing. The Judge recognised the custodians' constitutional rights and agreed that the whole site is sacred “in the same way a church building is regarded by some as a holy place, even though the rituals are done only at the altar” (Judge Mann, South African High Court, 7 July 2010)2,5. The court proceedings are still underway and it is uncertain whether the development will be halted indefinitely. The vhaVenda people recognize that this case will set a precedent for all developments within the Venda sacred forest. In order to secure long-term protection of their network of sacred sites, Dzomo la Mupo are beginning to document guiding principles to secure 'No Go' zones for development and are developing local constitutions and governance plans for each of the clans protecting different areas of forest2. The activities of Dzomo la Mupo also include teaching the people of Venda about the value of sacred sites, to ensure that indigenous knowledge is conserved. In order to document such knowledge, a community exercise in eco-cultural mapping was carried out in November 2009. More than 70 people took part, spending six days creating maps to display indigenous knowledge of the ancestral order of the territory, reflecting how things were when the community lived traditionally and marking sacred sites. These maps were compared to a map of the present, which allowed the community to measure the destruction and habitat loss which has taken place.
Finally, a third map shows a vision of the future: the vision of how the communities wish to regenerate their territory and rebuild their communities3. This is key to the Mupo Foundation's mission, states Mphatheleni Makaulule (founder and Director): “If we look at the ancestral way, we find the solution to rebuild what has been destroyed”4. The Mupo Foundation is actively engaged in improving the agricultural practice of the Venda farmers by encouraging the trade and use of local seeds, including finger millet, maize, beans, sesame, and indigenous vegetables, as well as raising awareness of wild greens and fruits. In particular, finger millet is a sacred seed of the vhaVenda people, and the Mupo Foundation facilitates seed workshops to encourage the revival of this crop6. Threats and Challenges / What's Next? As well as an ongoing battle with tourist developers, the Venda forest is under threat due to a water licence application by Coal of Africa. The company is prospecting to mine coal at the Vele mining site at the edge of the Mapungubwe world heritage site, which would necessitate the extraction of underground water to feed the new Medupi power station. The Mupo Foundation has sent an appeal to COP17 to halt the mining operations on the grounds that the forest is more valuable than coal7,8. The Mupo Foundation is increasingly aware of the need to conserve the water resources of the Venda area by protecting the natural springs within the forest. The best way to do this is to plant indigenous trees along the river banks, which protect the springs from erosion. The vhaVenda people are therefore campaigning against deforestation and for protection of the area's natural wetlands7. 1. Lahiff E. (2000). An Apartheid Oasis?: Agriculture and Rural Livelihoods in Venda. p.55, Routledge, London, England. 2. The Gaia Foundation (2012). Saving Phiphidi Falls. Available at: http://www.gaiafoundation.org/saving-phiphidi-falls [Accessed: 29/03/12]. 3.
The Gaia Foundation (2010). Venda Community Fight to Protect Sacred Waterfall. Available at: http://www.slideshare.net/FionaWilton/venda-community-fight-to-protect-sacred-waterfall [Accessed: 29/03/12]. 4. The Gaia Foundation (2012). The Makhadzi- Defenders of the Sacred Sites. Available at: http://www.gaiafoundation.org/galleries/albums/makhadzis-defenders-sacred-sites [Accessed: 29/03/12]. 5. The Mupo Foundation (2011). Background. Available at: http://mupofoundation.org/about/ [Accessed: 29/03/12]. 6. The Mupo Foundation (2011). Seed Security. Available at: http://mupofoundation.org/our-programme/seed-security/ [Accessed: 29/03/12]. 7. The Gaia Foundation (2012). CoAL Out of Africa!. Available at: http://www.gaiafoundation.org/news/coal-out-of-africa [Accessed: 29/03/12]. 8. The Mupo Foundation (2011). Coal mining plans jeopardise the cultural and ecological future of Venda. Available at: http://mupofoundation.org/our-programme/seed-security/ [Accessed: 29/03/12]. Last updated: 30th March 2012
The aim of this investigation is to determine the distance a comet moves in a period of time, and thus the speed of the comet as it approaches the Sun. The comet's speed will also be compared with the speed of a coronal mass ejection (CME) from the Sun. Digital technology and Pythagoras' theorem will be used throughout.
ACMNA214 - Find the distance between two points located on a Cartesian plane using a range of strategies, including graphing software
Distance, Speed, Coordinates, Pythagoras
About the Lesson
A coronagraph is a telescopic attachment used to block out the Sun's glare so that nearby objects can be resolved. The coronagraph image overleaf was obtained from the Solar and Heliospheric Observatory (SOHO), located at http://soho.nascom.nasa.gov/. It represents Comet McNaught as it approached the Sun in early January 2007. The comet appears to be approaching the planet Mercury (the bright object in the foreground). In this activity, students make a series of calculations to determine the distance travelled and the speed of the comet as it passes by the Sun.
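The core calculation students perform can be sketched as follows. This Python example is illustrative only: the two positions, the distance units, and the time interval are invented, not read off the actual SOHO image.

```python
import math

# Two hypothetical comet positions read off a coronagraph image, already
# converted from Cartesian plot coordinates to millions of km via the image scale.
x1, y1 = 2.0, 3.0    # position at the first timestamp (million km)
x2, y2 = 5.0, 7.0    # position at the second timestamp (million km)
hours = 10.0         # time elapsed between the two images

# Pythagoras' theorem: distance between two points on a Cartesian plane.
distance = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)   # million km
speed = distance / hours                                 # million km per hour

print(f"Distance travelled: {distance:.1f} million km")
print(f"Average speed: {speed:.2f} million km/h")
```

The same two-step calculation (distance from coordinates, then speed as distance over time) is what students repeat for successive image pairs, and it can be reused unchanged to estimate the speed of a CME for comparison.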
State Species of Special Concern Description: The spotted turtle is characterized by a smooth, bluish-black carapace (top shell) with yellow-orange spots. The carapace is made up of a combination of scales (scutes) and bones, and it includes the ribs and much of the backbone. This turtle is sometimes referred to as the “polka-dot turtle,” as the number of spots can range from a single dot to multiple dots per scute. The plastron (bottom shell) is yellowish-tan with dark markings. The sides of the head and chin are often marked with reddish-orange to yellow blotches, and the forearms may also be bright orange. Spotted turtles are small, only growing to about 4.5 inches in length and weighing between ½ to ¾ pounds. Males are distinguished by a tan chin, brown eyes, concave plastron, and a longer, thicker tail. Females have a more domed shell, yellow chin, and orange eyes. Hatchling spotted turtles are 1 to 1.5-inches long when born. Range: The spotted turtle has a somewhat disjunct range in North America. It occupies the eastern portion of the Great Lakes region from Ontario south to Illinois and west to Michigan. It also is found along the eastern seaboard from southern Maine south to Florida. Isolated populations also occur in southern Quebec, southern Ontario, central Illinois, central Georgia, and northcentral Florida. Habitat and Diet: Spotted turtles are found throughout the Connecticut lowlands, close to slow-moving bodies of water. They use shallow water bodies, including unpolluted bogs, pond edges, ditches, marshes, fens, vernal pools, red maple swamps, and slow-moving streams. Water bodies with a soft, murky bottom and abundant aquatic vegetation are preferred. Spotted turtles will seek out other wetlands if their habitat becomes unsuitable. Upland habitats also are used for nesting, aestivating, and travel corridors between wetlands. 
The spotted turtle is omnivorous, feeding on aquatic plants, small fish, snails, worms, slugs, spiders, tadpoles, and small crustaceans. Interestingly, this species will only feed under water. Life History: Spotted turtles emerge from hibernation in early spring, usually in March, and begin looking for mates. After breeding, the females leave the breeding pools in search of nesting areas. They may travel a good distance and, in many instances, are killed when crossing roads. Preferred nesting sites are generally located in open, upland habitats, such as a meadow, field, or the edge of a road. The female digs a nest cavity with her hind legs and feet, and then lays about 3-4 eggs. She covers the eggs with soil, smoothing it over by dragging her body over the ground. The eggs hatch in mid-September through October, but some hatchlings may overwinter in the nest and surface the following spring. Sex of the hatchlings is determined by the temperature and humidity of the nest. Due to this turtle's small size, predation is high, especially for hatchlings. Mammals, such as raccoons and muskrats, often prey on spotted turtles, as well as some birds and predaceous fish. Spotted turtles are thought to live 25 to 50 years and reach sexual maturity at 8 to 10 years of age. Spotted turtles are active only during daylight hours, and spend the night under water on the pond bottom. They are often seen basking on logs or rocks during spring and summer, but may retreat to an aquatic or terrestrial spot (under the leaf litter) when there is intense heat. This summer “hibernation” is known as aestivation. Conservation Concerns: Spotted turtles are protected by the Connecticut Threatened and Endangered Species Act and may not be collected from the wild. The isolation and decline of spotted turtle populations are attributed to collection for the pet trade industry; the alteration, loss, and fragmentation of habitat; habitat succession; road mortality; and predation. 
Relatively low reproductive rates, coupled with the above-mentioned threats, make spotted turtles extremely susceptible to population declines. They are sensitive to pollution and toxic substances, and will disappear rapidly from habitats with declining water quality. Mortality associated with crossing roads is especially problematic given that the turtles that cross roads are often pregnant females in search of a nesting site. Every individual turtle collected from the wild to become a pet has a profound effect because each turtle removed is no longer able to be a reproducing member of that population. What You Can Do You can help the spotted turtle, and many other wildlife species, by conserving wetland habitat wherever you can. Use proper erosion and sediment controls during work that involves disturbance of soils adjacent to wetland systems. The creation or maintenance of a buffer strip of natural vegetation (minimum of 100 feet) along wetlands and watercourses will protect the water quality of the wetlands that these turtles require. Leave turtles in the wild. They should never be kept as pets. Whether collected singly or for the pet trade, turtles that are removed from the wild are no longer able to be a reproducing member of a population. Every turtle removed reduces the ability of the population to maintain itself. You should also keep in mind that caring for a pet turtle is not as easy as you may think. They require specific temperatures, diets, and lighting for digestion and shell health. Cages must be kept clean as turtles can carry salmonella. And, turtles live a long time. No person shall possess in excess of one spotted turtle at any time. Never release a captive turtle into the wild. It probably would not survive, may not be native to the area, and could introduce diseases to wild populations. Do not disturb turtles nesting in yards or gardens. As you drive, watch out for turtles crossing the road. 
Turtles found crossing roads are often pregnant females and they should be helped on their way and not collected. Without creating a traffic hazard or compromising safety, drivers are encouraged to avoid running over turtles that are crossing roads. Also, still keeping safety precautions in mind, you may elect to pick up turtles from the road and move them onto the side they are headed. Never relocate a turtle to another area that is far from where you found it. Learn more about turtles and their conservation concerns, and educate others about how to help Connecticut's turtle species. Content last updated on October 1, 2015.
by John Charlton (Nov. 25, 2009) — It is often asked whether the usurpation of the presidency by Barack Hussein Obama will ever have a remedy in the courts. But the fact is that there already is a remedy in the courts: the ruling of the Supreme Court of the United States in Marbury v. Madison, 5 U.S. 1 Cranch 137 137 (1803). That case was the first to expressly indicate that no action of Congress was valid if it contravened the Constitution. Since the U.S. Constitution requires that a President be a natural born citizen; and since the Supreme Court has in 4 cases used the term “natural born citizen” only in reference to one born in the U.S.A. of parents who were citizens at the time of his birth, it follows inexorably that Obama’s election as president by the Joint Session of Congress, on January 8, 2009, is null and void. Against this legal conclusion there is no argument. Here is the crucial text of the Supreme Court’s decision in Marbury v. Madison. The question whether an act repugnant to the Constitution can become the law of the land is a question deeply interesting to the United States, but, happily, not of an intricacy proportioned to its interest. It seems only necessary to recognise certain principles, supposed to have been long and well established, to decide it. That the people have an original right to establish for their future government such principles as, in their opinion, shall most conduce to their own happiness is the basis on which the whole American fabric has been erected. The exercise of this original right is a very great exertion; nor can it nor ought it to be frequently repeated. The principles, therefore, so established are deemed fundamental. And as the authority from which they proceed, is supreme, and can seldom act, they are designed to be permanent. This original and supreme will organizes the government and assigns to different departments their respective powers. 
It may either stop here or establish certain limits not to be transcended by those departments. The Government of the United States is of the latter description. The powers of the Legislature are defined and limited; and that those limits may not be mistaken or forgotten, the Constitution is written. To what purpose are powers limited, and to what purpose is that limitation committed to writing, if these limits may at any time be passed by those intended to be restrained? The distinction between a government with limited and unlimited powers is abolished if those limits do not confine the persons on whom they are imposed, and if acts prohibited and acts allowed are of equal obligation. It is a proposition too plain to be contested that the Constitution controls any legislative act repugnant to it, or that the Legislature may alter the Constitution by an ordinary act. Between these alternatives there is no middle ground. The Constitution is either a superior, paramount law, unchangeable by ordinary means, or it is on a level with ordinary legislative acts, and, like other acts, is alterable when the legislature shall please to alter it. If the former part of the alternative be true, then a legislative act contrary to the Constitution is not law; if the latter part be true, then written Constitutions are absurd attempts on the part of the people to limit a power in its own nature illimitable. Certainly all those who have framed written Constitutions contemplate them as forming the fundamental and paramount law of the nation, and consequently the theory of every such government must be that an act of the Legislature repugnant to the Constitution is void. This theory is essentially attached to a written Constitution, and is consequently to be considered by this Court as one of the fundamental principles of our society. It is not, therefore, to be lost sight of in the further consideration of this subject. 
If an act of the Legislature repugnant to the Constitution is void, does it, notwithstanding its invalidity, bind the Courts and oblige them to give it effect? Or, in other words, though it be not law, does it constitute a rule as operative as if it was a law? This would be to overthrow in fact what was established in theory, and would seem, at first view, an absurdity too gross to be insisted on. It shall, however, receive a more attentive consideration. It is emphatically the province and duty of the Judicial Department to say what the law is. Those who apply the rule to particular cases must, of necessity, expound and interpret that rule. If two laws conflict with each other, the Courts must decide on the operation of each. So, if a law be in opposition to the Constitution, if both the law and the Constitution apply to a particular case, so that the Court must either decide that case conformably to the law, disregarding the Constitution, or conformably to the Constitution, disregarding the law, the Court must determine which of these conflicting rules governs the case. This is of the very essence of judicial duty. If, then, the Courts are to regard the Constitution, and the Constitution is superior to any ordinary act of the Legislature, the Constitution, and not such ordinary act, must govern the case to which they both apply. Those, then, who controvert the principle that the Constitution is to be considered in court as a paramount law are reduced to the necessity of maintaining that courts must close their eyes on the Constitution, and see only the law. This doctrine would subvert the very foundation of all written Constitutions. It would declare that an act which, according to the principles and theory of our government, is entirely void, is yet, in practice, completely obligatory. It would declare that, if the Legislature shall do what is expressly forbidden, such act, notwithstanding the express prohibition, is in reality effectual. 
It would be giving to the Legislature a practical and real omnipotence with the same breath which professes to restrict their powers within narrow limits. It is prescribing limits, and declaring that those limits may be passed at pleasure. That it thus reduces to nothing what we have deemed the greatest improvement on political institutions — a written Constitution, would of itself be sufficient, in America where written Constitutions have been viewed with so much reverence, for rejecting the construction. But the peculiar expressions of the Constitution of the United States furnish additional arguments in favour of its rejection. Could it be the intention of those who gave this power to say that, in using it, the Constitution should not be looked into? That a case arising under the Constitution should be decided without examining the instrument under which it arises? This is too extravagant to be maintained. In some cases then, the Constitution must be looked into by the judges. And if they can open it at all, what part of it are they forbidden to read or to obey? There are many other parts of the Constitution which serve to illustrate this subject. It is declared that “no tax or duty shall be laid on articles exported from any State.” Suppose a duty on the export of cotton, of tobacco, or of flour, and a suit instituted to recover it. Ought judgment to be rendered in such a case? ought the judges to close their eyes on the Constitution, and only see the law? The Constitution declares that “no bill of attainder or ex post facto law shall be passed.” If, however, such a bill should be passed and a person should be prosecuted under it, must the Court condemn to death those victims whom the Constitution endeavours to preserve? “No person,’ says the Constitution, ’shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.” Here. the language of the Constitution is addressed especially to the Courts. 
It prescribes, directly for them, a rule of evidence not to be departed from. If the Legislature should change that rule, and declare one witness, or a confession out of court, sufficient for conviction, must the constitutional principle yield to the legislative act? From these and many other selections which might be made, it is apparent that the framers of the Constitution contemplated that instrument as a rule for the government of courts, as well as of the Legislature. Why otherwise does it direct the judges to take an oath to support it? This oath certainly applies in an especial manner to their conduct in their official character. How immoral to impose it on them if they were to be used as the instruments, and the knowing instruments, for violating what they swear to support! The oath of office, too, imposed by the Legislature, is completely demonstrative of the legislative opinion on this subject. It is in these words: “I do solemnly swear that I will administer justice without respect to persons, and do equal right to the poor and to the rich; and that I will faithfully and impartially discharge all the duties incumbent on me as according to the best of my abilities and understanding, agreeably to the Constitution and laws of the United States.” Why does a judge swear to discharge his duties agreeably to the Constitution of the United States if that Constitution forms no rule for his government? if it is closed upon him and cannot be inspected by him? If such be the real state of things, this is worse than solemn mockery. To prescribe or to take this oath becomes equally a crime. It is also not entirely unworthy of observation that, in declaring what shall be the supreme law of the land, the Constitution itself is first mentioned, and not the laws of the United States generally, but those only which shall be made in pursuance of the Constitution, have that rank. 
Thus, the particular phraseology of the Constitution of the United States confirms and strengthens the principle, supposed to be essential to all written Constitutions, that a law repugnant to the Constitution is void, and that courts, as well as other departments, are bound by that instrument.
Its also known as the McClintic Wildlife Management Area. The 3,655 acres of wild life management is occupied by 600 acres of farmland, 180 acres of wetland, encompassing 31 ponds, 1,100 acres of brush-land and 1,775 acres of mixed hardwood forest. Part of the area was used as a place to manufacture and store ammunition in World War II. Almost a hundred large concrete domes, often called Igloos or bunkers, were built into the ground to house explosives. These were designed to not be noticeable from the air. The 45 million dollar power plant employed 3500 at its peak of operation from 1942 to 1945. Many of The Mothman sightings took place in or around this area leading people to think of it as Mothman's home or hideout.It was later revealed that the TNT Area was horribly polluted and many of the ingredients that went into making the ammunition were dumped and left to seep into the local pounds. Pond 13 is known for being effected the worst. A women fishing in that pond discovered a red water seep and pointed it out to a wildlife biologist named Tom Dotson. He sent it to the lab and discovered that it did contain levels of TNT. In the 1970's, scientists declared the area an environmental disaster and it became one of the top EPA Superfund clean up sights. The abandoned North Power Plant building that once stood on the property was demolished in the early 1990's. It was thought to be unsafe because of the frequent trespassers and the state of the building's decay. On May 17th 2010, an underground storage bunker filled with ammunition exploded in The TNT Area. Empty barrels and metal storage boxes were thrown everywhere, some landing as far as 100 feet away in a nearby swamp. The steel doors were thrown off and the ceiling, which is made of 6-inch concrete, lifted up and then caved in. 
The bunker contained 15,000 to 20,000 pounds of military issued gunpowder and was leased by Richard King of High Performance Ammunitions.The West Virginia State Police and the Federal authorities temporarily shut down the area later that year. They preformed a full sweep of the property looking for illegal or potentially dangerous explosives before reopening the area. The Mothman Festival still gives tours of The TNT Area each year.
January 17, 2011 High Birth Weight In First Nations Babies Linked To A Higher Risk Of Postneonatal Death High birth weight in First Nations (North American Indian) babies are linked to a higher risk of postneonatal death (infant deaths that occur from 4 weeks to 1 year of age), according to a study published in CMAJ (Canadian Medical Association Journal) (pre-embargo link only) http://www.cmaj.ca/embargo/cmaj100837.pdf. High birth weight or infant macrosomia is defined as a child being born above the 90th percentile relative to a fetal growth standard. Maternal obesity, impaired glucose tolerance and gestational diabetes can all contribute to infant macrosomia and these risks affect a greater proportion of Aboriginal populations.This study was conducted to determine if the high incidence of macrosomia is a risk factor for perinatal and postneonatal death among First Nations in Quebec. It compared 5193 births to First Nations mothers versus 633,424 births to French mother tongue women from 1991 to 2000 in Quebec. "We found that macrosomia was not associated with increased risk of perinatal death among births to First Nations women, although its [macrosomia's] prevalence was three times higher than among births to women whose mother tongue women is French," writes Dr. Zhong-Cheng Luo, of the University of Montreal's Department of Obstetrics and Gynecology and Sainte-Justine University Hospital Centre in Montreal, Quebec with coauthors. "Unexpectedly, macrosomia was associated with a much greater risk of postneonatal death (8.3 times as high) among infants of First Nations versus French mother tongue women." The authors conclude that parents and caregivers should be watchful for the potential high risk of postneonatal death among infants with macrosomia from First Nations women. Further research is needed into determining cause and effective prevention programs must be developed. 
In a related commentary (pre-embargo link only) http://www.cmaj.ca/embargo/cmaj101700.pdf, Katherine Dray-Donald, Dietetics and Human Nutrition, McGill University, writes, "despite these intriguing results concerning possible protective effects of large-for-gestational-age on infant mortality, the factors leading women to have large-for-gestational-age infants are serious for their health; pregravid obesity, weight gain during pregnancy and gestational diabetes. High birthweights in infants pose their own risks in relation to obesity as well. Good nutrition and healthy weight gains in pregnancy, food security for all, and encouraging breastfeeding and many other factors are needed to close the health disparities between infants of First Nations families and those of other Canadians." On the Net:
Nearly a year has passed since Pakistan was devastated by floods on an unprecedented scale. The disaster left many scars, but none so surprising as the ghostly ‘spider trees’ along the country’s river banks. In July 2010, Pakistan experienced the equivalent of ten years worth of rain in less than a week. The downpour left several thousand people dead and 21 million refugees. The economic fallout was also disastrous, with tens of thousands of cattle drowned and an estimated seven billion euros of material damage. A study carried out by the American Geophysical Union recently concluded that the disaster could have been limited if the Pakistani authorities had known how to analyse the meteorological indications in the days leading up the disaster. These photos were taken in the Sindh Valley, in south-eastern Pakistan. The waters receded more slowly from Sindh than any of the other zones affected by the floods. "They were just small, harmless spiders, but there were a lot of them" Russell Watkins works for the Department for International Development (DfID), the British governmental department responsible for promoting development and poverty reduction. I took these photos in December 2010. We do not know when this phenomenon first appeared but the residents, who had left the region in August during the floods, discovered these trees on their return in November 2010. They had never seen anything like it before. As the waters rose, the spiders instinctively climbed the trees, bit by bit, to protect themselves. The floods took so long to recede that the spiders ended up colonising whole trees. The spider webs which they wove ended up embalming the branches, creating these strange sights. They were just small, harmless spiders, but there were a lot of them ! "Locals say the spiders reduced the risk of malaria infection" Local residents told us that the spider webs trapped a large quantity of mosquitos, which are especially prevalent in the marshy areas. 
Although they say they have no scientific verification, they believe that the risk of malaria infection has decreased as a result. Some of my colleagues have recently been back to these sites. They noticed that the spider webs are beginning to disappear as the waters recede and the rains start. But most trees were asphyxiated by the webs, and did not survive the spider invasion. There are still a million displaced people in the Sindh region. We are working to build homes and deliver food and medical supplies, but it takes time. We are in for the long-haul." Post written with FRANCE 24 journalist Ségolène Malterre.
About Half of Tax Returns Report Less than $30,000 October 28, 2013 To understand the distribution of the income tax burden, we must first understand the distribution of tax filers. The median taxpayer earns roughly $33,000. This means that half of the 145 million tax filers (about 72 million or so) earn less than $33,000 and half earn more. While only about 14 percent of taxpayers earn more than $100,000, they pay the vast majority of all income taxes in America today. We should always keep in mind that no two tax returns are alike. While one tax return may represent the wage income of a single individual or head of household, another may represent the income of a two-earner married couple, composed of a mix of sources such as capital gains, dividends, or even the profits from a family-owned business. Failing to account for these differences can give a false impression of inequality in America. For more charts like the one below, see the second edition of our chart book, Putting a Face on America's Tax Returns.
For over 60 years, ethnic Karen and other ethnic farming communities in Eastern Burma have been under attack by the Burmese Army. This is because these jungle mountain lands are rich in gold, precious gems, hydro power and other sources of income. As a result, over 3,600 villages have been burned to the ground by the Burmese Army. Over 400,000 villagers have been displaced and over 800,000 forced laborers have been used for the profit of the Burmese government, military and business leaders. As a direct result of this aggression war orphans have been created. The Karen do not believe in offering these children up for adoption and so they take care of their own as they are able. Of all “the oppressed,” these are the most defenseless. Worldwide Impact Now works in collaboration with Karen civil society and health care leaders to modestly support these orphans’ needs based on donor support.
Like a busy city, a cell works better if it can dispose of and recycle its garbage. Now a Japanese scientist has won the Nobel Prize in medicine for showing how that happens. The research may pay off in treatments for diseases such as cancer, Parkinson’s and Type 2 diabetes. Yoshinori Ohsumi, 71, of the Tokyo Institute of Technology, was cited Monday for “brilliant experiments” that illuminated autophagy, in which cells gobble up damaged or worn-out pieces of themselves. Autophagy means “self-eating.”
Qur’an emphasizes on the role of the human hearts in mental, emotional and spiritual decisions of man.Of these Qur'anic statements, some describe this sentient organ as having the capacity of being a center of reasoning, intentions and decision-making. Consequently, human hearts can either be healthy or diseased. A man insulted Khalid ibn Walid, so he turned to him and said, “It is your scroll of deeds, so fill it with whatever you wish.” How would you like to read your book in front of all of mankind? How can you beautify your book by filtering it from non-beneficial things? This article is aimed at providing answers to all this and more, in sha Allah
Advanced Composition discussion “Anemia, an easily reversible feature of end-stage renal disease, is an independent risk factor for clinical and echocardiographic cardiac disease, as well as mortality in end-stage renal disease patients.” (Robert N. Foley, 1996). Anemia in End stage renal disease patients can be easily managed. With the help of Epogen medication, the disease process is decreased. Patients suffer less cardiac diseases added to the renal disease. This will allow the patients to continue to be healthier than without the epogen. With the patients healthier less other medications will be needed. Patients with added heart disease are on multiple medications. Cardiac disease also includes more expensive procedures like cardiac caths, bypass surgery, and valve replacements. Pacemakers are common to help cardiac patients heart stay in rhythm. Not only is the expense more costly for sicker patients, the patients can no longer enjoy life as much being sick. Quality is poor. The quality can become so poor that walker, wheelchair, oxygen tank are now needed to go or do anything. Now the patient with extra medical conditions no longer qualifies for transplant. This patient will always be dependent on dialysis. Now with the cost of epogen saved through decreasing the dose, patients are sicker and costing more. Yes I think the bullying is a coping mechanism set in from the home experience, and now it might be just automatic. The person may not realize it is wrong at all anymore. This is very dangerous. Unremorsfulness is a sign that it is leading from mal-adaptive behavior to psychopathic behavior. It could continue to worsen especially if no one speaks up about the bully. The child being bullied might be too scared to say anything. The adults need to observe and put a stop to it however possible. I wonder do the people that are not able to get the narcotic turn to other drugs or street drugs? Will these cause seekers to go to multiple physicians and pharmacies? 
Could this lead to the premature deaths we hear about that overdosed on multiple medications?
Are you designing an artwork in Photoshop that is intended as a printing material? Are your Photoshop images printing out blurry and of poor quality? Web, web, web but what about Print Media? We live such a digital world that we forget that print design still a huge part of the design industry and is everywhere we look at. We are so used to see it everyday; on newspapers, advertisement, brochures, business cards, T-shirt designs, posters, etc. So how do you make sure that you leave a good impression on people holding your print design in their hands? This is where experience and advanced print design skills come into play. New designers fail to understand the difference between designing graphics for strict digital use (web) and designing print media. Web graphics commonly consist of banners, logos, web templates, icons etc. that will never see a paper. Print media on the other hand should be designed specifically for high quality printing. Why we need all those numbers? DPI or Resolution DPI = Dots Per Inch (commonly known as resolution) Failing to set the correct DPI, before you start working on your design is one of the most common mistake. When you open up a new canvas in Photoshop by going to File > New, the first window that pops up has a field for resolution, which is usually set by default to 72 pixels/inch. This value is telling us that there will be 72 dots for every square inch of the document. 72 dpi is ideal when viewing graphics on a computer screen, however when you are printing a design out on stock, 72 dots per inch is not enough, and will usually result in your graphic looking blurry and pixelated. As you’ve probably already figured, increasing the dpi will result in a higher quality image. For most printing purposes you will want to set the dpi to 300 pixels/inch to obtain a good quality print. The downside? 
That X by Y dimensions image that you just entered into your document that took up say 80% of the canvas, will now only take up say 20% of it. Why? This is because you have set the document to contain more pixels for square inch, that X by Y image is still X by Y in dimensions, but now has a much lower resolution compared to your high resolution document thus ends occupying only a smaller space. This is shown the images below. The same stock images is placed in a 5” x 5” canvas. The first one is at 72 dpi while the other one is at 300 dpi. Notice also that the one at 300 dpi has a much larger file size. As you are starting to realize, to obtain high quality prints, the images you use have to be of high resolution, translating directly to higher dimension pictures. For example that 300 x 300 px image you pulled of a Google search will end up being a tiny little spec on your 300 dpi canvas. Sure you can expand the image, but you will just end up degrading its quality. The solution is to find larger images. For example, if you obtain an image off Google that is 2000 x 2000 px, it will take up more space and you most likely wont have to stretch it out. Google image search actually has a built in filter that will let you search specifically for larger files. When making graphics for web, you usually set the dimensions in pixels. For print media however, you will want to set the dimensions based on the physical stock you will be printing to. So if your flyer is 6” x 4” you will want to set your canvas’s dimension to 6 x 4 in inches (plus the bleed margin which we will explain in the next section), not pixels. This is because after setting a higher DPI, the notion of pixels becomes irrelevant to document size, especially when you are printing it out. If you set the DPI to 300 as opposed to the default 72, your document will now have more pixels per square inch. 
Thus a 6” x 4” document at 300 dpi will have a larger dimension in pixels than a 6” x 4” document at 72 dpi, which would have a much smaller dimension in pixels. Bleed = a printing term that refers to printing that goes beyond the edge of the sheet before it is trimmed. No printer, ever, will print your design to the exact dimensions you specify. More than likely, you will lose some data at the edges of your document. This is compensated for by overprinting and then cutting to the required dimensions. To account for the bleed, there are two margins you will want to set up. The first margin is the cut line, which is the exact dimension of your media, and as the name suggests, is where the stock will be cut. The other margin is the safety line, which is an eighth of an inch (0.125”) inside the cut line. All important data such as text should be inside the safety line. Your Photoshop canvas will extend an extra eighth of an inch beyond the cut line on each side, so you will always add a quarter inch (0.25”) to the height and width of your canvas. Let’s take a standard business card, which is 3.5” x 2”. Canvas size: 3.75” x 2.25” (the dimensions to use when you create a new document in Photoshop) Cut line: 3.5” x 2” Safety line: 3.25” x 1.75” To help you visualize all this while you are designing, you can set up rulers in Photoshop. - Create a new document, and add 0.25” to the exact dimensions. - Go to View > Rulers to turn on the ruler. You will want to ensure that it is set to inches (unless you are accustomed to mm) by hitting Ctrl + K to open up the Photoshop Preferences window, selecting Units & Rulers and ensuring the units are set to inches. - Click within the ruler area and drag out the rulers as shown: Colors, colors and more colors! A word of advice: the colors you see on your computer screen will most likely not be the exact same colors you see on paper. This is for several reasons, and may even depend on what stock you choose to print on.
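Returning briefly to the bleed margins above: the arithmetic reduces to one rule, an eighth of an inch on every side, which means a quarter inch added to (canvas) or subtracted from (safety line) each cut dimension. A quick sketch (the function name is mine):

```python
# Sketch of the cut/safety/canvas arithmetic for print bleed.
# 0.125" per side -> 0.25" total per dimension.

BLEED_PER_SIDE = 0.125  # inches

def print_margins(cut_w, cut_h, bleed=BLEED_PER_SIDE):
    """Return canvas, cut line, and safety line sizes for a given cut size."""
    return {
        "canvas": (cut_w + 2 * bleed, cut_h + 2 * bleed),
        "cut":    (cut_w, cut_h),
        "safety": (cut_w - 2 * bleed, cut_h - 2 * bleed),
    }

# Standard 3.5" x 2" business card:
for name, (w, h) in print_margins(3.5, 2).items():
    print(f'{name}: {w}" x {h}"')
# canvas: 3.75" x 2.25"
# cut: 3.5" x 2"
# safety: 3.25" x 1.75"
```

These are exactly the business-card numbers worked out above; swap in any other cut size to get the matching Photoshop canvas dimensions.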
This is especially problematic for dark designs, which may be viewable on a computer screen but will barely show on paper. Also, some printers may not pick up lines in your design that are too thin. As a rule of thumb, always print out a sample prior to printing large quantities. The best color profile for printing is CMYK, and it should be set in the New document settings. CMYK = short for Cyan-Magenta-Yellow-Black, a color model in which all colors are described as a mixture of these four process colors. CMYK is the standard color model used in offset printing for full-color documents. So many acronyms and I still don’t know how to export it. Although you have several file formats to choose from when it comes to exporting your design, the easiest format to go with is a Photoshop PDF. The problem with choosing an image format such as JPEG is that it is measured in pixels, and as we explained earlier, the pixel dimensions will vary depending on what resolution you choose. That means whoever is printing your image will not know exactly what physical dimensions you designed for. A PDF, however, will fix the physical dimensions to match what you defined earlier in Photoshop. With a PDF, pixels will not come into play. To save a PDF, simply go to File > Save As and select Photoshop PDF in the drop-down menu. What techniques do you use for your printing material? What else should we take into consideration when creating that awesome printed design?
Interior design course Home furnishing is the art of making an environment more comfortable through furniture, light and the organisation of space. The history of home furnishings is associated with the entire history of human beings. It is humans who create the environment and provide meaning and functionality to spaces, on the basis of their own needs. Space and its designs, objects and furniture tell us about us and about our history: they belong directly to our cultural, spiritual and social heritage. - Historical introduction to design and its evolution - Techniques of graphic representation - Characteristics and properties of materials - Geometric design and graphical representation lessons. - Design analysis elements - Final paper: choice of a subject: creation and graphic and technical representation.
Thanks to the constant snowfall this winter, kids have been unsupervised in their own homes quite often. If you have seen Home Alone, you know that danger lurks inside the home for unsupervised adolescents. General home safety concerns for kids are similar to those for the rest of the family. Here are some guidelines: 1. According to the Red Cross, 65 percent of home fires occur in homes with no working smoke alarms. Make sure there is a smoke detector and carbon monoxide (CO) detector on every level of your home and outside of every sleep area. 2. Replace alkaline batteries in detectors every 6 months and replace detectors every 10 years. 3. Keep a fire extinguisher in your home and make sure your teen knows how to use it. 1. Post emergency phone numbers and information about how to locate your home (e.g., address, nearby landmarks) near each telephone. Include the following numbers: Poison control, Pediatrician, Police/Fire department, Ambulance service and Trusted neighbor or nearby friend. 2. Teach your teens how to safely operate any appliance before allowing them to use it independently (Adolescents under the age of 16 should not operate riding lawnmowers). *Chart courtesy of the University of Louisiana at Monroe
NITROGEN Nitrogen is instrumental in increasing leaf and stem growth, as well as the plant's dark green color. Nitrogen also influences the crispness and quality of leaves, as well as quick early growth in the spring. NITROGEN--DEFICIENCY Deficiency symptoms include a yellowing of foliage or a light green-yellow overall color. Plant growth is reduced and/or stunted.
If anyone has a reason to be skittish about space debris, it's the people of Texas. It's in Houston, after all, that much of what we launch into orbit is monitored. And it's in rural Texas that much of the flaming wreckage of the shuttle Columbia landed in 2003. Sunday morning, it looked like Texas was in the path of danger again, when police received numerous reports of a sonic boom, a visible fireball and debris descending in various spots around the state. That debris, people figured, had to be space junk reentering from Tuesday's collision between an American communications satellite and a spent Russian satellite. FAA spokesman Roland Herwig confirms that the calls did come in and confirms that a warning did go out to pilots yesterday to be alert to the possibility of satellite wreckage. But the critical bit of evidence, actual debris recovered on the ground, has not turned up. "We have not seen any indication of anything being found," Herwig told TIME on Sunday evening. "Our source for this would be local law enforcement." For now, there's little reason to worry. NASA told TIME on Sunday that the events seen and heard earlier in the day bore the hallmarks of a natural incident; debris from a satellite collision is generally too small to be seen. The satellites involved in last week's cosmic crack-up were relatively small machines. The Russian ship weighed 1,235 lbs.; the American ship was about a ton. Once that mass is broken up into smaller pieces, the atmosphere ought to do a pretty good job of incinerating it. Skylab did shower the Australian outback with wreckage during its reentry in the summer of 1979, but it weighed a whale-like 91 metric tons; Columbia weighed 47. This isn't to say that Sunday's reports weren't accurate, but with a lot more naturally occurring flotsam whizzing around space than the man-made kind, Earth is always in the path of something or other.
A sonic boom is perfectly consistent with anything entering our atmosphere, as is a visible fireball, hence the phenomenon of the shooting star. On any other day, the Texas sightings would be dismissed as nothing more than that. Those rocks don't reach the ground because the atmosphere dispatches them neatly, and it should have no trouble digesting the satellite junk too. One way or the other, Texans and anyone else on the ground are probably safe. Still, Herwig concedes the FAA is not ruling anything in or out. "The first thing in my job description is not to speculate," he says. That said, he doesn't sound worried either.
C diff (Clostridium difficile) infection is the bane of existence for patients (and their doctors) who take long-term antibiotics for Lyme disease. The infection occurs when normal gut flora are destroyed by antibiotics, clearing the way for the opportunistic bacteria to take hold. The biosphere of the GI tract is composed of trillions of micro-organisms, including a wide variety of bacteria, fungi and parasites. Destruction of these "good bacteria" clears the way for c diff, previously contained within a narrow niche of the overall ecosystem of the gut flora, to disseminate and cause disease. The infection can be mild or severe, at times even life threatening. Although c diff may be a naturally occurring part of the gut flora, infection may be introduced externally through spores or infected individuals. Even in these cases, an intact gut flora helps to prevent the development of clinical disease. So what are these good bacteria? Our bodies are colonized with massive quantities of micro-organisms. These bacteria/organisms may be parasitic or symbiotic. Parasites live off the land, offering nothing in return. Symbionts, on the other hand, can be either commensal (neutral) or mutualistic: something positive is provided to both the bacteria and the host. One hears a lot about the beneficial effects of "good" bacteria. It turns out that some bacteria are good and beneficial to our immune systems. Good bacteria may: synthesize and excrete vitamins, prevent the colonization of pathogenic (disease-causing) bacteria, provide natural antibiotic effects and aid in the production of natural antibodies (amongst others). There are two take-home points (yet to be made): some antibiotics are more commonly associated with c diff, and some probiotics may help prevent the disease. Quinolones such as Levaquin are highly associated with the development of c diff. In addition, because of resistance, these agents appear to be associated with more virulent strains of c diff.
Cephalosporins are more highly implicated. For example, Ceftin and Omnicef are more frequently associated with c diff than drugs from the penicillin family like amoxicillin. Clindamycin is also highly associated with c diff. This knowledge can help direct the prescription of the safest antibiotics. In a published clinical study, a proprietary mix (a yogurt drink) of bacterial probiotics, L. casei, L. bulgaricus and S. thermophilus, was shown to decrease the frequency of c diff. In addition, the probiotic S. boulardii, which is yeast based, may form a barrier that protects the gut. Please take your probiotics. C diff can lead to sepsis, emergency surgery and even death. Treatments are available (Flagyl/Vancomycin), but they are not always effective. And more virulent strains of c diff are starting to appear. More importantly, c diff tends to recur. This can make ongoing treatment very challenging, to say the least.
The Movement, which held a subsequent meeting at Harper's Ferry, W. Va., issued a statement that said in part, ''We claim for ourselves every single right that belongs to a freeborn American, political, civil and social; and until we get these rights we will never cease to protest and assail the ears of America.'' But the movement, hampered by various difficulties, soon sputtered and became inactive. Then the riot came. For six days in August 1908, a mob of white people surged through the streets of Springfield, Ill., lynching and maiming black people at will and at whim. The irony of this happening in the hometown of Abraham Lincoln, earnestly if somewhat simplistically revered as the Great Emancipator, was lost on no one, the rioters least of all. ''Lincoln freed you, we'll show you your place,'' they cried as they flogged black people through the streets. The appalling spectacle energized white liberals like Mary White Ovington and Oswald Garrison Villard. On Lincoln's 100th birthday, Feb. 12, 1909, they joined with DuBois and other remnants of the Niagara Movement to issue a call for a conference on race. That call -- a century ago Thursday -- was the birth certificate of the National Association for the Advancement of Colored People. Lincoln may have emancipated the slaves, but it is the work of organizations like the NAACP that continue to work toward freedom.
The South Island of New Zealand is characterized by grand open landscapes. Divided by a backbone of mountains aptly called the Southern Alps, the South Island is known for spectacular mountains and fiords, large beech forests, golden sand beaches and broad plains. It is generally cooler in climate than the North Island, but don't forget sunhats and T-shirts: temperatures are routinely in the 30s (°C) in summer. In winter the sea buffers the temperatures, which rarely drop much below 0°C except in mountainous regions. Towns and cities The South Island of New Zealand is the larger of the two main islands, though it has fewer people, and is sometimes referred to as the 'Mainland', especially by South Islanders. Geographically the South Island is dominated by the Southern Alps. Dividing the island, the alps affect climate and flora. Most of the South Island's national parks are strung out along the main divide. Generally, the West Coast is wetter and cooler than the east, and the north of the island is warmer than the south. Christchurch, Dunedin, Invercargill and Nelson are the main settlements, although the main attractions are rarely in the cities. All four cities are very different. Christchurch is the largest and has a certain English feel to it, though it is definitely a New World city. Dunedin was settled by Scottish Presbyterians and is very proud of those roots. It also feels older than other cities in NZ because it was built with gold rush money in the late 19th century but has since been surpassed by bigger and brasher cities to the north. Nelson is still very young by European standards (although it was the second city founded in New Zealand) but has a very South Pacific feel, with palm trees and a huge and beautiful white sand beach. It should be noted that beautiful beaches are a dime a dozen in NZ, and some of the best do not average one visitor per day.
The Interislander and the Bluebridge ferry companies run from Wellington to Picton through the Marlborough Sounds and across Cook Strait. The ferries take cars, buses and trains. The scenery on a good day is spectacular. The ferries are substantial ships designed for the sometimes rough conditions, and the journey takes between 3 and 3 1/2 hours. Two standout train routes are on the South Island. The Picton - Christchurch Tranzcoastal begins by traveling through the Marlborough wine region before hugging the Kaikoura Coast and crossing the Canterbury Plains. The Christchurch - Greymouth Tranzalpine crosses the Southern Alps at Arthur's Pass and is rated as one of the most scenic train journeys in the world. Buses are a cheap way to get around to the main centers of the South Island. There is a range of types of services, from luxury coach services to minivan shuttles. Shuttles which serve a local area can be found in the regions and towns which they service. Roads in the South Island vary in quality and traffic, but as long as they are treated with respect they will serve you well. Rental cars are available in most sizable towns. The best range (and hence lowest prices) is in Picton (off the Interislander ferry) and Christchurch. Internet-based rideshare systems are growing as fuel becomes more expensive. Jayride is a good ridesharing site designed specifically for carpooling in New Zealand. See the New Zealand page for more options. The South Island has become the home of Adventure Tourism: that is, ordinary people being encouraged to do crazy things, such as jumping off a bridge with a rubber band tied to their ankles, or riding in a jet boat or rubber raft. Things to do include:
The Inside Passage stretches from Port Hardy on Vancouver Island to Prince Rupert in northern BC, through the protected waters of British Columbia’s central and northern coastline. There are some places, luckily, that are still inaccessible by road. British Columbia’s Central Coast is one of them. Until BC Ferries launched its Discovery Coast Passage run in the summer of 1996, the Central Coast was also largely inaccessible by water. When European explorers arrived along this coast in the 18th century, it was inhabited by Natives from several cultural groups. Although hunters and gatherers like the tribes of the Interior, the coastal natives, due to their abundant food supply, were able to establish permanent villages. Their complex cultures were distinguished by an emphasis on wealth, a refined artistic tradition, and a rich spirit life. Travel along the coast was accomplished via cedar dugout canoes that could be impressive in their length. Although there’s nothing more inspiring than to see one of these massive canoes in action, they are only brought out for ceremonial occasions, such as a paddle trip to Vancouver or the Olympic Peninsula in Washington. These days, aluminum-hulled, high-speed boats are the vessels of choice among all inhabitants of the coast. Explorers from Russia, Britain, France, and Spain converged on this coastline in the last quarter of the 18th century, motivated by trade possibilities or – in the case of Spain – a desire to protect territorial waters. Two British explorers, Captain James Cook in 1778-79 and Captain George Vancouver in 1792-93, did the most systematic charting of the coast. After an international tussle, the British eventually gained control of what would later become the coast of British Columbia. Colonization and settlement began in the 19th century, although British Columbia’s Central and Northern Coast is still not heavily populated. 
Logging, fishing, and tourism are the primary industries, though with the decline in stocks and automation in the forest, fewer people live here now than in previous decades. After a disastrous decline in Native populations (by as much as 90 percent in some nations) that began over a century ago due to infectious diseases such as smallpox and tuberculosis, today’s numbers match those of precontact times. BC Ferries sails from its southern terminus in Port Hardy, on the northern tip of Vancouver Island, 250 miles (400 km) north of Nanaimo on Highway 19. The drive from Nanaimo to Port Hardy takes four to five hours. The turnoff for the Port Hardy BC Ferries terminal is at Bear Cove, almost 2 miles (3 km) south of the town of Port Hardy. The ferry’s northern terminus, Prince Rupert, is 450 miles (725 km) west of Prince George on Highway 16. Numerous cruise ships ply the waters of the 314-mile (507-km) Inside Passage en route to Alaska. The BC Ferries vessel may not rival the QE II in size, but it is majestic enough to carry freight trailers, family sedans, recreational vehicles, motorcycles, and touring bicycles. Passengers boarding in Port Hardy for the trip to Prince Rupert include the usual manifest of adventure-hungry world travellers you’d expect to find boarding a ferry in British Columbia, bolstered, depending on the season, by a contingent of tree planters. By the conclusion of the journey, you’ll probably be on nodding, if not full-blown speaking, terms with many of your fellow passengers. Aside from a short stretch of open ocean between Vancouver Island and Rivers Inlet, where the Central Coast archipelago begins, the route north to Prince Rupert leads through a narrow maze of channels, passes, and reaches. Snow and ice coat the peaks of the mountains, and their shoulders plunge to the tideline. So rugged is most of this coast that if you were exploring here by kayak, you’d be challenged to find a welcoming landing site.
Passengers should keep their eyes peeled for a whale or dolphin in Queen Charlotte Sound. With luck you might even see a white-coated Kermode bear on Princess Royal Island’s lengthy shoreline. M/V Northern Expedition is BC Ferries’ newest vessel to ply the waters of British Columbia’s Inside Passage. The new 150-metre ship accommodates 130 vehicles and 600 passengers. Among its many features, the Northern Expedition offers 55 modern staterooms (cabins are reserved in advance and usually book up fast) and an expanded range of food services and other amenities to delight local residents and tourists alike. Passengers will enjoy the spacious cafeteria, called the Canoe Cafe, as well as the Vista Restaurant. The Raven’s Lounge offers TV viewing, while the reserved-seating Aurora Lounge boasts wonderful views and reclining chairs, perfect for taking in the sweeping vistas of northern B.C. You’ll find a great selection of unique treasures that capture the essence of the north coast, including gifts, clothing, books, jewellery and treats for everyone, at the Passages Gift Shop. Together the Northern Expedition and the Northern Adventure deliver a cruise-like travel experience on the northern routes. Stops at Ocean Falls and McLoughlin Bay in early spring and late fall prolong the daylong journey, but also lead to enjoyable scenery as the ferry threads her way through the Inside Passage. Services aboard the ferry include a dining room with a full buffet, a licenced lounge, a cafeteria, cabins, a children’s playroom, public showers, and a video arcade. If you are travelling with a vehicle, reservations are a must. Visit our Transportation section for more information.
Come the end of May, when ferry service to ports on the Central Coast is shouldered by the Queen of Chilliwack (on the Discovery Coast Passage route, inaugurated in 1996), there are no stops between Port Hardy and Prince Rupert, with its connections to Haida Gwaii (formerly the Queen Charlotte Islands) and Alaska. That’s a good thing. The ferry has become so popular with summer travellers that everything needs to click in order to keep to the demanding schedule. Circle Tours: See the best of Northern BC on one of the Circle Tours that capture the wonders of the north. The Northern BC Circle Tour incorporates the Alaska Highway through the Rocky Mountain foothills to Watson Lake in the Yukon, linking with the Stewart/Cassiar Highway and Yellowhead Highway 16 in the south. The Inside Passage Tour and The Native Heritage Tour follow the same route, from Port Hardy on Vancouver Island north by ferry to Prince Rupert. Catch another ferry to Haida Gwaii (Queen Charlotte Islands), or venture east on the Yellowhead Highway to Prince George, and south through the peaceful Cariboo to Vancouver along the historic Cariboo Wagon Road. Circle Tours in British Columbia. Location: The Inside Passage stretches from Port Hardy on Vancouver Island to Prince Rupert in Northern BC, on the central and northern coastline of British Columbia. B.C. Ferries operates a year-round ferry service between Port Hardy and Prince Rupert, with stops on the Discovery Coast. The following communities are located in or near the Inside Passage (south to north):
You may need surgery for severe gum disease (periodontitis) if it cannot be cured with antibiotics or root planing and scaling. A flap procedure cleans the roots of a tooth and repairs bone damage caused by gum disease. A gum specialist (periodontist) or an oral surgeon often performs the procedure. Before the procedure, you will be given a local anesthetic to numb the area where the doctor will work on your gums. The doctor will pull back a section of your gums to clean the roots of your teeth and repair damaged bone, if needed. The gum flap will be sewn back into place and covered with gauze to stop the bleeding. Bone may be: - Smoothed and reshaped so that plaque has fewer places to grow. - Repaired (grafted) with bone from another part of the body or with man-made materials. The doctor may place a lining on the bone graft to help the bone grow back. The lining may need to be removed later. What To Expect After Surgery Typically it takes only a few days to recover from a flap procedure. Be sure to follow the home care instructions that your dentist or oral surgeon gives you. If you have questions about your instructions, call the dentist or surgeon. The following are general suggestions to help speed recovery: - Take painkillers as prescribed. - After 24 hours, you can rinse your mouth gently with warm salt water several times a day to reduce swelling and relieve pain. - Change gauze pads before they become soaked with blood. - Relax after surgery. Strenuous physical activity may increase bleeding. - Eat soft foods such as gelatin, pudding, or light soup. Gradually add solid foods to your diet as the area heals. - Do not lie flat. This may prolong bleeding. Prop up your head with pillows. - Continue to carefully brush your teeth and tongue. - Apply an ice or cold pack to the outside of your mouth to help relieve pain and swelling. - Do not use sucking motions, such as when using a straw to drink. - Do not smoke. 
A few days after the procedure, your dentist will remove the stitches. Why It Is Done The flap procedure is necessary when severe gum disease (periodontitis) has damaged the bones that support your teeth. How Well It Works If you maintain good dental care after the surgery, the flap procedure should allow you to clean your teeth and gums better. Your gums should become pink and healthy again. The roots of your teeth may become more sensitive. The contour or shape of your gums may change. Gum surgery can introduce harmful bacteria into the bloodstream. Gum tissue is also at risk of infection. You may need to take antibiotics before and after surgery if you have a condition that puts you at high risk for a severe infection or if infections are particularly dangerous for you. You may need to take antibiotics if you: - Have certain heart problems that make it dangerous for you to get a heart infection called endocarditis. - Have an impaired immune system. - Had recent major surgeries or have man-made body parts, such as an artificial hip or heart valve. What To Think About - A flap procedure is often needed to save teeth that are supported by bone damaged by gum disease. - Gum disease usually will come back if you do not brush and floss regularly after surgery. - To promote healing, stop all use of tobacco. Smoking or using spit tobacco decreases your ability to fight infection of your gums and delays healing. To learn more, see the topic Quitting Smoking. - You will need to see your dentist regularly so that he or she can follow your progress. If your gum disease spreads, you may lose teeth. Primary Medical Reviewer: Adam Husney, MD - Family Medicine. Specialist Medical Reviewer: Steven K. Patterson, BS, DDS, MPH - Dentistry. Current as of November 20, 2015.
I have posted quite a bit about the ongoing collapse of the Arctic sea ice. You can view the images I assembled of the progressive loss of sea ice at Cafe Verd Arctic. Here I explain why it is important, not only to walruses and polar bears, but to human beings who don’t live near the Arctic. Keep in mind that this is somewhat simplified, because I don’t want to bog this down with complicated formulas. To be honest, the math is over my head. But the climatologists know what they’re talking about. The links lead to more detailed explanations. Sea Ice – This Gets a Little Technical How well a surface reflects light is referred to as albedo. An albedo of 0.0 means all light is absorbed, 1.0 means all light is reflected. Sea ice has an albedo of 0.5 to 0.7, meaning 50% to 70% of sunlight is reflected away. Snow cover can increase that to 90%. Open ocean, on the other hand, has an albedo of about 0.06, so 94% of light is absorbed. Clearly, how much of the sea surface is covered with ice will make a big difference in how much solar energy is retained. The Arctic Ocean used to be mostly frozen over year-round. This meant that a large part of the sunlight was reflected back into space. In April 2012, ice extent had recovered to near the long-term average, but it fell rapidly in June, and by August 26 it had already broken the all-time record minimum in extent*, area*, and volume. Area is most important for albedo, but the volume is based on how thick the ice is. First-year ice is thin, less than a meter. If it survives the summer melt, it gets thicker over the cold winter. Old ice can be five meters thick, but most of the ice remaining is only a year or two old, which melts rapidly in the spring and summer. This graph shows how the current warming is a departure from the norm. Courtesy of Suffolk County Community College. As the air temperature in the Arctic has been increasing, the summer ice extent has been falling.
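To make the albedo figures above concrete, here is a back-of-the-envelope sketch. The values are the ones quoted in the text (0.9 for snow-covered ice, 0.5-0.7 for bare sea ice, 0.06 for open ocean); this is an illustration of the arithmetic, not a climate model:

```python
# Fraction of sunlight absorbed = 1 - albedo, using the figures above.
albedo = {
    "snow-covered ice": 0.90,
    "bare sea ice":     0.60,   # midpoint of the 0.5-0.7 range
    "open ocean":       0.06,
}

for surface, a in albedo.items():
    print(f"{surface}: reflects {a:.0%}, absorbs {1 - a:.0%}")

# Swapping snow-covered ice for open water multiplies absorption:
ratio = (1 - albedo["open ocean"]) / (1 - albedo["snow-covered ice"])
print(f"open water absorbs roughly {ratio:.1f}x as much solar energy")
```

Roughly a ninefold jump in absorbed energy per unit area is what drives the feedback loop discussed next.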
With less ice cover, more of the sunlight goes into heating the sea. That carries over—not only does new ice form later in the fall, it is weaker in the spring. All the ice is exposed to warmer air above and warmer water underneath, so the thin ice disappears quickly and the thick ice gets thinner. We have gotten to a point, by all measures, that this feedback process is likely irreversible, a tipping point. That is, even if the air temperature stopped increasing, even if the Arctic started cooling again, the ice loss would continue for many years. The only thing that could slow the warming now would be a major volcanic eruption, which would act to reflect some sunlight and cool the atmosphere for a year or two. The amount of CO2 that we have already added to the atmosphere guarantees that global temperature will continue to rise for many years. Once the Arctic is ice-free in September, all the sun’s energy that was going into melting ice will go to heat the air and water. So the ice-free period will get longer, and yet more energy will be absorbed by the sea. The tipping point has been passed, and the process will play out until we reach a new equilibrium with much less ice in winter, and little or none in summer. Not to Change the Subject, But… What about the huge increase in Antarctic sea ice? Antarctic sea ice continues to reach record high extents. So that balances out the Arctic ice loss, right? Well, remember that September is the tail end of winter in the Southern Hemisphere, so the ice extent runs opposite that of the Northern Hemisphere. Here is what that record extent in 2012 looks like on a graph: Tamino has an in-depth discussion of how these extents affect total albedo and the resulting effect on temperatures at the higher latitudes, the upshot being that the Arctic decrease outweighs the Antarctic increase by a lot. That means a net positive feedback, reinforcing the greenhouse effect.
The Jet Stream and Outflows So why should this matter, if you don’t live in the far north? Here we get into some genuine debate, as to exactly how these changes will affect the Earth outside of the Arctic. There is some evidence that the jet stream may be changing, meandering more and changing position more slowly, which causes weather patterns to remain stuck, leading to more severe droughts and storm events. Changing temperature differentials may lead to more outflows of cold Arctic air to surrounding areas, causing some places to have colder winters while the world as a whole warms. Exactly how the ice loss will affect the globe is unknown, but there is no doubt that there will be disruptive changes. The tremendous decline in Arctic ice is an indication of the severity of climate change/global warming. The sea ice collapse is a tipping point. As we will not be able to reverse the process, we must deal with the consequences. However, there are other tipping points approaching. The sooner we get serious about reducing CO2 emissions, the more likely we can avoid triggering those events. We are already committed to another degree (Centigrade) of temperature rise. If we were to continue with “business as usual” through the end of the century, we would be dealing with an increase of perhaps 6°. That would be disastrous. These are the Facts - Global temperature is increasing, at a rate unprecedented in human history, - This will have serious consequences for humans and the environment in general, - Human activity is the primary cause, and - There are steps we can take to reduce the severity of the problem. Now, let’s get started dealing with it.
When such acts [that desecrate Shabbat in order to save a life] are done, they are not to be done by gentiles or by children… but by great Torah scholars and wise people. It is forbidden to delay when desecrating Shabbat for the sake of a dangerously ill person, for the verse says (Leviticus 18:5): “These are the commandments which you shall do and live,” and not die because of them. From this we learn that the laws of the Torah are not in order to punish the world but to bring love, kindness, and peace to the world. As for those heretics who claim that such an act is a desecration of the Sabbath, about them the verse says (Ezekiel 20): “I gave them laws that are no good and rules by which they cannot live.”

Mishneh Torah, Shabbat 2:3
How a person speaks is a reflection of who that person is. But speech is not just a means of display, like a peacock's plumage. We don't just speak at each other to prove how articulate or forceful or clever we are. We speak to each other. Dialogue is what we get when we engage in that singularly human exercise of speaking to each other. It's dialogue that allows us to have the most complex interactions and relationships— and the most agonizing misunderstandings.

Language is, of course, meant to communicate, and no matter how often we misinterpret each other, we keep on trying to connect through words. But language provides more than connections. It also powers action. Anyone who has tried to find an address in a foreign city knows how essential conversation is to getting something done. So let's go beyond individual voice and speak of voices: arguing, agreeing, jawing, joking— making conversation that matters.

You might rent some videos with snappy dialogue, like the screwball comedies of the 30s, or David Mamet's films. Listen for the reaction pauses in those lightning-quick exchanges, and see if you can use them for rhythm and balance in your own witty repartee. You'll probably also notice the repetition that links one line to the next like a drumbeat:

"So I say, baby, let the good times roll!"

"Right. Let 'em roll. I know how that works. You let those good times roll right over you, and tomorrow I'll find you plastered on the sidewalk."

Consider some purposes of conversations in your book (the purposes to the conversants, not just to your story), e.g., persuasion, intimidation, comfort, seduction, alliance-building, information exchange, time-passing, boasting.... Just keep focused on the results of this dialogue: what this conversation can do to these characters.

Here are some effects that can come right from conversation, without any further action.

• A conspiracy to do something.
• A breakup.
• An alliance.
• A change in vote or position.
• A discovery of the key to a puzzle.
• A deepening mystery.
• A misinterpretation.
• A revelation of a secret.
• A change in attitude.
• A change in behavior.
• A flirtation.
• A deception.
• A surrender.

1) Consider some purposes of conversations in your book (the purposes to the conversants, not just to your story), e.g., persuasion, intimidation, comfort, seduction, alliance-building, information exchange, time-passing, boasting.... Choose one purpose and craft a conversation in which the purpose is not fulfilled-- but which still advances the plot in some way.

2) List ways your characters might interact in conversation, e.g., fight, deceive-doubt, interrogate-resist, sweettalk-resist, sweettalk-succumb, comfort-accept, mutual flattery. Choose one and craft a conversation that shows the relationship changing in some way because of the interaction.

For example, John is trying to confide in his mother. He confesses his big secret-- that he got a tattoo on his buttocks a few months ago, and he thinks something went wrong.

"Mom, do you know anything about, well, hepatitis?"

"Hepatitis? I know it's a disease drug ad-- I mean, I know it's a disease. Why? Are you, umm, maybe doing a report for school?"

"What is it, sweetie? Come on, tell me. You know you can tell me anything. I might get mad, but you know it never lasts. I'm your mother. I love you no matter what, remember? And if you need help, well, I'll get it for you."

"I know. I know. Okay, I'll tell you. Just promise not to get mad, okay? I mean, you can get mad if you have to, but don't get too mad. I-- I don't know what to do!"

Mom can sense, probably from her son's tone of voice, that this is serious. So she stops herself from saying something inflammatory about drug addicts, and reminds him instead of her unwavering love. This keeps him from pulling away defensively, and makes him realize that he can trust her to help him out of the trouble his secrecy has gotten him into.
Their relationship will be strengthened by this, because they are both being reminded of what that essential parent-child bond means.

3) Revise to make the change in relationship more clear.

Dialogue, just like narrative, can cause things to happen in the story-- and SHOULD. :) A conversation, an overheard whisper, a ringing declaration, can make the plot go in a new direction. Striving for this can just about instantly vitalize your dialogue by making it more than just clever conversation. It will be... ACTION.

You can probably come up with other ways dialogue can cause change. But the important thing is--make the dialogue you have serve that purpose. Look at the passages, especially the long ones, and see how they can affect the plot either now or later. (That lie she tells in chapter 2 sure better come back to haunt her in chapter 10 or so!)

One other thought-- make the characters work at it. The key to effective dialogue, I think, is that the speakers have to spark a bit off each other to get to the change-point. Otherwise you could just summarize it in narrative: -- She told him about the paper hidden in the Bible.-- But if you're going to have dialogue, make the tension in it lead to the change, propel them towards change.

"Give me that back! You can't just rifle through my Bible that way!"

TYPES OF DIALOGUE

ACTION AND INTERACTION

Remember John Barnes's definition? He's a theater historian, so he's used to plays, where dialogue is all-important. ACTION is any irreversible event that changes the course of events of the story. Key words: IRREVERSIBLE — CHANGES.

So Jack speaks his confession into a recorder, then instead of hitting playback, he rewinds and records over it: No go. That's not action because it's reversible. But if Sally is hiding under the bed and hears him dictating, he can rewind all he likes, but she still knows the truth, and will now be able to act on it. That's irreversible dialogue.
Anything spoken aloud and heard by someone else is irreversible. But that means anything he says just to himself doesn't count. Introspection is well and good, but he can always take it back. His thoughts have to be heard to be irreversible. He can speak them aloud, or act on them… only then does a thought become irreversible.

Harder still is making sure that dialogue has an effect, that it changes something not just in the plot, but in the relationship. How can you accomplish that? First, start by deciding that you're not going to have long stretches of dialogue that just display how funny this guy is, or show how well they get along, or pass on to the reader some necessary information. All that is fine, but think how the conversation will crackle when the reader realizes that this moment of conversation is going to change something.

What sort of change can a conversation bring? Especially in a comedy, making information exchange a conversation of conflict can provide a bit of humor. Here's an example from a historical novel:

"Jane, do let me put my bonnet up. I have been out all day looking for your bir–" Lucy stopped and clapped her hand over her wayward mouth.

"My bir– my birthday gift? Oh, Aunt Lucy! What? What did you get me?"

"Your birthday isn't for three days."

"Oh, tell me now! Tell me!" Jane put her little hands to her heart. "I promise to be good!"

How long does Lucy hold out before she tells what the gift is? Now there's bound to be an information exchange, but it isn't just a quick spill– there's conflict, and character revelation, and lots of whining before she imparts the important fact. What's important is that the story changes somehow because one character has passed on some information to the other. So make something happen as a result of this exchange.
The niece insists on going to the stable to see the birthday horse, and there she meets the young Mr. Ferguson, nephew of the best friend of Lucy's late husband. Eventually this "seed" conversation can lead to a change in their relationship, where the younger lady becomes more adventurous than her aunt. Using that same story progression, here are some common events that happen because of the action and interaction in dialogue.

Discovery is another form of information exchange, but instead of just passing on what one already knows, it results in a revelation of something neither speaker knew. Talking together helps them put together pieces of a puzzle.

"The stablemaster writes to say Jane didn't attend her riding lesson today," Lucy said, staring at the note as trepidation filled her.

Captain Ferguson frowned. "You know, that must have been your Jane I saw in my nephew's curricle! I thought it looked like her, but I assumed you had her well-chaperoned."

"They are courting!"

Discovery requires that both contribute some essential fact, and the sum is a new piece of information. The conversation is active because, without this particular sharing of facts, the truth would never come out. This use of dialogue is especially good when you want both to participate in the discovery of some event or clue. It gives them a way to cooperate, to produce something together, and in a romance can subtly show how well they work together.

A conversation can also result in an alliance of interests. It's most fun if the conversation leads them to realize they need to work together, especially if that's a frightening prospect.

"I don't care what you say, Captain Ferguson." Lucy looked implacably at him. "My sister sent Jane to me so that her daughter can marry well. And I regret to say that a penniless young lieutenant isn't going to suit."

"You think I want my nephew shackling himself to some twittery little snob?"

"My niece is not–" Lucy stopped and listened to the echo of his words.
Then, slowly, she said, "You don't want this marriage either?"

It's best that they start out somewhat at odds, so the conversation brings them to alliance. Thus, in the course of the dialogue scene, they move from adversaries to reluctant allies.

Sometimes when two people realize they have a common interest, they end up conspiring together. This involves agreeing tacitly or openly to work together more or less in secret. So the concerned aunt and uncle above might agree to work to stop the wedding. They're creating a shared goal and a plan to achieve it. Take the conversation further if you can. A plan requires action, so as they're arguing and negotiating the steps involved in stopping the wedding, you'll be showing them learning to work together– and where they're in conflict.

"I remember when I was nineteen," Captain Ferguson observed, as if it was a century ago and not just a decade. "I would never have let a relative tell me whom I could court."

Lucy sighed. "Jane is just that way. She thrives on opposition. A very dear girl, but..." She glanced over and could see that Captain Ferguson was struggling manfully not to say that this must be a family trait. She said, "They are counting on us to object, aren't they? So why don't we ... surprise them?"

"You mean, pretend that we are in favor of the match?" Captain Ferguson frowned in thought. "Well, I can't think of anything more likely to make Joseph think twice, than me telling him that Jane is a perfect wife."

Lucy said decisively, "Let's then. Let's take every opportunity to throw them together."

"Do you attend the Haversham musicale tomorrow night? We can insist they sit together. With both of us nearby, of course, so as not to excite suspicion."

Conspiracies lead to joint action. Use this conversation to set up regular meetings between them, for example, where they have to act together to further their shared goal. Secrecy only adds to the fun of their meetings.
Maybe your characters are getting along way too well, especially if they're conspiring. Well, bring on a conversation that leads to greater conflict. But don't make it trivial. Oh, the surface-level topic might be trivial, but see if you can make their responses reflect some internal conflicts.

Lucy declared, "Everyone in my family gets married at St. George's."

"Since we plan that they won't actually get wedded, what difference does it make? It will be easier to set the wedding outside London– easier to cancel it, that is, with the least fanfare."

"Jane will think I disapprove if I set the ceremony anywhere but St. George's."

He regarded her with narrowed eyes. "Your wedding was in St. George's, I seem to recall." He added, "It rained. All day."

"This is England, Captain Ferguson," she said coldly. "It frequently rains here, and not just outside of St. George's. If you hadn't left in the middle of the ceremony, you would have seen that we made a game of it, leaving the church under our umbrellas."

"A game. Yes. I've observed that you considered marriage itself a game, Mrs. Endicott." She gasped, but he was going on as if he cared not that he had just impugned her virtue. "No St. George's. I will not hear of it. I will not have my nephew even consider marrying in the place where you married my poor dead fool of a best friend!"

Again, aim for some change in their relationship. They start out thinking they can clear this little problem up, but find that actually, the more they talk, the more at odds they are– and it will be especially interesting if it reveals why they are really in conflict. Conflict is the fuel that powers the plot, but you can't have them always fighting, or the reader will start to suspect these two have no reason to ally. If they have been at odds, then a conversation can lead to some kind of truce, reluctant or not.
Again, there must be change from the state at the beginning of the conversation to another state at the end.

"Gretna Green?" Lucy whispered. "They've eloped?"

"Damnation. They've got a two-hour head start on me."

Lucy grabbed up her bonnet. "I'm going too."

"Nonsense," he said. He couldn't imagine even a few hours alone with Lucy. They would do nothing but argue, and every angry word would put new scars in his heart.

"Let me go along," she said. "It might spare Jane's reputation if I'm there to bring her home."

He stood irresolute, his hand on the door. Finally he muttered, "We will do them no good if we show up fighting like Napoleon's artillery against Wellington's cavalry."

She smiled suddenly, sadly. "I promise to be civil to you. If you promise to be civil back."

"Oh, all right."

"Let's take your phaeton. It will be faster."

A treaty should lead to some shared decision– taking his phaeton, for example– to show that their cooperation is not just talk.

Remember that the act of lying is, in itself, irreversible. That is, once it's done, it's very hard to take back, and the resulting mess of admitting to the lie or being caught in it can be extreme. So if one character is deceiving the other, see if you can make him lie directly in conversation. Speaking it aloud makes him commit more to the deception because he cannot take it back now. But make sure the deception has an effect on the plot. For example, she relies on what he has told her to make a decision or take an action, or, alternatively, she recognizes it as a lie, and his deception destroys her trust in him. Or she challenges him and forces him to tell her the truth.

"You never told me about when John died." She looked grimly at the road ahead. "I should know. I am his widow."

Captain Ferguson's fists closed more tightly on the reins. "You saw the commendation. He died a hero."

"Yes. That's what the commendation said. That he died saving someone. But you were there. Whom did he save?"
He recalled John protecting his Portuguese mistress with his body as the grenade exploded nearby. "He saved me."

"That is very gallant, Captain. Untrue, but gallant." Lucy turned her merciless gaze on him. "Tell me why you are lying."

Just keep in mind that a lie will almost always be revealed as a lie, sooner or later. As President Nixon said (and boy, did he know!), it's not the crime but the cover-up that gets you in trouble. The very fact that one character lied to the other, even with the best of motives, should create conflict – within the liar while it's still secret, and within the relationship when it's revealed. The revelation of the lie will bring to the surface issues with trust and honor that might have been buried for years. So if there's a lie, have it revealed early enough that there is time for them to work through its consequences.

You can't take back telling the truth either. So a conversation where a long-hidden truth is revealed will lead to real change. Just remember to set this up earlier, whether it involves alluding to a secret or posing a question, such as why Captain Ferguson stalked out of his best friend's wedding.

They gazed at the sign welcoming them to Gretna Green, Scotland's most famous elopement site.

"So Jane and Charlie now hate each other and refuse to speak, much less marry." Lucy sighed. "I almost started believing in love at first sight again, imagining them wed. But–"

"But now, you are made a cynic all over again." He smiled down at her. "And we still have that damnable church reserved." Suddenly he took her in his arms. "What do you say, Mrs. Endicott? Shall we make use of the reservation ourselves?"

Lucy opened her mouth, then closed it again. Finally she pressed her cheek against his chest and whispered, "A wedding? You? And I?"

"I haven't been, I suppose, entirely honest with you."

"I know about John's mistress," she said.

"I don't mean that. I mean– oh, hang it all, Lucy. I love you. I've loved you all along. I walked out of St.
George's that day because I couldn't bear to see you marrying anyone else, especially my best friend."

"Oh." She took a deep breath as she felt his heartbeat beneath her cheek. "You know, I don't truly like St. George's Church."

"It always rains there."

"Yes, I've noticed that."

"Look." Lucy pulled away long enough to gesture at the sky. "The sun is shining now. And I hear they know how to give weddings here in Gretna –"

The truth can't be taken back. It's possible for the listener to misinterpret, but even then, the conversation should always have some effect, should change the characters and their actions. The moment one or both speaks openly about a secret (love, or the trauma in the past, or the conflict between them)– well, that's the truth the reader's been waiting for. Take your time with this conversation. Think of the revelation as the irrevocable and dangerous telling of a secret truth, with potentially dire consequences. And leave a little time to show the actually wonderful consequences awaiting the character brave enough to tell the truth.

Dialogue takes up a lot of space in a book, and is particularly appealing to readers, as it reveals character in so many ways. So don't waste the space.

RELATIONSHIPS IN PROCESS

The people we talk to the most are the ones we have the most trouble understanding, right?
That's because we tend to hear all sorts of echoes from the past. We also have more than one purpose in talking to a loved one— we might want information and reassurance. We might even want to fight a little. These are some ways people interact in conversation: mutual flattery, mutual insult. A married couple, for example, has had this conversation a dozen times before. They even finish each other's sentences.

"Want to stay up and watch it?"

"Yeah, sure. Just flip off the light--"

"So you can rest your eyes. I know, I know. I just want to hear the Top Ten list."

Try to establish the familiarity, then throw some wrench into it--change it so it's no longer a rote conversation but actually becomes an interaction fraught with potential action:

"So who's Colbert interviewing tonight?"

"Let's see what it says in the TV Guide. Hmm. That new action star, Tim Gordon--"

"Tim Gordon? You know, I went on a blind date with him once. My brush with fame, I guess. He wanted to go out again, but I turned him down because you and I had gotten back together."

"You never told me that."

"It didn't matter, did it, when he was a nobody. I never knew he'd end up being a star."

"So what you're saying is-- you wish you'd gone with him that night instead of me?"

Now it's not so familiar, is it? You can have one overreact because of something out of their shared past-- that will hint at an unresolved conflict. Take pains to avoid the clichéd exchange of insults. That gets old fast, and seldom results in either the true deepening or the true resolving of conflicts. Instead, make this conversation cause some change in the relationship. For example, one speaker can finally break an old pattern by responding to an old provocation in a new way-- asking a question, or walking out, or sympathizing. Think CHANGE.

Choose a scene from your story that involves two people in some conflict with each other.

1) Think of this relationship at this point in the story.
How will their conversation reflect their current feelings about each other, and their reasons for being together?

2) Is this encounter cooperative or confrontational? Are they working together or against each other? How can you show their reluctant alliance, or their hostility, or their friendly competition in their dialogue?

3) Are both equally open and forthcoming, or is one keeping secrets? If there's a secret being kept, can you indicate that in the dialogue? No, don't let the other character in on it, but can you have the secretive one start to say something, then abruptly change the subject, indicating to the reader that there's something hidden there?

4) What emotion or attitude is each character trying to convey? Trying to hide? Is that coming out in their speech?

5) How well do they know each other? How does this affect their verbal interaction? If they know each other well, what can you do to make this a unique conversation? If they don't know each other, do you show in their dialogue openness or distrust or wariness or excitement or something that means this encounter has great meaning?

6) Do you show the relationship changing at least a little because of this encounter? At the end, for example, does she feel trusting enough now to confide in him? Or maybe he's figured out she must be the thief because she's spoken so familiarly of the layout of the museum? Does the way they talk shift because of this change in the relationship?