Bloating is that uncomfortable feeling that occurs when gas builds up and gets trapped in the digestive tract. It can lead to a swollen feeling that may be accompanied by abdominal pain and/or lowered self-esteem and body confidence. Here are 5 easy ways to beat the bloat and feel better today!

1. Increase your fibre intake. Fibre can improve your gut health and reduce bloating by slowing the processing of meals in the digestive tract and keeping you fuller for longer. Try incorporating different types of fibre, including soluble fibre foods such as bananas, legumes, chia seeds and oats, paired with insoluble fibre foods like multigrain bread, wholemeal pasta, brown rice, nuts and seeds. Increase your fibre intake gradually, otherwise it may make things worse instead of better. Recommended daily fibre intake is 25g for women and 30g for men.

2. Reduce your sodium (salt) intake, since excess sodium from high-salt foods can cause water retention, discomfort and, inevitably, bloating. Australian adults are advised to consume less than 2000mg of sodium per day (equivalent to 5000mg or 5g of salt, which is ~1 tsp). Incorporate foods naturally low in sodium such as fresh fruits and vegetables, wholegrains, milk and yogurt, legumes and lean meats. Try to avoid using sauces and seasonings in cooking and adding salt at the table. Remember to use the nutrition information panel on the back of food packages and products to check the sodium content of foods.

3. Drink an adequate amount of water every day, as this helps to flush excess sodium and other waste from the body. Aim for at least 2L of water a day and replace sugary drinks with water; it is an easy way to meet your body's fluid requirements.

4. Bloating can be caused by eating past our level of fullness, so try eating slowly to help prevent bloating after meals and snacks. Taking time to eat helps ensure that food is chewed adequately and digested properly. Try mindful eating practices such as avoiding eating on the go, while driving, watching TV or looking at your phone.

5. If you are experiencing any other prolonged symptoms with bloating, such as diarrhoea, constipation and/or abdominal pain, these could be signs of a food intolerance or allergy, and it is important to consult a dietitian. A dietitian can help identify what is causing these problems and help you develop a personalised plan to overcome them.

Try out these 5 easy ways to relieve bloating! If symptoms worsen or are prolonged, reach out to a dietitian for more personalised advice. This blog was co-written by Amy Gibson, registered dietitian, and Jessica Wolff, a Nutrition and Dietetics student at the University of Queensland.
Disposing of cat litter can be a tricky task. Not sure where to put your cat's dirty litter? After doing some research, we can advise that the only safe place to dispose of cat litter is in the trash can outside of your home. It is best to double-bag the litter and move it to an outside location to prevent a bad smell in your home.

Why can't I put dirty litter anywhere else?
- It is not advised to place litter inside your kitchen trash because it can contain parasites and could contaminate other things in your kitchen.
- It is not acceptable to dispose of litter in your yard because it can contaminate groundwater.
- It is most hazardous to flush litter down your toilet. Cat litter clumps and expands when wet and can clog your pipes or damage a septic system (even the brands that say "flushable").
- Newer water-saving toilets don't use enough water or pressure to move the litter through your pipes.
- Cat waste hardens and dries out quickly, which makes it difficult to move through pipes as they bend and turn.
- Flushing litter can introduce a parasite named Toxoplasma gondii into the water supply. T. gondii is commonly found in cats, and water treatment plants may not be able to remove all of the parasite from the water. This increases the possibility of a public infection. Those most at risk are children, elderly people, and pregnant women.

The EPA has classified pet waste as a pollutant that can harm fish and wildlife, kill native vegetation, and make water unsafe.

What prompted us to write this? We received a call last week from a customer complaining about clogged pipes. We went to the home the same day we received the call. Upon cabling the customer's line, our technician pulled back cat litter. This made us wonder how many other customers are unaware of the havoc that litter can cause.
Negative Psychological Reactions

Getting lost can be an incredibly frightening and unnerving experience. Studies have shown that people universally become agitated and upset on some level when they lose their way. This type of psychological stress goes hand in hand with wilderness survival. You can think of your stress response in the wilderness like a stove: when turned on, it gets things cooking, but too much too fast will burn. When you realize you've entered a survival situation, resist panic and take a few minutes to plan. If your stress level doesn't overheat, so to speak, it can actually help you. Although stress usually gets a bad rap, it can produce positive results in the short term. Once our brains recognize that we face a threatening circumstance, the hypothalamus goes to work, producing our fight-or-flight response. The hypothalamus sits in the mid-region of the brain base and, among its other job titles, is regulator of hormone secretion. It triggers the adrenal glands to release hormones including adrenaline and cortisol. Adrenaline boosts your heart rate and blood pressure, causes the liver to release stored energy in the form of glucose and sends blood to your large muscle groups. Cortisol tempers the bodily functions that aren't necessary when you're in a serious bind, such as digestion and growth. During fight-or-flight situations, your pupils dilate, and your visual scope narrows, decreasing the number of things you notice. The response impairs fine and complex motor skills as well, giving more energy to larger movements, such as lifting or running. For brief periods of time, these hormones can send us into "Incredible Hulk" mode. In survival situations, our unconscious stress response can prod us to eliminate the immediate threats to our safety by building shelter, making fire and evading wild animals. In fact, people actually function at peak performance under the right amount of stress because of these physiological effects.
But the stress-performance gradient looks like an arch. That means that while humans work well under stress, too much sends us sliding down a slippery slope that can end in a mental and physical freeze-up. Because of this balance, the long-term effects of stress could be more threatening to your survival than any grizzly. Continual release of stress hormones leaves you physically and mentally exhausted when you should be conserving energy. After the initial stress eases, your parasympathetic nervous system kicks back in to regulate those functions that the cortisol constricted. This entire process saps your strength, especially when it happens over and over again. Prolonged cortisol exposure can also promote depression. Once your mental state deteriorates, so goes your will to live. In some life or death survival conditions, that determination can save you. Next up, we'll look at what happens in our brains when we turn that frown upside down, and why survival experts preach the word of positive mental attitude.
Who is Ayako Uehara? Ayako Uehara is a classical pianist.
- also known as 斎藤彩子
- born in Japan (34 years ago)
- nationality: Japanese
- profession: pianist
- married to 斎藤孝史
- official website: www.universal-music.co.jp/uehara-ayako
Interstar Bricks Construction Toy for Age 3+
- Make fascinating 3D models with these familiar brick shapes and wheels in vibrant colours.
- The 'teeth' on each piece slot easily into other pieces to hold them together firmly.
- 20 pieces in different shapes and sizes allow children to form amazing 3D geometric structures.
- Includes model suggestions.
- Tube size: 32.5cm x 9cm.

Helps develop hand-eye coordination, dexterity, concentration and imaginative play.
Donald Trump’s campaign, while preposterous and inconsistent, has thrown a bright light on a social divide in America that is deeper and more severe than most of us realized. There is no doubt that Trump is a narcissistic ass who has the ethics of an alley cat with a grossly enlarged libido. Nonetheless, despite his bombastic claims and morally degenerate behavior, many Americans are willing to turn a blind eye to his racial, ethnic, and misogynist hate mongering, and believe that “Donald really tells it like it is.” People, we know something’s wrong, but let’s not forget what is right. Clearly, the great social experiment of equal opportunity and mutual respect in our Democracy is far less mature than it ought to be at this point in our history. What has happened to stir up such vile and destructive emotions among so many people? I believe that the current, worsening social divide is a product of an even more stark divide in the United States, namely the economic chasm between the wealthy and the rest of us. During most of the 20th Century, the strength of our nation was largely drawn from a large and vibrant middle class that bridged the divide between the rich and the poor. That middle class provided a vital pathway to success for people in poverty who wanted to better their conditions. It provided the reachable goal of helping a next generation live better than the parents had. Anyone could get ahead through honest work and intestinal fortitude. There was credible reason for hope. But the dawn of both the information age that supplanted our manufacturing economy and the political shift of tax burdens from the rich to the middle class, changed the role and nature of the class structure in America. Today, the poor have little means or hope to achieve upward mobility. They have been nearly completely disenfranchised from the American Dream. They are surrounded by greedy predators, like Mr. 
Trump, who create bogus “universities” touting the promise of a bright future. Unsuspecting and well-meaning people take on student loans to pay for an ultimately bogus education and a useless degree. Because student loans are not forgiven by bankruptcy, people least able to pay back loans trade their hopeful futures for a lifetime of debt. And the rich get richer. While the poor lose hope, many in the middle class face the legitimate fear of falling below the poverty line to join the ranks of the hopeless. The middle class is not shrinking because more people are getting rich. The erosion of the middle class is from the bottom, not the top. Meanwhile, the rich buy the allegiance of lawmakers in local, state, and national political arenas and quietly arrange the rules of the land in ways that allow them to systematically siphon off more wealth from the middle class and the poor. Nowhere is that more evident than in the tax codes of all levels of government. There is no doubt that the top 10% of wealthy Americans pay significantly less than their fair share of taxes. Don’t listen to politicians or political pundits who, like Mr. Trump, make up false statistics to the contrary. Fully 50% or more of the wealth of this nation is owned by that 10%. By far, more taxes are collected from our 50% than are collected from their 50%. End of story! This economic disaster began with the Ronald Reagan notion that if we cut taxes for the wealthy, all kinds of great things would happen. They would invest all that untaxed money to create tons of new jobs for millions of people. Some of the wealth of the top 10% would “trickle down” like soft rain from a cloud, nourishing us all. It simply didn’t happen then, and it will not happen now. The rich don’t use their money to create rainfalls for anyone. Most use money to get more money. Few, if any, people benefited more financially than the rich during the end of the 20th Century and the beginning of the 21st.
They have built their rigged system. All they need do now is maintain it. The spiral we are in, where the rich profit at the expense of the rest of us, can only be stopped if the rest of us, 90% of the electorate, demand that:
- Tax codes are simplified and the wealthy are made to pay their fair share without exception. The purpose of taxes in a democracy includes being a mechanism for a healthy distribution of wealth among the electorate. That needs to happen.
- The power of free speech is equitably shared by all. One’s right to speak freely is sacrosanct, but the decibel level of one person’s speech should not be allowed to drown out the speech of others. Free speech needs to include the concept of an equal right to be heard. That’s where the power resides. That is why we desperately need to impose limits on how much money any one individual or group of individuals, or a corporation or a union, or a political action committee can contribute to the decibel level of any one candidate. The idea that free speech in America includes the right to drown out the speech of others who may not have a megaphone is dangerous and ethically contrary to the bedrock upon which this nation was built.
- Good jobs are created by government to repair and improve the infrastructure we all depend on (even the rich) for transportation, communication, education, law enforcement, welfare, and commerce.
- Education for the information age is made widely available and affordable, if not free in many cases.

The gulf between greed and simple comfort is wide. The gulf between greed and happiness is wider. Being part of the solution is always better than being part of the problem.
PPS – Process Planning System

Optimal, Easy and Reliable Planning of Processes

The integrated Process Planning System (PPS) by CSB-System is a powerful tool for fast, secure and demand-oriented production planning. Whether long-, medium- or short-term planning – you can calculate detailed planning data for comprehensive management and control of your production processes. This gives you an up-to-date overview of the quantities to be produced as well as the throughput times of your orders, your current stock on hand and the capacity utilization of your departments and machines. The data required for planning is made available online via the central ERP system in a freely configurable, clear and well-structured planning matrix. Due to its modular design with a planning matrix and an optimization graphic, the PPS can be used without any problems in both small and large planning environments. The challenges in production are complex and not always predictable. This makes accurate production planning all the more important. The Process Planning System by CSB-System helps you improve your production processes continuously and recognize optimization potential at an early stage.

Your Benefits at a Glance:
- Cost reductions through optimization of product wait times, setup times and production idle times
- Maximization of stock turnover and, as a result, increased liquidity
- Profit maximization through integrated forecasting of contribution margins
- Increased cash flow through reduced stock on hand and optimized throughput times
- Increased utilization of resources and capacities
1. Where did freedman Rutherford Calhoun work when he was a slave?
2. What does Rutherford say New Orleans women always smell like?
3. When he arrives in New Orleans, where does Rutherford try, but fail, to find work?
4. What does Rutherford eventually take up as a career?
5. What did Reverend Chandler, Rutherford's master, try to teach him while he was growing up on the plantation?
6. Rutherford warns the reader not to be too ________________ his way of life.
7. While in New Orleans, where does Rutherford like to hang out?
Inner Growth Mindset Insight of the Day: Give Yourself Permission To Shine

Exploring giving yourself permission to shine; how an inner growth journey and mindset assist you; why it takes you, time, patience, and lots of love; what heart and mind flow have to do with it; and how this also gets you to tap into your infinite human potential.

Your Inner Growth Mindset Insight of The Day:
- Inner Growth Mindset Topic of The Day: Every time you are you, you shine, with or without your conscious permission. The parts that don't feel like you are shining are the very areas you get to grow and learn about, bring love to, and then expand or transform, and so on. Shining through the heart, which is the essence of you, and bringing it into harmony with your mind, allows a flow to come forth, when you're ready, that will get you to give yourself permission to shine as bright as you want, for as long as you want, and more.
- Inner Growth Mindset Exploration of The Day: Explore more on the topic on the Inspiring Human Potential website.
- Inner Growth Mindset Exercise of The Day: Inspiring Human Potential Patreon member access – Find out how to apply the inner growth mindset to unlock your infinite potential from within.

Your turn – Share your experience with your inner growth journey😊

For access to the Inspiring Human Potential – Inner Growth Exercises of The Day, click here.
Languages with a slow edit/compile/link/test development loop tend to require sophisticated tracing/stepping debuggers to facilitate debugging. A much better (faster) way in fast-compiling languages is to add printing code at well-selected places, let the program run, look at the output, see where things went wrong, add more printing code, etc., until the bug is found. The simple debugging aids provided in debugs.fs are meant to support this style of debugging.

~~ prints debugging information (by default the source location and the stack contents). It is easy to insert. If you use Emacs it is also easy to remove (C-x ~ in the Emacs Forth mode to query-replace them with nothing). The deferred words printdebugdata and .debugline control the output of ~~. The default source location output format works well with Emacs' compilation mode, so you can step through the program at the source level using C-x ` (the advantage over a stepping debugger is that you can step in any direction and you know where the crash has happened or where the strange data has occurred).

~~ ( -- )  gforth  "tilde-tilde"
    Prints the source code location of the ~~ and the stack contents.

printdebugdata ( -- )  gforth  "print-debug-data"
    Print the data stack contents.

.debugline ( nfile nline -- )  gforth  "print-debug-line"
    Print the source code location indicated by nfile nline, and additional debugging information; the default prints the additional information with printdebugdata.

~~ (and assertions) will usually print the wrong file name if a marker is executed in the same file after their occurrence. They will print `*somewhere*' as file name if a marker is executed in the same file before their occurrence.
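As a sketch of the style described above (`foo` here is a hypothetical word under suspicion, not part of debugs.fs):

```forth
\ Sprinkle ~~ between phrases; each one reports its source
\ location and the current stack contents when reached.
: foo ( n1 n2 -- n )
  ~~ +        \ stack printed before the addition
  ~~ dup *    \ and again before squaring the sum
  ~~ ;
1 2 foo .     \ comparing successive ~~ outputs localizes the bug
```

Once the bug is found, query-replacing the `~~` markers with nothing (C-x ~ in the Emacs Forth mode) restores the original source.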
List Price: $6.95
Sale Price: $1.44
Availability: Usually ships in 1-2 business days

Master Skills Thinking Skills for students in grade 6 is the perfect workbook to help children achieve mastery of the critical thinking skills necessary to succeed in school! Designed by educational experts specifically for children in grade 6, this essential workbook teaches children basic critical thinking concepts and skills and then offers a variety of activities for skill-and-drill practice. Skills covered include vocabulary, main idea, fact and opinion, outlining, and dictionary skills. Its 128 pages feature challenging lesson content with real-life applications, easy-to-understand directions, and a complete answer key.

The Master Skills series has drawn national acclaim for its vivid illustrations, challenging lesson content and real-life applications. It spans grades K through 6 in six key subject areas: reading comprehension, English, math, reading, spelling & writing, and thinking skills. It is the perfect workbook series to teach children learning fundamentals.
Professor: Dr. Hirotsugu Uchida
Prerequisites: EEC 205
Catalog Description: Economic analysis of policies that address environmental and natural resource problems. Topics include pollution control policies, economic incentives, and the optimal use of renewable and nonrenewable natural resources.

This course will introduce how economists view, think about, and analyze natural resource management, including sustainable extraction and conservation, and environmental issues such as pollution and climate change. Almost every aspect of human activity—from housing, eating, commuting, working, and leisure—involves impacts on nature and natural resources. The quest to find the balance between “acceptable” living standards and sustainable resource use and conservation has never been greater. However, this is easier said than done (if it were not, the problems would have been solved by now). We need to understand why people behave the way they do, and only from that insight can we determine what can be done to alter people’s current behavior. In other words, we need economic analysis of the problem. This is the most advanced undergraduate course on environmental and natural resource economics, and it is also the capstone course for EEC majors. As such, the material presented will be rigorous in economic concepts, and basic-level calculus will be involved. By the end of the semester students will be able to:
- Analyze sustainable extraction and conservation of natural resources
- Conduct economic analysis of policy issues of energy, climate change and other natural resource use
- Identify economic trade-offs with regard to environmental and natural resource issues

- Introduction and warm-up
- Class overview
- Basic calculus
- Optimization using the Lagrangian method
- Public goods and externality
- Managing Common Pool Resources: Sector Allocation in Fishery
- Why are common pool resources (CPRs) vulnerable to overexploitation?
- The state of world and US fisheries - Optimal management: MSY, MEY, static and dynamic - Privatization: economists’ view of the solution - Do consumers have any role? - Potential and issue of “sector allocation” - Fair Externality? Global Climate Change and Economic Development - Why do we tend to over-pollute? - Taxonomy of pollution: point- and nonpoint-source - Optimal level of pollution: MDC and MAC - Quantity or Price? Pigouvian tax and permit trading - Kuznets Curve: pollution by developing countries - Global dimming: hidden side of global climate change? - Energy: Technology, Individual Behavior, and Everything in Between - Optimal way to consume oil? The economics of non-renewable resources - Back-stop price and technological change - Alternative energy: nuclear, wind, solar, footsteps? - Ultimate challenge: changing people’s behavior Dr. Hirotsugu Uchida Department of Environmental and Natural Resource Economics 212 Kingston Coastal Institute Dr. Uchida is an environmental economist who specializes in natural resource economics. His current research interests are: - fisheries management; - seafood markets and ecolabel; and - sustainable use of ecosystem services and economic development. This course will provide you with the opportunity to acquire the competencies required to analyze hot policy debates in current environmental issues. - Eco RI News - This is a nonprofit Web-based news agency that provides information about environmental issues, their consequences, and potential solutions. 
- World Bank: World Development Report and Climate Change - The World Bank is an international organization made up of two development institutions, the International Bank for Reconstruction and Development (IBRD) and the International Development Association (IDA), owned by 187 member countries with the stated goal of “…advancing the vision of inclusive and sustainable globalization.” Their website offers a comprehensive report on world development and its relation to climate change.
- Dimming the Sun - This is a media production credited to the filmmaker David Sington that investigates the evidence for a phenomenon called “global dimming” and its implications for climate change.
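Returning to the syllabus topic "Optimal level of pollution: MDC and MAC" above, the idea can be sketched numerically. Assuming, purely for illustration, linear marginal damage cost (MDC) and marginal abatement cost (MAC) curves (these functional forms and parameter values are my assumption, not the course's):

```python
# Hypothetical linear curves: MDC rises with emissions q, while MAC
# falls as more emissions are tolerated (less abatement is needed).
def mdc(q, a=2.0):
    """Marginal damage cost of the q-th unit of emissions."""
    return a * q

def mac(q, b=1.0, q_max=30.0):
    """Marginal abatement cost of removing the q-th unit of emissions."""
    return b * (q_max - q)

# The optimal level of pollution equates the two curves:
#   a*q = b*(q_max - q)  =>  q* = b*q_max / (a + b)
q_star = 1.0 * 30.0 / (2.0 + 1.0)
print(q_star)        # 10.0
print(mdc(q_star))   # 20.0 -- a Pigouvian tax at this rate decentralizes q*
```

In this frictionless sketch, a permit-trading scheme that issues q* permits would reach the same allocation through the equilibrium permit price, illustrating the syllabus's "Quantity or Price?" question.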
Question: What is ventricular dyssynchrony, what effect does it have on heart failure, and how is it diagnosed/treated?

Answer: Ventricular dyssynchrony is something that often happens in patients who have heart failure. It is essentially the inability of the heart to beat in a coordinated fashion. This can be remedied in many cases using cardiac resynchronization therapy. This is simply a pacemaker that helps force the heart to beat in a coordinated manner. The advantages of having cardiac resynchronization therapy -- or CRT -- are that the patient can feel better, have less shortness of breath, and potentially walk farther.
“Serenity is not freedom from the storm, but peace amid the storm.”—Unknown

Everyone experiences stress in life, and people with diabetes are no exception. In fact, diabetes itself can be a source of much stress. While stress is often perceived as a negative thing, at times it can also have positive aspects, such as when it motivates a person to take positive action. The challenge, therefore, is not to seek a life with no stress but to learn to deal with the stresses life hands us. Dealing with stress effectively is particularly important for people with diabetes because stress can have an effect on blood glucose control. Learning stress-reduction techniques can be a useful part of your diabetes management plan.

Effects of stress

Stress can be defined as a demand on physical or mental energy. Injury, illness, infection, and surgery are some examples of physical stresses. Mental stresses may include difficulties with relationships, job pressures, financial strain, and even concerns about self-worth. Your body naturally responds to stress, whether mental or physical, by “defending” itself with what is often called the fight-or-flight response. In essence, your body prepares itself to either run away from danger or fight off an attack. As part of the fight-or-flight response, so-called stress hormones, including epinephrine, cortisol, and glucagon, are secreted, which increases heart rate, blood pressure, and blood glucose levels and dilates the small passageways of the lungs. In the short term, this gives the body the extra oxygen and energy it needs to cope with stress. But if a person with diabetes doesn’t have enough insulin circulating in his bloodstream to enable his cells to use the extra glucose, the result will be high blood glucose. Some stresses are short term. For example, you may feel stressed before and while you are taking an exam but quickly relax once it’s over.
Short-term stresses may or may not affect your blood glucose control, and they’re less likely to if stress relief is prompt. But some stresses can last over a long period of time. Things like dealing with financial problems or job insecurity or recovering from an illness that requires several months of rehabilitation can cause prolonged stress. When stress becomes long term or chronic, the stress hormones are secreted over a long period of time, and the result can be chronic high blood glucose. If long-term (or short-term) stress is affecting your blood glucose control, speak to your diabetes care team about how to manage your blood glucose during times of stress. In addition to its direct effect on blood glucose levels, stress can affect diabetes control in indirect ways as well. Some people experience poor sleep habits when they feel stressed, and disruptions in usual sleep patterns can decrease energy levels and interfere with your normal routine. People often do not take good care of themselves when under stress. For example, you may drink more alcohol or exercise less frequently when feeling stressed, both of which can affect blood glucose levels. You may also not be as attentive to daily diabetes self-management tasks, such as checking your blood glucose levels or making healthy food choices. The end result can be either high blood glucose, particularly if you tend to eat more or exercise less when stressed, or hypoglycemia, if you tend to skip meals or pay less attention to matching insulin doses to meals or activity.

Dealing with stress

One way to deal with stress is to identify what is causing it and to look for a way to change the situation. Family problems, a boss or coworker who is difficult to work with, or financial commitments that are out of control can all create stress. Changing situations like these might involve letting others know how their behavior is affecting you, seeking family or financial counseling, or looking for a new job.
If a physical stress, such as an infection or illness, is affecting your blood glucose control or your health generally, getting prompt medical attention to treat the problem is in order.
I fell in love with math when I was a fifth grader. A dear friend of mine challenged me to complete all the math problems in our workbook, even though our teacher had only started to teach the first few pages. I didn't realize that accepting the challenge would change the rest of my life. Now, I understand why some kids may feel encouraged when their friends challenge them to a match. If your kids can't resist a challenge, you might appreciate my app pick for today: Math Champ Challenge. It's a math workbook app with competitive features applicable to a duel or even a classroom setting, for Grades 4 to 7.

Once you have selected a Grade that you want to play, you can choose one of five difficulty levels from Easy to Super Nerd. In each level, the app generates 30 random questions covering the entire Common Core State Standards math curriculum for that Grade. All questions are in multiple-choice format with one correct answer. Each question has a timer, mostly less than a minute. When you choose the correct option, the remaining time is converted into your score. If you manage to answer within a few seconds, you might even get bonus points for being fast. But if your answer is incorrect, you get a penalty (minus points), reducing your total score. At the end of each level, you get a summary of areas where you need to practice more.

The Skill Builder

Skill Builder is a separate section of the app where you can practice your skills without being timed. The questions may also be open-ended, where you enter your answer using the keyboard instead of choosing from the available options. After you select a Grade, you will see five domains, each representing several clusters of similar topics. For example, in Grades 4 and 5, you will see domains such as Algebraic Thinking, Fractions, and Geometry. For each practice session, you can select any number of domains at the same time.
If you select more than one domain, the app will mix questions from these domains into the practice set. The Skill Builder also has a feature called Personal Learning Environment, which provides questions with adaptive difficulty. If you manage to solve many questions in a cluster/domain, the app will generate more difficult questions. Otherwise, the difficulty level will remain roughly the same.

Parents Need to Know

Math Champ Challenge is released in two versions: free and paid. The free version includes the Easy challenges for Grades 4-7. If you want to play all the other levels, you need to unlock them via a single in-app purchase. I'd recommend you get the paid version, called School Edition, instead, because it has unlimited access to all the content, including the Skill Builder. The app supports multiple player profiles. In fact, the name School Edition suggests that the app can be used in a classroom setting. All achievements and badges that have been awarded are tracked separately for each profile.

Things I Like

I like the app's approach of including a blackboard section in the bottom half of the screen, called the Work Area. It's a perfect area for you to calculate your way to the final solution. If you're playing it with your junior, you can also draw illustrations to translate the word problems into a more visual or mathematical model. The app has a total of 2,500+ questions built into it. As an adult who loves math, it took me hours to play through all the content in the Skill Builder and Challenges for Grade 4. I believe it would take juniors of that Grade many days (or even weeks) to play through all the content. And once they're ready, they may even start practicing the higher-level content.

Math Champ Challenge is a high-quality math workbook app with built-in competitive features that are perfect for kids in Grades 4-7. Its support for multiple user profiles allows you to use it in a classroom setting.
I'd recommend it for anyone whose kids are in that age range. It's a must-have math workbook app. Get Math Champ Challenge on the App Store. App was provided for our honest review.
Can I send or receive money while my PayPal account is limited? When your PayPal account is limited, some features may be unavailable, affecting your ability to send and receive money. To learn what you can do with your account while it's limited: - Go to Summary. - Click the 'Notifications' icon at the top of the page. - Click Learn how to remove this limitation. A table on this page explains what you can and cannot do while your account is limited.
The impact of evening markets on traffic flow along Namirembe road and its implication to planning and design. This study examines the impact of evening markets on traffic flow and its implications for planning and design. It examines the operation of evening markets along Namirembe road, assesses the effects of these evening markets on traffic flow, evaluates the implications of the Namirembe road evening market for planning and design, and suggests measures for controlling and regulating the Namirembe road evening markets so as to reduce their impact on traffic flow. The study does this through literature review and the administration of questionnaires; it also involved mapping, interviewing, participatory observation, photography, recording and measurement. These helped to gather information and different perceptions, and to gain knowledge and experience while in the field. The study presents research findings such as how the street vendors acquired their space and the types of items sold. It also shows the effects of the Namirembe road evening markets, their implications for planning and design, and ways to improve traffic flow along Namirembe road. The study therefore recommends that KCCA should construct affordable markets for vendors and that politics should be removed from vending activities, among other measures.
EKB-569 (Pelitinib), an irreversible EGFR tyrosine kinase inhibitor, has shown potential therapeutic efficacy in solid tumors. However, its cell-killing potential in combination with radiotherapy and the underlying molecular orchestration remain to be explored. The objective of this study was to determine the effect of EKB-569 on ionizing radiation (IR)-associated NFκB-dependent cell death. SCC-4 and SCC-9 cells exposed to IR (2 Gy) with and without EKB-569 treatment were analyzed for transactivation of 88 NFκB pathway molecules, NFκB DNA-binding activity, translation of the NFκB downstream mediators Birc1, 2 and 5, cell viability, metabolic activity and apoptosis. Selective targeting of IR-induced NFκB by EKB-569 and its influence on cell fate were assessed by overexpressing (p50/p65) and silencing (ΔIκBα) NFκB. QPCR profiling after IR exposure revealed a significant induction of 74 NFκB signal transduction molecules. Of those, 72 were suppressed with EKB-569. EMSA revealed a dose-dependent inhibition of NFκB by EKB-569. More importantly, EKB-569 inhibited IR-induced NFκB in a dose-dependent manner, and this inhibition was sustained up to at least 72 h. Immunoblotting revealed a significant suppression of IR-induced Birc1, 2 and 5 by EKB-569. We observed a dose-dependent inhibition of cell viability and metabolic activity, and induction of apoptosis, with EKB-569. EKB-569 significantly enhanced IR-induced cell death and apoptosis. Blocking NFκB improved IR-induced cell death. Conversely, NFκB overexpression negates EKB-569-induced cell killing. Together, these pre-clinical data suggest that EKB-569 is a radiosensitizer of squamous cell carcinoma and may mechanistically involve selective targeting of IR-induced NFκB-dependent survival signaling. Further pre-clinical in-vivo studies are warranted. Citation: Aravindan N, Thomas CR Jr, Aravindan S, Mohan AS, Veeraraghavan J, et al. 
(2011) Irreversible EGFR Inhibitor EKB-569 Targets Low-LET γ-Radiation-Triggered Rel Orchestration and Potentiates Cell Death in Squamous Cell Carcinoma. PLoS ONE 6(12): e29705. doi:10.1371/journal.pone.0029705 Editor: Christina Lynn Addison, Ottawa Hospital Research Institute, Canada Received: August 8, 2011; Accepted: December 1, 2011; Published: December 29, 2011 Copyright: © 2011 Aravindan et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was supported, in whole or in part, by National Institutes of Health Grant R01 CA112175 (to M.N.) and funds from the Office of Science (Biological and Environmental Research), United States Department of Energy Grant No. DE-FG03-02ER63449 (to M.N.). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Head and neck squamous cell carcinoma (HNSCC) is the sixth most common cancer in the world and accounts for 90% of malignant neoplasias of the upper respiratory system. Despite recent advances in the management of locally advanced HNSCC, the overall survival of patients has improved only marginally over the past three decades, mainly due to the development of therapy-induced chemo- and radioresistance. On that note, in recent years there has been substantial interest in developing novel therapeutic agents that specifically target growth factor pathways that are dysregulated in tumor cells. Such targeted “biological” agents might offer alternative treatment options for patients refractory to chemoradiotherapy. 
Also, with unique mechanisms of action and toxicity profiles that generally do not overlap, targeted agents and standard therapies can be used in combination to enhance overall treatment efficacy and prevent dose reduction. Because many solid tumors, including HNSCC, have hyperactivated epidermal growth factor receptor (EGFR), there has been great interest in the use of EGFR inhibitors to control cancer growth. EGFR is a 170 kDa glycoprotein containing an extracellular ligand-binding domain and an intracellular tyrosine kinase (TK) domain. Upon binding to ligands such as EGF or TGFα, EGFR dimerizes with itself (homodimers) or other members of the family such as c-ErbB-2 (heterodimers). Upon dimerization, TK activity increases and the receptor is autophosphorylated at tyrosine residues. Phosphorylated EGFR (p-EGFR), like other activated receptor TKs, is involved in the phosphorylation and activation of several signal transduction pathways including phosphoinositide 3-kinase-AKT, extracellular signal-regulated kinases 1 and 2 (ERK1/2), and the signal transducer and activator of transcription 3 (STAT3). Activation of these signal transduction pathways subsequently activates key transcriptional machineries such as NFκB that promote tumor growth and progression by inducing inhibition of apoptosis, proliferation, maturation, clonal expansion, invasion, and metastasis. NFκB is a member of the c-rel proto-oncogene family whose binding sites are found within the promoter and enhancer regions of a wide variety of genes involved in proliferation, cell cycle control, oncogenic activation, cell growth, differentiation and metastasis. NFκB is retained in the cytoplasm by association with the inhibitory protein IκB. On phosphorylation, IκB is ubiquitinated and subsequently degraded by the 26S proteasome, resulting in the liberation of NFκB. NFκB can then enter the nucleus to regulate the expression of downstream genes. 
Elevated NFκB activity has been linked with tumor resistance to chemotherapy and IR in a number of cancer types, including head and neck cancer. Conversely, inhibition of NFκB favors pro-apoptotic processes, decreases growth and clonogenic survival, and enhances chemo/radiosensitivity. In addition to this persistent activation of growth-promoting signaling pathways, the development of HNSCC also involves the accumulation of genetic and epigenetic alterations in tumor-suppressor proteins. The activation of EGFR is a frequent event in HNSCC and has provided the molecular basis for current efforts aimed at evaluating the clinical activity of EGFR inhibitors in HNSCC. However, to date, the role of EGFR-dependent NFκB in the functional orchestration of HNSCC progression and metastasis is poorly understood. Since NFκB is able to regulate more than 150 genes and can functionally orchestrate many steps in carcinogenesis, tumor progression and metastasis, it is important to delineate the efficacy of potential EGFR-TK inhibitors that target the NFκB-dependent HNSCC cell survival advantage. The two most commonly employed strategies in drug development are introducing covalent (irreversible) binding of the drug to its target and broadening the receptor tyrosine kinase targets affected by the drug within the cell. Currently, the second generation of EGFR TKI compounds is emerging from the drug development pipeline and being introduced into clinical trials. Many of these second-generation compounds form tighter covalent bonds with their target, which should theoretically increase their effectiveness by prolonging the inhibition of EGFR signaling to the entire lifespan of the drug-bound receptor molecule. In cell culture systems, such irreversibly binding TKIs can effectively kill cells that have acquired resistance to first-generation TKIs. 
As per the other common theme of drug development, second-generation EGFR TKIs have been developed that, in addition to blocking EGFR signaling, target multiple kinases in the ErbB family. The signaling network that emerges from the ErbB family of transmembrane TK receptors (of which EGFR is a member) is large, interconnected, and redundant, with many possible routes between the ligand at the cell surface and the message destination within the nucleus. It is this diversity in possible signal transduction routes that allows a cell to have flexibility and, in the case of cancer cells treated with anticancer agents, facilitates resistant cell clones that bypass the inhibited receptor. Blocking multiple signaling pathways with either a combination of agents or a single but multi-targeted agent has been synergistic in its effects in preclinical models. Second-generation EGFR TKIs have been developed that target additional members of the ErbB family or other downstream or parallel pathways such as the NFκB pathway. EKB-569 (Pelitinib; WAY-172569), a 4-dimethylamino-but-2-enoic acid [4-(3-chloro-4-fluorophenylamino)-3-cyano-7-ethoxy-quinolin-6-yl]-amide, is one such second-generation, irreversibly binding inhibitor of EGFR TK activity. In this study, we examined the efficacy of EKB-569 in inhibiting ionizing radiation (IR)-induced NFκB activity, in modulating the transcription of 88 NFκB-dependent signal transduction molecules, in activating the translation of the NFκB-mediated downstream Birc1, 2 and 5 proteins, and in reducing cell viability and metabolic activity and inducing apoptosis. Further, we delineated the selective targeting of IR-induced NFκB by EKB-569 and its direct influence on HNSCC cell fate. 
Materials and Methods

Human tongue squamous cell carcinoma SCC-4 and SCC-9 cells were obtained from ATCC (Manassas, VA) and maintained as monolayer cultures in DMEM/F-12 50/50 (Mediatech Inc., Herndon, VA) growth medium supplemented with 1.5 g/L sodium bicarbonate, 2 mM L-glutamine, 15 mM HEPES, 1% NEAA, 1% MEM vitamins, 5000 I.U/ml penicillin/5000 µg/ml streptomycin, 1% sodium pyruvate, and 10% FBS (Invitrogen, Carlsbad, CA). For passage and for all experiments, the cells were detached using trypsin (0.25%)/EDTA (1%), resuspended in complete medium, counted (Countess, Invitrogen) and incubated in a 95% air/5% CO2 humidified incubator. SCC-4 and SCC-9 cells were exposed to 2 Gy using a Gamma Cell 40 Exactor (Nordion International Inc, Ontario, Canada) at a dose rate of 0.81 Gy/min. Irradiated cells were examined for IR-induced alterations in NFκB signal transduction, selective yet sustained NFκB activity, NFκB's role in survival advantage, and the efficacy of EKB-569 against IR-induced NFκB-dependent HNSCC progression. Mock-irradiated cells were treated identically except that the cells were not subjected to IR. Irradiated cells were incubated at 37°C for an additional 1, 3, 6, 24, 48 and 72 h. All experiments were repeated at least three times in each group.

Plasmid preparation and DNA Transfection

Transient transfection of NFκB p65 and p50 subunits was carried out by the lipofection method using Effectene™ reagent (Qiagen, Inc., Valencia, CA) as described in our earlier studies. NFκB inhibition was achieved using transient transfection of the S32A/S36A double mutant IκBα (ΔIκBα, Upstate Biotechnology, Lake Placid, NY) as reported in our earlier studies. The mutated form of IκBα with serine-to-alanine mutations at residues 32 and 36 does not undergo signal-induced phosphorylation and thus remains bound to NFκB, preventing nuclear translocation and DNA binding. After 18 h, the transfection medium was replaced with growth medium before IR. 
Electrophoretic Mobility Shift Assay (EMSA)

Nuclear protein extraction and electrophoretic mobility shift assays for NFκB, AP-1 and SP-1 were performed as described in our earlier studies. Autoradiograms were overexposed in order to reveal the low inhibitory effects that were below the constitutive level. Densitometry analysis was performed using a BioRad Multi-Analyst software package with an integrated density program. Group-wise comparisons were made using ANOVA with Tukey's post-hoc correction. A P value of <0.05 is considered statistically significant. For the competition assay, the nuclear extract was pre-incubated with unlabeled homologous NFκB oligonucleotide followed by addition of [γ-32P]-ATP-labeled NFκB probe. Supershift analysis was performed as described earlier. Total protein extraction and immunoblotting were performed as described in our earlier studies. Rabbit polyclonal anti-IκBα, Birc1, 2, 5 or Bax antibodies (Santa Cruz) were used to detect the respective protein expression levels between the EKB-treated, IR-exposed and control groups. Blots were stripped and reprobed with mouse monoclonal anti-α-tubulin antibody (Santa Cruz) to determine equal loading of the samples. One-dimensional gel analysis was performed using a BioRad Multi-Analyst software package with an integrated density program. Group-wise comparisons were made using ANOVA with Tukey's post-hoc correction. A P value of <0.05 is considered statistically significant.

Real-Time QPCR profiling of NFκB signaling pathway molecules

Total RNA extraction and real-time QPCR profiling were performed as described in our earlier studies. 
We used a human NFκB signaling pathway profiler (Realtimeprimers.com, Elkins Park, PA) containing 88 genes representing 8 functional groups including (i) Rel/NFκB/IκB family, (ii) NFκB responsive genes, (iii) Ligands & Transmembrane receptors, (iv) Adaptor proteins, (v) Signal transduction kinases, (vi) Transcription factors, (vii) Cell death/survival molecules, and (viii) Other factors. We started with this highly selected QPCR profiler instead of an all-encompassing gene array because the selected genes entail a well-characterized profile governing NFκB signal transduction and transcriptional targets, hence facilitating interpretation of data, simplifying data acquisition and analysis, and avoiding genes not functionally characterized. Furthermore, QPCR profiling allows detection and quantification of gene expression in real time. Each profiling plate was also equipped with a reverse transcription control, a positive PCR control, a genomic DNA control and five housekeeping genes - β-Actin, GAPDH, Rpl13a, HPRT1 and β2M. The ΔΔCt values were calculated by normalizing the gene expression levels to the expression of the housekeeping genes. The normalized data were then compared between groups, and the relative expression level of each gene was expressed as fold change. When comparing each gene's signal intensity between groups, we used a twofold or more (≥2 fold) increase or decrease to represent “stringent” criteria for upregulation or downregulation and an increase/decrease of <2 fold to represent “less stringent” criteria. Classifying gene regulation criteria in this manner can provide an index of reliability of the gene expression data.

The trypan blue dye exclusion assay was used to identify IR-modulated cell viability in HNSCC cells and, further, to determine the efficacy of EKB-569 in this setting. Cells exposed to IR alone and cells pre-treated with EKB-569 followed by exposure to IR were sequentially analyzed with the Countess automated cell counter (Carlsbad, CA). 
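The ΔΔCt normalization and the ≥2-fold classification described above are straightforward to compute. The sketch below is a generic illustration of the standard 2^(−ΔΔCt) relative-quantification method with made-up Ct values; the function names and numbers are hypothetical and are not the authors' actual analysis code.

```python
def fold_change(ct_gene_treated, ct_hk_treated, ct_gene_control, ct_hk_control):
    """Relative expression by the 2^(-DDCt) method.

    Ct values are PCR cycle thresholds; 'hk' is the housekeeping
    (reference) gene used for normalization.
    """
    dct_treated = ct_gene_treated - ct_hk_treated   # DCt in treated sample
    dct_control = ct_gene_control - ct_hk_control   # DCt in control sample
    ddct = dct_treated - dct_control                # DDCt
    return 2.0 ** (-ddct)                           # fold change vs. control


def classify(fold):
    """Apply the >=2-fold 'stringent' criterion used in the text."""
    if fold >= 2.0:
        return "upregulated (stringent)"
    if fold <= 0.5:
        return "downregulated (stringent)"
    return "less stringent change"


# Hypothetical Ct values: the target amplifies 3 cycles earlier
# (relative to the housekeeping gene) after treatment -> 8-fold up.
fc = fold_change(20.0, 18.0, 23.0, 18.0)
print(fc, classify(fc))  # -> 8.0 upregulated (stringent)
```

A lower Ct means earlier amplification and hence more starting template, which is why the exponent is negated.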
Furthermore, to determine the efficiency of EKB-569 in targeting IR-induced NFκB-dependent cell viability, the trypan blue exclusion assay was performed in NFκB-overexpressing HNSCC cells exposed to EKB-569. Group-wise comparisons were made using ANOVA with Tukey's post-hoc correction. A P value of <0.05 is considered statistically significant.

Cell survival by MTT assay

Cell survival was analyzed using the MTT assay as described in our previous studies. HNSCC cells at a density of 1000 cells/300 µl in a 24-well plate were either (i) mock-irradiated, (ii) exposed to IR alone, (iii) treated with EKB-569 (0.5, 1.0, 2.0 and 5.0 µg) alone, (iv) pretreated with EKB-569 (5.0 µg) followed by exposure to IR, (v) transfected with ΔIκBα prior to exposure to IR, or (vi) transfected with p50/p65 and treated with or without EKB-569. 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (30 µL/well from a 5 mg/mL stock) was added to the treated and/or exposed cells for 4 h at 24, 48 and 72 h post-IR. Solubilization of the converted purple formazan dye was accomplished by acid-isopropanol with continuous shaking at 37°C. The reaction product was quantified by measuring the absorbance at 570 nm using a Synergy II microplate reader (Biotek). Cell survival response was compared using ANOVA with Tukey's post-hoc correction.

Nuclear morphology by dual staining

SCC-4 cells (5×10^5 cells in 500 µl of complete growth medium) grown in a 4-well plate (Nunc) were either: 1) sham treated, 2) treated with EKB-569 (0.5–5.0 µg), 3) exposed to IR with or without prior EKB-569 treatment/ΔIκBα transfection, or 4) transfected with the p50/p65 subunits with or without prior EKB-569 treatment. The cells were analyzed for nuclear morphology as described earlier. In brief, the medium was replaced with fresh medium containing reduced serum (2%) without any added growth factors and incubated further for 16 h at 37°C in an air/CO2 incubator. 
The cells were then stained with acridine orange (1 µg/ml) and ethidium bromide (1 µg/ml) and immediately examined for the morphological characteristics of apoptosis at 200× magnification using an Olympus VANOX fluorescent microscope. Four morphological states were examined: (1) viable cells with normal nuclei (bright green chromatin with organized structure); (2) viable cells with apoptotic nuclei (green chromatin which is highly condensed and/or fragmented); (3) non-viable cells with normal nuclei (bright orange chromatin with organized structure); and (4) non-viable cells with apoptotic nuclei (bright orange chromatin which is highly condensed or fragmented).

EKB-569 selectively inhibits IR-induced persistent activation of NFκB

The effect of EKB-569 in selectively inhibiting IR-induced NFκB-DNA binding activity was elucidated using four different approaches. First, we investigated whether EKB-569, as a stand-alone compound, could modulate NFκB activity in both SCC-4 and SCC-9 HNSCC cells. Compared to untreated cells, EKB-569 treatment dose-dependently inhibited NFκB DNA binding activity, with substantial inhibition at 5.0 µg (Fig. 1A & B). Next, to unveil the radiosensitizing efficacy of EKB-569, HNSCC cells mock-irradiated, exposed to IR, or treated with EKB-569 (0.5, 1.0, 2.0 or 5.0 µg) and then exposed to IR were analyzed for alterations in NFκB activity. Unlike the mock-IR controls, IR at 2 Gy significantly (P<0.001) induced NFκB-DNA binding activity in both SCC-4 and SCC-9 cells (Figure 1 A & B, bottom panel). This IR-induced NFκB activity was drastically (P<0.001) inhibited by EKB-569 treatment in a dose-dependent manner (Fig. 1D) in both cell types. It is interesting to note that at the 5.0 µg concentration, EKB-569 completely suppressed IR-induced NFκB activity even below the constitutive (mock-IR) levels in this setting. 
Further, to delineate whether EKB-569 persistently inhibits IR-induced NFκB or whether there is recovery of IR-induced NFκB activity over time, SCC-4 cells pretreated with EKB-569 and exposed to IR were examined for 3 days post-radiation exposure. EKB-569-induced inhibition of IR-induced NFκB DNA-binding activity remained at the same decreased level at all time points investigated (Figure 1E). Densitometric analysis revealed a significant (P<0.001) inhibition of IR-induced NFκB DNA-binding activity up to at least 3 days post-radiation exposure (Fig. 1F). To confirm the specificity of the EMSA band seen in Figure 1 A and B, a competition binding assay was performed. The NFκB DNA-binding activity was competitively reduced to 47% and 36.4% by the addition of 0.02 and 0.2 pmol of homologous unlabeled NFκB-specific double-stranded oligonucleotide, respectively. Supershift analysis with p50 and p65 antibodies further confirmed that the gel-shifted bands are indeed NFκB (data not shown).

Figure 1. Effect of EKB-569 on radiation-modulated NFκB, AP-1 and SP-1 DNA binding activity. Representative autoradiograms showing the NFκB-DNA binding activity in the nuclear extracts of human SCC-4 cells (A) or SCC-9 cells (B) that were either treated with EKB-569 alone (upper panel) or in combination with IR (lower panel). NFκB-specific bands are indicated by an arrowhead. The autoradiogram was slightly overexposed to reveal EKB-inhibited NFκB-specific bands. Densitometric analysis of three independent experiments showing dose-dependent inhibition of NFκB-DNA binding activity in SCC-4 cells (C) and SCC-9 cells (D). (E) Time-dependent inhibition of NFκB-DNA binding activity in human SCC-4 cells by EKB-569 (5.0 µg) in the presence or absence of IR exposure. EMSA was carried out on the nuclear extract at 1, 3, 6, 24, 48 and 72 h post-exposure. 
(F) Representative autoradiogram from three independent experiments showing AP-1 DNA binding activity in SCC-4 cells treated with EKB-569 (1.0, 2.0 and 5.0 µg) or exposed to IR in the presence or absence of EKB-569. (G) Representative autoradiogram from three independent experiments showing SP-1 DNA binding activity in SCC-4 cells treated with EKB-569 (0.5, 1.0, 2.0 and 5.0 µg) or exposed to IR in the presence or absence of EKB-569. doi:10.1371/journal.pone.0029705.g001

Next, to demonstrate that the inhibition of the NFκB signaling pathway is not an EKB-569 compound-specific effect and that the proposed combination (IR and EGFR inhibition) can be carried forward to the clinic with other EGFR compounds, we incubated the SCC-4 cells with other commonly used irreversible EGFR blockers, afatinib and neratinib (HKI-272). Afatinib and neratinib dose-dependently inhibited NFκB DNA-binding activity (Figure 2 A, B, C & D). The inhibition of NFκB was found to be persistent up to at least 72 h (Figure 2H). To further validate whether EKB-569 directly inhibits NFκB activity, we examined its inhibitory effect on the activity of upstream kinases. Data presented in Figure 2G show that EKB-569 and the related EGFR inhibitors afatinib (300 nM) and neratinib (200 nM) significantly block the expression of the IR-induced upstream IκB kinase beta (IKK-β). Additionally, we confirmed that EKB-569-mediated inhibition of NFκB is EGFR-dependent. EGFR-inhibition experiments with a widely used specific EGFR inhibitor, PD153035, were performed to confirm the EGFR-mediated NFκB inhibition. Cells incubated with PD153035 at concentrations of 50, 75 and 100 nM clearly showed a significant decrease in radiation-induced NFκB DNA binding activity and mRNA expression, similar to cells incubated with EKB-569 (Figure 2 E&F).

Figure 2. Effect of EGFR inhibitors on NFκB DNA binding activity, EGFR mRNA, and EGFR and IKKβ protein levels. 
(A) Representative autoradiogram showing the NFκB-DNA binding activity in the nuclear extracts of human SCC-4 cells exposed to IR (2 Gy) or treated with 50, 100 or 200 nM HKI-272 (neratinib) prior to IR exposure. Neratinib treatment significantly inhibited IR-induced NFκB DNA binding activity (left panel). Representative autoradiogram showing the NFκB-DNA binding activity in human SCC-4 cells exposed to 50, 100 or 200 nM neratinib (right panel). Compared to the mock-IR cells, neratinib induced a dose-dependent suppression of NFκB activity in these cells. (B) Representative autoradiogram showing the NFκB-DNA binding activity in human SCC-4 cells exposed to IR with or without neratinib (200 nM) and harvested after 1, 3, 6, 24, 48 and 72 h. Neratinib persistently inhibited IR-induced NFκB-DNA binding activity at all time points investigated. (C) Representative autoradiogram showing the NFκB-DNA binding activity in human SCC-4 cells exposed to 100, 200 or 300 nM afatinib. Compared to the mock-IR cells, afatinib induced a dose-dependent suppression of NFκB activity (left panel). Representative autoradiogram showing the NFκB-DNA binding activity in SCC-4 cells exposed to IR or treated with 100, 200 or 300 nM afatinib and exposed to IR. Afatinib treatment significantly inhibited IR-induced NFκB DNA binding activity (right panel). (D) Representative autoradiogram showing the NFκB-DNA binding activity in human SCC-4 cells exposed to IR with or without afatinib (300 nM) and harvested after 1, 3, 6, 24, 48 and 72 h. Afatinib treatment persistently inhibited IR-induced NFκB-DNA binding activity at all time points investigated. (E) Representative autoradiogram showing the NFκB-DNA binding activity in human SCC-4 cells exposed to IR or treated with 50, 75 or 100 nM PD153035 hydrochloride (a potent EGFR-TK inhibitor) and exposed to IR. PD153035 treatment induced a significant dose-dependent inhibition of IR-induced NFκB DNA binding activity. 
(F) Real-time QPCR analysis showing EGFR mRNA levels in SCC-4 cells mock-irradiated, exposed to 2 Gy, and in cells treated either with EKB-569 (5.0 µg) or PD153035 (50 nM) and exposed to IR. (G) Immunoblot showing complete suppression of radiation-induced EGFR and IKKβ levels in SCC-4 cells pretreated with EKB-569 (5.0 µg), afatinib (300 nM), neratinib (200 nM) or PD153035 (75 nM). (H) QPCR analysis showing complete and sustained (up to 72 h) suppression of radiation-induced EGFR transcriptional levels in SCC-4 cells treated with either afatinib (300 nM) or neratinib (200 nM). doi:10.1371/journal.pone.0029705.g002

In order to determine whether EKB-569 selectively targets NFκB or the global transcription machinery in general, we analyzed the effect of EKB-569 on IR-modulated AP-1 and SP-1 transcription factors. SCC-4 cells mock-irradiated, treated with EKB-569 (0.5–5.0 µg), exposed to IR, or treated with EKB-569 (0.5–5.0 µg) and then exposed to IR were examined for AP-1 and SP-1 DNA binding activity (Figure 1 F&G). In contrast to the NFκB pathway response, EKB-569 by itself, without radiation exposure, failed to inhibit the constitutive levels of AP-1 DNA-binding activity. With regard to SP-1, EKB-569 inhibited its activity at the lower concentrations of 1 and 2 µg, but not at the higher (5 µg) concentration. More interestingly, the addition of EKB-569 further increased the IR-induced activation of AP-1 and SP-1. These results confirmed that the mechanism of EKB-569-mediated radiosensitization acts specifically through the NFκB pathway. 
EKB-569 inhibits IR-induced transcriptional modulation of NFκB signal transduction and pathway molecules in HNSCC cells

To further substantiate our findings of IR-induced NFκB activation and EKB-569-associated selective targeting, SCC-4 cells mock-irradiated, exposed to IR, or pretreated with EKB-569 (5.0 µg) and then exposed to IR were examined for transcriptional changes in 88 NFκB signal transduction and downstream target genes (Figure S1). Compared to mock-IR controls, IR exposure upregulated 74 genes and downregulated two genes, while having no effect on the expression of 12 genes. Although we originally intended to classify gene expression using both less stringent (overall) and stringent (≥2 fold) criteria, only one gene, Myd88, showed less than 2-fold (1.4-fold) upregulation, while the remaining 73 genes showed significant (≥2 fold) upregulation compared to untreated controls. Conversely, EKB-569 pre-treatment profoundly inhibited 72 of the 74 IR-induced genes in this setting (Figure 3). Interestingly, the expression of two genes, TLR4 and Ppm1A, was significantly increased with EKB-569. A plethora of scientific literature demonstrates the functional significance of these NFκB-dependent signaling and target molecules in tumor cell radioresistance, suggesting that inhibitory approaches against these molecules may benefit radiosensitization.

Figure 3. Real-time QPCR profiling: Histograms showing IR-induced NFκB-dependent downstream signal transduction molecules and the effect of EKB-569 (5.0 µg) on these IR-modulated genes in human SCC-4 cells. doi:10.1371/journal.pone.0029705.g003

EKB-569 regulates NFκB-dependent downstream Birc1, 2 and 5 and upregulates pro-apoptotic Bax in HNSCC cells

QPCR profiling demonstrated a significant inhibition of the IR-induced NFκB-dependent downstream pro-survival genes Birc2 and 5 upon EKB-569 treatment (Figure 3). 
In order to confirm the IR-induced modulations and to validate the functional significance of EKB-569-mediated regulation, we investigated whether the transcriptional modulation is in fact translated to the protein level. First, immunoblotting analysis confirmed the involvement of post-translational modification of IκB in IR-induced NFκB activation. Further, we observed a significant restoration of IR-depleted IκBα levels upon EKB-569 treatment. This correlated well with the induced NFκB activity data (Figure 1 A–D). Compared to mock-IR controls, we observed a significant induction of BIRC2 and 5 levels (Figure 4 A&B), reflecting and correlating well with their mRNA expression levels. More importantly, treatment with EKB-569 completely (P<0.001) inhibited IR-induced BIRC2 and 5 in SCC-4 cells. Though IR did not induce the expression of BIRC1 in this setting, we nonetheless observed inhibition of this protein with EKB-569. Conversely, we observed a significant induction of pro-apoptotic Bax in cells pre-treated with EKB-569.

Figure 4. Effect of EKB-569 on radiation-modulated pro-survival signaling molecules, cell viability, survival and/or death. (A) Representative immunoblot showing expression levels of IκBα, pro-apoptotic Bax and anti-apoptotic Birc1, 2 and 5 in human SCC-4 cells exposed to IR or treated with EKB-569 (5.0 µg) prior to IR exposure. α-tubulin was used to show equal loading of protein samples. (B) Semi-quantitative 1D gel analysis showing increased IκBα and Bax levels in EKB-569-treated cells. EKB-569 treatment significantly suppressed Birc1, 2 and 5 in these IR-exposed SCC-4 cells. (C) Histograms showing the percent cell viability in cells treated with EKB-569 (0.5, 1.0, 2.0 and 5.0 µg). EKB-569 inflicted a dose-dependent inhibition of cell viability in this setting. (D) Histograms showing the percent cell viability in cells either mock-irradiated, exposed to IR, or treated with EKB-569 (5.0 µg) and exposed to IR. 
Compared to the mock-IR cells, IR resulted in reduced cell viability. EKB-569 treatment significantly enhanced this IR-induced loss of cell viability. Cell viability was measured using the trypan blue dye exclusion assay and counted in the automated Countess cell counter. (E) Cell survival in mock-IR, EKB-569 (0.5, 1.0, 2.0 and 5.0 µg)-treated and irradiated cells with or without EKB-569 treatment. The MTT assay was used to analyze the induced cytotoxicity, and the reaction product was quantified by measuring the absorbance at 570 nm. Percent cell survival was calculated as (mean of test wells/mean of control wells) ×100 and compared using ANOVA. EKB-569 induced a dose-dependent inhibition of cell survival. Likewise, IR suppressed cell survival, and this IR-inhibited cell survival was further reduced by EKB-569 in a dose-dependent fashion. (F) Nuclear morphology with dual staining showing apoptotic characteristics in cells either mock-IR, treated with EKB-569 (0.5, 1.0, 2.0, 5.0 µg), exposed to IR, or treated with EKB-569 and exposed to IR. Insert: High-magnification photomicrographs showing chromatin with organized structures indicating viable cells with normal nuclei in untreated control cells and chromatin with blebbing, nuclear condensation, and fragmentation indicating typical apoptotic characteristics in cells treated with 5.0 µg of EKB-569 and exposed to IR. doi:10.1371/journal.pone.0029705.g004

EKB-569 confers radiosensitization in HNSCC cells

To identify the efficacy of EKB-569 at the cellular or tissue level of HNSCC radiosensitization, we examined its potential in modulating functional endpoints like cell viability, survival and apoptotic death. First, the trypan blue exclusion assay demonstrated that EKB-569 as a stand-alone compound induced a dose-dependent inhibition of SCC-4 cell viability, with maximum (P<0.001) inhibition at the 5.0 µg concentration (Figure 4C). 
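The percent-survival formula quoted in the MTT legend above, (mean of test wells/mean of control wells) ×100, is simple to reproduce. The A570 absorbance readings below are invented purely for illustration; only the formula itself comes from the text.

```python
def percent_survival(test_wells, control_wells):
    """MTT percent survival: (mean of test wells / mean of control wells) * 100."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(test_wells) / mean(control_wells)


# Hypothetical A570 absorbance readings per replicate well.
control = [1.00, 0.98, 1.02]   # mock-irradiated wells
treated = [0.50, 0.49, 0.51]   # e.g. IR + EKB-569 wells

print(round(percent_survival(treated, control), 1))  # -> 50.0
```

In practice each condition would be measured in several replicate wells at each post-IR time point, and the resulting percentages compared by ANOVA as the legend states.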
Similarly, unlike in the mock-irradiated control, exposure to IR significantly (P<0.001) inhibited HNSCC cell viability (Figure 4D). More importantly, compared to IR-exposed cells, EKB-569 (5.0 µg) treatment significantly (P<0.001) conferred IR-inhibited cell viability. Substantiating our cell viability data, MTT analysis revealed a dose-dependent inhibition of metabolic activity with EKB-569 treatment (Figure 4E). To that end, at the low concentration (0.5 µg) we did not see any significant inhibition of cell survival. However, with increasing EKB-569 concentration we observed a significant (1.0 µg, P<0.05; 2.0 µg, P<0.01 and 5.0 µg, P<0.001) inhibition of cell survival in this setting. On the other hand, compared to mock-irradiated cells, cells exposed to IR showed significant (P<0.01) suppression of cell survival (Figure 4E). Addition of EKB-569 significantly conferred IR-inhibited cell survival in a dose-dependent fashion. Even concentrations as low as 0.5 µg significantly enhanced IR-induced cell death, and we observed a complete inhibition of cell survival in IR-exposed cells with 5.0 µg, demonstrating the radiosensitizing potential of EKB-569 in HNSCC cells. Further, nuclear morphology with dual staining showed bright green chromatin with organized structures in untreated control cells, indicating viable cells with normal nuclei (Figure 4F), whereas cells treated with EKB-569 showed typical apoptotic features of bright orange chromatin with blebbing, nuclear condensation, and fragmentation. We observed a dose-dependent increase in apoptosis after 0.5, 1.0, 2.0 and 5.0 µg of EKB-569. Consistent with our cell viability and survival data, we observed induced cell death in cells exposed to IR, with bright orange chromatin with blebbing, nuclear condensation, and fragmentation.
More importantly, compared to IR alone, cells pre-treated with EKB-569 (5.0 µg) and exposed to IR showed extensive apoptotic characteristics, demonstrating a radiosensitizing potential in HNSCC cells (Figure 4F).

EKB-569 targets IR-induced NFκB-regulated radiosensitization

To further identify whether targeting IR-induced NFκB orchestrates EKB-569-induced radiosensitization in HNSCC cells, we adopted two approaches. First, we determined whether IR-induced NFκB regulates induced radioprotection in SCC-4 cells. To achieve this, we investigated the alterations in cell viability, survival and death after muting IR-induced NFκB. IR-induced NFκB was inhibited by transient transfection of ΔIκBα. Knockout of IR-induced NFκB was confirmed with EMSA (Figure 5 A&B). Compared to vector controls, knocking out IR-induced NFκB with ΔIκBα significantly (P<0.001) conferred IR-inhibited cell survival (Figure 5C) and cell viability (Figure 5D), and enhanced IR-induced cell death (evident with bright orange chromatin with blebbing, nuclear condensation, and fragmentation), dictating the role of IR-induced NFκB in radioresistance. Next, to identify that EKB-569-induced radiosensitization occurs at least in part by targeting IR-induced NFκB, p50/p65-overexpressing SCC-4 cells were treated with EKB-569 and analyzed for cell viability, survival and death. EMSA analysis (Figure 5A) confirmed the robust NFκB DNA-binding activity in p50/p65-transfected SCC-4 cells (Figure 5B). Further, overexpression of NFκB in these cells significantly (P<0.001) induced cell survival (Figure 5C) and showed bright green chromatin with organized nuclear morphology (Figure 5E), and served as a positive control for our study.
Consequently, treatment with EKB-569 significantly (P<0.001) inhibited cell viability and survival, and showed bright orange chromatin with blebbing, nuclear condensation, and fragmentation in these NFκB-overexpressing cells (Figure 5C–E), delineating that EKB-569 targets NFκB and potentiates cell death in this setting.

Figure 5. IR-induced NFκB regulates radioresistance in HNSCC cells. (A) Representative autoradiogram of EMSA analysis showing complete muting of NFκB DNA-binding activity in IR-induced or NFκB-overexpressing cells with ΔIκBα. (B) Densitometric analysis of NFκB-DNA binding activity showing significant NFκB silencing with ΔIκBα and significant activation with transfection of the NFκB overexpression vectors, p50 and p65. (C) Histograms showing the results of MTT analysis in p50/p65-overexpressing cells treated with EKB-569 (5.0 µg). NFκB overexpression robustly induced SCC-4 cell survival. Conversely, treating NFκB-overexpressing cells with EKB-569 completely (P<0.001) inhibited NFκB-induced SCC-4 cell survival. Likewise, muting NFκB (with ΔIκBα) completely inhibited IR-induced cell survival. (D) Histograms showing cell viability in NFκB-muted cells exposed to IR or NFκB-overexpressing cells treated with EKB-569. Silencing NFκB significantly inhibited IR-induced cell viability. Likewise, treating NFκB-overexpressing cells with EKB-569 (5.0 µg) completely inhibited NFκB-induced cell viability. (E) Nuclear morphology with dual staining showing typical yet increased apoptotic characteristics in NFκB-muted cells exposed to IR. NFκB-overexpressing cells displayed chromatin with organized structures indicating good viability with normal nuclei.
However, treatment with EKB-569 (5.0 µg) significantly inflicted chromatin blebbing, nuclear condensation, and fragmentation in these NFκB-overexpressing cells. doi:10.1371/journal.pone.0029705.g005

Primary and acquired resistance to conventional chemotherapy and radiotherapy represent the central therapeutic challenge in oncology today. Resistance may develop through varied mechanisms, including increased expression of cellular drug efflux pumps, mutation of the therapeutic target, increased activity of DNA repair mechanisms, and altered expression of genes involved in apoptotic pathways. To overcome these resistance mechanisms, conventional cancer treatments are increasingly combined with molecularly targeted therapies. Because cytotoxic and targeted therapies have distinct biologic effects and toxicity profiles, such combinations are both rational and well tolerated. To date, the molecular pathway most frequently targeted in combination with conventional chemotherapy or radiotherapy is that of the EGFR. After activation by binding of EGF and other natural ligands, EGFR activates prosurvival, pro-angiogenic, and anti-apoptotic pathways that may confer resistance to cytotoxic therapies. Interestingly, all these aforementioned functional pathways are known to be controlled by the transcriptional master-switch regulator NFκB, which also happens to be a downstream target of EGFR. In this study, we investigated the specific inhibitory effect of the EGFR TK inhibitor EKB-569 on the regulation of the NFκB-dependent survival advantage, and elucidated its influence in potentiating radiotherapy for head and neck cancers. To our knowledge, for the first time, we have demonstrated the specific inhibition of IR-induced NFκB with the irreversible EGFR TK inhibitor EKB-569 and dissected out the functional downstream signaling that orchestrates the promotion of radiosensitization, at least in head and neck cancer.
Our results indicate that radiation at clinically relevant doses activated the NFκB pathway in SCC-4 cells through a mechanism involving EGFR. To that note, activation of the EGFR intrinsic receptor protein TK and tyrosine autophosphorylation results in the activation of a number of key signaling pathways. One major downstream signaling route is via the Ras-Raf-MAPK pathway, where activation of Ras initiates a multistep phosphorylation cascade that leads to the activation of ERK1 and 2, which regulate transcription of molecules linked to cell proliferation, survival, and transformation. Another important target in EGFR signaling is PI3K and the downstream protein-serine/threonine kinase Akt, which transduces signals that trigger a cascade of responses from cell growth and proliferation to survival and motility. One more route is via the stress-activated protein kinase pathway, involving protein kinase C and Jak/Stat. Interestingly, the activation of these pathways converges into a distinct transcriptional program involving NFκB that mediates cellular responses, including cell division, survival (or death), motility, invasion, adhesion, and cellular repair. QPCR profiling revealed a significant increase in these EGFR-dependent NFκB-activating molecules, viz. Akt1, Jun, Map3K1 and Raf1, after IR; EKB-569 treatment resulted in complete suppression of these molecules, and they served as positive controls for the study. Transformed cells have been shown to possess deregulated apoptotic machinery. Transcriptional regulators that regulate pro-apoptotic and/or activate anti-apoptotic proteins play a key role in switching the therapy-associated balance of apoptotic cell death. In this regard, EGFR signaling appears to inhibit tumor cell death via multiple mechanisms. EGFR-mediated signaling via the Ras-Raf-MAPK, PI3-K/Akt or PKC-Jak/STAT pathways leads to the activation of NFκB, which in turn unbalances pro/anti-apoptotic protein expression.
As is evident from our data, IR-induced NFκB and NFκB-dependent metabolic activity, cell viability and cell death indicate NFκB's direct role in induced radioresistance. Consistently, in multiple tumor cells, we and others have extensively documented that RT induces NFκB activity and delineated its direct role in induced radioresistance. Conversely, muting NFκB function has been shown to restore apoptosis and confer an apoptotic effect in chemo- and/or radioresistant tumor cells. Consistently, we observed a complete inhibition of IR-induced NFκB activity with EKB-569, designating that this compound may rectify the IR-induced aberrant apoptotic machinery. Though these results confirmed that the mechanism of EKB-569-mediated radiosensitization of squamous cell carcinoma acts specifically through the NF-κB pathway, it is interesting to note an induction in the activity of other transcription factors, AP-1 and SP-1. This differential mechanism in the activation of NFκB versus AP-1 and SP-1 may be speculated to be partly cell type- and/or stimuli-specific. However, addressing the complete mechanism involved in the induction of IR-induced AP-1 and SP-1 with EKB-569 treatment, and its impact on radiosensitization compared to other EGFR-TK inhibitors, may help in ascertaining the complexity of the combination treatments. It is also interesting to note from this study that the inhibition of the NFκB signaling pathway is not an EKB-569 compound-specific effect. Other commonly used irreversible EGFR blockers, afatinib and neratinib (HKI-272), dose-dependently inhibit NFκB DNA-binding activity. The inhibition of NFκB by these two related compounds was found to be persistent up to at least 72 h, as seen with EKB-569 treatment. Similarly, all three EGFR inhibitors, EKB-569, afatinib and neratinib, directly inhibit NFκB activity by blocking the activity of IR-induced upstream IκB kinase beta (IKK-β). This direct action of inhibition of NF-kB is EGFR-dependent.
EGFR-knockdown experiments with a widely used specific EGFR inhibitor, PD153035, confirmed the EGFR-mediated inhibition of NFκB DNA-binding activity and mRNA expression in the irradiated cells. Therefore, the proposed combination of IR and EGFR/NFκB inhibition can be carried to the clinic with EGFR inhibitor compounds other than EKB-569. To further substantiate our findings, we analyzed the efficacy of EKB-569 on the IR-modulated NFκB signaling pathway transcriptional response. Interestingly, EKB-569 robustly modulates the transcriptional response of NFκB signal transduction and the downstream mediators of this pathway in SCC-4 cells. To that note, EKB-569 inhibited IR-induced transcription of pro-survival molecules in this setting. Disruption of aberrantly regulated survival signaling mediated by NFκB has recently become an important task in the therapy of several chemoresistant and radioresistant cancers. Anti-apoptotic molecules are expressed at high levels in many tumors and have been reported to contribute to the resistance of cancers to RT. Because activation of caspases plays a central role in the apoptotic machinery, therapeutic modulation of molecules such as IAPs could target the core control point that overturns cell fate and determines sensitivity to RT. A recent body of evidence has emphasized a central role for NFκB in the control of cell proliferation and survival. NFκB enhances cell survival by switching on the activation of pro-survival molecules that dampen pro-apoptotic signals and attenuate the apoptotic response to anticancer drugs and IR. In this perspective, we recently demonstrated that muting IR-induced NFκB regulates NFκB-dependent pro-survival molecules and potentiates radiosensitization, at least in breast cancer and neuroblastoma models. To our knowledge, the present study for the first time throws light on the efficacy of EKB-569 in regulating IR-altered NFκB signal transduction and downstream effector molecules in HNSCC cells.
This insight into the comprehensive regulation of IR-induced survival transcription recognizes EKB-569 as a “potential radiosensitizer” and further allows us to identify the role of EGFR-dependent, NFκB-mediated orchestration of radioresistance, at least in HNSCC. Though a plethora of studies have dissected out EGFR downstream signaling (some of them discussed above) and suggested that these signals converge at the transcriptional machinery, there remained a paucity of information on the role of a specific transcriptional switch in orchestrating EGFR-dependent tumor progression. Not only does this study throw light on the molecular blueprint that underlies clinical doses of IR in HNSCC, it also identifies the potential of the EGFR TK inhibitor EKB-569 in selectively targeting IR-induced NFκB and subsequent tumor progression. In this regard, the p65 subunit of NFκB is constitutively activated in 70% of HNSCC, and IR-induced NFκB plays an important role in HNSCC resistance to RT. Though constitutive and RT-induced NFκB has been causally linked to induced radioresistance, its precise participation in the orchestration of RT-induced cell death is poorly understood. In this regard, results of the present study exhibit that ectopically muting IR-induced NFκB with ΔIκBα robustly induced cell death in HNSCC cells, demonstrating that IR-induced NFκB regulates cell death at least in this setting. Furthermore, to causally delineate that EKB-569-dependent silencing of NFκB mediates the induced radiosensitization, we analyzed its effect on NFκB-overexpressing cells. For the first time, the results of the present study imply that EKB-569 inhibits HNSCC cell survival and viability by selectively targeting NFκB. In summary, these results demonstrate that EKB-569 significantly inhibits IR-induced NFκB activity in human HNSCC cells.
Furthermore, this study identifies the EKB-569-associated inhibition of the NFκB pathway survival signaling blueprint, more precisely with respect to the regimen of the treatment modality, in this case IR. Evidently, treatment with EKB-569 profoundly conferred IR-inhibited HNSCC cell survival and viability. Consistently, this EGFR TK inhibitor significantly enhanced IR-induced HNSCC apoptosis. More importantly, NFκB overexpression and knockout studies demonstrated that EKB-569-associated targeting of IR-induced NFκB mediates cell death in HNSCC cells. Taken together, these data strongly suggest that EKB-569 may exert radiosensitization at least in part by selectively targeting IR-induced NFκB-dependent survival signaling, potentiating radiotherapy in effective HNSCC cell killing. Further in-depth in vivo studies are warranted to verify this suggestion and are presently under investigation in our laboratory.

QPCR profiling amplification charts and heat map showing transcriptional changes in 88 NFκB-dependent downstream target genes in SCC-4 cells. Cells were either mock-irradiated, exposed to IR, or pretreated with EKB-569 (5 µg) and then exposed to IR. Real-time QPCR profiling was performed using a human NFκB signaling pathway profiler (Realtimeprimers.com, Elkins Park, PA).

Conceived and designed the experiments: MN CRT NA. Performed the experiments: MN JV SA ASM. Analyzed the data: MM JV ASM. Contributed reagents/materials/analysis tools: MN ASM. Wrote the paper: MN ASM JV.

- 1. Parkin DM, Bray F, Ferlay J, Pisani P (2005) Global cancer statistics, 2002. CA Cancer J Clin 55: 74–108. - 2. Hunter KD, Parkinson EK, Harrison PR (2005) Profiling early head and neck cancer. Nat Rev Cancer 5: 127–135. - 3. Salomon DS, Brandt R, Ciardiello F, Normanno N (1995) Epidermal growth factor-related peptides and their receptors in human malignancies. Crit Rev Oncol Hematol 19: 183–232. - 4. Woodburn JR (1999) The epidermal growth factor receptor and its inhibition in cancer therapy.
Pharmacol Ther 82: 241–250. - 5. Arteaga CL (2001) The epidermal growth factor receptor: from mutant oncogene in nonhuman cancers to therapeutic target in human neoplasia. J Clin Oncol 19: 32S–40S. - 6. Baeuerle PA, Baltimore D (1991) Hormonal Control Regulation of Gene Transcription. pp. 409–432. In Molecular Aspects of Cellular Regulation, Cohen, P and Foulkes, JG (eds), Elsevier/North Holland Biomedical Press Amsterdam. - 7. Lenardo MJ, Baltimore D (1989) NF-kappa B: a pleiotropic mediator of inducible and tissue-specific gene control. Cell 58: 227–229. - 8. Neri A, Chang CC, Lombardi L, Salina M, Corradini P, et al. (1991) B cell lymphoma-associated chromosomal translocation involves candidate oncogene lyt-10, homologous to NF-kappa B p50. Cell 67: 1075–1087. - 9. Higgins KA, Perez JR, Coleman TA, Dorshkind K, McComas WA, et al. (1993) Antisense inhibition of the p65 subunit of NF-kappa B blocks tumorigenicity and causes tumor regression. Proc Natl Acad Sci U S A 90: 9901–9905. - 10. Tozawa K, Sakurada S, Kohri K, Okamoto T (1995) Effects of anti-nuclear factor kappa B reagents in blocking adhesion of human cancer cells to vascular endothelial cells. Cancer Res 55: 4162–4167. - 11. Orlowski RZ, Baldwin AS Jr (2002) NF-kappaB as a therapeutic target in cancer. Trends Mol Med 8: 385–389. - 12. Yan M, Xu Q, Zhang P, Zhou XJ, Zhang ZY, et al. (2010) Correlation of NF-kappaB signal pathway with tumor metastasis of human head and neck squamous cell carcinoma. BMC Cancer 10: 437. - 13. Chen X, Shen B, Xia L, Khaletzkiy A, Chu D, et al. (2002) Activation of nuclear factor kappaB in radioresistance of TP53-inactive human keratinocytes. Cancer Res 62: 1213–1221. - 14. Herscher LL, Cook JA, Pacelli R, Pass HI, Russo A, et al. (1999) Principles of chemoradiation: theoretical and practical considerations. Oncology (Williston Park) 13: 11–22. - 15. Tang G, Minemoto Y, Dibling B, Purcell NH, Li Z, et al. (2001) Inhibition of JNK activation through NF-kappaB target genes. 
Nature 414: 313–317. - 16. Sun Y, St Clair DK, Fang F, Warren GW, Rangnekar VM, et al. (2007) The radiosensitization effect of parthenolide in prostate cancer cells is mediated by nuclear factor-kappaB inhibition and enhanced by the presence of PTEN. Mol Cancer Ther 6: 2477–2486. - 17. He L, Kim BY, Kim KA, Kwon O, Kim SO, et al. (2007) NF-kappaB inhibition enhances caspase-3 degradation of Akt1 and apoptosis in response to camptothecin. Cell Signal 19: 1713–1721. - 18. Raffoul JJ, Wang Y, Kucuk O, Forman JD, Sarkar FH, et al. (2006) Genistein inhibits radiation-induced activation of NF-kappaB in prostate cancer cells promoting apoptosis and G2/M cell cycle arrest. BMC Cancer 6: 107. - 19. Magne N, Toillon RA, Bottero V, Didelot C, Houtte PV, et al. (2006) NF-kappaB modulation and ionizing radiation: mechanisms and future directions for cancer treatment. Cancer Lett 231: 158–168. - 20. Kim BY, Kim KA, Kwon O, Kim SO, Kim MS, et al. (2005) NF-kappaB inhibition radiosensitizes Ki-Ras-transformed cells to ionizing radiation. Carcinogenesis 26: 1395–1403. - 21. Forastiere A, Koch W, Trotti A, Sidransky D (2001) Head and neck cancer. N Engl J Med 345: 1890–1900. - 22. Squarize CH, Castilho RM, Sriuranpong V, Pinto DS Jr, Gutkind JS (2006) Molecular cross-talk between the NFkappaB and STAT3 signaling pathways in head and neck squamous cell carcinoma. Neoplasia 8: 733–746. - 23. Vlantis AC, Lo CS, Chen GG, Ci Liang N, Lui VW, et al. (2010) Induction of laryngeal cancer cell death by Ent-11-hydroxy-15-oxo-kaur-16-en-19-oic acid. Head Neck 32: 1506–1518. - 24. Kwak EL, Sordella R, Bell DW, Godin-Heymann N, Okimoto RA, et al. (2005) Irreversible inhibitors of the EGF receptor may circumvent acquired resistance to gefitinib. Proc Natl Acad Sci U S A 102: 7665–7670. - 25. Yarden Y, Sliwkowski MX (2001) Untangling the ErbB signalling network. Nat Rev Mol Cell Biol 2: 127–137. - 26. 
Rubin BP, Duensing A (2006) Mechanisms of resistance to small molecule kinase inhibition in the treatment of solid tumors. Lab Invest 86: 981–986. - 27. Sequist LV (2007) Second-generation epidermal growth factor receptor tyrosine kinase inhibitors in non-small cell lung cancer. Oncologist 12: 325–330. - 28. Wissner A, Overbeek E, Reich MF, Floyd MB, Johnson BD, et al. (2003) Synthesis and structure-activity relationships of 6,7-disubstituted 4-anilinoquinoline-3-carbonitriles. The design of an orally active, irreversible inhibitor of the tyrosine kinase activity of the epidermal growth factor receptor (EGFR) and the human epidermal growth factor receptor-2 (HER-2). J Med Chem 46: 49–63. - 29. Veeraraghavan J, Natarajan M, Aravindan S, Herman TS, Aravindan N (2011) Radiation-triggered tumor necrosis factor (TNF) alpha-NFkappaB cross-signaling favors survival advantage in human neuroblastoma cells. J Biol Chem 286: 21588–21600. - 30. Aravindan N, Shanmugasundaram K, Natarajan M (2009) Hyperthermia induced NFkappaB mediated apoptosis in normal human monocytes. Mol Cell Biochem 327: 29–37. - 31. Baselga J (2006) Is there a role for the irreversible epidermal growth factor receptor inhibitor EKB-569 in the treatment of cancer? A mutation-driven question. J Clin Oncol 24: 2225–2226. - 32. Alroy I, Yarden Y (1997) The ErbB signaling network in embryogenesis and oncogenesis: signal diversification through combinatorial ligand-receptor interactions. FEBS Lett 410: 83–86. - 33. Lewis TS, Shapiro PS, Ahn NG (1998) Signal transduction through MAP kinase cascades. Adv Cancer Res 74: 49–139. - 34. Chan TO, Rittenhouse SE, Tsichlis PN (1999) AKT/PKB and other D3 phosphoinositide-regulated kinases: kinase activation by phosphoinositide-dependent phosphorylation. Annu Rev Biochem 68: 965–1014. - 35. Vivanco I, Sawyers CL (2002) The phosphatidylinositol 3-Kinase AKT pathway in human cancer. Nat Rev Cancer 2: 489–501. - 36. 
Igney FH, Krammer PH (2002) Death and anti-death: tumour resistance to apoptosis. Nat Rev Cancer 2: 277–288. - 37. Aravindan N, Madhusoodhanan R, Ahmad S, Johnson D, Herman TS (2008) Curcumin inhibits NFkappaB mediated radioprotection and modulate apoptosis related genes in human neuroblastoma cells. Cancer Biol Ther 7: 569–576. - 38. Aravindan N, Madhusoodhanan R, Natarajan M, Herman TS (2008) Alteration of apoptotic signaling molecules as a function of time after radiation in human neuroblastoma cells. Mol Cell Biochem 310: 167–179. - 39. Madhusoodhanan R, Natarajan M, Singh JV, Jamgade A, Awasthi V, et al. (2010) Effect of black raspberry extract in inhibiting NFkappa B dependent radioprotection in human breast cancer cells. Nutr Cancer 62: 93–104. - 40. Madhusoodhanan R, Natarajan M, Veeraraghavan J, Herman TS, Aravindan N (2009) NFkappaB activity and transcriptional responses in human breast adenocarcinoma cells after single and fractionated irradiation. Cancer Biol Ther 8: 765–773. - 41. Madhusoodhanan R, Natarajan M, Veeraraghavan J, Herman TS, Jamgade A, et al. (2009) NFkappaB signaling related molecular alterations in human neuroblastoma cells after fractionated irradiation. J Radiat Res (Tokyo) 50: 311–324. - 42. Veeraraghavan J, Aravindan S, Natarajan M, Awasthi V, Herman TS, et al. (2011) Neem leaf extract induces radiosensitization in human neuroblastoma xenograft through modulation of apoptotic pathway. Anticancer Res 31: 161–170. - 43. Veeraraghavan J, Natarajan M, Herman TS, Aravindan N (2010) Curcumin-altered p53-response genes regulate radiosensitivity in p53-mutant Ewing's sarcoma cells. Anticancer Res 30: 4007–4015. - 44. Sclabas GM, Fujioka S, Schmidt C, Fan Z, Evans DB, et al. (2003) Restoring apoptosis in pancreatic cancer cells by targeting the nuclear factor-kappaB signaling pathway with the anti-epidermal growth factor antibody IMC-C225. J Gastrointest Surg 7: 37–43; discussion 43. - 45. 
Arlt A, Vorndamm J, Breitenbroich M, Folsch UR, Kalthoff H, et al. (2001) Inhibition of NF-kappaB sensitizes human pancreatic carcinoma cells to apoptosis induced by etoposide (VP16) or doxorubicin. Oncogene 20: 859–868. - 46. Piva R, Belardo G, Santoro MG (2006) NF-kappaB: a stress-regulated switch for cell survival. Antioxid Redox Signal 8: 478–486. - 47. Salvesen GS, Duckett CS (2002) IAP proteins: blocking the road to death's door. Nat Rev Mol Cell Biol 3: 401–410. - 48. Cao C, Mu Y, Hallahan DE, Lu B (2004) XIAP and survivin as therapeutic targets for radiation sensitization in preclinical models of lung cancer. Oncogene 23: 7047–7052. - 49. Lu B, Mu Y, Cao C, Zeng F, Schneider S, et al. (2004) Survivin as a therapeutic target for radiation sensitization in lung cancer. Cancer Res 64: 2840–2845. - 50. Giagkousiklidis S, Vogler M, Westhoff MA, Kasperczyk H, Debatin KM, et al. (2005) Sensitization for gamma-irradiation-induced apoptosis by second mitochondria-derived activator of caspase. Cancer Res 65: 10502–10513. - 51. Rodel C, Haas J, Groth A, Grabenbauer GG, Sauer R, et al. (2003) Spontaneous and radiation-induced apoptosis in colorectal carcinoma cells with different intrinsic radiosensitivities: survivin as a radioresistance factor. Int J Radiat Oncol Biol Phys 55: 1341–1347. - 52. Nakanishi C, Toi M (2005) Nuclear factor-kappaB inhibitors as sensitizers to anticancer drugs. Nat Rev Cancer 5: 297–309. - 53. Ravi R, Bedi A (2004) NF-kappaB in cancer–a friend turned foe. Drug Resist Updat 7: 53–67.
The New International Encyclopædia/Bitter, Karl Hermann BIT'TER, Karl Hermann (1813-85). A Prussian statesman and writer on music. He was born at Schwedt, Province of Brandenburg, and studied law and cameralistics at Berlin and Bonn. He served as the plenipotentiary of Prussia on the Danube Commission from 1856 to 1860, was prefect of the Department of Vosges during the Franco-Prussian War, and subsequently became minister of finance (1879) — an office in which he displayed exceptional ability. He increased the indirect duties derived from the so-called tobacco monopoly and the tax on spirits and malt, introduced the ‘Börsensteuer’ (tax on the bourse), and concluded the commercial treaty with the city of Hamburg by which that city entered the German Customs Union. He reëstablished the stability of the Prussian finances, and took a prominent part in bringing the railroads of Germany under Government control. He resigned in 1882, in consequence of differences with Bismarck. His literary activity was confined almost exclusively to works on music.
1.1 Background to the Study

Examination has been generally accepted as the best means of assessment. It is a formal test of knowledge or ability. In fact, in a school setting, examination is a means of evaluating the quantity of knowledge a student has acquired within a specific period of time. Adekunle (2003) sees examination as an instrument used for the assessment of individual skills and knowledge content, both in general and in specific areas of study. Teaching and learning become more effective when the students are subjected to an examination process to determine the extent to which the students have assimilated the content of the instruction given, and the teacher can also assess himself from the performance of the students. In essence, examinations are used to determine the pass or failure of a student or group of students. History seems to support the view that setting children against one another in trials and competitions has always been a respectable means of inciting them to effort. A student who knows that he might fail an examination which will in turn determine his progress or promotion will strive hard in order to pass. This once more encourages a kind of competition within groups of students, who will aim at a high position in their classes. Examinations are also used for academic stratification or for assigning grades to students. For decades, the West African Examinations Council has awarded results on the basis of stratification into divisions one, two and three. In the contemporary practice of the NCE, students are stratified into distinction, credit, merit and pass, while in the university also, students are stratified into first, second (upper and lower) or third class degrees, having gone through an examination. These grades are a measure of success and prestige. A child with a division one pass in the school certificate examination will be regarded by those around him as academically precocious.
He is also likely to gain a place in an institution of higher learning, or in a job situation within the society, more easily than a child with a division three pass. All these conditions have combined to influence a child’s attitude to an examination; an attitude which always culminates in an urge for success in any particular examination, whether or not he had prepared for it. These competitions in school have their parallel in the society. Unfortunately, this all-important means of assessing students has become ineffective, as all forms of malpractice have been introduced into the system. Adesina (2000) traced the history of examination malpractice in Nigeria to 1914, when there was a leakage of the Cambridge examination. Cheating became widespread in schools; hence in 1967, the Alexander Commission was set up as a special commission of inquiry to investigate the incidences of malpractice in Nigeria. In 1977, there was a widespread leakage of the West African School Certificate Examination questions. Government took it as a challenge to address issues of examination malpractice. A special conference was held in that regard at Ibadan in 1986. Decrees were promulgated, schools were sanctioned, results cancelled and invigilators arrested, all in a bid to curb malpractice. The irony of it all is that despite the several attempts made by school authorities, government agencies, parents and church leaders to conscientise Nigerian students on the evils of examination malpractice, this menace is still on the increase in the various schools. There is the need to find out the causes and effects of examination malpractice in Nigerian schools.
While using OAuth2 with Laravel, there is a problem with returning a custom error message instead of the OAuth2 error format. For details on how to use OAuth2 with Laravel, please go here. This happens if your API error format is different from the OAuth2 error format. I ran into this type of problem and I solved it. Here we learn how to return a custom error message while using OAuth2. We are going to remove the default OAuth2 exception middleware and write a new one. So first create a middleware in app/Http/Middleware. My middleware name is OauthExceptionMiddleware; put this middleware in place of the previous OAuth2 middleware in the $middleware array, like this:

protected $middleware = [
    \Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode::class,
    \App\Http\Middleware\OauthExceptionMiddleware::class,
];

Now check your error message: it will show your custom error message, and anyone can format his error message in this middleware however he wants.
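As a minimal sketch, the OauthExceptionMiddleware itself might look like the following. Note the JSON error shape is a made-up example, and catching the generic Exception class is an assumption — in a real project you would catch the specific exception class thrown by your OAuth2 package:

```php
<?php

namespace App\Http\Middleware;

use Closure;
use Exception;

class OauthExceptionMiddleware
{
    /**
     * Wrap the request and convert any exception thrown by the
     * OAuth2 layer into our own JSON error format instead of the
     * default {"error": ..., "error_description": ...} shape.
     */
    public function handle($request, Closure $next)
    {
        try {
            return $next($request);
        } catch (Exception $e) {
            // Hypothetical custom API error format -- adjust to your own API.
            return response()->json([
                'status'  => 'error',
                'message' => $e->getMessage(),
            ], 401);
        }
    }
}
```

Because this middleware wraps the rest of the pipeline in a try/catch, any error raised deeper in the stack is rendered in your own format before it reaches the client.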
What is meant by the Charter of Rights

The Charter of Rights for children and young people in out-of-home care was created after consultation with young people and children who had experienced the out-of-home care system, carers, workers, and people in government and NGOs. It outlines children’s fundamental rights, as well as what the child can expect whilst they are in care.

State Charter of Rights

Every State has a Charter of Rights for children and young people in out-of-home care. The Northern Territory does not have a charter of rights in place; however, it does have out-of-home care standards. The charter addresses areas such as learning and achieving. Each state has further information about the charter of rights, and explains ways to encourage and promote a child’s rights whilst they are in your care. Remember that as a carer you must uphold the rights of the child in your care!

What is applicable to your state

In order to better understand your obligations and responsibilities as a carer, and the rights of the child in your care, please click on the links below depending on which State or Territory you are from and what is applicable to your state.
Do you have a skinny body and have dreamt of having a muscular one all your life? If yes, then my friend, you've got your eye on the right post. Weirdly, gaining weight is as difficult as losing it. Have you always felt that you can't gain weight even if you eat the whole day? You hate it when your friends call you a stick and make fun of your skinny body. Being called skinny all day long can dishearten anyone, but we are here with some pro tips for which you are certainly going to thank us.
- The weight-gaining phase needs all the patience you have.
- We are going to talk about a few diet tips and exercises that are effective enough to bulk up your body, so you can show it off when you next go shirtless on a beach vacation.
- The first and foremost thing that comes to mind is diet.
- If you are skinny, it means you need extra calories, and the right type of calories.
- Plan to eat a gram of protein per pound of your body weight every day.
- For example, if you weigh around 180 pounds, you should try to eat 180 grams of protein every day.
- Modify your eating habits.
- Load up on starchy carbs like potatoes, rice, and oats. Bulking up requires taking extra care of your nutrition so that you don't miss out on sources of dietary fats.
- Include snacks like nuts, seeds, eggs, and meat in your diet.
- Don't skip meals, especially in the morning.
- Even if you are in a hurry, don't miss your breakfast, as it is the most important meal of the day.
- Many of us mistake eating more for eating unhealthy food.
- This often makes one unhealthy and sick.
- Being sick eventually makes you lose weight, and you end up exactly where you started.
- Therefore, gaining weight must be done cautiously, by increasing the right calorie intake and decreasing calorie burn.
- The idea is to eat and eat, but eat clean, not junk. Eating like a fool all day long but not training is of no use.
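The one-gram-per-pound rule above is simple arithmetic. A quick sketch — the factor is just the rule of thumb from this post, not medical advice, and the four-meal split is an assumption for illustration:

```python
def daily_protein_target_g(body_weight_lb: float) -> float:
    """Rule of thumb from the text: 1 g of protein per pound of body weight."""
    return body_weight_lb * 1.0

def per_meal_g(body_weight_lb: float, meals: int = 4) -> float:
    """Spread the daily target evenly over a day's meals and snacks."""
    return daily_protein_target_g(body_weight_lb) / meals

# The 180 lb example from the text:
print(daily_protein_target_g(180))  # → 180.0
print(per_meal_g(180, 4))           # → 45.0
```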
Planning to follow a heavy diet chart to bulk up without including exercise is a job done in vain. Exercise plays a very important role when it comes to losing or gaining weight. Different exercises act on different parts of your body, toning each and every muscle. You can stimulate muscle growth with very few exercises as long as they're done with heavy weight and they activate as many muscle fibers as possible. A few of the most important yet easy and effective exercises that you should never miss are the deadlift, chin-ups, the dumbbell floor press, the reverse lunge, the Bulgarian split squat, and the feet-elevated side plank.
- The main aim should be to tone every major muscle, including the back, arms, chest, abs, and legs.
- This can be achieved by doing compound exercises like squats, clean and presses, and pull-ups.
- At first you will find it a bit exhausting, and no doubt it will be painful too, but not losing hope and staying consistent with your routine and diet plan will definitely build your stamina.
- Start increasing the intensity of your workout as soon as you notice your stamina building up.
- Complete at least two exercises of three sets for each muscle group.
Along with the exercises, supplements containing amino acids and carbohydrates also prove to be helpful. People who take protein shakes before a workout experience a bigger boost in protein synthesis than those who drink them afterward.
- After eating properly and exercising, your body feels too tired to do more work, hence rest between workout sessions is as mandatory as a good diet and exercise.
- Getting eight hours of sleep per night is crucial for growth-hormone release. Taking a nap in the daytime is even better if you can get it.
- Give yourself a one-day gap after every 3 days of weight-gain training.
- This is suggested in order to de-stress your muscles.
- According to one study, your muscles rebuild faster on rest days if you feed your body the required amount of carbohydrates.
- Between workouts, get massages or use a foam roller to work out knots in your muscles and improve blood flow.
- These techniques are meant to compensate for the wear and tear of the workout days.
Swimming pools in residential homes are a great place for fun, family and enjoyment. What most families fail to realize is that pools can also pose a potentially life-threatening risk, especially for children under the age of 5. What you'll find in this post are best practices on how to improve safety in and around your pool area, basic fence regulations, and overall guidelines to keep the pool safe and fun for everyone.
Supervision is the key to swimming pool safety for children. There is no substitute for active supervision from an adult. Even in a supervised public pool, never take your eyes off your child. Lifeguards provide supervision for all pool users, but you provide the personal supervision your young child needs. Keep your child within reach at all times! Below are a few points you should keep in mind when supervising children swimming.
- Stay in constant visual contact. Do not just glance towards the water occasionally.
- Stay within arms' reach of toddlers and beginner swimmers at all times when they're in or around the water.
- Stay close to the water when you're supervising children who can swim. And be ready to get in if there's an emergency.
- Take children with you if you leave the pool area, even if it is just for a moment.
When you're at a public pool, the following pointers can help keep your child safe:
- Explain to your child that everyone has to obey the lifeguards' directions.
- Explain that your child should follow the pool rules, even if other children don't.
- Be aware of other people in the water, particularly when it's crowded.
Education and prevention. Teaching your child to swim and educating them about the possible dangers of drowning and hazards around the pool is proven to decrease the risk of drowning. Enrolling your child in an approved swimming lesson facility to adapt to the water, understand how to float and learn what to do when they find themselves in water is the best personal development for them.
Below are a few ways you can educate your children.
Teach your children useful swimming practices. Teach children to swim from a young age. You can start introducing your babies to water when they are about 6 months old. Teach children how to tread water, float and familiarise themselves with water. You can show this video to your children so they can learn how to tread water.
Teach children that they should always swim with a supervisor. Whether you're swimming in a pool or in a lake, teach children to swim with an adult. Teach children to never go near or in water without an adult present.
Keep children safe with the right swimming aids. Remember that swimming aids such as water wings or noodles are fun toys for kids, but they should never be used in place of approved flotation devices.
Pool Safety Guide
All pools have water reservoirs that pose a hazard to young kids; these are commonly known as skimmers, suction points or water returns. These are areas of the pool that should be avoided, and children should be taught not to play around these dangerous points.
- Educate your children about the dangers of drain entanglement and entrapment and teach them to never play or swim near drains or suction outlets.
- Regularly check to make sure drain covers are secure and have no cracks, and replace flat drain covers with dome-shaped ones.
Best practice swimming rules
- Always swim with a friend or adult
- Follow all swimming rules posted at the swimming area.
- Obey the lifeguard's instructions
- Do not swim if you cannot see the bottom
- Avoid swimming at night
- Do not push, shove, or run near the water
- Swim a safe distance away from diving boards and slides
- Avoid swimming in river currents
- Always go down feet first in a sitting position on a slide
Swimming pool safety barriers: requirements and guidelines
It is now law in all Australian states that all private swimming pools or spas that can hold a depth of 300 mm or more must have safety barriers around them. Barriers are required for:
- in-ground swimming pools
- above-ground swimming pools
- indoor swimming pools
- bathing and wading pools
*Check with your local council for more details.
Basic Pool Fence Requirements:
- Barriers are a minimum of 1.2 m high
- Barriers are secure and well maintained
- Gates within the barrier are never propped open, swing away from the pool, shut automatically from any open position without having to be forcibly closed, and self-latch when they close
- Barriers have no gaps wider than 100 mm
- Barriers have horizontal bars at least 900 mm apart
- Barriers have a "non-climb zone" to prevent children climbing over fencing into the pool area. This zone is measured in an arc shape from the top of the pool fence arching towards the ground
- Doors within the barrier must self-close without the application of manual force, self-latch, and require manual release. The latching device must be at least 150 cm off the ground, and the door must not open towards the pool
- Windows that form part of a pool barrier must have a locking device or a security screen fixed to the building that prevents them from opening more than 10 cm
- Signage around the pool must show the applicable DR ABC requirements and be in good condition and able to be read easily from 3 metres
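The numeric fence requirements above lend themselves to a quick self-check. The sketch below encodes only the figures quoted in this guide; it is illustrative and is no substitute for checking with your local council or an inspector:

```python
def check_pool_fence(height_m, gap_mm, bar_spacing_mm, latch_height_cm):
    """Flag any of the quoted fence requirements that are not met."""
    issues = []
    if height_m < 1.2:
        issues.append("barrier must be at least 1.2 m high")
    if gap_mm > 100:
        issues.append("gaps must be no wider than 100 mm")
    if bar_spacing_mm < 900:
        issues.append("horizontal bars must be at least 900 mm apart")
    if latch_height_cm < 150:
        issues.append("latching device must be at least 150 cm off the ground")
    return issues

# A fence meeting every quoted figure produces no issues.
print(check_pool_fence(1.2, 90, 900, 150))  # → []
```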
*Please note that this is a guide only, and that further requirements or legislation may be applicable to different pools.
For more information, we have a number of other helpful checklists below:
- NSW Swimming Pool Register
- Pool Inspection Self Assessment Checklists
- Information from NSW Fair Trading about Pool Safety
- Information Factsheet for Sellers
- Information Factsheet for Buyers
- Information Factsheet for Landlords and Tenants
- Information Factsheet for Real Estate Agents
- Royal Life Saving Australia Pool Safety Checklist
Understanding the Basics of Marketing Marketing is a crucial aspect of business, encompassing various strategies and techniques to promote, advertise, and sell products or services. It is the art of effectively communicating the value and benefits of a product or service to potential customers. Marketing involves market research, segmentation, targeting, positioning, and the implementation of the marketing mix, which includes the product, price, place, and promotion. Let’s explore the key concepts and innovations in marketing that are shaping the industry today. Data-Driven Marketing and Artificial Intelligence In recent years, marketing has evolved significantly with the emergence of data-driven marketing and artificial intelligence (AI). Data-driven marketing involves leveraging customer insights and data analytics to make informed marketing decisions. By analyzing customer behavior, preferences, and purchasing patterns, businesses can tailor their marketing efforts to target specific audiences and maximize their return on investment. AI plays a crucial role in data-driven marketing, enabling businesses to automate processes, personalize customer experiences, and gain valuable insights from vast amounts of data. One of the latest innovations in AI-driven marketing is the use of chatbots. These intelligent virtual assistants use natural language processing and machine learning to engage with customers, answer their queries, and provide personalized recommendations. Chatbots not only enhance the customer experience but also help businesses save time and resources by automating customer support and lead generation. Influencer Marketing and Social Media Social media has revolutionized the way businesses connect with their customers. Platforms like Instagram, YouTube, and TikTok have given rise to a new form of marketing known as influencer marketing. 
Influencer marketing involves collaborating with social media influencers who have a large following and a niche audience. These influencers promote products or services through sponsored content, reviews, or endorsements, leveraging their credibility and influence to reach a wider audience. One of the key benefits of influencer marketing is its ability to target specific demographics and target markets. By selecting influencers who align with their brand values and target audience, businesses can effectively communicate their message and increase brand awareness. Moreover, influencer marketing allows for more authentic and relatable advertising, as influencers often share their personal experiences and opinions about the products or services they promote. Personalization and Customer Journey Mapping Personalization has become a fundamental aspect of modern marketing. With advances in technology and access to customer data, businesses can create personalized experiences and tailor their marketing efforts to individual customers. Personalization goes beyond addressing customers by their first name; it involves understanding their needs, preferences, and behaviors to deliver relevant and engaging content. Customer journey mapping is a technique used to understand the customer’s interactions with a brand throughout their buying journey. By mapping out the touchpoints, emotions, and pain points of the customer’s journey, businesses can identify opportunities to enhance the customer experience and drive customer loyalty. This includes personalizing content, offering recommendations based on past purchases, and delivering targeted advertisements at various stages of the customer journey. Marketing is an ever-evolving field that continues to be shaped by technological advancements and changing consumer behaviors. 
Data-driven marketing, influencer marketing, personalization, and customer journey mapping are just a few of the innovative strategies and techniques that are driving marketing success today. As businesses strive to stay ahead in a competitive landscape, understanding and embracing these innovations will be crucial to their marketing success.
Questions about example sentences with, and the definition and usage of, "Comineza"
Translations of "Comineza": You shook my days quickly, opening through the window, a landscape that was going to color. With a sigh, you steal my heart beat.
The results are presented of drained triaxial tests on a weakly bonded artificial soil in which the stress path direction has been changed partway through the shearing process. The effects of the previous shearing path history on the yield and failure surfaces are examined. Yield of the bonds occurs under each stress path direction followed, even when yield has previously occurred along another path. This demonstrates that bond breakdown is an anisotropic process. The position of yield was found to be independent of the previous shearing path history of the soil and occurred at points that corresponded to a yield surface defined for the current shearing path direction. However, the previous shearing path history of the soil did significantly affect the failure envelope. It is suggested that the bond yield surface is kinematic, in the sense that it is an expandable/shrinkable surface, but that it is not a moveable surface. It is postulated that the yield surface expands when volumetric strains are compressive and shrinks when the volumetric strains are dilatant.
AFAIK people like scripting because it's easier, and it makes the development cycle faster. But what about when games are shipped? No one expects shipped binaries to be changed, unless there is an update or a DLC. So why don't developers compile their script files before shipping?
- Faster run time
- Obfuscates the binary a little more
I can't think of any possible drawbacks.
- Civilization 5 had lots of Lua scripts inside its assets folder.
- Angry Birds also had a lot of those scripts alongside the binary.
- Age of Mythology had lots of xs scripts for its AI.
- Civilization 4 had lots of Python scripts.
I'm pretty sure the list could have gotten a lot longer if I had more games installed on my device.
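Precompiling is usually a one-liner in the scripting language's own toolchain. As an illustration, Python (the language Civilization 4 scripted in) ships a bytecode compiler in its standard library; the "game script" below is a made-up example:

```python
import os
import py_compile
import tempfile

# Write a tiny hypothetical game script, then compile it to bytecode,
# the way a build step could before shipping.
script = os.path.join(tempfile.mkdtemp(), "ai_logic.py")
with open(script, "w") as f:
    f.write("def aggression_level(turn):\n    return min(turn // 10, 5)\n")

compiled = py_compile.compile(script, cfile=script + "c")
print(os.path.exists(compiled))  # → True
```

Lua's `luac` plays the same role for Lua scripts. Note, though, that bytecode is trivially decompiled, so shipping it obfuscates only lightly, while plain scripts keep modding and post-release patching easy — which may be part of the answer to the question.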
Fusion Science and Technology / Volume 41 / Number 3P2 / May 2002 / Pages 981-987
Purification and Chemical Process / Proceedings of the Sixth International Conference on Tritium Science and Technology, Tsukuba, Japan, November 12-16, 2001
A method of decomposing hydrogen compounds was developed by employing a zirconium nickel (ZrNi) alloy. This method enables all tritium compounds (HTO, CH3T, C2H5T, etc.) in an exhaust gas to be decomposed into their respective elements, and the tritium itself to be removed in the form of hydrogen gas (HT). The method was developed through a series of experiments using methane. Using previous study results, a chemical reaction equation of methane decomposition on a ZrNi alloy is proposed and discussed. To ascertain the mechanism of methane decomposition on a ZrNi alloy, alloy samples were examined using X-ray diffraction spectra and SEM electron micrographs before, during, and after the experiments. It was found that, as the decomposition time elapsed, peaks attributed to a pure ZrNi alloy gradually disappeared and new ones appeared in the X-ray spectra. The new peaks were attributed to the presence of ZrC, pure Ni, and a simple carbon substance. This indicates that the Zr in a carbon-bound alloy results in ZrC generation that releases Ni metal, and that part of the C generated from the methane decomposition remains as a simple, as-grown substance. From these results, the decomposition reaction of methane using a ZrNi alloy can be represented by an equation involving the alpha value. The equation shows that one ZrNi molecule decomposes (1+α) molecules of methane and generates 2(1+α) molecules of hydrogen. The alpha value was estimated based on the volume of decomposed methane and the weight of the ZrNi alloy used in the experiments. It is known that the alpha value is strongly dependent on the experimental conditions and can be used as an index to evaluate the decomposition condition.
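The abstract's relation — one ZrNi molecule decomposing (1+α) CH4 molecules and generating 2(1+α) H2 molecules — makes α easy to estimate from the decomposed-gas volume and the alloy mass, as the authors describe. The sketch below uses made-up experimental inputs purely for illustration; the molar masses and ideal-gas molar volume are standard values, but the run numbers are not from the paper:

```python
# n_CH4 / n_ZrNi = 1 + alpha, so alpha = n_CH4 / n_ZrNi - 1.
MOLAR_VOLUME_STP = 22.414       # L/mol, ideal gas at STP
M_ZRNI = 91.224 + 58.693        # g/mol (Zr + Ni)

def estimate_alpha(decomposed_ch4_litres, zrni_grams):
    n_ch4 = decomposed_ch4_litres / MOLAR_VOLUME_STP
    n_zrni = zrni_grams / M_ZRNI
    return n_ch4 / n_zrni - 1.0

# Hypothetical run: 10 L of CH4 decomposed over 30 g of ZrNi alloy.
alpha = estimate_alpha(10.0, 30.0)
print(round(alpha, 2))  # → 1.23
```

As a sanity check, 2(1+α) H2 per ZrNi works out to exactly two H2 per CH4, consistent with CH4 giving up all four of its hydrogens.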
A workshop dedicated to the Theoretical Virtual Observatory will take place at IAP on April 5-6th. The goal is to bring together experts of the Virtual Observatory and theoreticians who would like to make the results of their simulations (e.g. databases or catalogs) or numerical codes available to the worldwide astronomical community. This "grand challenge" project features the largest N-body simulation ever performed: we have simulated the 13.7 Gyr long evolution of 4096^3 dark matter particles in a 2 Gpc/h periodic box, running the RAMSES code for a couple of months on the 6144 processors of the new BULL supercomputer of the CEA supercomputing center (Centre de Calcul Recherche et Technologie). The initial displacement field was computed on a 4096^3 base grid using the MPgrafic package. We use as initial condition parameters the best-fit values inferred from the 3-year observations of the WMAP satellite. The 70 billion particles were evolved using the Particle Mesh scheme of the RAMSES code on an adaptively refined grid (AMR) with more than 140 billion cells. We use more than 10 000 fine time steps at the deepest refinement level and 737 time steps at the base level. Each of the 70 billion cells of the base grid was recursively refined with up to 6 additional levels of refinement. We reached a formal resolution of 262144 cells in each direction (roughly 7 kpc/h comoving). For the first time, we have performed a simulation of half the observable universe, with enough resolution to describe a Milky Way-like galaxy with more than 100 dark matter particles. Our goal is to generate a full-sky mock catalog with a realistic galaxy distribution up to redshift 1, as well as a deeper catalog covering 500 square degrees up to redshift 7. In this way, we are definitely approaching the scale of the cosmological horizon, which is the hard limit of the observable universe (hence the name of the project).
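The headline numbers above are easy to verify from the quoted grid sizes: 4096^3 particles in a 2 Gpc/h box refined to 262144 cells per side. A quick check:

```python
# 4096^3 dark matter particles -- the "70 billion" quoted in the text.
particles = 4096 ** 3
print(particles)              # → 68719476736, i.e. about 6.9e10

# Formal resolution: 2 Gpc/h box divided into 262144 cells per side.
box_kpc_h = 2.0e6             # 2 Gpc/h expressed in kpc/h
cell_kpc_h = box_kpc_h / 262144
print(round(cell_kpc_h, 2))   # → 7.63 kpc/h comoving, the "roughly 7 kpc/h"

# 262144 = 4096 * 2**6: the base grid refined by 6 additional levels.
print(262144 == 4096 * 2 ** 6)  # → True
```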
The following people in the Horizon Project have spent nights and days monitoring and post-processing this huge simulation: D. Aubert, S. Colombi, J. Devriendt, P. Ocvirk, C. Pichon, S. Prunet, R. Teyssier. We now describe the results of this simulation in more detail.
Here is a movie describing the simulation results at the final epoch. For higher resolution animations, please contact us. A spectacular zoomable unfolded image is also available here. It represents an oblique slice through the simulation at z=0, 16x16 Gpc/h across.
A Mollweide representation (a projection of the sphere which attempts to display the whole sphere on a plane) of the simulation at redshift 1 is shown here: It assumes that the observer views all the night sky around him but is only sensitive to the large scale structures of the universe at a distance of roughly 5 million light years. Another partial projection is shown there:
A thick slice of the simulation at z=0 is shown here: And a thinner slice. A VERY LARGE (97 MB) image in 8192x8192 pixels is available here. Note that this image corresponds only to one level of refinement (another 5 levels are available).
A composite image displays all scales probed at the initial resolution of the simulation (without refinement). The outer region corresponds to a view of the universe on scales of 16 h^-1 Gpc: it is generated by unfolding the simulation while cutting a slice obliquely through the cube in order to preserve the continuity of the field (thanks to the periodicity). See also below. The intermediate region corresponds to a slice of 2 h^-1 Gpc, while the inner region is at the original resolution of the initial conditions. RAMSES has refined 6 times over the course of the run from that resolution. A redshift slice through the Horizon-4pi universe is shown here, while a 3D view of its filaments is shown here:
A VERY LARGE (62 MB) image of the unfolded universe (16x16 Gpc/h) in 8192x8192 pixels is available here.
Note that this image is generated from a downgraded version of the initial cube. Another view of this cube is shown here: while the skeleton of the structures within this cube is shown there:
Scientific Rationale: Full-sky weak lensing
A map of kappa (the weak lensing signal) is archived here (IT'S QUITE LARGE!). This image is a projection onto the sky of the density of the dark matter.
Author: Joan DeJean
Started: October 20, 2018
Finished: November 12, 2018
First Sentence: What makes a city great?
Summary: [From BN] At the beginning of the seventeenth century, Paris was known for isolated monuments but had not yet put its brand on urban space. Like other European cities, it was still emerging from its medieval past. But in a mere century Paris would be transformed into the modern and mythic city we know today. Though most people associate the signature characteristics of Paris with the public works of the nineteenth century, Joan DeJean demonstrates that the Parisian model for urban space was in fact invented two centuries earlier, when the first complete design for the French capital was drawn up and implemented. As a result, Paris saw many changes. It became the first city to tear down its fortifications, inviting people in rather than keeping them out. Parisian urban planning showcased new kinds of streets, including the original boulevard, as well as public parks and the earliest sidewalks and bridges without houses. Venues opened for urban entertainment of all kinds, from opera and ballet to a pastime invented in Paris, recreational shopping. Parisians enjoyed the earliest public transportation and street lighting, and Paris became Europe's first great walking city. A century of planned development made Paris both beautiful and exciting. It gave people reasons to be out in public as never before and as nowhere else. And it gave Paris its modern identity as a place that people dreamed of seeing. By 1700, Paris had become the capital that would revolutionize our conception of the city and of urban life.
Thoughts: When I think of reading a book about the history of Paris, I assume I am in for a long slog of a read. An interesting read, but a slog of a read. That was not the case with DeJean's book. The text is lively and packed with interesting information that moves along at a surprisingly quick clip.
While this book does not provide a comprehensive history of the city, it gives the highlights that best illustrate how this old town became a modern city. Each chapter of DeJean's book is devoted to a different aspect of Paris' history and urban evolution. She covers everything from shopping and street lights to bridges and public transportation. Each chapter discusses the role that characteristic played in developing Paris as a city and the influence such change had elsewhere. DeJean covers how each part influenced the city as place, the city as population, and the city as cultural and revolutionary history. She offers fascinating insights into design decisions, revolutionary activity, and art and architecture. As the book progresses, it's easy to understand how individual decisions of architecture and innovation came to influence the city as we know it today. This is a well-rounded text that deeply explores Paris as an evolving urban center. DeJean's text is rich in description and stories, and it reads easily while showcasing the important role Paris plays in creating the modern city.
Rating: 8/10 [Terrific]
Generic text embeddings are successfully used in a variety of tasks. However, they are often learnt by capturing the co-occurrence structure from pure text corpora, resulting in limitations of their ability to generalize. In this paper, we explore models that incorporate visual information into the text representation. Based on comprehensive ablation studies, we propose a conceptually simple, yet well performing architecture. It outperforms previous multimodal approaches on a set of well established benchmarks. We also improve the state-of-the-art results for image-related text datasets, using orders of magnitude less data.
Thermodynamics is the study of the transfer of heat energy. There are three main laws of thermodynamics, the second being frequently brought into arguments against evolution.
First law: In any process, the total energy of the universe remains constant.
Second law: There is no process that, operating in a cycle, produces no other effect than the subtraction of a positive amount of heat from a reservoir and the production of an equal amount of work. Equivalently, the entropy of an isolated macroscopic system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium.
Third law: As temperature approaches absolute zero, the entropy of a system approaches a constant.
The laws of thermodynamics have also been described in a tongue-in-cheek way as:
- You can't win.
- You can't break even.
- And you can't get out of the game.
Alternatively, thermodynamics is understood to give time a direction. Without time going forward, the universe would be a very strange place, where things would spontaneously un-break, un-burn, un-fall, etc.
Argument for God from the 2nd law
- Main Article: Argument from the second law of thermodynamics
Creationists sometimes claim that evolution contradicts the second law of thermodynamics. This is usually due to a misunderstanding of what the second law actually says, or else due to the false assumption that the Earth is an isolated (closed) system.
Argument for God from the 1st law
- Main Article: Argument from conservation of energy
Energy conservation means the total amount of energy in the universe is fixed. Therefore, the argument goes, the energy requires an external cause, i.e. God.
Argument for a finite age of the universe from the 2nd law
The amount of usable energy is decreasing. It cannot have been decreasing indefinitely, so the universe had a beginning.
Fine tuning of entropy
The initial conditions of the universe were low-entropy. This requires a cause. Natural processes would violate the second law of thermodynamics. Therefore, God.
This is a variant of the fine-tuning argument.
- "Only the creator of the second law of thermodynamics could violate the second law of thermodynamics, and create energy in a state of availability in the first place."
There is also a rough thermodynamic argument against the existence of an orderly God that could create the universe.
References:
- Entropy (arrow of time), Wikipedia
- John M. Cimbala, Does the Second Law of Thermodynamics Prove the Existence of God?, 2000
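The entropy form of the second law discussed above can be made concrete with the classic two-reservoir calculation: heat Q flowing from a hot reservoir at temperature Th to a cold one at Tc changes total entropy by ΔS = Q/Tc − Q/Th, which is positive whenever Th > Tc. A small sketch:

```python
def entropy_change(q_joules, t_hot_k, t_cold_k):
    """Total entropy change when heat Q flows from a hot to a cold reservoir."""
    return q_joules / t_cold_k - q_joules / t_hot_k

# 100 J flowing from a 400 K reservoir to a 300 K one:
ds = entropy_change(100.0, 400.0, 300.0)
print(round(ds, 4))  # → 0.0833 J/K: positive, as the second law demands
```

Running the heat the other way (cold to hot) would give a negative total ΔS, which is exactly what the second law forbids for a spontaneous process.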
Our research focuses on understanding how relevant oncogenic events can transform neural stem cells (NSCs) and oligodendrocyte progenitor cells (OPCs) to generate distinct types of childhood and adult gliomas. In human gliomas, aggressive therapy leaves behind subpopulations of tumor cells displaying properties of NSCs or OPCs, suggesting a lineage relationship between the cell of origin and therapy-resistant tumor cells. Recent publications confirm this relationship in genetically engineered murine models (GEMM) of glioma. Other findings suggest that radiotherapy and changes in the tumor microenvironment can drive stemness and a proneural-mesenchymal transition in murine and human tumors.
Major goals: (i) Develop GEMM of glioma using relevant oncogenic events to identify the initial steps that transform NSCs and OPCs in a temporal and regional fashion. (ii) Identify druggable targets that drive stemness in glioma. (iii) Study whether interstitial fluid pressure (IFP), myeloid cells, and hypoxia regulate tumor growth and relapse in glioma.
Development of IDH1R132H, H3F3A, and PDGF-driven mutant glioma models
Human gliomas displaying mutations in the isocitrate dehydrogenase 1/2 (IDH1/2) genes are diagnosed in young adults. The vast majority of these tumors have R132H mutations in the IDH1 gene and are characterized by a hypermethylated phenotype and a better prognosis in patients. Among high-grade glioblastomas (GBM), H3F3A (K27) and H3F3A (G34) tumors are found in young children and adolescent patients, respectively. Human GBMs expressing platelet-derived growth factor receptor A (PDGFRA) are diagnosed both in younger patients and in adults. Interestingly, IDH1R132H and H3F3A (G34) human tumors are often localized to the frontal cortex, a region enriched for OPCs rather than NSCs. In contrast, H3F3A (K27) tumors are localized to the midline of the brain, in the thalamus and pons/medulla.
Using GEMM of glioma, we are currently identifying the cell of origin and the first transforming steps observed in developing IDH1R132H, H3F3A mutant, and PDGFB-driven tumors.
Targeting stemness in glioblastoma
Pre-clinical experiments show that radiotherapy and treatment with temozolomide enrich for highly tumorigenic and stem-like tumor cells in human GBMs. One major obstacle to studying these TPCs in glioma is the absence of robust markers. We are developing density gradient protocols that will allow a more unbiased isolation of TPCs and further profiling of markers and signaling pathways using Affymetrix microRNA arrays. Candidate markers and druggable signaling pathways are then validated using patient biopsies and xenograft models. Other projects study the mechanisms regulating dormancy of TPCs and how miRNAs regulate stemness in glioma.
Influence of the tumor microenvironment on tumor growth and proneural-mesenchymal transition in glioblastoma
Increasing IFP during tumor progression is a major obstacle in glioma. High IFP reduces drug uptake in solid tumors. It is still unknown how high IFP regulates tumor growth in glioma. We have developed an in vivo model that allows us to accurately measure IFP in intracranial xenografts of human GBMs. We have identified a factor that effectively reduces IFP in GBMs, leading to massive apoptosis, depleted cell proliferation, and vascular reorganization. Another project studies whether radiotherapy-induced changes in mechanosensing, infiltration of myeloid cells, and hypoxia contribute to tumor regrowth following radiotherapy in human xenografts.
0.9356
FineWeb
Bio-hacking is all the rage. Just like a lot of you, I've been adding grass-fed butter to my coffee and taking supplements to make my mind more focused, increase energy, and boost work performance. Through my own experimentation I can vouch that many of these methods work, but for centuries we've known (through science and ancient practices) that meditation is probably the most beneficial life/bio-hack available. So where do you start if you have no idea how to meditate, or have tried but feel like you just can't do it? 1. Take a walk Walking meditation has been around for centuries and is practiced in many traditions. It's best when done outdoors and with intention. If possible, get yourself into nature, but an urban walk will do just fine. Bring your attention to the ground beneath your feet, the sky, and the people/sights around you. Keep your mind focused on your steps and breath. This is the simplest and quickest form of meditation I've discovered, and moving your body helps to keep your energy flowing and prevents stagnation of all kinds. 2. Take a bath Einstein famously said he had some of his biggest breakthroughs in the bathtub. When you are in a relaxed state of mind, your brain will most likely give you some of your most connected, creative and possibly genius ideas. Focus on the warm water, clear your mind and simply relax. Some people like to chant in the bathtub or even say a prayer! Do whatever works to help you unwind. 3. Go for a drive Osho was famous for practicing what he called driving meditation. You have to be attentive while driving. Focus on the road, your hands on the wheel, and the pedals. If possible, find somewhere you can drive for a long distance without stopping (preferably in nature, but a highway will do). Keep the radio off and eliminate as many distractions as you can. I've experienced some of my biggest "a-ha" moments while making long road trips.
This can also be practiced as a passenger, as long as whoever is driving is willing to participate (and remain alert on the road!). 4. Say "thank you" Gratitude is one of the easiest ways to shift your energy. If you find yourself disturbed or out of balance, simply say, "thank you." You can do it inwardly, or out loud if it's appropriate. You can take the practice further by naming things you're thankful for. They can be as simple as "thank you for the delicious egg" you had for breakfast or "thank you to the friend who took the time to text" that morning. Stay focused on being thankful as long as you can, even if you can only think of small or seemingly silly things--I've discovered there is always something I can find to say "thank you" about, even when I'm in the worst of moods. 5. Listen to a guided meditation Guided meditations were the first way I learned to meditate. They literally walk you through the process. It's best to try a few methods (and, frankly, voices). You'll soon discover yourself immersed in a meditation process and won't even notice when 20 or 30 minutes go by. This is one of my favorite resources to access a free guided meditation to get you started. There are thousands of others available online and through streaming devices. 6. Set a timer for 2 minutes I think most people believe meditation only works if you do it for long periods of time. While that's amazing, a 2-minute meditation is a great place to start. Meditation isn't an endurance sport, so don't view it as such. Start small and gradually work towards longer increments. If you can seriously meditate 2-5 minutes every day, you'll reap many benefits.
0.5813
FineWeb
na mə nəl 1. in name alone. example: The former colony was given nominal independence but had no real government of its own. example: He’s a nominal member of the church. example: As nominal governor of the state, she had no real power. 2. small or unimportant. example: Membership requires only a nominal contribution. 3. of, denoting, like, having, or providing a name. 4. in grammar, of or involving nouns or noun phrases.
0.9979
FineWeb
Our 2-year-old son likes reading it. We were, however, hoping he would agree to a haircut without much fuss after reading this book; while he reads it and agrees with it, that next step is still missing. Worth a try though, as it has at least toned down the loudness of his protest. - How did the image on site compare with the actual product? Accurate. - How accurate was the on site description of the product? Accurate - Please tell us about the quality of the product. Quality product, as can be expected for something carrying the Blues Clues label. - Would you recommend this to a friend? Yes
0.847
FineWeb
Raz - Kids Click on the link below to work on your Raz - Kids assignment. Raz - Kids Directions 1. Click on the Raz-Kids link (above). 2. Click on your name and sign in using the password that Mrs. Senecal gave you. 3. Click on Reading Assignments. 4. Choose 1 book by clicking on it. You may choose to listen to the book (ear symbol) or read it yourself (book symbol). 5. Complete the reading and question activity for the book you chose. 6. Sign out when finished.
0.9909
FineWeb
Presentation on theme: "Gallery Walk Take your graphic notes and pen/pencil with you."— Presentation transcript: 1 Gallery Walk Take your graphic notes and pen/pencil with you. Fill out the “Approaches to Psychology” section of your graphic notes. You will have to summarize. If you see any underlined words, you must define them in the “additional notes on previous page” section. Feel free to write down any questions/vocabulary that you encounter. 2 BIOLOGICAL PSYCHOLOGY Biopsychology is an interdisciplinary approach linking the perspectives and techniques of biology and psychology to understand interactions between mind/body, environment, and behavior. Biopsychology is a rapidly expanding discipline with exciting advances in areas such as psychoneuroimmunology (the exploration of brain, behavior, and immune function) and behavioral genetics (the exploration of genetic and environmental effects on behavior, personality, and mood). Famous psychologist(s): John Pinel. 3 BEHAVIORAL PSYCHOLOGY Behaviorism is primarily concerned with observable behavior, as opposed to internal events like thinking and emotion. Observable (i.e. external) behavior can be objectively and scientifically measured. Internal events, such as thinking, should be explained in behavioral terms (or eliminated altogether). All behavior is learnt from the environment. We learn new behavior through classical or operant conditioning. Behavior is the result of stimulus – response (i.e. all behavior, no matter how complex, can be reduced to a simple stimulus – response association). Famous psychologist(s): John Watson, B.F. Skinner. 4 PSYCHOANALYTIC PSYCHOLOGY Psychoanalysis is both a theory of how the mind works and a treatment. It is based on the belief in the primacy of unconscious fantasy, sexual desires (libido, penis envy, the Oedipal complex), and dreams. It also examines such basic mental maneuvers as transference, projection, and defensiveness, and demonstrates how they distort our functioning.
All of this is processed at the unconscious level and usually without the awareness of the individual. The psychoanalytic treatment method includes extended self-exploration with a trained therapist. Famous psychologist(s): Sigmund Freud, Carl Jung. 5 EDUCATIONAL PSYCHOLOGY Educational psychology involves the study of how people learn, including topics such as student outcomes, the instructional process, individual differences in learning, gifted learners and learning disabilities. This branch of psychology involves not just the learning process of early childhood and adolescence, but includes the social, emotional and cognitive processes that are involved in learning throughout the entire lifespan. The field of educational psychology incorporates a number of other disciplines, including developmental psychology, behavioral psychology and cognitive psychology. Famous psychologist(s): Alfred Binet, John Dewey. 6 DEVELOPMENTAL PSYCHOLOGY Developmental psychologists study the human growth and development that occurs throughout the entire lifespan. This includes not only physical development, but also cognitive, social, intellectual, perceptual, personality and emotional growth. Some of the tasks that a developmental psychologist might do include: Evaluating children to determine if they have a developmental disability. Investigating how language skills are acquired. Studying how moral reasoning develops in children. Exploring ways to help elderly individuals remain independent. Famous psychologist(s): Lawrence Kohlberg, James Mark Baldwin. 7 SOCIOCULTURAL PSYCHOLOGY The work of sociocultural theory is to explain how individual mental functioning is related to cultural, institutional, and historical context; hence, the focus of the sociocultural perspective is on the roles that participation in social interactions and culturally organized activities play in influencing psychological development.
Sociocultural theory is an emerging theory in psychology that looks at the important contributions that society makes to individual development. This theory stresses the interaction between developing people and the culture in which they live. Famous psychologist(s): Lev Vygotsky, Philip Zimbardo. 8 COGNITIVE PSYCHOLOGY Cognitive psychology revolves around the notion that if we want to know what makes people tick then we need to understand the internal processes of their mind. Cognitive psychology focuses on the way humans process information, looking at how we treat information that comes in to the person (what behaviorists would call stimuli), and how this treatment leads to responses. In other words, they are interested in the variables that mediate between stimulus/input and response/output. Cognitive psychologists study internal processes including perception, attention, language, memory and thinking. Famous psychologist(s): Jean Piaget, Ulric Neisser. 9 INDUSTRIAL/ORGANIZATIONAL PSYCHOLOGY Industrial organizational psychology is the branch of psychology that applies psychological theories and principles to organizations. Often referred to as I-O psychology, this field focuses on increasing workplace productivity and related issues such as the physical and mental well-being of employees. Industrial organizational psychologists perform a wide variety of tasks, including studying worker attitudes and behavior, evaluating companies, and conducting leadership training. The overall goal of this field is to study and understand human behavior in the workplace. Famous psychologist(s): Elton Mayo, Robert Yerkes. 10 EVOLUTIONARY PSYCHOLOGY The goal of research in evolutionary psychology is to discover and understand the design of the human mind. Evolutionary psychology is an approach to psychology in which knowledge and principles from evolutionary biology are put to use in research on the structure of the human mind.
In this view, the mind is a set of information-processing machines that were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors. This way of thinking about the brain, mind, and behavior is changing how scientists approach old topics, and opening up new ones. Famous psychologist(s): Leda Cosmides, John Tooby. 11 HUMANISTIC PSYCHOLOGY Humanism is a psychological approach that emphasizes the study of the whole person. Humanistic psychologists look at human behavior not only through the eyes of the observer, but through the eyes of the person doing the behaving. Humanistic psychologists believe that an individual's behavior is connected to their inner feelings and self-concept. Humanists rejected behaviorism, which was considered deterministic, with too much emphasis given to stimulus-response patterns. They also rejected psychoanalysis because it is also deterministic, with unconscious irrational and instinctive forces determining human thought and behavior. Famous psychologist(s): Carl Rogers, Abraham Maslow. 12 EXPERIMENTAL PSYCHOLOGY Experimental psychology is an area of psychology that utilizes scientific methods to research the mind and behavior. While students are often required to take experimental psychology courses during undergraduate and graduate school, you should really think of this subject as a methodology rather than a singular area within psychology. Many of these techniques are also used by other subfields of psychology to conduct research on everything from childhood development to social issues. Famous psychologist(s): Edwin Boring. 13 PSYCHOMETRICS The branch of psychology that deals with the design, administration, and interpretation of quantitative tests for the measurement of psychological variables such as intelligence, aptitude, and personality traits. Famous psychologist(s): Francis Galton.
14 PERSONALITY PSYCHOLOGY Personality refers to individual differences in characteristic patterns of thinking, feeling and behaving. The study of personality focuses on two broad areas: One is understanding individual differences in particular personality characteristics, such as sociability or irritability. The other is understanding how the various parts of a person come together as a whole. Famous psychologist(s): Gordon Allport, Raymond Cattell.
0.956
FineWeb
Product details of Olympus Voice Recorder WS-853 (Original Malaysia Warranty) The digital voice recorder is powered by two AAA dry-cell batteries (model LR03) or two Olympus nickel-metal hydride rechargeable batteries. Some other features include a built-in stand to prevent surface vibrations and a direct USB computer connection to conveniently save your data and charge the batteries. The recorder displays only the necessary information in Simple Mode and supports beginners by limiting the functions in the menu to those which are frequently used. When recordings contain multiple speakers, the Voice Balancer makes softer voices louder and ensures that louder voices stay below a given threshold. This provides playback where everyone can be heard clearly. In conclusion, users can rely on the WS-853 recorder for over 1,000 hours of recording time, over 100 hours of battery life, file searching capabilities, and a noise-cancellation feature that provides clear playback quality. There is 8GB of internal memory, and there's also a microSDHC slot for memory cards up to 32GB. Specifications of Olympus Voice Recorder WS-853 (Original Malaysia Warranty)
0.635
FineWeb
Predators as Keystone Species Keystone species have a disproportionate impact on the environment. Many predators are keystone species, meaning healthy ecosystems depend on their presence. Just as the keystone at the top of an arch holds all the stones in place, predators maintain a balance that benefits all living things. Read how wolves improve beaver, fish and songbird habitat: Read how wolves help antelope to flourish:
0.998
FineWeb
Last week, Distinguished Engineer at Cypress, Gleb Bahmutov, and Senior Automation Engineer at Ansible, John Hill, presented a live webcast on how Ansible revamped their testing efforts by deleting all of their old tests and starting from scratch with Cypress. Some of the key topics they covered include: - How the team implemented Cypress, and their initial testing strategy for their re-write - How their Cypress usage has evolved over the past two years—including how they run their entire E2E suite in 14 minutes (3 minutes faster than their unit test suite) - How they’ve drastically improved their ratio of escaped bugs, and how they use the Cypress Dashboard to lower triage time and break down barriers between their developers and QA engineers. The questions we received were all fantastic—thanks to all who attended the live broadcast via Zoom, and to those who asked live questions via Slido.
0.8522
FineWeb
Early today FireEye released an analysis of new malware, dubbed TRITON, that was conceived to disrupt operations at a critical infrastructure facility, allegedly located in the Middle East. FireEye responded to the incident and performed the initial analysis. The analysis has determined that the long-term goal of the attacker was to cause a physical consequence. The analysis has also determined that the attacker inadvertently shut down operations, leading to an investigation that uncovered the whole operation.1 This is the fifth publicly known malware tailored to attack operational technology (following Stuxnet, Havex/Dragonfly, BlackEnergy2 and Industroyer). The malware was designed to target Safety Instrumented Systems (SIS) from Schneider Electric (Triconex 3008 processor modules specifically).2 The malicious code is well-structured into functional components, making it a proper “attack framework”. The attacker reverse-engineered the proprietary TriStation protocol to include several functions to send specific commands, such as halting the device or writing and reading memory. The malware comprises a main executable and a set of compiled Python libraries for communicating with the SIS devices. The main executable attempts to disguise itself as legitimate Triconex software for analyzing SIS logs and, once running, loads the new logic into the targeted device(s). The malware does not leverage any previously unknown vulnerability (0-day), yet the amount of time required to assemble the attack framework for such a specific environment suggests the involvement of a well-funded actor (possibly a nation state). The attacker first gained access to an SIS workstation and then deployed malicious code to reprogram the SIS controllers. This indicates that the attacker performed an extensive testing phase, first collecting information about, and then simulating conditions similar to, the targeted environment.
FireEye reports that the attacker made several attempts over a period of time to deliver functioning control logic to the SIS controllers. During one of these attempts, the attacker triggered some SIS controllers to enter a “fail safe” state, which automatically shut down the industrial process for no apparent reason. The asset owner promptly initiated an investigation that led to uncovering the whole operation. This behavior points at a motive beyond causing just a process shutdown, as the attacker could have simply issued a halt command or corrupted the memory of the SIS controllers. Moreover, as the attacker might have already obtained a foothold on the DCS, manipulating or shutting down the process was already possible without having to compromise the safety systems. Similar to previous incidents, a mix of factors allowed the attacker to achieve some success throughout the operation. First, the Triconex SIS controllers feature a physical keyswitch that prevents reprogramming of the logic. However, the targeted controllers were left in “program” mode, instead of “run”, even during normal operations, easing the task for the attacker. Secondly, no specific security measure was apparently in place (whitelisting, network monitoring, etc.), leaving the asset owner totally blind to suspicious system activities, dangerous/hazardous commands, or other network events. Thirdly, albeit not yet confirmed, the SIS network appears not to have been properly isolated, again easing the task for an attacker who could have managed to enter via a remote session (such as those used by engineers for maintenance purposes on the DCS).3 Conclusion and Recommendations While Triconex safety systems are widely deployed in a number of industries, each SIS is uniquely configured, and thus specific knowledge of the process is required. The likelihood of observing this attack at a different asset owner is rather low, unless a significant level of effort and resources are spent.
Well-funded and motivated attackers (like nation-states) have demonstrated yet again that it is possible to penetrate industrial networks and trigger potentially serious consequences. Asset owners need to implement solid change management and audit procedures to slow down attackers and deploy countermeasures, like application whitelisting and ICS network security monitoring, to prevent and detect malicious events and gain an adequate level of visibility into their operational technology networks.
0.8079
FineWeb
When the independence assumption is correct, blind ICA separation of a mixed signal gives very good results. It is also applied, for analysis purposes, to signals that are not supposed to have been generated by mixing. A simple application of ICA is the “cocktail party problem”, where the underlying speech signals are separated from sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. An important point is that if N sources are present, at least N observations (e.g., microphones) are needed to recover the original signals. This constitutes the square case (J = D, where D is the input dimension of the data and J is the dimension of the model). The other cases, underdetermined (J < D) and overdetermined (J > D), have also been investigated. The statistical method finds the independent components (also called factors, latent variables or sources) by maximizing the statistical independence of the estimated components. Non-Gaussianity, motivated by the central limit theorem, is one method for measuring the independence of the components. Non-Gaussianity can be measured, for instance, by kurtosis or by approximations of negentropy. Mutual information is another popular criterion for measuring the statistical independence of signals. Typical algorithms for ICA use centering, whitening (usually with the eigenvalue decomposition), and dimensionality reduction as preprocessing steps in order to simplify and reduce the complexity of the problem for the actual iterative algorithm. Whitening and dimension reduction can be achieved with principal component analysis or singular value decomposition. Whitening ensures that all dimensions are treated equally a priori before the algorithm is run. Well-known algorithms for ICA include infomax, FastICA, and JADE, but there are many others.
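As a toy illustration of the centering and whitening preprocessing steps, here is a minimal pure-Python sketch for two-dimensional data, using the closed-form eigendecomposition of the 2x2 covariance matrix. The function names are ours, and the hand-rolled eigendecomposition is only there to keep the example dependency-free; a real implementation would use a linear-algebra library.

```python
import math

def center(data):
    """Subtract the mean of each of the two dimensions."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    return [(p[0] - mx, p[1] - my) for p in data]

def covariance(data):
    """2x2 sample covariance (1/n convention) of centered 2-D data."""
    n = len(data)
    cxx = sum(p[0] * p[0] for p in data) / n
    cyy = sum(p[1] * p[1] for p in data) / n
    cxy = sum(p[0] * p[1] for p in data) / n
    return cxx, cxy, cyy

def whiten(data):
    """Whiten centered 2-D data: project onto the unit eigenvectors of the
    covariance matrix and rescale each axis by 1/sqrt(eigenvalue)."""
    cxx, cxy, cyy = covariance(data)
    # Eigenvalues of the symmetric 2x2 matrix [[cxx, cxy], [cxy, cyy]].
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(tr * tr / 4 - det)
    l1, l2 = tr / 2 + disc, tr / 2 - disc

    def unit(vx, vy):
        norm = math.hypot(vx, vy)
        return vx / norm, vy / norm

    # (cxy, lambda - cxx) is an eigenvector; assumes cxy != 0 for simplicity.
    e1 = unit(cxy, l1 - cxx)
    e2 = unit(cxy, l2 - cxx)
    out = []
    for x, y in data:
        p1 = (x * e1[0] + y * e1[1]) / math.sqrt(l1)  # project and rescale
        p2 = (x * e2[0] + y * e2[1]) / math.sqrt(l2)
        out.append((p1, p2))
    return out
```

After `whiten(center(...))`, the covariance of the result is the identity matrix, which is exactly the "all dimensions treated equally a priori" property mentioned above.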
Most ICA methods are not able to extract the actual number of source signals, the order of the source signals, nor the signs or the scales of the sources. ICA is important to blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent. Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case. The data are represented by the random vector $x = (x_1, \ldots, x_m)^T$ and the components as the random vector $s = (s_1, \ldots, s_n)^T$. The task is to transform the observed data $x$, using a linear static transformation $W$ as $s = Wx$, into maximally independent components $s$ measured by some function $F(s_1, \ldots, s_n)$ of independence. The components $x_i$ of the observed random vector $x$ are generated as a sum of the independent components $s_k$, $k = 1, \ldots, n$: $x_i = a_{i,1} s_1 + \cdots + a_{i,n} s_n$, weighted by the mixing weights $a_{i,k}$. The same generative model can be written in vectorial form as $x = \sum_{k=1}^{n} s_k a_k$, where the observed random vector $x$ is represented by the basis vectors $a_k = (a_{1,k}, \ldots, a_{m,k})^T$. The basis vectors $a_k$ form the columns of the mixing matrix $A = (a_1, \ldots, a_n)$, and the generative formula can be written as $x = As$, where $s = (s_1, \ldots, s_n)^T$. Given the model and realizations (samples) $x_1, \ldots, x_N$ of the random vector $x$, the task is to estimate both the mixing matrix $A$ and the sources $s$. This is done by adaptively calculating the $w$ vectors and setting up a cost function which either maximizes the non-Gaussianity of the calculated $s_k = w^T x$ or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function. The original sources $s$ can be recovered by multiplying the observed signals $x$ with the inverse of the mixing matrix $W = A^{-1}$, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square ($n = m$).
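To make the generative model concrete, here is a small pure-Python sketch: two toy source signals are mixed by a square 2x2 matrix A (x = As), and then recovered by applying the unmixing matrix W, the inverse of A. The matrix and signals are invented for illustration; in practice A is unknown and W must be estimated, e.g. by maximizing the non-Gaussianity of w^T x as described above.

```python
def mat_vec(m, v):
    """Multiply a 2x2 matrix (tuple of row tuples) by a 2-vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def inverse_2x2(m):
    """Inverse of a 2x2 matrix (assumes it is non-singular)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((m[1][1] / det, -m[0][1] / det),
            (-m[1][0] / det, m[0][0] / det))

# Hypothetical square mixing matrix A (n = m = 2).
A = ((2.0, 1.0),
     (1.0, 1.0))

# Two toy "independent" sources sampled at four time steps.
sources = [(1.0, 0.0), (0.0, 1.0), (-1.0, 1.0), (0.5, -0.5)]

# Generative model: each observation is x = A s.
observations = [mat_vec(A, s) for s in sources]

# Unmixing matrix W = A^{-1} recovers the sources: s = W x.
W = inverse_2x2(A)
recovered = [mat_vec(W, x) for x in observations]
```

The point here is only the algebra of mixing and unmixing in the square case; estimating W from observations alone is the hard part that algorithms such as FastICA address.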
If the number of basis vectors is greater than the dimensionality of the observed vectors, $n > m$, the task is overcomplete but is still solvable with the pseudo-inverse.
0.9906
FineWeb
I have a design question. If I enlist all Entity Bean gets and sets within the same tx, there won't be ejbLoad and ejbStore callbacks. So is it a good choice to always demarcate all gets and sets within the same transaction? Anyway, I don't understand why ejbStore isn't called if I do setXXX within the same transaction. Is it an optimization of some containers (WLS)? I'm not sure what the exact issue you are facing is. Assuming you are using a session bean facade, it should enforce transactional control (new, use existing, etc.) over the use case in question. Your Entity Beans would then be deployed using "Transaction Required". This way, the first time an EB "joins" the transaction, its ejbLoad() will be called. Now, while you are within the transaction, getXXX() and setXXX() calls do not need to persist information in the DB. Your changes are visible to all other components in the same transaction. When the transaction commits, ejbStore() will be called on your EBs. Of course, when you have multiple EBs, the order in which ejbStore() is called on each EB is not defined. This can cause its own set of problems, but I digress.
0.7053
FineWeb
Negotiating your identity as a woman in the LDS Church By: Stephen Marsh I have a hobby in ADR (alternative dispute resolution) research and writing. Recently, mixed in with materials on normal negotiation and teaching negotiation, was an essay on negotiating your identity, written from a feminist legal approach. The article started with the fact that one result of traditional gender roles and stereotypes is that “likeable/good looking” and “competent” are considered exclusive of each other for women. If you are good looking or likeable, you are “not competent.” If you are competent, you are by definition not likeable. The professors provided a list of tools for negotiating your public identity if you are female and want to be seen as both. The tools apply to women anywhere they deal with gender stereotypes. [If you want the entire essay, footnotes, and their conclusions instead of my thoughts based on same, look up Negotiating your Public Identity: Women's Path to Power by Tinsley, Chedelin, Schneider & Amanatulla in Rethinking Negotiation Teaching.] You have several choices. - Work within stereotypes. - reframe what you do as fitting within the stereotypes. For example, when you negotiate or present positions, push on behalf of groups or community. Act on behalf of groups you are nurturing or responsible for. - use softer methods. - Minimize the activation of stereotypes. - Act as part of a diverse team. Diverse teams cause people not to revert to stereotyping. - Appeal to common goals. - Take on roles that are seen as outside of the stereotypes (e.g. female lawyers are not evaluated or reacted to as “female” but rather as “lawyer,” so the stereotype that goes with “female” is often not applied to female lawyers). - Renegotiate your identity. - Create a coherent alternative story to the stereotype that applies to you. - Break out of stereotypes by adding complexity to your story or by preventing alternative narratives. - Network to activate others and use their social capital.
- Destabilize stories and narratives that revert to stereotype. Interestingly enough, the law professors used Sarah Palin as an example of someone who had a great deal of success in connecting to her base as both likeable and competent and Hillary Clinton as someone who faced problems in succeeding in that endeavor as a candidate for president (and who has had success as secretary of state). To break free from stereotype you need a narrative that is both engaging (rather than threatening) and complex. E.g. a hockey mom and a hunter (Palin). Of course the real question is whether the tools that work for individuals will work for groups, and if they do, what they can work for. Do you feel a need to renegotiate your identity? If you do, what changes do you want to make, what is the narrative you seek? (Yes, this moves me back on track and back towards Zion and past my last post).
0.8703
FineWeb
Vegan diets offer a buffet of food for thought. Health and lifestyle choices weigh heavily into the discussion of pros and cons. Studies cite numerous health benefits associated with plant-based diets. As with any diet, however, education is required to ensure you get a proper balance of nutrients. Vegans are a minority in America, and a family's attitudes can determine just how positive an experience it is. Social Benefits of Veganism Harmony with nature and respect for animals are two core vegan values. Like vegetarians in general, vegans focus on a plant-based diet. But vegans take this a bit further and shun all animal products like eggs, dairy, leather and fur. Compassion for animals and concern for the environment are among the reasons vegans give for their lifestyle choice. Supporting sustainable agriculture, reducing the carbon footprint associated with meat-producing operations and opposition to the inhumane treatment of livestock are among the social benefits sought through this lifestyle. Concerns for Vegans The Academy of Nutrition and Dietetics states that properly planned vegan diets are nutritionally wholesome and can help prevent chronic diseases. While experts agree such diets can be healthy, the key term is "well-planned." A 2009 article published in the "American Journal of Clinical Nutrition" observed that vegan diets pose a risk for micronutrient deficiencies. Vegans in particular need to be conscious of vitamins B-12 and D, calcium, omega-3 fatty acids, iron and zinc. According to the Academy of Nutrition and Dietetics, these nutrients can be sourced without the use of meat, but it requires careful planning, and supplementation may be necessary. Great Potential for Health Benefits The benefits of a vegan diet are demonstrable. The "American Journal of Clinical Nutrition" reported that even compared to other vegetarians, vegans tend to be thinner and have lower total blood cholesterol and blood pressure.
Consuming larger volumes of fruits and vegetables, vegans receive greater amounts of the antioxidants and phytochemicals associated with good cardiovascular health and reduced risk for cancers. Many researchers conclude that with proper planning, all of your dietary requirements can be achieved while avoiding the higher fat and cholesterol associated with meat. Vegans Are a Minority Only 5 percent of Americans identify themselves as vegetarian and only 2.5 percent as vegan, according to a 2012 Gallup Poll. Being a minority, a vegan voice might not be welcome at every dinner table. Meat remains the largest segment of U.S. agriculture, generating 92.3 billion pounds in 2011, because it is a staple in the average American diet. How supportive your family is of a move away from meat may determine whether the experience is positive or negative. The implications on your relationships and social life should also be considered when weighing pros and cons of a vegan diet.
0.5007
FineWeb
Mitsubishi Electric Intelligent Power Module - Adopting Full-Gate CSTBT chip. - The over-temperature protection which detects the chip surface temperature of CSTBT is adopted. - Error output signal is available from each protection upper and lower arm of IPM. - Outputting an error signal corresponding to the abnormal state (error mode identification) - General purpose inverter, servo drives and other motor controls - Volts : 650 - Amps : 75
0.9953
FineWeb
As a Graphic/UI/UX designer, you must maintain your typography concept, because typography is the main element of design. A famous man said, “Typography is an art.” In fact, even if you don’t have any design concept or any idea, you can use these rules and you will be impressed. Let’s start from the beginning. Rule #1: PREMIUM Line Height. Do you struggle to find the right line height for your design? It’s simple: just multiply the font size by 1.5 (or 1.618). These values suit almost any design. For example, if the font size is 12 pt, apply the rule: (font size * 1.5) = (12 * 1.5) = 18. That 18 pt is the best line height. Apply this rule to all your designs and you will get the best results. Rule #2: Let the Letters Breathe. This rule is even more important than the others. Remember that you always need a bigger gap between different text blocks than within one block. Let’s see this image! Rule #3: Confused About Font Size? The most widely used method in the design world is the 2X rule, and it is vital. Suppose you use 30 pt for your header; you can then pick 15 pt for your body text. When you use this method you will be surprised. Always keep the 2X rule in mind. For example, if the body text size is 30 pt, apply the rule: (body text size * 2) = (30 * 2) = 60 pt. So the header font size is 60 pt. Rule #4: Confused About Alignment? This one is easier than the other rules. Always be critical and check your alignment. As a rule, when you set body text, align it left, because left-aligned text is easier to read. Let’s check these images. Rule #5: Use 1–2 Fonts in Your Design. This rule is more important than rules 1 to 4. You should not pair SERIF + SERIF or BOLD + BOLD. You can try this special method.
Try to use this type pair (BOLD + REGULAR) or (REGULAR + BOLD) totally depend on your practice. I hope if you apply these methods in your design, I think you will see the improvement of typography arts in your daily work. These rules are made by Many Famous Designers. Who has working to find out the best typography rules for us . If you think these rules are important and helpful you can share it.
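As an illustration only (not from the original article), the two numeric rules can be sketched in a few lines of Python; the helper names here are invented for this example:

```python
# Hypothetical helpers illustrating Rule #1 and Rule #3 (names are invented).

GOLDEN_RATIO = 1.618  # an alternative multiplier for Rule #1

def line_height(font_size_pt, factor=1.5):
    """Rule #1: line height = font size * 1.5 (or ~1.618, the golden ratio)."""
    return font_size_pt * factor

def header_size(body_size_pt):
    """Rule #3: the 2X rule -- the header is twice the body text size."""
    return body_size_pt * 2

print(line_height(12))   # 18.0 -- the 18 pt line height from the example
print(header_size(30))   # 60 -- the 60 pt header from the example
```

The same arithmetic works in CSS or any design tool: pick the body size first, then derive line height and header size from it.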
0.7334
FineWeb
The following sections explain several categories of vi commands. These include: moving around in a file; changing and substituting text; undoing changes to text; copying and moving text. In the previous sections you learned how to create, save, print, and exit a vi file. Now that you have created a file, you'll need to understand the concepts required to navigate within it. Open your practice file now, and try out each of the commands discussed in this section. When you start vi, the cursor is in the upper left corner of the vi screen. In command mode, you can move the cursor with a number of keyboard commands. Certain letter keys, the arrow keys, the Return key, the Back Space (or Delete) key, and the Space Bar can all be used to move the cursor when you're in command mode. If your machine is equipped with arrow keys, try these now. You should be able to move the cursor freely about the screen using combinations of the up, down, right, and left arrow keys. Notice that you can only move the cursor across already existing text or input spaces. If you're using vi from a remote terminal, the arrow keys may not work correctly; this will depend on your terminal emulator. If the arrow keys don't work in your case, you can use the following substitutes: h to move left, j to move down, k to move up, and l to move right. You may have noticed that moving the cursor either past the bottom or past the top of the screen has the effect of scrolling text up or down. This can be an effective way to display more text in a very short file, but it is a tedious way to move through a long file. You can page or scroll backward or forward through a file, a screen or a half-screen at a time. (To try out these commands on paint, you might want to add text so you have a longer file to work with.)
Note that there is a fundamental difference between paging and scrolling. Scrolling actually scrolls the cursor up or down through the text a line at a time, as though it were on a paper scroll. Paging moves the cursor up or down through the text a screenful at a time. On a fast system, you might not notice the difference. However, if you're working from a remote terminal or in some other situation where your system is running slower than usual, this difference can become painfully apparent. vi provides many commands for inserting text. This section introduces you to the most useful of these commands. Note that each of these commands places vi in entry mode. To use any of these commands, you must first be in command mode. Remember to press Esc to make sure you are in command mode. Type A to add text to the end of a line. To see how this works, position the cursor anywhere on a text line and type A. The cursor will move to the end of the line, where you can type your additions. Press Esc when you're done. Type I to insert text at the beginning of a line. (The command will move the cursor from any position on that line.) Again, as with all the commands in this section, press Esc to return to command mode after entering the desired text. To change part of a word, place the cursor on the word, to the right of the portion to be saved. Type cw, enter the correction, and press Esc. To replace part of a line, place the cursor to the right of the portion to be saved. Type C, enter the correction, and press Esc. This changes the portion of the line from the current cursor position to the end of the line. Use this command to replace the character highlighted by the cursor with another character. Position the cursor over the character and type r, followed by just one replacement character. After the substitution, vi automatically returns to command mode (there's no need to press Esc). Correcting transposed characters takes just two keystrokes in vi. 
Suppose you find that you've typed "teh" when you meant to enter "the". Make the correction by putting the cursor over the first letter to be moved (in this case, e), and then type xp. The e and h will trade places, and vi will automatically return to command mode. To break a line without affecting text, move the cursor to a space where you want the line to break and type r (for "replace") followed by Return. Note that if you type r with the cursor on a character and then press Return, that character will be replaced by the Return. When editing text and making changes to a vi file, there will no doubt be times when you'll wish that you had not changed something. vi's undo commands allow you to back up one operation and continue on from there. If you make a mistake in vi, or if you just change your mind once an operation is completed, you can undo your last command by pressing u immediately after the command. (There's no need to press Esc after typing u.) Pressing u a second time undoes the undo. To delete one character, position the cursor over the character and type x. The x command also deletes the space the character occupied: when a letter is removed from the middle of a word, the remaining letters will close up, leaving no gap. You can also delete blank spaces in a line with the x command. To delete part of a word, position the cursor on the word to the right of the part to be saved. Type dw to delete the rest of the word. You can also delete part of a line. Many word processors allow you to "copy and paste" and "cut and paste" lines of text. The vi editor also includes these features. The vi command-mode equivalent of "copy and paste" is yank and put; the equivalent of "cut and paste" is delete and put. The method for copying or moving small blocks of text in vi involves using a combination of the yank, delete, and put commands. To yank one line, position the cursor anywhere on the line and type yy. Now move the cursor to the line above where you want the yanked line to be put (copied), and type p.
A copy of the yanked line will appear in a new line below the cursor. To place the yanked line in a new line above the cursor, type P. The yy command works well with a count: to yank 11 lines, for example, just type 11yy. Eleven lines, counting down from the cursor, will be yanked, and vi indicates this with a message at the bottom of the screen: 11 lines yanked. To move one line, position the cursor anywhere on the line and type dd. (Like yy, dd also takes a count: to delete 5 lines, type 5dd.) Next, move the cursor to the line above where you want the deleted line reinserted and type p. This inserts the text on a new line below the cursor. Alternatively, you can put the deleted line above the cursor by typing P. To repeatedly insert a group of lines in various places within a document, you can yank (or delete) the lines into a named buffer. You specify named buffers by preceding a command with a double quote (") and a name for the buffer. For example, to yank four lines into the named buffer a, type "a4yy. You can use several different buffers. For example, you might also delete text from one location and add it to several others. To delete 12 lines into the named buffer b, type "b12dd. To insert the text, precede the p or P command with "n, where n is the name of the buffer. For example, to insert the lines saved in buffer b, type "bP. You can overwrite named buffers with new lines. The buffers are saved until you exit vi. When you use named buffers, you can safely delete and yank other text without affecting the lines you have already saved in the named buffers -- unless, of course, you purposely overwrite the named buffer. Most of the commands in the previous sections take counts. For instance, 3dd repeats the delete-line command three times, deleting three lines. 2dw deletes two words, and 4x deletes four characters or spaces. You can also use counts with commands to move the cursor, such as 3w and 2Ctrl-F. This will all become evident as you learn the vi commands.
In the section "6.12 Summary of Basic vi Commands," each command that takes a count is indicated by "[count]" before the command name.
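The named-buffer behavior described above can be modeled in a few lines of Python. This is a toy sketch for illustration only, not how vi is actually implemented:

```python
# A toy model of vi's named buffers -- for illustration only, not how vi
# is actually implemented.

class NamedBuffers:
    def __init__(self):
        self.buffers = {}  # buffer name -> list of saved lines

    def yank(self, name, lines):
        """Model "a4yy: copy lines into the named buffer, text unchanged."""
        self.buffers[name] = list(lines)

    def delete(self, name, text, start, count):
        """Model "b12dd: cut `count` lines out of `text` into the buffer."""
        self.buffers[name] = text[start:start + count]
        del text[start:start + count]

    def put(self, name, text, after):
        """Model "bp: insert the saved lines below line index `after`."""
        text[after + 1:after + 1] = self.buffers[name]

buf = NamedBuffers()
doc = ["one", "two", "three", "four"]
buf.delete("b", doc, 1, 2)   # like "b2dd with the cursor on "two"
buf.put("b", doc, 1)         # like "bp on the line now at index 1
print(doc)                   # ['one', 'four', 'two', 'three']
```

Note how the delete-then-put sequence reproduces vi's "cut and paste," while yank-then-put would reproduce "copy and paste."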
0.8781
FineWeb
What are you supposed to do when you see someone on the street corner asking for help? Give them money? Ignore them? Tell them to get a job? Our summer video series, “How To Help A Hurting Person,” teaches you how to make a difference for someone in need! Here’s a summary of the first three episodes: - Anticipate that you will see someone holding a sign asking for help. When you are prepared for the situation, you’ll be able to make the most of it. - Smile and actually say hello! Approach the person instead of avoiding them. By acknowledging the person, you show them respect. - Engage the person. Start a conversation and listen to what they have to say, even if you only have a few moments at a stop sign or traffic light. This week’s tip (#4): don’t give money to a person asking for help. Instead, ask them what their greatest need is and help them by giving food, transportation tickets, or another item! Check back next week for Tip #5!
0.9148
FineWeb
Getting a flu shot is one of the easiest measures you can take to protect yourself—and those around you—from illness, hospitalization, or even death. Yet some people skip the shot because it is inconvenient, costly, or because they don’t like being poked with a needle. Some even assume that because they are fit and healthy they don’t need the vaccine. But no matter how strong you are, you can still come down with the flu. And getting the flu is far worse than catching the common cold. The flu is a respiratory infection caused by an influenza virus. The virus is highly contagious and is spread through the air or contact between people. You can get it by being near an infected person who is coughing or sneezing. If you touch a surface that an infected person has touched, you can also catch the flu if your hand comes in contact with your nose, eyes, or mouth. Those exposed to the virus often develop symptoms within about four days. People usually feel miserable for a week or so, with symptoms that can include a fever, chills, aches, fatigue, a cough, or a sore throat. Most flu cases are mild and do not need medical care. Simply stay home, rest, and avoid contact with other people. But if you are in a high-risk group, or you become very ill, you should contact Dr. Keith Lamy at the Austin family practice medicine clinic. He may prescribe antiviral drugs that can make you better and prevent serious complications. The virus is especially dangerous if you have asthma or diabetes, lung or heart disease, or if your immune system is weak. Those people, as well as the elderly and pregnant women, are at higher risk of developing complications. The virus can result in severe pneumonia and blood infections. It can also cause diarrhea, ear infections, and seizures in children. Every year, thousands of people in the United States die from flu, and many are sent to the hospital. Of the three types of human flu viruses, types A and B cause the most severe symptoms.
Both can cause the outbreaks that take place almost every fall and winter. Every year, flu vaccines are updated to match the viruses circulating globally, typically protecting against the three or four most common ones. But viruses are constantly changing. Sometimes a sudden change occurs in the type A virus, creating a new subtype, or a virus emerges from an animal population that is so different from the same subtype in humans that most people have no immunity to it, according to the Centers for Disease Control and Prevention. When such a shift happens, most people have little or no protection against the new virus. An extreme example of a shift took place in 1918, when a rare genetic change created a virus that was new to almost everyone in the world. Starting in 1918, the Spanish flu killed as many as 20 to 50 million people, making it one of the worst infectious pandemics in history. The pandemic unfolded in a rapid succession of three waves, with fatalities mounting as it spread across the world. Young, healthy adults were hit especially hard. Typically, influenza kills the very young and the elderly, but deaths in 1918 followed a pattern that has not been recorded before or since: death rates for young adults were about 20 times higher than in previous years. People were struck down rapidly, sometimes dying within hours of appearing healthy. Some victims died gasping as they tried to clear their airways of bloody fluids that seeped from their nose or mouth. The level of fear and suffering was so profound that it resembled events during the Black Death bubonic plague. Although the 1918 pandemic was rare, experts cannot rule out the possibility that another devastating flu pandemic could happen again. For optimal coverage, you should get a flu vaccine before the end of October. Getting it later, however, is still worthwhile. Shots are available in many locations, including the Austin family practice medicine clinic.
Babies younger than 6 months old are at high risk of serious flu complications, but are too young to get a vaccine. If you care for an infant, you should get a shot to help protect them. Unfortunately, it is possible to get the flu even if you have been vaccinated. You may be exposed to a virus that is not covered by the vaccine, or you may be exposed during the two weeks after you were vaccinated before your body is protected. The vaccine is most effective for healthy younger adults and older children. It works less well for older people and those with chronic illnesses. Although a vaccination is not a perfect shield, it is still your best protection against the influenza virus.
0.7789
FineWeb
If you’ve ever watched a bird hop from branch to branch in search of food, you’ve caught a glimpse of how prehistoric flying dinosaurs foraged among forest trees. That’s what researchers are saying after they trained four Pacific parrotlets (Forpus coelestis)—small, pastel-colored parrots about 13 centimeters long—to jump and fly for millet seed rewards. The researchers designed a cage decked out with perches that doubled as sensors to measure the birds’ leg forces, and surrounded the cage with high-speed cameras to study the birds’ wing beats as they moved between branches. For short jumps, the parrotlets primarily used their legs as their main source of momentum for takeoff, using their wings only for a “controlled collision,” in which their legs absorb the impact with the branch. During long jumps, the parrotlets relied mostly on the forces generated by wing flapping. Using their models of hopping and flapping built from the parrotlet data, the researchers estimated how four birdlike dinosaurs may have used their early winglike arms. Archaeopteryx and Microraptor—feathered dinosaurs that likely flew or glided between trees—would have had the most success, boosting the range of their long jumps by 20%. The larger and heavier feathered dinosaurs Protarchaeopteryx and Caudipteryx would not have been able to generate enough force from a wing beat to support their body weight or significantly increase their long jumps. The scientists surmise that Archaeopteryx developed an edge over other tree-foraging competitors by using jumping and wing flapping to minimize energy expenditure while foraging for food in the trees. Thus, the long-jump Olympians of the Archaeopteryx world may have spurred the evolution of flight.
0.9913
FineWeb
What is Osteosarcoma? Osteosarcoma is a tumor of the bone. It is the most common type of bone cancer, in which immature bone tissue is produced. Osteosarcoma is normally found at the ends of the long bones, most often around the knees. It is seen mostly under the age of 25 and more often in males than in females. Image Credit: Adam.com Signs and Symptoms of Osteosarcoma The symptom most often observed in patients with osteosarcoma is pain: patients feel intense pain in the affected area. However, symptoms may vary with the size of the tumor and its position in the body. Swelling and tenderness may also appear if the affected area is a joint or near a joint. Without a proper diagnosis, this pain may be mistaken for common muscle or joint pain, and many other problems can produce the same symptoms. But if a patient has persistent bone pain that worsens at night, a visit to the doctor is essential as early as possible. Possible Causes of Osteosarcoma The true cause of osteosarcoma is still unknown; scientists have not yet discovered its root cause. However, studies show that osteosarcoma can develop in parts of the body that have been exposed to radiation. Certain genetic changes or genetic diseases, such as Paget’s disease, which causes bones to become enlarged or misshapen, may also be associated with osteosarcoma. Diagnosis of Osteosarcoma Osteosarcoma is often diagnosed only when the cancer has made a bone so weak that it breaks after even a minor fall. When a doctor suspects bone cancer, they may immediately refer the patient to a specialist for tests and treatment. The diagnosis of osteosarcoma is usually made with the following tests: Biopsy: a sample of cells is examined by a pathologist; a small portion of the tumor is collected through a needle after numbing the area. X-ray: a picture of the bone is taken using X-rays.
Bone scan: to get a clearer picture than an X-ray, a radioactive substance is injected into a vein in the arm; affected bone absorbs more of this substance, and those areas show up as hot spots. MRI: magnetism is used to take cross-sectional pictures of the body, inside a large metal cylinder open at both ends. CT scan: a 3D picture is built from a series of X-rays; a CT scan is done if the doctor suspects the cancer has also affected the patient’s lungs. Usual Treatment of Osteosarcoma is Allopathy The treatment of osteosarcoma may include a number of therapies, depending on the size and nature of the tumor. Normally it is treated using: - Surgery - Chemotherapy (lasting from 3 to 6 months) - Radiation therapy All these methods are usually effective when the cancer is in its early stages, but they are costly, time-consuming, and carry many side effects. Homeopathic Treatment of Osteosarcoma Homeopathic medicines are very effective in treating osteosarcoma. These medicines work in many dimensions to give relief to the patient: minimizing symptoms, boosting the body’s immune system, stopping the spread of cancer cells, and killing and isolating the cancer cells. The homeopathic method of treatment is inexpensive, has no side effects, and requires no painful treatments such as chemotherapy or surgery. At Sabeelhomeoclinic.com we don't 'claim' or 'guarantee' to cure any disease or condition, especially those which are considered 'incurable' on the basis of modern scientific research. Please don't use any of the mentioned medicines without asking your doctor. See full disclaimer here.
0.5027
FineWeb
Numbers don’t lie. Take any storage stack – local or distributed, eventually consistent or ACID-transactional, highly available or otherwise. Ask an innocent question: how does it perform? The benchmarks – if they are current, valid, and most importantly, published – will tell only a part of the story. In reality, an infinitesimally small part. Consider the following, very modest, example with comments below: (*) To get an idea of scope and diversity of the performance tunables, let’s see some popular examples: - Ext2/3/4 man page: http://man7.org/linux/man-pages/man5/ext4.5.html - Sample Ceph config: https://github.com/ceph/ceph/blob/master/src/sample.ceph.conf - MySQL: https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html - HDFS: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml In all cases, the numbers of tunables fluctuate anywhere between 20 and 40. On top of any/all of the above there’d often be a storage transport, with its own 10 or 20 client-and-server side knobs. (**) The knobs themselves will frequently have continuous ranges. The most popular methods to enumerate continuous ranges include divide-by-a-constant and divide-by-a-power-of-two. If these two wouldn’t help, we then could just go ahead and apply Latin Hypercube Sampling – it’s brutal but still better than citing a single default and accompanying it with a stern warning not to change at your own risk, etc. (***) As for the workloads, on the most basic level they are defined by: synchronous/asynchronous and random/sequential permutations as well as read/write ratios and (application-defined) transfer sizes. They also include specified numbers of worker threads, protocol-specific containers and objects per container, and depth of the container hierarchy (if applicable). 
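To get a feel for the combinatorics, here is a back-of-the-envelope count of just the basic workload permutations listed above; the concrete option values are arbitrary placeholders, not from the original table:

```python
# Counting basic workload permutations; the concrete values are placeholders.
from itertools import product

sync_modes = ["synchronous", "asynchronous"]
access     = ["random", "sequential"]
rw_ratios  = [0, 25, 50, 75, 100]     # read percentage, at 25% increments
xfer_kb    = [4, 64, 1024]            # transfer sizes in KB (a tiny sample)

workloads = list(product(sync_modes, access, rw_ratios, xfer_kb))
print(len(workloads))  # 2 * 2 * 5 * 3 = 60 -- before threads, containers,
                       # hierarchy depth, burstiness, locality, dedup-ability...
```

Even this deliberately tiny enumeration yields dozens of distinct workloads; each additional dimension multiplies the count.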
Using those primitives as a starter, and taking into account that read/write ratios are often applied at 25% increments, sequential write is different from sequential rewrite, O_DSYNC is different from NFS fsync – we then combine all this together and come up with estimates. Unsurprisingly, they all will be bigger than the 32 number from the table, by at least a couple orders of magnitude. However: this presumably corrected workload number (whatever it is) would still be a far, far cry from full workload characterization – because the latter includes I/O burstiness, spatial and temporal localities, size of the working set, compress-ability and deduplication-ability. Moreover, each of the resulting workloads must be cross-tested across a massive variety of influential environmental factors: on-disk layouts of allocated blocks/chunks/shards, the presence of snapshots and clones and their numbers, the depth of the metadata hierarchy and its distribution, the raw bit error rate as well as its post-ECC BER component. Many of these factors accumulate over time, thus adding to the condition called (quite literally) – aging. But there is more. (****) Constant traffic creates a new reality. If you have lived long enough and seen your share of performance charts, you might have noticed that a 10-minute interval may look strikingly different – before and after a couple hours of continuous workload. 
This nagging (unconfirmed) observation has ample evidence – the horror stories on the web posted by unsuspecting users, multi-hour testing recommendations from the vendor community, and extensive multi-year studies: - A study titled The Tail at Store: A Revelation from Millions of Hours of Disk and SSD Deployments (documenting in particular “storage performance instability in the field”) - The most recent FAST’17 paper: On the Performance Variation in Modern Storage Stacks - A six year-long large-scale field study: Flash Reliability in Production: The Expected and the Unexpected (*****) “One would expect that repeated, carefully controlled experiments might yield nearly identical performance results but we found otherwise,” – write the authors of the FAST’17 paper, correctly identifying the widespread, albeit naive, trust in technological determinism. But even though every single ‘if’ and ‘for’ statement is ostensibly quite deterministic, there is often no determinism at all when it comes to massively-complex systems. Tiny variations in the order of execution, the environment, and the workload produce dramatically different performance results. Anecdotal evidence abounds with examples that create, for instance, small files in a slightly different order, and register a 15-175 times slow-down, etc. The noise, the variance, the non-reproducibility of the common benchmarks drive the only available inference: the process of measuring storage performance is genuinely stochastic. As such, it must be accompanied by first and second moments along with confidence intervals. It is difficult, however, to have at least 95% confidence when the sample size is below 100. It is, in fact, fairly impossible. Which means that the very last number in the table above – the 10 runs – must be considered totally inadequate, much like all the previously discussed numbers. (As a corollary, a single run is a fluke and an outlier if performed below the expectations. Always a silver lining.)
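To make the "first and second moments along with confidence intervals" point concrete, here is a minimal sketch; the throughput numbers are fabricated for illustration:

```python
# Reporting mean, standard deviation, and a 95% CI for benchmark runs.
# The run values below are made up for illustration.
import math
import statistics

runs_mb_s = [412.0, 398.5, 405.1, 377.9, 420.3,
             401.7, 389.2, 415.6, 408.8, 395.4]   # n = 10 runs

mean  = statistics.mean(runs_mb_s)                # first moment
stdev = statistics.stdev(runs_mb_s)               # second moment (sample)
# Normal approximation of a 95% CI for the mean: mean +/- 1.96 * s / sqrt(n)
half_width = 1.96 * stdev / math.sqrt(len(runs_mb_s))

print(f"{mean:.1f} +/- {half_width:.1f} MB/s")
```

With only n = 10 the interval stays wide; reporting a bare mean (or, worse, a single run) hides exactly the variance the FAST'17 authors documented.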
Different sources cite different numbers. For instance, the already mentioned FAST’17 study compares three popular local filesystems. According to this research, the total benchmark time ranges anywhere between 10^15 to 10^33 years (per filesystem). Which, incidentally, exceeds the age of the universe by at least 4 orders of magnitude. (The good news, though, is that, given enough hardware, the time to evaluate the storage stack performance can be mitigated.) Scale is part of the problem. Suppose we have a server that 99% of the time handles requests with latency <= A. Now compare the two latency CDFs, for a single server (blue) and for 100 identical servers (red): In a 100-node cluster the odds to observe greater than A latencies skyrocket to (1 – 0.99^100) = 63.4%. For an industry-grade five nines and a 1000-node cluster the same exercise gives 0.995%. Generally, the so-called tail latency becomes a real issue at scale, even when none of the specific standalone tails is fat, long, heavy or otherwise out of whack. Thus, corroborating the old adage that anything that can possibly go wrong, does with ever-growing chances. In light of the above, it should be no wonder that the performance-related discussions typically sound too open-ended at best, ambiguous or even hostile, at worst. Personally, I believe that the only way to cope with the associated unease is to state, and accept, the following: The performance of a qualified storage stack cannot be known. (By qualified, I mean any stack that stores at least one petabyte in production – which seems like a reasonable threshold today – and that is used for/by mission-critical applications requiring low latency.) The stack’s performance is inherently unknowable. The word “inherence”, by the way, originates from the Empedocles’ idea that the qualities of matter come from the relative proportions of each of the four elements: earth, water, air, and fire.
This idea, as we know today, does not describe matter correctly, much like the still prevalent view that a storage system consists of four components: a controller attached to its memory and a target attached to its disk… The scale of the cluster, the size of the working set, the number of concurrently-active tiers – all these factors exponentialize the complexity of the software/hardware constructions. Freeze all of the above – and what will remain is (only!) a parameter space of all possible workloads and all valid configurations. As shown above, the parameter space in question is enormous – infinite, for all intents and purposes. Which is unfortunate, but maybe not the end of the world – if we could devise an analytical model or framework, to compute/estimate the stuff that we can never test. This model would, potentially, include a DAG for each IO request type, with edges reflecting causal and/or precedence relationships between the request’s parent and children (and their children) – at various stages of the IO execution. It would also include inter-DAG causal and precedence relationships between the concurrent IOs within a context of a single transaction which, depending on the semantic model of the storage protocol, may or may not possess some or all ACID properties. (As well as inter-transactional relationships, etc.) Further, any given IO DAG will be spanning local (virtual and physical) memory hierarchies, local (virtual and physical) disks, and – in the distributed-storage case – remote servers with their own layers of volatile and persistent caches, memories, journals, and disks. As such, this DAG would be connecting multiple cross-over points (COPs) where the IO parent and its children belong to different domains: CPU caches vs. RAM, user vs. kernel, virtual vs. physical, fast memory (or disk) vs slow memory (or disk), etc. 
In a simplified model/framework, every single COP becomes a queue with consumers and producers having different resources and executing at vastly different rates – from nanoseconds (CPU caches, RAM) to milliseconds (TCP, HDD): While bottlenecks and SPOFs are often in-your-face obvious and even documented, much of the performance trouble is subtle and elusive – sinister if you will. Much of it lies in and around those COPs – and here are some of the (maybe) less obvious reasons: - the number of simultaneously existing COPs is proportional to the (extreme) heterogeneity of the volatile and persistent tiers “multiplied” by the scale/velocity/volume of the concurrent IOs; - without designed-in deterministic mechanisms – for instance, resource reservations in the data path – it is extremely difficult to keep in-check utilizations on both sides of each logical COP; - none of the popular storage protocols utilize resource reservations in the data path (yet). In addition, there are the usual overheads: queuing overhead, interrupt-handling overhead, polling overhead, data copying overhead, context switching overhead, locking-of-the-shared-resources overhead, etc. All the overheads “consolidating” in and around the edges of each and every COP. To conclude this line, I’ll illustrate the importance of keeping utilization in-check. There are many ways to do that. Consider, for example, a queue that “connects” a Markovian producer with a single server – the Pollaczek–Khinchine formula: L = ρ² (1 + C²) / (2 (1 − ρ)), where ρ is the utilization and C is the coefficient of variation of the service time. Expectedly, at high utilizations the queue length L and, therefore, the waiting time approach infinity. The formula works for an M/G/1 queue – and not for an M/G/k queue (let alone a G/G/k queue). It is also only a single queue connected to a single “server” – and not an entire hypothetical super-multi-queued DAG where the arrivals and service times are non-deterministic and non-Markovian.
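For reference, the Pollaczek–Khinchine mean queue length can be evaluated directly. A sketch (cv2 = 1 corresponds to exponential service times, i.e., M/M/1):

```python
# Pollaczek-Khinchine mean queue length for an M/G/1 queue.
def pk_queue_length(rho, cv2=1.0):
    """L = rho^2 * (1 + cv2) / (2 * (1 - rho)), for utilization rho < 1;
    cv2 is the squared coefficient of variation of the service time."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return rho * rho * (1 + cv2) / (2 * (1 - rho))

for rho in (0.5, 0.9, 0.99):
    print(rho, round(pk_queue_length(rho), 2))
# 0.5 -> 0.5, 0.9 -> 8.1, 0.99 -> 98.01: the queue blows up near full utilization
```

Note how the last percentage point of utilization costs an order of magnitude in queue length – the quantitative reason for keeping utilization in-check on both sides of every COP.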
The only known to humanity way to deal with an exponential complexity is to decompose things down to fairly isolated modules/components, and design/implement – or, better – reuse them one by one, one at a time. Modular programming, SEDA, multi-tier architectures, workflow systems, normalized systems, microservices architecture – all that. “Let me try to explain to you” – wrote Dijkstra in the essay called On the role of scientific thought – “what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects <snip> It is what I sometimes have called the separation of concerns, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts” Today, 43 years later, a logical question to ask would be: what’s modular or pluggable about the existing storage stacks, and how do the best of designs address the combinatorial effects of (environment, workload, configuration) changes multiplied by the passing of time (and therefore, changing = aging)? Not shockingly, the answer will depend on who you ask. If you ask google, for instance, search results may appear to be limited, in a sense. And so, here’s my final point. It may sound controversial, at first glance. Outrageous, at the second. But here it is: Is SoC itself – a good thing? After all, when we separate IO performance from other vital concerns, such as: data consistency, fault tolerance, data protection and security, usability, maintain-ability and upgrade-ability, features A, B, and C, protocols D, E, and F, APIs X, Y, and Z when we do all that (separation), don’t we also, inadvertently, de-prioritize some concerns over the others? And once de-prioritized, doesn’t the concern sort of vanish? Well, think about it. Perhaps there will be answers, in due time. 
Until then, what remains for the prospective users (aka prospects) is – walking up and down the marketplace, being slightly dazzled by the variety, and wondering what kind of a package deal they’ll end up having…
Cirrus clouds are wispy, feathery, and composed entirely of ice crystals. As a warm front approaches, cirrus clouds tend to thicken into cirrostratus, which may, in turn, thicken and lower into altostratus, stratus, and even nimbostratus. They frequently indicate the approach of a warm front and may thicken and lower into stratus, then nimbostratus, resulting in rain or snow. They are the first sign of an approaching warm front or upper-level jet streak. The three main types of high clouds are cirrus, cirrostratus, and cirrocumulus. In contrast to cirrus, cirrostratus clouds form more of a widespread, veil-like layer, similar to stratus clouds at low levels. When sunlight or moonlight passes through the hexagonal ice crystals of cirrostratus clouds, the light is dispersed or refracted, much like light passing through a prism, in such a way that a familiar ring or halo may form. The two main kinds of mid-level clouds are altostratus and altocumulus. Altostratus clouds are "strato"-type clouds that possess a flat and uniform texture in the mid levels; they do not produce significant precipitation at the surface, though sprinkles or light showers may occur from a thick altostratus deck. Altocumulus clouds exhibit "cumulo"-type characteristics in the mid levels, i.e., heap-like clouds with convective elements. Due to cold tropospheric temperatures at these levels, high clouds are composed primarily of ice crystals and often appear thin, streaky, and white, although a low sun angle, e.g., near sunset, can create an array of color on the clouds. Clouds are made up of many small water drops and ice crystals.
Depending on the altitude, time of year, and vertical temperature structure of the troposphere, these clouds may be composed of liquid water droplets, ice crystals, or a combination of the two, including supercooled droplets, i.e., liquid droplets whose temperatures are below freezing. Low clouds occur below 6,500 ft and normally consist of liquid water droplets or even supercooled droplets, except during cold winter storms, when ice crystals and snow comprise much of the clouds. Lastly, cirrocumulus clouds are layered clouds permeated with small cumuliform lumpiness. High-level clouds occur above about 20,000 ft and are given the prefix "cirro-."
Tweet about this product: KeyCaliber's Automated Crown Jewels Assessments provide continuous, real-time risk information on critical IT/OT assets, empowering cybersecurity teams to effectively secure their critical assets. keycaliber.com #MIN137 #WEDO2021 KeyCaliber’s cybersecurity software technology modernizes how organizations identify their critical IT assets and manage their cyber risks. The traditional approach is a manual process of conducting interviews and questionnaires to produce a spreadsheet that is a subjective snapshot in time. With cloud migration and digital transformation, that approach cannot keep up with today’s complex and dynamic IT environments. KeyCaliber leverages available security data and machine learning to identify critical assets automatically, continuously, and in real time. This business intelligence provides cybersecurity teams with the risk information needed to effectively secure their critical assets, efficiently allocate resources, methodically devise roadmaps, and convincingly articulate the value of their cybersecurity program.
(Last Updated on November 24, 2020) Imagine you walk into a classroom and take a seat at your desk. There you find a worksheet with a tree diagram. The teacher announces that you'll be studying trees today. She lists the vocabulary words you should add to your diagram. Now imagine instead that you walk into a classroom with a three-foot-wide slice of tree trunk on a table, with a few magnifying glasses scattered next to it. The teacher invites you to study the tree for a few minutes and see what you see. I'm guessing I'm not the only one who would find the second scenario more interesting. Whether or not you have any interest in trees, the fact that the tree trunk is real, unexpected, accessible and different makes it intriguing. The wonder and curiosity of it all leads to learning and retention. Just like most good ideas, it's not a new concept. A "hook" is really just anything that intrigues (or hooks) the learner and gets them curious for more. More recently, I've seen it called an "invitation" or a "provocation," but the idea is the same. I was formally introduced to this idea in Building Foundations of Scientific Understanding (BFSU), a science curriculum written by Bernard Nebel, PhD. It's one of the key methods he uses to generate interest and child-led discovery in the BFSU lessons. I've used this method frequently when teaching science and history, but you can find creative ways to use hooks in all subjects. Thankfully, hooks don't have to be complicated. Sometimes it's as easy as changing the order you present your lesson. Instead of talking at your students and then bringing out the baking soda and vinegar volcano experiment later, set the baking soda and vinegar on the counter first. Let them observe the ingredients, notice their attributes and make guesses as to what experiment you'll be doing. From there you can ask them questions and steer the conversation toward the objectives you have for the lesson. Here's an example of what I mean.
This summer I needed to start an experiment (for BFSU Vol. II, Lesson B-16: Fungi & Bacteria I) in early August so the results would be ready for our lesson this fall. Instead of telling my kids what we were doing right away, I just arranged the supplies on our counter. Since it’s all stuff I’d be getting out for the experiment anyway, this took no extra time or effort on my part. I let the kids find the things and ask me about them. Once they were intrigued, I had them guess what we might be doing and then talk about what they already know about the materials. After a minute or two, I let them know the gist of the experiment. They took some measurements of the different materials and wrote down their hypotheses about which materials would decompose or not. Just by introducing the experiment in this way (instead of starting with vocabulary or a speech about the experiment we’ll be doing later), they were already engaged and hooked. Their curiosity made them a much more receptive audience. So simple! So effective! In many ways, it’s very much like strewing. In both cases, you’re creating an invitation to come and explore more. Instead of just spurting out information at them, you’re inviting them in to make the learning their own. The main difference between strewing and finding a hook is that there’s typically no expectation or agenda with strewing. Things you strew are optional for your kids. If they choose not to engage with an item you’ve set out for them – a bowl of pinecones, for example – then you just skip it or try again another time. Hooks, on the other hand, are there to draw kids into a topic when there typically IS a particular objective or lesson to be covered. The bowl of pinecones is there to introduce a lesson on trees or seeds. From there, the kids’ observations and discussion can be guided toward the lesson for the day. 
Before I knew how effective hooks are for learning, I assumed the kids needed details first – like vocabulary words and background information on the subject – before they could understand or do the “fun” parts of the lesson. The problem with that was that I often lost their interest before we even got to the good stuff. Finding a good hook to generate interest and discussion lets you work the details (background and vocabulary) into the lesson in a very natural way. When it comes to effectiveness, there’s a world of difference between you telling your not-super-interested kid “the center of the tree is called the heartwood” and them pointing at a tree stump and ASKING YOU “mom, what’s this part of the tree here called?” The key is that THEY WANT to know. Take some time to think about how you present your lessons. Are there some you could rearrange so you lead with an engaging hook? And remember, it doesn’t have to be something complex! Hooks might be… - Items or tools needed for an experiment - Ingredients for a recipe - Interesting picture books on your topic - An engaging YouTube video - Tools or supplies needed for an art project - A comic or joke to introduce your topic - A short game or puzzle relevant to your lesson - Toys or objects to start a discussion - A trip to the backyard or park to make observations or collect things Once you begin to think of ways to present lessons in this way, you’ll find you can use almost anything as a hook. As homeschoolers, we have the freedom and small class sizes to really get crazy with this. I’m still trying to find a way to make a trip to Baskin-Robbins a hook for something… 😉 Think of a hook as your lesson’s first impression – and try to make it a good one! Subscribe today to receive new posts via email!
- Domain A: Knowledge and intellectual abilities The knowledge, intellectual abilities and techniques to do research. - Domain B: Personal effectiveness The personal qualities and approach to be an effective researcher. - Domain C: Research governance and organisation The knowledge of the standards, requirements and professionalism to do research. - Domain D: Engagement, influence and impact The knowledge and skills to work with others and ensure the wider impact of research. Note: the Joint Statement of the Research Councils’ Skills Training Requirements for Research Students (JSS) has been replaced by the Researcher Development Statement (RDS) as the reference document for the professional development of researchers, including postgraduate researchers. The RDS is a strategic statement setting out the knowledge, behaviours and attributes of effective and highly skilled researchers appropriate for a wide range of careers. It has been endorsed by key organisations including RCUK, UK funding bodies, UUK and the QAA. The RDS updates the JSS and extends researcher development beyond the doctoral experience. All the JSS skills and attributes have been incorporated into the RDS and a mapping of the JSS against the framework and vice versa is available for reference. The Researcher Development Framework (RDF) underlies the Researcher Development Statement (RDS) and represents a major new approach to researcher development, to enhance our capability to build the UK workforce, develop world-class researchers and build our research base. For more information on the Researcher Development Framework, associated Statement and related resources go to www.vitae.ac.uk/rdf From: "Researcher Development Statement" (RDS), Vitae, 2010.
Eye charts of different variations have become a standard in vision screenings and eye exams. One of the most familiar charts associated with vision is the Snellen eye chart, designed by Dutch ophthalmologist Hermann Snellen in 1862 to measure visual acuity: how well you can see at various distances. Although there are variations of the Snellen chart in use today, a traditional Snellen chart has eleven lines of block letters. The first line has one very large letter, which is one of several letters, for example E, H, or N. The following rows have increasing numbers of letters that become smaller in size as you read from the top to the bottom of the chart. The letters used on the chart are C, D, E, F, L, N, O, P, T, and Z. When taking a vision exam, one eye is covered and you are asked to read the letters of each row aloud, beginning at the top of the chart. The smallest row that you can read correctly indicates the visual acuity in the eye being tested. The chart is positioned at a distance of 20 feet in the United States or 6 meters in the rest of the world. The term 20/20 vision is used to indicate the clarity and sharpness of your vision measured at a distance of 20 feet. If you have 20/20 vision, you can see clearly at 20 feet objects that can normally be seen at that distance. If you have 20/40 vision, it means that you need to be as close as 20 feet to see what a person with normal vision can see at 40 feet. The largest letter on an eye chart often represents an acuity of 20/200, which is associated with the term "legally blind." You will be asked to read the letters one eye at a time. Some people can see well at a distance but are unable to bring nearer objects into focus, while others can see items that are close but cannot see them far away. By having you read the chart, your eye doctor is able to ascertain whether you have difficulty with distance vision and can determine which corrective lenses can be used to improve it.

Near vision problems or other vision and eye health issues may not be detected with the Snellen eye chart alone, so a comprehensive eye exam is always recommended. The next time you hop into the chair at your optometrist's office, you'll be able to understand why you have to read the letters on the chart in front of you and what the results mean for your vision.
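The 20/x notation is simply a ratio. Here is a small illustrative sketch (function names are my own, not a clinical standard) converting a Snellen fraction into the decimal and logMAR acuity values that optometrists also use:

```python
import math

def snellen_to_decimal(test_distance, letter_distance):
    # 20/40 means you see at 20 ft what normal vision sees at 40 ft,
    # i.e. decimal acuity 20/40 = 0.5.
    return test_distance / letter_distance

def snellen_to_logmar(test_distance, letter_distance):
    # logMAR is the base-10 log of the minimum angle of resolution:
    # logMAR = log10(1 / decimal acuity). 20/20 maps to 0.0.
    return math.log10(letter_distance / test_distance)

print(snellen_to_decimal(20, 40))  # 0.5
print(snellen_to_logmar(20, 20))   # 0.0
```

On this scale, the 20/200 "legally blind" threshold works out to decimal acuity 0.1, ten times worse than 20/20.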
The Teaching Positive Discipline – Parenting Facilitators Workshop provides Montessori Educators with the tools they need to start teaching Positive Discipline Parenting Classes. Participants will learn how to:
- Develop a deeper understanding of Positive Discipline principles and practices for parents.
- Facilitate a 7-Week Parenting Course for parents within their schools and local communities.
- Practice Positive Discipline foundational activities.
- Lead activities for parents.
- Build Positive Discipline facilitation skills.
- Create a support system for parents struggling with behavioral challenges.
If you are not familiar with the Non-Virtual Interface idiom, I recommend that you go and read about it. D takes this idiom to the next level by allowing interfaces to have templated methods, as long as they are final. This can be very powerful and help reduce boilerplate code.

import std.string : toStringz;
import std.traits : isArray;
import core.stdc.stdio : FILE, fopen, fread;

interface IInputStream
{
    final size_t read(T)(ref T data) if(!isArray!T)
    {
        static assert(!is(T == const) && !is(T == immutable),
            "can not read into const / immutable value");
        return readImpl(&data, T.sizeof, 1);
    }

    final size_t read(T)(T data) if(isArray!T)
    {
        static assert(!is(typeof(data) == const) && !is(typeof(data) == immutable),
            "can not read into const / immutable array");
        // pass the element size, not the size of the slice itself
        return readImpl(data.ptr, typeof(data[0]).sizeof, data.length);
    }

    size_t readImpl(void* data, size_t elementSize, size_t elementCount);
}

class FileInputStream : IInputStream
{
    private FILE* m_handle;

    this(string filename)
    {
        m_handle = fopen(toStringz(filename), "r");
    }

    override size_t readImpl(void* data, size_t elementSize, size_t elementCount)
    {
        return fread(data, elementSize, elementCount, m_handle);
    }
}

If you look at the above example, you can see that the IInputStream interface has two templated methods which are final. Those methods implement the behavior needed for arrays and value types, and then forward the read in the correct way to readImpl. As a result, implementing the IInputStream interface becomes really easy. The only method that needs to be implemented is the readImpl method, as you can see from the FileInputStream class. The correct handling of different types only has to be implemented once, inside the interface, and the implementations don't have to care about it anymore. You could also think of something similar for serializing data: an ISerializer interface which implements all the needed type handling as templates and forwards it to simple protected methods. Then implementing different serialization targets (e.g. JSON, XML, binary, etc.) would be quite easy and would not require writing and testing template code anymore.

A further advantage of this idiom is that it lowers the amount of code bloat, because the templates only get generated once for the interface and are reused by every implementation. The above code was tested with dmd 2.063.2
Points, boundaries and the horizon around the presence of place, in Edward Casey's The Fate of Place We can grant that a point is a "limit of localization" - precisely the lower limit, beneath which we cannot (and need not) go. For limit, like shape, belongs primarily to what is limited and only secondarily to what does the limiting (e.g., a container). At least this is so in Aristotelian physics, given its resistance to any externally imposed mathematization. In such a physics, as Proclus suggests, "the limits surrender themselves to the things' they limit; they establish themselves in them, becoming, as it were, parts of them and being filled with their inferior characters." Indeed, in a properly Aristotelian physics, the point can even be regarded as a paradigm of the limit because of its compressed and self-contained state. As Proclus says, "All limits ... subsist covertly and indivisibly in a single form under the idea of the point." To be a boundary, by contrast, is to be exterior to something or, more exactly, to be around it, enclosing it, acting as its surrounder. As such, a boundary belongs to the container rather than to the contained-and thus properly to place conceived as the inner surface of the containing vehicle, that is, as (in Aquinas's formulation) "the terminus of the container." Like place itself, a boundary "shuts in and closes off something from what lies around it" - which is precisely what a point cannot do. Even if it is composed of points, a boundary must be at the very least linear in character if it is to function in this simultaneously en-closing and closing-off manner: hence its affinity with the idea of a "borderline." But, as linear, a boundary is the boundary of a surface or a solid, not of a point. A point is surrounded by space as immersed in it, not as bordered by it; to be itself part of a boundary, a point must be conjoined with other points so as to constitute a line. 
Two possible outcomes are suggested by the distinction I have just made between boundary and limit. On the one hand, the case for Aristotle's denial that a point is itself a place is strengthened: if a point is indeed a limit, it does not constitute a boundary; and since it is the latter that is essential to place on Aristotle's own model, a point cannot be a place or perhaps even an integral part of place. Self-limited in its splendid isolation and other-limiting only as part of a continuous line, a point lacks the crucial criterion of containership. On the other hand, place itself is more like a boundary than like a limit. Not only is a place two-sided in the manner of a boundary-insofar as it is inclusive and exclusive at once-but it is also like a boundary in the special signification that Heidegger detects in the ancient Greek conception of horismos, "horizon," itself derived from horos (boundary): "that from which something begins its presencing. [p. 154]" For a place is indeed an active source of presencing: within its close embrace, things get located and begin to happen.
Tail Lights for the Nissan Versa
One part on the Nissan Versa that owners routinely upgrade or replace is the taillight bulb. It is important to know about the different taillights there are for the Versa in order to make the best purchasing decision for the sedan.

What are the different options for taillight bulbs? The main three types of car or truck light bulbs are LED, halogen, and incandescent. Many vehicles automatically come with LED light bulbs. Do your research and assess your own individual situation before purchasing new lighting parts.
- Incandescent bulbs are a well-known and widely used style of lighting. These bulbs have a filament wire that is heated by an electric current, which causes the bulb to glow a warm yellow-white color. They are designed to last about 1,200 hours.
- Halogen bulbs contain a tungsten filament. A small amount of halogen gas mixes with tungsten vapor and deposits it back onto the filament, extending the bulb's lifespan and allowing it to work at higher temperatures.
- LED bulbs do not include a filament, so they are both durable and shock-proof. They are energy-efficient, have low heat emissions, and can last up to 60,000 hours.
Nissan makes direct replacement parts for its vehicles' taillights. Some additional brands that manufacture replacement pieces are the following:
- Maxzone Auto Parts
- CEC Industries
- Red (the most common)
On the hatchback-style model, open the back hatch door. Look behind the light area for two snap covers that will have a small opening. Insert a flat-head screwdriver into the covers and carefully take them off. You will see a small nut connecting the taillight behind the covers. Always refer to the Nissan Versa's owner's manual if you have any difficulty locating the parts. Now follow the steps below.
- Use a socket and ratchet to remove the lower nut and then remove the upper nut.
- Carefully remove the entire light. It should completely separate from the body of the car.
- Unscrew the bulb's socket until the piece pops out and gently replace the old part with the new part.
- Place the fixture back in the same manner that it was removed. The socket should turn clockwise to properly fit into place. The entire fixture should be flush with the rest of the car. Place the nuts that you initially removed back into place and put the interior covers back on.
- Test out your light by having a friend press on the brakes. Ensure that the parts are properly functioning before driving the vehicle.
Citation in Web 3.0: The In-Essay Hypertext Bibliography This multimedia presentation explores how the theoretical value of ethos as an Aristotelian appeal can inform choices and practical value assigned to reference and works cited pages in twenty-first-century composition instruction. Attendees are invited to consider a hyperlink citation style in an increasingly-digital composition classroom. This presentation explores how the theoretical value of ethos as an Aristotelian appeal can inform choices and practical value assigned to constructing reference and works cited pages in twenty-first-century composition instruction. Current writing style guides such as MLA, APA, and Chicago call for the citation of source information found increasingly in digital formats. As scholarly internet databases gain popularity, persistent digital texts are linked back to institutional, peer-reviewed, digital archives. In their citation practices, however, students are often still required to learn and practice bibliographical work in accordance with the print essay. On the twenty-first-century college campus, however, students enter into a multi-literate, digital space, providing fuller access to digital technologies in and out of the classroom. To 1.) promote use of technologies, 2.) promote students’ engagement with technologies, and 3.) prompt students’ training in citation, I offer a digital method for works cited or reference page work. My presentation will include a multi-modal demonstration of this style of citation, coupled with an oral presentation on its features. Further, I will briefly explore the theoretical implications of this citation style as it pertains to students’ ethos-building in digital composition, informed by Berners-Lee, Hendler, and Lassila's Web 3.0 Theory, or the “Semantic Web.” I will also open discussion to the room regarding the potential advantages and limitations of this model.
Momentum is a vector describing a "quantity of motion", or in mathematical terms p (momentum) = mass (m) times velocity (v).

Conservation of Momentum
In a closed system, such as when two objects collide, the total momentum remains the same, though some may transfer from one object to the other. Momentum is always conserved in a closed system, but most sporting situations in the real world are not a closed system. For example, when a baseball bat hits the ball, the ball is squashed to a certain degree. After a few milliseconds, it rebounds back. This contraction-and-rebound action causes the release of heat energy, and some momentum is lost, or transferred elsewhere. As momentum is the product of mass and velocity, you can increase momentum by increasing either of these elements. In sport, examples include using a heavier bat or racket and increasing running speed or hand speed.

Angular momentum is the product of moment of inertia and angular velocity. Moment of inertia is the angular counterpart to mass: it is the measure of the resistance of an object to changing its angular speed. A good example of angular momentum in action is with figure skaters. A figure skater starts a spin by pulling in her arms to lessen her moment of inertia. By the conservation of angular momentum, her angular speed must then increase. To come out of the spin, the skater simply extends her arms to increase her moment of inertia and decrease her angular velocity.
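The skater example can be sketched in a couple of lines, assuming a rigid-body model where angular momentum L = I·ω is conserved (the numbers here are illustrative, not measured):

```python
# Conservation of angular momentum: I1 * w1 == I2 * w2, so the new spin
# rate is w2 = I1 * w1 / I2.
def spin_rate_after(I1, w1, I2):
    """Angular velocity after the moment of inertia changes from I1 to I2."""
    return I1 * w1 / I2

# Skater pulls her arms in: moment of inertia halves, spin rate doubles.
w2 = spin_rate_after(I1=4.0, w1=2.0, I2=2.0)
print(w2)  # 4.0
```

Extending the arms is the same calculation in reverse: a larger I2 yields a smaller w2, which is how the skater comes out of the spin.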
Acetone AR 2.5ltr
Acetone Analytical Reagent, 2.5 liter, glass bottle. A bottle of 100% pure acetone is colorless, flammable and volatile; systematically it is named propanone (C3H6O). Acetone is used in laboratories for cleaning glassware and in industries as a solvent.
Availability: In Stock
When most people think about the solar system, they consider the Sun, the eight planets, and Pluto. But although space is, indeed, mostly empty, the solar system is a rather complex place. And today scientists have announced the discovery of a second, mysterious object far, far away from the Sun but still in orbit around our star. The object does not yet have a name – for now it's 2012 VP113 – and it's a lot like Sedna, which was found in 2003. According to a research paper published in Nature today, this object is estimated to be about 280 miles across, or about the distance from Galveston to Dallas. At such a size it would likely be considered a dwarf planet, in the same category as Pluto. Sedna is about twice as large. But the orbit of this object is anything but Pluto-like, and its discovery is helping astronomers to better understand what conditions are like at the edge of our solar system. To put this discovery in context, let us first take a look at the solar system. In the graphic above the scale bar is in astronomical units, with each set distance beyond 1 AU representing 10 times the previous distance. One AU is the distance from the Sun to the Earth, which is about 93 million miles or 150 million kilometers. Neptune, the most distant planet from the Sun, is about 30 AU. Pluto's orbit fluctuates between 30 AU and 50 AU from the Sun. The new objects, like Sedna and 2012 VP113, have much more eccentric orbits. Sedna, for example, makes its closest approach at 76 AU, but at the most distant part of its orbit is nearly 1,000 AU from the Sun. The new object, 2012 VP113, comes within 80 AU of the Sun at perihelion. To understand how eccentric these orbits are, here's the orbit of Sedna compared to that of the other planets and Pluto (in purple) in our solar system. Astronomers believe both Sedna and the new object are members of a population of "inner Oort cloud" objects, and that there are likely many other of these tiny worlds.

Here's one more image showing Sedna (orange), the new object, and the rest of the solar system. The distant objects are not only fascinating in and of themselves (what are they? rocky? ice balls? where Katy evacuated to?) – their existence, and their nature, will help astronomers further pin down how the solar system formed.
Explore the unspoilt ecosystems and culture of La Gomera
Discover the Gomerian whistle, learn how to make goats cheese, uncover the secret uses of the Canarian palm tree and appreciate the local lifestyle. Swim and kayak on the stunning shoreline while visiting the different ecosystems La Gomera is famous for. The UNESCO-listed Garajonay National Park holds special walking trails inside unusual forests which nowadays are rare to find in Southern Europe and Africa. Its flowing water streams are the best preserved in the Canary Islands. Within this area, you can also uncover 8 of the 34 plant species native to La Gomera. Fruitful wisdom sits inside the minds of many islanders, who know archaic traditional techniques for the production of palm honey. La Gomera is the only destination in the world where palm honey is extracted from Canarian palm trees. Unearth this ancient skill, sourced from the Canarian tree, which dates back to the first century. Another highlight is the diverse wildlife scattered on and close to the island. Look out for the 50 different nesting bird species. Between La Gomera and its neighbour Tenerife, there are five different types of cetaceans that live in the waters all year round. To make sure you get the opportunity to spot them, you will embark on a responsible whale and dolphin watching boat journey. We will also kayak on these waters, allowing us to birdwatch over the cliffs. To conclude your journey, witness rural entrepreneurs and artisans making precious pottery and discover how goat cheese is produced. Finally, learn the culture of local shepherds and their indigenous language, the Gomerian whistle.
- Visit different ecosystems in Garajonay National Park
- Dine at Casa Efigenia, the only traditional vegetarian restaurant on the island
- Journey to the exhibition centre of palm honey in Alojera
- Taste and see different uses of the La Gomera Canarian palm tree
- Responsible whale and dolphin watching boat journey
- Visit aboriginal sites around El Cercado village
- Traditional pottery workshop with local artisans
- Learn the pottery process at the exhibition centre of Las Loceras
- Kayak and swim on the shores of La Gomera
- Uncover how to make local goats cheese
- Learn about the shepherd culture of La Gomera
Interested? Enquire with us today. Want to know more? Click on the trip plan above for further information!
Included: Local Guide, Transportation, Dinner, Lunch, Bed & Breakfast
Day 1: Welcome to the Canaries!
- Ferry Transfer
Day 2: Meander in Garajonay National Park
- Meeting with guide and briefing session
- Garajonay experience
- Lunch at vegetarian restaurant
Day 3: Uncover the magic of La Gomerian Palm Honey
- Trip to Alojera
- Visit the exhibition centre of palm honey
Day 4: Whale and dolphin watching on the Atlantic Ocean
- Whale watching and dolphin boat excursion
- Lunch on boat
Day 5: Learn from the locals
- Visit Las Loceras exhibition centre
- Pottery making workshop
Day 6: Sea Kayaking and Bird watching
- Sea Kayaking tour
Day 7: Learn about the Shepherd Culture
- Visit to a local goat cheese factory
- Learn about shepherd culture
Day 8: Say Goodbyes
- Transfer to ferry port
- Ferry to Tenerife
For people who are not sure what Brexit means, it refers to the British public's vote on June 24 to leave the European Union. Given the sheer extent to which Britain is reliant on its connections with the European Union, it should come as no surprise to learn that this has had a horrendous impact on Britain, which in turn has sent shockwaves rippling throughout the economies of the rest of the world. (1) In part, this is because membership in the European Union came with a shared set of standards, and in part, this is because membership in the European Union came with open borders that facilitated international trade by permitting free movement of labor and free movement of products. This has hit Britain particularly hard because a lot of multinational corporations from English-speaking countries chose it as the site for their European offices because of the ease of communication that came with convenient access to the rest of the members of the European Union. However, it is also important to mention the fact that no one knows what will happen next with Britain. This is because Brexit was not a binding referendum, meaning that Britain's leadership can refuse to submit the request to leave the European Union even if it comes at the expense of their popular support. For that matter, no one is even sure who will make up Britain's leadership, seeing as how the chances of a general election are good, while the current prime minister, David Cameron, has announced his eventual plan to step down out of shame at having lost a referendum he had called, and had intended to win in order to shore up his support among his Tories.
Even worse, some people are expecting a potential breakup of the United Kingdom because of the rumblings heard throughout pro-Remain regions, with Scotland rumbling the loudest: one of the reasons the Scots had voted to remain within the United Kingdom was that, as an independent country, they would have had to jump through a lot of hoops to stay within the European Union. Summed up, Brexit has caused a horrible mess that has splattered not just Britain and the other countries of the European Union, but also countries all around the world, because of the interconnected nature of modern economies. Even worse, there is no end in sight to this mess, because its numerous uncertainties will not be cleared up anytime soon. Until consumers, businesses, and other institutions can anticipate the future once again, there will be no end to the protective measures they are already using to shield themselves from Brexit's impact; such protective measures have a real chance of causing another wave of recessions like the housing crisis that kickstarted the Great Recession of the late 2000s. Here are five examples of the tech companies that Brexit is going to hit the hardest: Amazon is an excellent example of the companies feeling the effects now that the British public has voted to leave the European Union. After all, it has used Britain as a shipping hub for its European operations, a role that has now been thrown into uncertainty. If Britain leaves the European Union, there is a chance that Amazon will no longer be able to ship packages from Britain to the other countries of the European Union with the same ease, meaning that Amazon may have to reduce its British operations as part of a reorientation of its internal distribution network. 
This is without factoring in the real chance that the European Union will move as a whole to retaliate against Britain, which is possible because Britain has not just spat in its members' faces but also caused them real economic harm by leaving. (2) Microsoft is a tech titan whose stock is tumbling because of Brexit, though its situation is not as bad as Amazon's. In the main, this is because Microsoft does business worldwide, meaning that a major disruption of the existing system is, in turn, a major disruption to the business conditions that make those worldwide operations possible. This is the reason Microsoft was one of the corporations encouraging the British public to remain within the European Union before the vote, though unfortunately it was not successful in that regard. (2) IBM is another tech titan suffering because of Brexit, for much the same reason as Microsoft. After all, it conducts business throughout the European Union, which includes Britain, but with the new uncertainties in its business environment, this is now under threat like never before. No wonder IBM was also one of the companies that had advocated for Britain to remain within the European Union. (2) Google is also feeling the effects of Brexit now that no one knows what will happen with either the British market or the European markets. This can be seen in the significant fall in Google's stock on the day after Brexit, though it remains to be seen whether it will suffer further effects as the consequences of Brexit continue to spread through economies all around the world. (2) Virgin Media is an excellent example of the British tech companies that will be hit by Brexit. In short, it is a telecommunications business, meaning that its customers will fall in number as Brexit causes both British businesses and British consumers to scale back on their use of the Internet, mobile phones, and everything else Virgin Media provides. 
Even worse, Virgin Media relies on European labor because British labor cannot meet all of the needs of British tech companies, meaning that it has a lot of HR problems on its hands. (3) Summed up, Brexit is going to hit tech companies hard, so much so that many of them have been speculating about potential moves to Ireland, which has English as its predominant language but is remaining within the European Union. Unfortunately, the sheer confusion of the mess makes it difficult to predict anything more about what will happen, which is a large part of the problem in the first place.
One point five—most readers will recognize that number as the generally accepted upper limit of permissible climate warming. With current temperatures already hovering at 1.1 degrees Celsius above the historical average, the race is on to stay below that limit, and the likelihood that we will surpass it is growing. Even if we do manage a 1.5 degree future, that’s still warmer than today’s world, which is already seeing devastating climate impacts. So what will it actually feel like to live in a 1.5 degree world—or a 2 degree one, or even 3? The Probable Futures initiative has built a tool to help everyone imagine. Probable Futures is a newly launched climate literacy initiative with the goal of reframing the way society thinks about climate change. The initiative was founded by Dr. Spencer Glendon, a senior fellow with Woodwell Climate who, after investigating climate change as Director of Research at Wellington Management, noticed a gap in need of bridging between climate scientists and, well… everyone else. According to Dr. Glendon, although there was an abundance of available climate science, it wasn’t necessarily accessible to the people who needed to use it. The way scientists spoke about climate impacts didn’t connect with the way most businesses, governments, and communities thought about their operations. There was no easy way for individuals to pose questions of climate science and explore what the answers might mean for them. In short, the public didn’t know what questions to ask and the technical world of climate modeling wasn’t really inviting audience participation. But it desperately needed to. Because tackling climate change requires everyone’s participation. “The idea that climate change is somebody else’s job needs to go away,” Dr. Glendon says. “It isn’t anybody else’s job. It’s everybody’s job.” So, working with scientists and communicators from Woodwell, Dr. 
Glendon devised Probable Futures—a website that would offer tools and resources to help the public understand climate change in a way that makes it meaningful to everybody. The site employs well-established models to map changing temperatures, precipitation levels, and drought through escalating potential warming scenarios. The data is coupled with accessible content on the fundamentals of climate science and examples of it playing out in today’s world. According to the initiative’s Executive Director, Alison Smart, Probable Futures is designed to give individuals a gateway into climate science. “No matter where one might be on their journey to understand climate change, we hope Probable Futures can serve as a trusted resource. This is where you can come to understand the big picture context and the physical limits of our planet, how those systems work, and how they will change as the planet warms,” Smart says. As the world awakens to the issue of climate change, there is a growing group of individuals who will need to better understand its impacts. Supply chain managers, for example, who are now tasked with figuring out how to get their companies to zero emissions. Or parents, trying to understand how to prepare their kids for the future. Probable Futures provides the tools and encouragement to help anyone ask good questions about climate science. To that end, the site leans on storytelling that encourages visitors to imagine their lives in the context of a changing world. The maps display forecasts for 1.5, 2, 2.5, and 3 degrees of warming—our most probable futures, with nearly 3 degrees likely by the end of the century on our current trajectory. For the warming we have already surpassed, place-based stories of vulnerable human systems, threatened infrastructure, and disruptions to the natural world, give some sense of the impacts society is already feeling. 
According to Isabelle Runde, a Research Assistant with Woodwell’s Risk Program who helped develop the maps and data visualizations for the Probable Futures site, encouraging imagination is what sets the initiative apart from other forms of climate communication. “The imagination piece has been missing in communication between the scientific community and the broader public,” Runde says. “Probable Futures provides the framework for people to learn about climate change and enter that place [of imagination], while making it more personal.” Glendon believes that good storytelling in science communication can have the same kind of impact as well-imagined speculative fiction, which has a history of providing glimpses of the future for society to react against. Glendon uses the example of George Orwell who, by imagining unsettling yet possible worlds, influenced debates around policy and culture for decades. The same could be true for climate communication. “I’m not sure we need more science fiction about other worlds,” Glendon says. “We need fiction about the future of this world. We need an imaginative application of what we know.” Glendon hopes that the factual information on Probable Futures will spark speculative imaginings that could help push society away from a future we don’t want to see. For Smart, imagining the future doesn’t mean only painting a picture of how the world could change for the worse. It can also mean sketching out the ways in which humans will react to and shape our new surroundings for the better. “We acknowledge that there are constraints to how we can live on this planet, and imagining how we live within those constraints can be a really exciting thing,” Smart says. “We may find more community in those worlds. We may find less consumption but more satisfaction in those worlds. We may find more connection to human beings on the other side of the planet. And that’s what makes me the most hopeful.”
Likely spurred on by the success of Nintendo's own Brain Training games, Majesco has announced the development of two new games designed to improve basic thinking skills and reaction time. Both Brain Boost: Beta Wave and Brain Boost: Gamma Wave are being developed under the consultation of Dr. Makoto Shichida, creator of the Right Brain Development Theory. "Each Brain Boost game offers a completely different set of fun, yet challenging, brain training problems that are designed to enhance your mental acuity," said Ken Gold, vice president of Majesco. "We believe a growing number of consumers are looking for more diversified game play experiences and the Brain Boost games will help attract a new audience to the category." Each of these games focuses on improving memory, concentration, and logical reasoning through increasingly difficult exercises in speed and accuracy. Brain Boost: Beta Wave focuses on stimulating parts of the right frontal lobe often associated with active concentration and busy thinking. Games included in Brain Boost: Beta Wave include Find a Match, Shape Recognition, Addition, Remember Sequence, and Moving Dots. Brain Boost: Gamma Wave, on the other hand, focuses more on activities designed to stimulate higher mental activity, including perception and problem solving. Brain Boost: Gamma Wave includes training games for remembering Circumstances, Faces, Images, Numbers, and Colors. Developed by Interchannel-Holon, both Brain Boost: Beta Wave and Brain Boost: Gamma Wave are scheduled to ship in November for a suggested price of $19.99.
By Larry Chaffin. The first book published on deploying Voice over IP (VoIP) products from Nortel Networks, the largest supplier of voice products in the world. The book begins with a discussion of the current protocols used for transmitting converged data over IP, as well as an overview of Nortel's hardware and software solutions for converged networks. In this section, readers will learn how H.323 allows various communication devices to communicate with each other, and how SIP (Session Initiation Protocol) is used to establish, modify, and terminate multimedia sessions, including VoIP phone calls. The section then introduces the reader to the Multimedia Communication Server 5100 and Nortel's complete suite of Multimedia Communications Portfolio (MCP) products. The remaining chapters of the book teach the reader how to design, install, configure, and troubleshoot the entire Nortel product line. · If you are tasked with designing, installing, configuring, and troubleshooting a converged network built with Nortel's Multimedia Communication Server 5100 and Multimedia Communications Portfolio (MCP) products, then this is the only book you need. · It shows how to design, build, secure, and maintain a state-of-the-art converged network to meet all of your enterprise requirements. · Also covers how to secure your entire multimedia network from malicious attacks.
As urbanization continues to increase and air pollution becomes a major concern in cities around the world, many European cities have implemented strict environmental-zone rules for vehicles to reduce emissions and improve air quality. In this article, we will explore some of these restrictions.

Low Emission Zones

One of the most common measures taken by cities to reduce car emissions is the implementation of low emission zones (LEZs). These are designated areas within a city where only vehicles meeting certain emission standards are allowed to enter. In Germany, these zones are marked by an "Umweltzone" road sign, with a supplementary panel showing which sticker colors are admitted, and they are typically found in the city centers of major cities such as Berlin, Frankfurt, and Munich. To enter an LEZ in Germany, a vehicle must display a special windscreen sticker indicating that it meets the emission standards for the zone. These stickers are color-coded based on the emission standard of the vehicle. For example, a green sticker indicates that the vehicle meets the highest standard and is allowed in all LEZs, while a red sticker indicates only the lowest qualifying standard and excludes the vehicle from the many zones, now the majority, that admit green stickers only.

Fuel Efficiency Standards

In addition to LEZs, many European cities have also implemented fuel efficiency standards to encourage the use of more fuel-efficient vehicles. In Germany, for example, all new cars must meet certain fuel efficiency standards set by the European Union in order to be registered and sold in the country. These standards are based on the carbon dioxide emissions of the vehicle, with higher emissions resulting in lower fuel efficiency ratings. Cities in Germany and other European countries may also offer incentives or discounts to drivers of fuel-efficient vehicles, such as reduced parking fees or access to special lanes on the roads. These measures are designed to encourage the use of more environmentally friendly vehicles and reduce overall emissions from the transportation sector. 
Public Transportation Alternatives

Finally, many European cities are also promoting the use of public transportation as an alternative to driving a car. This includes expanding and improving public transit systems, as well as encouraging the use of alternatives such as biking or walking. In Germany, for example, many cities have implemented bike-sharing programs and are investing in the expansion of their public transportation networks. By providing attractive alternatives to driving a car, cities hope to reduce the number of cars on the roads and lower overall emissions. This can help to improve air quality and reduce the impact of transportation on the environment.
You know and love our Must-Read IT Blogs lists, but now, say hello to the nonprofit side. The Green Grid, an industry trade consortium focused on improving energy efficiency, developed the PUE metric. The metric compares the total amount of energy consumed by the data center with the amount of energy consumed by the data center's IT equipment. A rating below 2 is highly effective; above 3, and there are bigger problems than the IT gear.

What It Is: The calculation for PUE is:

PUE = Total data center facility power consumption / IT equipment power consumption

Data Center infrastructure Efficiency (DCiE) is the reciprocal of PUE (1 / PUE), expressed as a percentage:

DCiE = (IT equipment power consumption / Total data center facility power consumption) x 100%

Note that The Green Grid defines PUE as being greater than or equal to 1.0, and DCiE as being less than or equal to 100 percent.

What Counts: IT equipment power consumption includes energy used for servers, storage systems, network equipment, monitors, workstations, printers, copiers and other associated technologies. Total data center facility power consumption is measured as the energy going into a facility that supports power distribution units (PDUs), uninterruptible power supplies (UPSes), generators, standby battery power, cooling, lighting and IT equipment. For example, if a data center facility consumed 3,000 watts (3 kW) of energy, and IT equipment consumed 2,000 watts (2 kW), the PUE would be 1.5 and the DCiE 66.667 percent. A PUE of 1.5 (considered to be good) means that total energy demands are 1.5 times those of the IT equipment deployed in a given data center. On the other hand, a data center consuming 5 kW of energy with IT equipment consuming 1.5 kW would have a PUE of 3.33, or a DCiE of 30 percent, which is considered less effective.

Mileage May Vary: The lower the PUE, the better in terms of how efficiently a facility utilizes energy. Keep in mind that this isn't an indicator of how effectively energy is used. 
It’s tempting to focus on the macro metric of PUE and attempt to reduce its value; however, this can be dangerous. The danger lies in trying to lower the PUE without looking at how performance, availability, capacity and economics (PACE) of information services being delivered would be affected. Another pitfall of using PUE for comparisons is an apples-to-oranges scenario in which different time intervals are used for facilities doing various types of work. Collect Measurements: Total data center input power can be collected via the main meter or metrics supplied by your service provider. IT equipment can be measured from the UPSes, PDUs or meters attached to servers, storage and network gear. Don’t rely on stickers or labels on IT equipment for power consumption information because these may reflect peak or surge versus continuous operating information or may provide fuse and breaker-sizing information. Maintain Perspective: Metrics are important, but they should be kept in perspective. Multiple metrics are needed — not only the PUE, but also indicators of work being performed or information services delivered. Avoid using dissimilar metrics that don’t reflect your environment or applications workload. Also keep the time interval of the metrics in consideration. Overall, IT administrators should use the PUE metric along with other measurements to effectively manage their environments. After all, the most important metrics are the ones that provide situational awareness into how your environment is functioning to deliver information services at a given level.
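The PUE and DCiE arithmetic described above is simple enough to sketch in a few lines. The helper below is illustrative only; the function names are our own, not part of any Green Grid tooling:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: reciprocal of PUE, as a percentage."""
    return 100.0 / pue(total_facility_kw, it_equipment_kw)

# The two examples from the article:
print(pue(3.0, 2.0))             # facility 3 kW, IT 2 kW -> PUE 1.5
print(round(dcie(3.0, 2.0), 3))  # -> DCiE 66.667 (percent)
print(round(pue(5.0, 1.5), 2))   # facility 5 kW, IT 1.5 kW -> PUE 3.33
```

Note that both inputs should be averaged over the same time interval; as the article cautions, comparing values taken over different intervals is an apples-to-oranges exercise.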
Kelly Slater, the legendary surfer and 11-time world champion, has captivated the world with his phenomenal skills and progressive approach to the sport. Beyond his athletic achievements, fans and enthusiasts alike are curious about his personal life, including his whereabouts. In this comprehensive guide, we'll delve into the intriguing question: Where does Kelly Slater live? Home Base in Cocoa Beach, Florida: A Surfer's Paradise Kelly Slater's primary residence is nestled in the heart of Cocoa Beach, Florida, a renowned surf town known for its consistent waves and vibrant surf culture. This coastal haven provides the perfect backdrop for Slater to pursue his passion and maintain his connection to the ocean. The town's laid-back atmosphere and abundance of surf spots make it an ideal location for Slater to train, innovate, and push the boundaries of surfing. Tranquility in Kauai, Hawaii: Embracing the Aloha Spirit In addition to his home in Florida, Kelly Slater also finds solace and inspiration in Kauai, Hawaii. This lush and pristine island captivates with its breathtaking landscapes, diverse ecosystems, and legendary surf breaks. Slater's deep appreciation for nature and respect for the ocean are nurtured in this idyllic setting. Whether he's riding the waves or exploring the island's natural wonders, Kauai offers Slater a sanctuary to recharge, rejuvenate, and reconnect with his love for surfing. Global Nomad: Chasing Perfect Waves Worldwide While Kelly Slater maintains his residences in Cocoa Beach and Kauai, his surfing expeditions take him to every corner of the globe. As a professional surfer, he constantly seeks out the most challenging and rewarding waves. From the frigid waters of Iceland to the tropical shores of Tahiti, Slater's nomadic lifestyle allows him to experience the world's diverse surf breaks and share his passion with fellow surfers. 
This global exploration not only expands his surfing horizons but also deepens his understanding of different cultures and environments. Environmental Advocacy and Sustainable Living In addition to his surfing pursuits, Kelly Slater is a passionate advocate for environmental conservation and sustainable living. His homes in Cocoa Beach and Kauai reflect his commitment to reducing his ecological footprint. Slater actively supports organizations dedicated to protecting marine ecosystems and promoting responsible coastal development. His lifestyle choices, such as using renewable energy sources and minimizing waste, serve as an inspiration to others to make a positive impact on the planet. Conclusion: A Life Defined by Surfing and Environmental Stewardship Kelly Slater's choice of residences mirrors his multifaceted personality and passions. From his home base in Cocoa Beach to his retreat in Kauai and his global surfing adventures, Slater embodies the spirit of surfing, exploration, and environmental stewardship. Whether he's riding the waves, advocating for ocean conservation, or simply enjoying the beauty of nature, Slater continues to inspire and captivate audiences worldwide. Frequently Asked Questions - Why does Kelly Slater have multiple residences? Kelly Slater's multiple residences reflect his diverse interests and lifestyle. His home in Cocoa Beach provides a convenient base for training and surfing, while his retreat in Kauai offers a tranquil escape and connection to nature. Additionally, his travels around the world allow him to experience different cultures and surf breaks. - What does Kelly Slater do to promote environmental sustainability? Kelly Slater is an ardent advocate for environmental conservation and sustainable living. He actively supports organizations dedicated to protecting marine ecosystems and promoting responsible coastal development. 
Additionally, his personal choices, such as using renewable energy sources and minimizing waste, set an example for others to make a positive impact on the planet. - Where does Kelly Slater surf the most? Kelly Slater is known for his global surfing adventures, and he has surfed in various locations worldwide. However, he frequently visits well-known surf spots such as Pipeline in Hawaii, J-Bay in South Africa, and Cloudbreak in Fiji. - What are Kelly Slater's plans for the future? Kelly Slater continues to push the boundaries of surfing and environmental advocacy. He is involved in various projects aimed at promoting ocean conservation and sustainable living. Additionally, he remains an active competitor on the professional surfing circuit and shows no signs of slowing down. - How can I follow Kelly Slater's surfing journey? You can follow Kelly Slater's surfing journey through various channels. His official website and social media platforms provide regular updates on his competitions, travels, and environmental initiatives. Additionally, numerous surf magazines, websites, and documentaries feature his exploits and insights into the world of surfing.
Middle School Coursework - After July To add a middle school endorsement to a license, an applicant must complete six semester hours of middle school professional coursework in the following areas: - 3 semester hours of coursework in middle school philosophy, curriculum, and instructional methods for designing and teaching developmentally appropriate programs in the middle grades, including content area reading instruction; - 3 semester hours of coursework in educational psychology focusing on the developmental characteristics of early adolescents and the role of the middle grade teacher in assessment, coordination, and referral of students to health and social services. Below is a list of universities and courses that have been pre-approved. - Illinois Universities A - F - Illinois Universities G - L - Illinois Universities M - Q - Illinois Universities R - Z - Out of State Universities - Because coursework varies at different universities, please contact a teacher education institution to determine the specific courses offered to fulfill these requirements. - Universities offer middle grades courses in varying approaches. Some universities offer specific, individual courses to meet the middle grades requirements. Others infuse the content throughout several courses; in this situation, all the courses will have to be taken to meet the requirement. The applicant should be certain to understand the approach taken by the university where the applicant has enrolled.
Flow cytometric basophil activation tests
Journal Contribution - Journal Article
Subtitle: staining of exteriorized basophil granule matrix by fluorescent avidin versus appearance of CD63
Background: Staining of exteriorized basophil granule matrix by fluorescent avidin might be a reliable technique to monitor basophil degranulation. This study compares the avidin-based technique with the upregulation of CD203c and appearance of CD63 in response to various stimuli.
Methods: Fourteen individuals responsive to anti-IgE (nine healthy controls and five birch pollen-allergic patients) and five nonresponders were studied. Activation experiments included anti-IgE, fMLP, interleukin-3 (IL-3), and birch pollen allergen. Basophil activation/degranulation was analyzed by flow cytometry and microscopy using anti-CD63, anti-CD203c, and avidin.
Results: Stimulation with anti-IgE, fMLP, and relevant allergen results in upregulation of CD203c, CD63 appearance, and an increase in avidin binding. In response to anti-IgE and allergen, upregulation of CD203c peaks within 10 min; CD63 and avidin binding reach a plateau after 10–20 min. CD63 staining leads to a bimodal distribution, whereas avidin staining causes a unimodal shift with a less clear discrimination between degranulating and nondegranulating cells. In response to fMLP, upregulation of CD203c and CD63 and avidin binding are maximal after 2.5 min. Following incubation with anti-IgE and fMLP, percentages of CD203c+ cells are higher than those of CD63+ and avidin+ cells, pointing to a dissociation between activation and degranulation. Percentages of CD63+ cells are systematically higher than those of avidin+ cells. Incubation with IL-3 only upregulates CD203c, while no CD63 or avidin binding is observed.
Conclusions: Staining of exteriorized proteoglycans by avidin is a reliable technique to quantify basophil degranulation but offers no added value when compared to traditional assays that use CD63 as a readout. 
Journal: Cytometry. Part B, Clinical Cytometry Pages: 483 - 490 Keywords:Biochemistry/biophysics/molecular biology, Anatomy & pathology, Cell biology, Experimental/laboratory medicine
Barg pressure is the pressure, in units of bars, above or below atmospheric pressure. The "g" at the end of the unit indicates that the measurement is gauge pressure rather than absolute pressure, which is indicated by "bara". According to The Engineering ToolBox, standard atmospheric pressure is, by definition, 1.01325 bar at sea level and 0 degrees Celsius. Physics defines pressure as force per unit area. Atmospheric pressure is due to gravity acting on the air that surrounds the Earth, which is why standard atmospheric pressure is defined at sea level. As elevation increases, pressure decreases because there is less weight of air above. According to HowStuffWorks, "Humans cannot live unprotected at pressures greatly above or below atmospheric pressure." Pressure also affects the boiling point of water, so recipes require adjustments at high altitudes. Underwater, the weight of the water acting on a body increases pressure rapidly with depth. Pressure acting on materials deep within the Earth helps to form metamorphic rocks. Hydromechanics, the study of the effects of force on fluids, involves the study of pressure. According to Bernoulli's principle, the pressure within a flowing stream of liquid or gas drops as the flow speeds up. According to Pascal's law, a change in pressure applied to a confined fluid is transmitted undiminished to all parts of the fluid.
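The gauge/absolute distinction can be illustrated with a small conversion sketch. This assumes the standard 1.01325 bar atmosphere; at other altitudes or weather conditions the local atmospheric pressure should be used instead:

```python
STANDARD_ATMOSPHERE_BAR = 1.01325  # standard atmospheric pressure, in bar

def barg_to_bara(gauge_bar: float) -> float:
    """Convert gauge pressure (barg) to absolute pressure (bara)."""
    return gauge_bar + STANDARD_ATMOSPHERE_BAR

def bara_to_barg(absolute_bar: float) -> float:
    """Convert absolute pressure (bara) to gauge pressure (barg)."""
    return absolute_bar - STANDARD_ATMOSPHERE_BAR

# A vessel open to the atmosphere reads 0 barg but sits at ~1.01 bar absolute:
print(barg_to_bara(0.0))   # 1.01325
# A perfect vacuum reads about -1.01 barg:
print(bara_to_barg(0.0))   # -1.01325
```

The same idea applies to any gauge unit (psig vs. psia, kPag vs. kPaa); only the value of the atmospheric offset changes.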
My NoSQL article is finally posted; I hope it lives up to all the foreshadowing. It is being run online at Intelligent Enterprise/Information Week, as per the link above, where Doug Henschen edited it with an admirably light touch. Below please find three excerpts* that convey the essence of my thinking on NoSQL. For much more detail, please see the article itself.

*Notwithstanding my admiration for Doug's editing, the excerpts are taken from my final pre-editing submission, not from the published article itself.

My quasi-definition of "NoSQL" wound up being:

NoSQL DBMS start from three design premises:
- Transaction semantics are unimportant, and locking is downright annoying.
- Joins are also unimportant, especially joins of any complexity.
- There are some benefits to having a DBMS even so.

NoSQL DBMS further incorporate one or more of three assumptions:
- The database will be big enough that it should be scaled across multiple servers.
- The application should run well if the database is replicated across multiple geographically distributed data centers, even if the connection between them is temporarily lost.
- The database should run well if the database is replicated across a host server and a bunch of occasionally-connected mobile devices.

In addition, NoSQL advocates commonly favor the idea that a database should have no fixed schema, other than whatever emerges as a byproduct of the application-writing process.

I subdivided the space by saying:

If not SQL, then what? A number of possibilities have been tried, with the four main groups being:
- Simple key-value store.
- Fully SQL/tabular.

DBMS based on graphical data models are also sometimes suggested to be part of NoSQL, as are the file systems that underlie many MapReduce implementations. But as a general rule, those data models are most effective for analytic use cases somewhat apart from the NoSQL mainstream.

My conclusion was:

So should you adopt NoSQL technology? 
Key considerations include:
- Immaturity. The very term "NoSQL" has only been around since 2009. Most NoSQL "products" are open source projects backed by a company of fewer than 20 employees.
- Open source. Many NoSQL adopters are constrained, by money or ideology, to avoid closed-source products. Conversely, it is difficult to deal with NoSQL products' immaturity unless you're comfortable with the rough-and-tumble of open source software development.
- Internet orientation. A large fraction of initial NoSQL implementations are for web or other internet (e.g., mobile application) projects.
- Schema mutability. If you like the idea of being able to have different schemas for different parts of the same "table," NoSQL may be for you. If you like the database reusability guarantees of the relational model, NoSQL may be a poor fit.
- Project size. For a large (and suitable) project, the advantages of NoSQL technology may be large enough to outweigh its disadvantages. For a small, ultimately disposable project, the disadvantages of NoSQL may be minor. In between those extremes, you may be better off with SQL.
- SQL DBMS diversity. The choice of SQL DBMS goes far beyond the "Big 3-4" of Oracle, IBM DB2, Microsoft SQL Server, and SAP/Sybase Adaptive Server Anywhere. MySQL, PostgreSQL, and other mid-range SQL DBMS – open source or otherwise – might meet your needs. So might some of the scale-out-oriented startups cited above. Or if your needs are more analytic, there's a whole range of powerful and cost-effective specialized products, from vendors such as Netezza, Vertica, Aster Data, or EMC/Greenplum.

Bottom line: For cutting-edge applications – often but not only internet-centric – NoSQL technology can make sense today. In other use cases, its drawbacks are likely to outweigh its advantages.
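The "schema mutability" consideration above is easy to picture with a toy key-value store, sketched here in plain Python dicts with no particular NoSQL product implied; records filed under the same logical "table" need not share a set of columns.

```python
# Toy schema-less key-value store: each record is just a dict,
# and nothing enforces a common set of fields across records.
store = {}

def put(key, record):
    store[key] = dict(record)  # copy, so later caller mutation doesn't alias

def get(key):
    return store.get(key)

# Two records in the same "users" namespace with different shapes:
put("users/1", {"name": "Ada", "email": "ada@example.com"})
put("users/2", {"name": "Grace", "phone": "+1-555-0100", "tags": ["admin"]})

print(get("users/1")["email"])  # ada@example.com
print(get("users/2")["tags"])   # ['admin']
```

A relational table would force both rows into one declared schema (NULLing or migrating columns as the application evolves); that flexibility is precisely what the "database reusability guarantees of the relational model" trade away.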
Acknowledging that improvements in solar cell efficiency have largely plateaued, with only incremental gains still being made, two physicists from the University of Warwick, Coventry took the work of Einstein and Tesla as inspiration and came up with a new solar panel design – double-glazing. The design proposed by Dr. Gavin Bell and Dr. Yorck Ramachers is fundamentally similar to a double-glazed window: two glass layers with the space between them filled with an inert gas. Whereas such a device would conventionally use a vacuum between the two layers, the new design uses an inert gas as the filler, which acts as additional insulation. Replacing the vacuum with gas could also lower manufacturing costs.

How does the double-glazed solar panel work? When sunlight strikes the panel, electrons are ejected from the photocathode – the inner layer, coated with a special material that releases electrons when irradiated. These free electrons then travel through the inert gas, argon in this case, and are finally collected by the transparent, electrically conductive outer layer.

"It's satisfying to find a new twist on ideas dating back to the start of the 20th century, and as a materials physicist it is fascinating to be looking for materials which would operate in an environment so different to standard photocathodes," Dr. Bell states.

The scientists are hopeful that they have opened a new path for improving solar panels, especially for materials engineers. "Our device is radically different from standard photovoltaics, and can even be adapted for other green technologies such as turning heat directly into electricity, so we hope this work will inspire new advances."

Although the physicists consider a thin diamond film a very good candidate, they have not yet established the photocathode's optimum composition.
They also suggest that the photocathode could be made variably transparent, which would make the design applicable to solar windows. "We think the materials challenge is really critical here so we wanted to encourage the materials science community to get creative," Dr. Bell says.
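The photocathode physics at work here is the ordinary photoelectric effect: a photon can only eject an electron if its energy exceeds the material's work function. A short back-of-the-envelope sketch illustrates the constraint on candidate coatings (the 2 eV work function below is an illustrative assumption, not a figure from the study):

```python
# Photoelectric threshold: the longest wavelength that can eject an
# electron from a photocathode with work function W is lambda_max = h*c / W.
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

def cutoff_wavelength_nm(work_function_eV):
    """Longest usable photon wavelength, in nanometres."""
    return h * c / (work_function_eV * eV) * 1e9

# Hypothetical low-work-function coating of 2 eV:
print(f"{cutoff_wavelength_nm(2.0):.0f} nm")  # prints "620 nm"
```

A cutoff of roughly 620 nm covers most of the visible spectrum; coatings with higher work functions would respond only to bluer light, which is one reason the choice of photocathode material is so critical.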
Every year the Northern New England Poison Center helps patients who have become sick after misidentifying mushrooms they picked to eat. Among NNEPC cases, mistakes made while foraging are the second most common cause of serious mushroom poisonings, behind only people who become sick after taking psychedelic mushrooms on purpose. Foraging mistakes don’t just happen among people who are new to mushroom gathering. Many of our cases involve people who have been foraging for years. What are some mushrooms that cause poisonings in our area? The most common case of mistaken mushroom identity we handle at the NNEPC involves the poisonous jack o’lantern mushroom (Omphalotus illudens), which can be mistaken for edible golden chanterelles (Cantharellus cibarius). Other poisonous lookalikes that are common problems in our region include: - The lilac brown bolete (Tylopilus eximius), mistaken for the edible king bolete (Boletus edulis complex) - The false morel (Gyromitra esculenta), mistaken for the yellow morel (Morchella esculenta) - The pigskin puffballs (species within Scleroderma), mistaken for edible puffballs (various species within Calvatia and Lycoperdon) What symptoms do poisonous mushrooms cause? Most poisonous mushrooms are stomach irritants and cause symptoms like stomach ache, vomiting, cramps and/or diarrhea, which can sometimes be severe. These usually appear within 30 minutes, though they may take longer. However, more dangerous mushrooms do not cause symptoms for 6 hours or longer after eating them. While these poisonings may also start with stomach cramps and diarrhea, they can lead to more severe effects. These can include seizures or damage to your liver or kidneys. Most patients recover with hospital care, but these effects can be fatal. What should I do if someone gets sick after eating a wild mushroom? - Call the poison center right away at 1-800-222-1222. The poison center can help identify the mushroom and determine what treatment is needed. 
- If possible, take some pictures of the mushroom, or one just like it, that you can send to the poison center. Take one picture showing the side view of the mushroom next to a ruler, coin, pen or other object to show the size. Take another picture showing the underside of the mushroom's cap, and one from the top. - Information about where the mushroom was growing can also help the poison center—on wood or out of the ground, in the forest or on the lawn, etc. How can I prevent poisonings from foraging? Foraging always carries some risk. Even people who have been doing it for years can make mistakes or have unexpected reactions. Avoiding foraging is the only way to be 100% safe. If you are going to forage, take some training from an expert first. The North American Mycological Association has a list of mushroom clubs, including ones in Maine and New Hampshire, that may have information on available trainings. A training should cover not just identification, but also safe storage and cooking. If you have recently been poisoned by a wild mushroom, you can help prevent future mushroom poisonings by submitting a report about your experience to the North American Mycological Association Poison Case Registry.
Specifications and Drawings
- ASSIST Docs - Provided by the US Department of Defense; ASSIST provides access to Defense Standardization Program documents.
- EverySpec - A free source for specifications, standards and handbooks from various government sources.
- IHS GLOBAL - A source for engineering documents, standards and specifications, along with technical information for engineers, including ANSI standards, ASTM standards and military specifications.
References Maintained by Airfasco
A list of Fastener Weights
The following references are under development. Please note that links are marked with icons indicating whether a photo or a drawing of the part is available.
US 7890543 B2

An architecture and methodology for designing, deploying, and managing a distributed application onto a distributed computing system is described.

1. One or more computer readable storage media having stored thereon a plurality of instructions that implement a distributed computing system in a distributed computing environment based upon a schema, the schema comprising: at least one definition of a distributed computing system module to be implemented in the distributed computing environment, wherein the at least one definition of the distributed computing system module possesses an inheritance property such that a first definition, when derived from a second definition, inherits a setting constraint and a relationship constraint from the second definition; at least one relationship that identifies potential interactions between the modules of the distributed computing system such that the schema is used by a development tool to modify the definition and relationship and by a deployment tool to implement the module in accordance with the definition and relationship; at least one requirement for the distributed computing environment used by the distributed computing system for a first validation to validate the distributed computing environment; and at least one requirement for the distributed computing system used by the distributed computing environment for a second validation to validate the distributed computing system.

2. The one or more computer readable storage media of
3. The one or more computer readable storage media of
4. The one or more computer readable storage media of
5. The one or more computer readable storage media of
6. The one or more computer readable storage media of
7. The one or more computer readable storage media of
8. The one or more computer readable storage media of a containment relationship, a delegation relationship, a connections relationship, a hosting relationship, and a reference relationship.
9.
The one or more computer readable storage media of
10. The one or more computer readable storage media of
11. The one or more computer readable storage media of
12. The one or more computer readable storage media of
13. The one or more computer readable storage media of
14. The one or more computer readable storage media of
15. The one or more computer readable storage media of
16. The one or more computer readable storage media of
17. One or more computer readable storage media having stored thereon a plurality of instructions that implement a schema, the schema comprising: at least one distributed computing system module definition of a portion of a distributed computing system associated with a distributed-computing environment, wherein the at least one distributed computing system module definition possesses an inheritance property such that a first distributed computing system module definition, when derived from a second distributed computing system module definition, inherits a setting constraint and a relationship constraint from the second distributed computing system module definition; at least one resource definition that specifies module runtime behavior associated with the distributed computing system; and at least one endpoint definition of communication information associated with the distributed computing system.
18. One or more computer readable storage media as recited in
19. One or more computer readable storage media as recited in
20. One or more computer readable storage media as recited in
21. One or more computer readable storage media as recited in
22. One or more computer readable storage media as recited in
23.
One or more computer readable storage media having stored thereon a plurality of instructions that when executed by a computer implement a design tool, the design tool comprising: a system definition model to enable defining abstractly the specifications of distributed-computing environments and distributed computing systems; and a schema to dictate how functional operations modules within the system definition model are to be specified, wherein the schema comprises: at least one requirement for the distributed-computing environments used by the distributed computing systems to validate the distributed-computing environments; and at least one requirement for the distributed computing systems used by the distributed-computing environments to validate the distributed computing systems.
24. The design tool of
25. The design tool of
26. The design tool of
27. The design tool of
28. A data structure stored on one or more computer-readable storage media that is instantiated in accordance with a schema, the schema being accessible by a plurality of tools, the plurality of tools comprising: an application development tool, whereby the application development tool defines a system comprised of communicating software and hardware components during a design phase; an application deployment tool for facilitating deployments to a plurality of host environments and a plurality of scales, whereby the application deployment tool facilitates utilizing a definition of the system developed by the application development tool to perform operations comprising: deploying the system; allocating software and hardware; and configuring the software and hardware; and an application management tool, the application management tool facilitating new management tools to perform operations comprising: driving resource allocation; the schema comprising: at least one system definition of a component of a scale-invariant distributed-application; at least one resource definition of an application
runtime behavior associated with the component; at least one endpoint definition of communication information associated with the component; at least one containment relationship specifying an ability of a particular definition to contain members of other definitions; at least one delegation relationship that exposes members contained in the particular definition; at least one communication relationship that specifies available communication interactions between a plurality of definitions; at least one hosting relationship that specifies dependencies between the plurality of definitions; and at least one reference relationship that specifies ordering relationships between the plurality of definitions.
29. A method comprising: creating a data structure in accordance with a schema, the schema defining at least one definition of entities in a distributed-computing system, at least one containment relationship specifying the ability of a particular definition to contain members of other definitions, at least one delegation relationship that exposes members contained in the particular definition, at least one communication relationship that specifies available communication interactions between a plurality of definitions, at least one hosting relationship that specifies dependencies between the plurality of definitions, at least one reference relationship that specifies ordering relationships between the plurality of definitions; and populating the data structure.
30.
One or more computer readable storage media having stored thereon a plurality of instructions that, when executed by a processor, cause the processor to perform a method, the method comprising: loading a definition of entities in a distributed-computing system; loading a relationship that specifies potential links between the entities in the distributed-computing system; and loading a constraint that specifies a restriction used by one of the entities to constrain the relationship in which the one of the entities participates, or a restriction used by the relationship to constrain one or more of the entities linked by the relationship.
31. The computer readable storage media of
32. The computer readable storage media of
33. The computer readable storage media of
34. A method comprising: loading a definition of entities of a distributed-computing system in a distributed-computing environment; and loading a relationship that specifies potential interactions between the entities of the distributed-computing system such that the definition and the relationship are used during development, validation, deployment and management of the distributed-computing system, wherein the validation comprises: validating the distributed-computing system by the distributed-computing environment according to one or more requirements for the distributed-computing system; and validating the distributed-computing environment by the distributed-computing system according to one or more requirements for the distributed-computing environment.
35. The method of
36. The method of

This application claims the benefit of U.S. Provisional Application No. 60/452,736, filed Mar. 6, 2003, entitled "A This patent application is related to the following U.S. patent applications (all of which are incorporated by reference): U.S. patent application Ser. No. 09/695,821, filed on Oct. 24, 2000, titled "U U.S. patent application Ser. No. 09/696,707, filed on Oct. 24, 2000, titled "S U.S.
patent application Ser. No. 09/696,752, filed on Oct. 24, 2000, titled “S The invention relates to an architecture for a distributed computing system. Internet usage has exploded over the past several years and continues to grow. People have become very comfortable with many services offered on the World Wide Web (or simply “Web”), such as electronic mail, online shopping, gathering news and information, listening to music, viewing video clips, looking for jobs, and so forth. To keep pace with the growing demand for Internet-based services, there has been tremendous growth in the computer systems dedicated to hosting Websites, providing backend services for those sites, and storing data associated with the sites. One type of distributed computer system is a data center (such as an Internet data center (IDC) or an Enterprise Data Center (EDC)), which is a specifically designed complex that houses many computers for hosting network-based services. Data centers, which may also go by the names of “Webfarms” or “server farms”, typically house hundreds to thousands of computers in climate-controlled, physically secure buildings. Data centers typically provide reliable Internet access, reliable power supplies, and a secure operating environment. Today, large data centers are complex and often called upon to host multiple applications. For instance, some websites may operate several thousand computers, and host many distributed applications. These distributed applications often have complex networking requirements that require operators to physically connect computers to certain network switches, as well as manually arrange the wiring configurations within the data center to support the complex applications. As a result, this task of building physical network topologies to conform to the application requirements can be a cumbersome, time consuming process that is prone to human error. 
Accordingly, there is a need for improved techniques for designing and deploying distributed applications onto the physical computing system. An architecture and methodology for designing, deploying, and managing a distributed application onto a distributed computing system is described. The same numbers are used throughout the drawings to reference like features. The following disclosure describes a number of aspects pertaining to an architecture for designing and implementing a distributed computing system with large-scale application services. The disclosure includes discussion of a system definition model (SDM), which is also referred to as a service definition model, and an SDM runtime environment. The disclosure further includes design aspects such as how to model various data center components. As used herein, the term “wire” may also be referred to as “connections”, “communication”, or “communication relationship”. Also, the term “system” may be referred to as “module” and the term “resource space” may be referred to as “resources”. Additionally, the term “application space” may also be referred to as “applications”, and the term “instance space” may also be referred to as “instances”. Further, the term “class” may also be referred to as “abstract definition”, the term “port” may also be referred to as “endpoint”, and the term “type” may also be referred to as “definition”. Computing devices 102 can be any of a variety of conventional computing devices, including desktop PCs, workstations, mainframe computers, server computers, Internet appliances, gaming consoles, handheld computers, cellular telephones, personal digital assistants (PDAs), etc. One or more of devices 102 can be the same types of devices, or alternatively different types of devices. 
Additionally, even if multiple devices are the same types of devices, the multiple devices may still be configured differently (e.g., two devices 102 may be server computers, but may have different hardware configurations, such as different processors, different amounts of RAM, different sizes of hard disk drives, and so forth). One or more computing devices 102 may also be re-configured after being added to setting 100. For example, a particular computing device 102 may operate for a period of time (e.g., on the order of minutes, hours, days, months, etc.) performing one function, and then an administrator may decide that a different function is desirable (e.g., change from being a server computer to a workstation computer, from a web server to a local file server, etc.). The lifecycle of a system typically includes three primary phases (also referred to as stages): a design or development phase, followed by a deployment or installation phase, followed by an operations or management phase. As the model applies to all three phases of the lifecycle of a system, the model can thus be seen as an integration point for the various phases in the lifecycle of a system, and facilitates each of these phases. Additionally, by using the model, knowledge can be transferred between these phases, such as knowledge regarding management of the system (e.g., being fed back to the design and development team, thereby allowing the design and development team to modify the system, such as for future versions or to improve the performance of the current version); knowledge of the structure, deployment requirements and operational behavior of the system; knowledge of the operational environment from the desktop to the data center; knowledge of the service level as observed by the end user; and so forth. Generally, during the design phase, development tools leveraging the SDM are used to define a system comprised of communicating software and hardware components.
A system definition contains all information necessary to deploy and operate a distributed system, including required resources, configuration, operational features, policies, etc. During the deployment phase, the system definition is used to automatically deploy the system and dynamically allocate and configure the software and hardware (e.g., server, storage and networking) resources required. The same system definition can be used for deployments to different host environments and to different scales. During the management phase, an SDM Service in the operating system provides a system-level view for managing the system. This enables new management tools to drive resource allocation, configuration management, upgrades, and process automation from the perspective of a system. The architecture 200 employs the SDM definition model as well as a schema that defines functional operations within the SDM definition model. The definition model includes various different kinds of data structures which are collectively referred to as “definitions”. Functionality of the SDM is exposed through one or more platform services, such as application program interfaces (APIs). During the design phase for a system, a development system 202 generates a document that contains the system definition, such as an SDM document 204. Development system 202 can be any of a variety of development systems, such as the Visual Studio® development system available from Microsoft® Corporation of Redmond, Wash. SDM document 204 defines all information (also referred to herein as knowledge) related to the deployment and management of the system. Any knowledge necessary for or used when deploying the system or managing the system is included in SDM document 204. Although described herein as a single document, it is to be appreciated that the knowledge could alternatively be spread out and maintained in multiple documents. 
SDM document 204 includes one or more constraints (also referred to as requirements) of the system that an environment in which the system is to be deployed and/or run must satisfy. The environment itself is also described using an SDM document. Such environments can be single computing devices, or alternatively collections of computing devices (e.g., data centers), application hosts, etc. Different systems can be installed to different environments. For example, a data center may include fifty computing devices, and one system may be deployed to five of those computing devices, while another system may be deployed to thirty five of those computing devices. These requirements can take a variety of forms, such as: hardware requirements regarding the computing device(s) on which the system is to be deployed (e.g., a minimum processor speed, a minimum amount of memory, a minimum amount of free hard drive space, a minimum amount of network bandwidth available, particular security mechanisms available, and so forth), software requirements regarding the computing device(s) on which the system is to be deployed (e.g., a particular operating system, one or more other applications that also must be installed, specifications regarding how a particular system and/or the operating system is to be configured, a particular type of security or encryption in use, and so forth), other requirements regarding the computing device(s) on which the system is to be deployed (e.g., particular security keys available, data center policies that must be enforced, authentication that is used, environment topology, etc.). Requirements can also go in the other direction—that is, the environment can have constraints or requirements on the configuration of the system that is to be installed (e.g., to implement the standards or policies of the environment). 
These can be “explicit” requirements that are created by the operator of the environment, such as particular settings or configurations the system must have, particular functionality the system must provide or support, particular.security mechanisms the system must support, and so forth. These can also be “implicit” requirements that that arise because of a particular configuration of the environment. For example, if a host computing device in the environment is using a particular type of file system then it may not be possible for some actions to be performed using that file system (although it may be possible for those same actions to be performed using another file system). During the design and development phase of the system, SDM document 204 can be used to validate the system for one or more particular environment(s). This is a two-way validation: the system is validated for the environment and the environment is validated for the system. The environment can be validated for the system by comparing the requirements identified in the SDM document 204 with the environment and determining whether all of the requirements are satisfied by the environment. The system can be validated for the environment by comparing the requirements identified in an SDM document for the environment with the system and determining whether all of the requirements are satisfied by the system. If all of the requirements are satisfied by the environment and the system, then the designer or developer knows that the system can be deployed in and will run in the environment. However, if all of the requirements are not satisfied by the environment and/or the system, then the designer or developer is optionally informed of the requirements that were not satisfied, thereby informing the designer or developer of what changes should be made to the SDM document 204 (and correspondingly to the system) and/or to the environment in order for the system to be deployed and run in that environment. 
The knowledge regarding deployment of the system that is included in the SDM document 204 describes how the system is to be deployed in one or more environments. The SDM document 204 is made available to a controller 206, which includes a deployment module 208 and a management module 210. In certain embodiments, the SDM document 204 as well as all of the files of the system (e.g., binaries, data, libraries, etc.) needed to install the system are packaged together into a single container (e.g., a single file) referred to as an SDU (System Definition Unit). Controller 206 can be one or more of computing devices 102. Deployment module 208 includes services that are used to deploy the system in the environment(s). Different knowledge for deployment in different environments may be included in the SDM document 204. This deployment knowledge describes any changes that need to be made to the environment (e.g., changes to a system registry; folders, directories, or files that need to be created; other setting or configuration parameters of the computing device that need to be set to particular values; and so forth), as well as what files (e.g., program and/or data files) need to be copied to the computing device(s) in the environment and any operations that need to be performed on those files (e.g., some files may need to be decompressed and/or decrypted). In many implementations, the deployment knowledge in the SDM document 204 includes, for example, information analogous to that presently found in typical setup or installation programs for systems. During the deployment process, controller 206 generates a record or store of the software and hardware resources involved in the deployment as well as the relationships between them. This record or store can subsequently be used by controller 206 during the management phase. Management module 210 includes services that are used to manage the system once it is installed in the environment(s).
These services of management module 210 include one or more functions that can be called or invoked to manage the systems in the environment. The knowledge regarding management of the system that is included in the SDM document 204 describes how the system is to be managed in one or more environments. Different knowledge for managing a system in different environments may be included in the SDM document 204. The management knowledge includes any knowledge used in the management or operation of the system. Management involves, for example, configuration (and optionally subsequent reconfiguration), patching and upgrading, maintenance tasks (e.g., backup), health or performance monitoring, and so forth. Changes to deployed systems are made through management module 210. The services of management module 210 include one or more functions that can be called or invoked to make changes to one or more systems deployed in the environment. By making such changes through the management module 210, several benefits can be realized. One such benefit is that controller 206 can maintain a record of the changes that have been made. Controller 206 may maintain a copy of the SDM document 204 for the system and record in the SDM document 204 any changes that are made to the system. Alternatively, controller 206 may maintain a separate record of the changes made to the system. This record of changes maintained by controller 206 can simplify subsequent operations, such as solving problems with the system and/or environment, or when having to reinstall the system due to a hardware failure (allowing the system to be reinstalled and returned to running with the same parameters/settings as it had at the time of failure). 
By having such changes made through controller 206 and by having controller 206 maintain the record, some human error can be removed from the environment (e.g., if the administrator making the change is supposed to log the change in a book but forgets to do so there would be no record of the change—this problem is solved by having controller 206 maintain the record). Furthermore, by making changes to systems through controller 206, as well as deploying systems through controller 206, controller 206 can serve as the repository of knowledge about the environment, the systems deployed in the environment, and interactions between them. Knowledge regarding the environment and/or systems deployed in the environment can be readily obtained from controller 206. This knowledge can be used to ensure the consistency of the controlled environment by validating that the controlled devices in the environment reflect the state stored in the central controller 206. It should be noted that in some situations changes may be made to a system and/or environment but are not made through controller 206. For example, a computing device may be accidentally turned off or may fail. In these situations, attempts are made to reflect such changes in controller 206. These changes may be reflected in controller 206 automatically (e.g., a system may run that attempts to detect device failures and use the services of management module 210 to notify controller 206 of such failures) or may be reflected in controller 206 manually (e.g., an administrator may use the services of management module 210 to notify controller 206 of such changes). Alternatively, the changes that were made could be reversed to bring the system and/or portion of the environment back into line with the desired state of the system as recorded by controller 206. 
The SDM document 204 can thus be viewed as a “live” document—it can be constantly changing based on changes to the environment and/or changes to the system throughout the lifecycle of the system.

System Definition Model (SDM)

The system definition model (SDM) is a modeling technology used to create definitions of systems. A system is a set of related software and/or hardware resources that work together to accomplish a common function. Example systems include multi-tier line-of-business applications, Web services, e-commerce sites, and enterprise data centers. The SDM provides tools and a context for an application architect, network architect, datacenter architect, or other developer to design distributed computer applications and data centers in an abstract manner. The SDM defines a set of elements that represent functional units of the systems that will eventually be implemented by physical computer resources and software. The SDM also defines elements that are relevant to operators or other individuals that will manage a system. Additionally, the SDM captures data pertinent to development, deployment, and operations. Associated with the SDM elements is a schema that dictates how functional operations represented by the components are to be specified. A system is composed of resources, endpoints, relationships and sub-systems. Definitions of each of these items are declared in an SDM document. An SDM document is an XML document that contains one or more definitions of systems, resources, endpoints and relationships. Resources may be hardware resources or software resources. Endpoints represent communications across systems. Relationships define associations between systems, resources and endpoints. Sub-systems can be treated as complete systems and are typically part of a larger system. A system definition captures the basic structure of a dynamic system. It can be viewed as the skeleton on which all other information is added.
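As a rough illustration of the structure an SDM document declares (systems containing resources, endpoints and sub-systems, plus relationships between them), here is a minimal in-memory sketch. The class and field names below are assumptions made for illustration, not the actual SDM XML schema:

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    name: str
    protocol: str            # communication exposed across systems

@dataclass
class Relationship:
    kind: str                # e.g. containment, communication, hosting
    source: str
    target: str

@dataclass
class System:
    name: str
    resources: list = field(default_factory=list)    # hardware or software resources
    endpoints: list = field(default_factory=list)
    subsystems: list = field(default_factory=list)   # sub-systems are complete systems

@dataclass
class SdmDocument:
    systems: list = field(default_factory=list)
    relationships: list = field(default_factory=list)

# A small two-tier application described as one system with two sub-systems:
web = System("WebTier", resources=["IIS"], endpoints=[Endpoint("http", "HTTP")])
data = System("DataTier", resources=["SQL"], endpoints=[Endpoint("tds", "TDS")])
doc = SdmDocument(
    systems=[System("Commerce", subsystems=[web, data])],
    relationships=[Relationship("communication", "WebTier.http", "DataTier.tds")],
)
print(len(doc.systems), doc.relationships[0].kind)
```

In the real model these declarations live in an XML document, but the nesting (systems containing sub-systems, with relationships tying endpoints together) follows the same shape.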
This structure is typically specified during the development process, by architects and developers, and typically does not change frequently. In addition to the structure, the SDM can contain deployment information, installation processes, schemas for configuration, events and instrumentation, automation tasks, health models, operational policies, etc. Other information can be added by the operations staff, by vendors, and/or by management systems across the lifetime of a distributed system.

SDM Schema Design Specification

The SDM is designed to support description of the configuration, interaction and changes to the components in a distributed system (e.g., the modeled system). “Definitions” describe entities that exist in a system and “relationships” identify the links between the various entities. Definitions and relationships are further defined to capture semantic information relevant to the SDM. The SDM includes “abstract definitions” that provide a common categorization of system parts, provide tool support for a wide range of applications and provide the basis for definition checking at design time. A set of abstract definitions provides a comprehensive basis for service design. “Concrete definitions” represent parts of an actual application or data center design. A concrete definition is generated by selecting an abstract definition and providing an implementation that defines the concrete definition's members and setting values for its properties. Distributed applications are generated using collections of these concrete definitions. The SDM also includes “constraints” that model restrictions based on the allowed set of relationships in which an instance of a relationship can participate. Constraints are useful in describing requirements that depend on the configuration of objects involved in a relationship.
For example, a constraint may be used to determine whether participants on each end of a communication protocol are using compatible security settings. A flow can be identified as part of a definition and/or a resource. This flow is used to control application behavior at runtime by propagating operator settings to the systems, sub-systems, or other components that utilize such settings.

Abstract Definitions and Relationships

Abstract definitions define the building blocks that check application configuration at design time and then deploy and manage an application at run time. These building blocks represent entities that exist in the modeled system. For example, abstract definitions can model files and directories, the configuration inside a web server, or the databases inside a SQL server. Abstract relationships model the interactions that can occur between abstract definitions. Relationships are binary and directed, identifying the definitions of the instances that participate in manifestations of the relationship. Relationships provide a way of associating entities with one another, thereby allowing the modeling of containment, construction and communication links between entities. Constraints are used by definitions to constrain the relationships in which they participate. Constraints are further used by relationships to constrain the definitions that can be linked. These constraints can target the definition and settings of participants in a relationship. The abstract definition space is divided into three categories: components, endpoints and resources. Abstract component definitions describe self-contained independently deployable parts of an application. These definitions represent parts of an application that interact through well-defined communication channels that can cross process and machine boundaries. Abstract endpoint definitions describe the communication endpoints that a component may expose.
These abstract endpoint definitions can model all forms of communication that the system is aware of to verify system connectivity at design time and to enable connections at runtime. Abstract resource definitions describe behavior that is contained within a component. Resource definitions may have strong dependencies on other resource definitions. These dependencies can include requiring a specific installation order and initiating runtime interaction through various communication mechanisms. Abstract definitions include the ability to expose settings. In one embodiment, these settings are name-value pairs that use an XML schema to define the definition of the setting. Settings can be dynamic or static. Static settings are set during the deployment process. Dynamic settings can be changed after deployment. The code responsible for applying settings values to the running system is hosted in the SDM runtime. The SDM model supports inheritance over abstract definitions. A derived definition can extend the properties exposed by its parent and can set values for its parent's properties. A derived definition can participate in the relationships that identify its parent as a participant. As mentioned above, relationships are divided into five categories: communication (or connections), containment, delegation, hosting and reference. Communication relationships capture potential communication interactions between abstract endpoint definitions. The existence of a communication relationship indicates that it may be possible for components that expose endpoints of the identified definition to communicate. The actual establishment of the link is subject to constraints on the endpoints and the exposure of the endpoints. Containment relationships describe the ability of an abstract definition to contain members of other abstract definitions.
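The settings behavior described above (name-value pairs, with a derived definition extending the properties its parent exposes and setting values for them) can be sketched as a simple lookup chain. The class and setting names here are illustrative assumptions:

```python
class AbstractDefinition:
    """Abstract definition exposing name-value settings (sketch).
    Static settings would be fixed at deployment; dynamic ones may
    change afterwards — that distinction is omitted here."""
    def __init__(self, name, settings=None, parent=None):
        self.name = name
        self.parent = parent
        self.settings = dict(settings or {})

    def effective_settings(self):
        # A derived definition extends its parent's settings and may
        # override values for the properties the parent exposes.
        merged = self.parent.effective_settings() if self.parent else {}
        merged.update(self.settings)
        return merged

# A derived definition overrides one parent property and inherits the rest:
web_server = AbstractDefinition("WebServer", {"port": 80, "threads": 4})
secure_web = AbstractDefinition("SecureWebServer", {"port": 443}, parent=web_server)
print(secure_web.effective_settings())
```

Because the derived definition participates in the same relationships as its parent, tools written against the parent definition can operate on the derived one unchanged.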
More specifically, a containment relationship between two abstract definitions A and B allows a concrete definition that implements A to contain a member of a definition that implements B. Containment relationships model the natural nesting structures that occur when developers build applications. By containing a member of another definition, the parent is able to control the lifetime and visibility of the contained definition. All definition instances in the run time space exist as members of other definition instances, forming a completely connected set of instances. Thus, the set of containment relationships describes the allowed containment patterns that occur in the runtime space. Delegation relationships selectively expose contained members. For example, delegation can expose endpoint members from component definitions. By delegating an endpoint from an inner component, the outer component exposes the ability to communicate using a particular protocol without exposing the implementation behind the protocol. Hosting and reference relationships represent two forms of dependency relationships. A hosting relationship is used to capture knowledge regarding how to create an instance of a definition on a particular host. The hosting relationship allows the developer to create their own definition in a manner that is independent from the operation of a specific host. This relationship also allows a single definition to be deployed on multiple host types without rewriting the guest definition. The hosting relationship describes a primary dependency between abstract definitions that exists before an instance of a concrete definition is created. Each instance participates as a guest in a hosting relationship, thereby causing the hosting relationships to form a connected tree over the instance space. Reference relationships capture additional dependencies used for parameter flow and for construction ordering. 
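The requirement stated above — that every runtime instance exists as a member of another instance, so containment (and hosting) relationships form a completely connected tree — can be checked as a tree property. A sketch under that reading, with instances as plain strings:

```python
def is_connected_tree(instances, containment):
    """Check that containment edges (parent, child) connect every
    instance to a single root, as the runtime instance space requires
    (illustrative sketch only)."""
    children = {i: [] for i in instances}
    contained = set()
    for parent, child in containment:
        children[parent].append(child)
        if child in contained:
            return False           # an instance may have only one container
        contained.add(child)
    roots = [i for i in instances if i not in contained]
    if len(roots) != 1:
        return False               # exactly one root instance
    seen, stack = set(), [roots[0]]
    while stack:
        node = stack.pop()
        seen.add(node)
        stack.extend(children[node])
    return seen == set(instances)  # completely connected

print(is_connected_tree(
    ["root", "app", "web", "db"],
    [("root", "app"), ("app", "web"), ("app", "db")]))
```

A disconnected instance, or one with two containers, would violate the nesting structure the containment relationships are meant to model.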
Concrete Definitions and Relationships

Concrete definitions are created from abstract definitions. Concrete relationships are created from abstract relationships. The combination of abstract definitions and abstract relationships defines a schema for modeling the target system. A concrete definition uses a subset of the abstract definition space to create a reusable configuration of one or more abstract definitions. The abstract definition space can be compared to the schema for a database. In this analogy, the concrete definition space represents a reusable template for a set of rows in the database. The concrete definition is validated against the abstract definition space in the same way that the rows in the database are validated against the constraints of the schema, such as foreign keys, etc. A developer can infer knowledge of the concrete definition from knowledge of the abstract definition. Thus, tools associated with the abstract definition can operate with many implementations that are derived from that abstract definition. For example, a tool that knows about abstract Web services can operate with any Web service deployed into a datacenter without requiring additional information from the developer. Each concrete definition provides an implementation for a specific abstract definition that includes extensions to the settings schema, values for settings, declarations for definition and relationship members, and constraints on the relationships in which the definition can participate. The behavior of the concrete definition follows the definition of the abstract definition. In particular, abstract component definitions become component definitions, abstract endpoint definitions become endpoint definitions and abstract resource definitions become resource definitions.
Each concrete relationship provides an implementation for a specific abstract relationship that includes a settings schema and settings values, nested members of the same relationship category (e.g., hosting, containment, or communication), and constraints on the definitions that can participate in the relationship. Concrete hosting relationships define a set of hosting relationships that can map the members of one concrete definition onto another concrete definition. For example, a concrete hosting relationship can identify the bindings between a web application and the IIS host to which it will be deployed. More than one hosting relationship can exist for a particular definition, thereby allowing the developer to define deployments for specific topologies. A concrete definition can declare members of other concrete or abstract definitions—referred to as “definition members”. These definition members are then referenced from “relationship members” that define the relationships between the definition members. Definition members include references to instances of a particular definition. Settings flow can provide values for the definition or can constrain the construction parameters used when creating the definition. When declaring a definition member, the user (e.g., developer) can decide whether the definition member is created at the same time the outer component is created (referred to as “value semantics”) or whether the definition member is created by an explicit new operation that occurs at a later time (referred to as “reference semantics”). Relationship members define the relationships that definition members will participate in when they are created. If a definition member is contained in the concrete definition, then a containment relationship member is declared between the definition member and this reference for the outer definition. 
If the definition member is delegated, then a delegation relationship member would be defined between the definition member and a nested definition member. Communication relationship members can be declared between endpoints on definition members and dependency relationship members (reference and hosting) can be declared between definition members or nested definition members. Relationship constraints narrow the set of relationships in which a particular definition is willing to participate. Relationship constraints identify constraints on a particular relationship and on the participants at the other end of the relationship. The instance space stored in the SDM runtime identifies the current state of the modeled system. The SDM runtime contains a record of the instances that have been created and the relationships between those instances. Each instance has an associated version history that links each version to a change request. A change request is the process that creates a new instance. The change request defines a set of create, update and delete requests for definitions and relationships associated with specific members of an existing instance. The root is handled as a special case. The change request is expanded by the runtime, verified against one or more constraints, and then constructed. The expansion process identifies definition and relationship instances that are constructed implicitly as part of the construction request of the containing definition. As part of the expansion process, the settings flow is evaluated across all relationships. The verification step checks that all required relationships exist and that the relationships fulfill the necessary constraints. Finally, the construction process determines an appropriate ordering over the deployment, update, or removal of each instance. The construction process then, in the correct sequence, passes each instance to an instance manager to perform the appropriate action. 
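The change-request processing described above — expansion of implicit actions, verification against constraints, then ordered construction — can be sketched as a small pipeline. The data shapes and function names below are assumptions for illustration, not the SDM runtime API:

```python
def process_change_request(request, expanders, constraints, order_key):
    """Sketch of the expand -> verify -> construct sequence described
    in the text (illustrative only)."""
    # Expansion: add implicitly constructed definition/relationship actions.
    expanded = list(request)
    for action in request:
        expanded.extend(expanders.get(action["target"], []))
    # Verification: every constraint must hold over the full action set.
    for check in constraints:
        if not check(expanded):
            raise ValueError("change request violates a constraint")
    # Construction: order actions before handing them to instance managers.
    return sorted(expanded, key=order_key)

# Creating the app implicitly requires creating its host first:
request = [{"op": "create", "target": "app", "rank": 2}]
implicit = {"app": [{"op": "create", "target": "host", "rank": 1}]}
constraints = [lambda acts: any(a["target"] == "host" for a in acts)]
ordered = process_change_request(request, implicit, constraints, lambda a: a["rank"])
print([a["target"] for a in ordered])  # -> ['host', 'app']
```

Checking constraints only after full expansion mirrors the text's point that verification must see every action the request implies, not just the ones the operator wrote.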
Data centers can be created using multiple software components. One or more connections are configured between the multiple software components. Some of these software components may function as hosts for the application layer. Example component definitions in the host layer include IIS, SQL, AD, EXCHANGE, DNS and Biztalk. The network/OS/storage layer supports the construction of data center networks and platforms. This layer also supports the configuration of a network security model, configuration of the operating system platform and association of one or more storage devices with the operating system platform. Example component definitions in the network/OS/storage layer include VLAN, Windows, Filter and Storage. The hardware layer identifies the definitions of systems that exist in the data center and the physical connections that exist between those systems. To satisfy the relationships needed by a particular component, that component is bound to a host component that has matching capabilities. This process is referred to as “logical placement”. At deployment time, instances of the guest component are positioned on instances of the host component. This process is referred to as “physical placement”. A process for managing changes to a distributed system is associated with the SDM model. Changes to the distributed system are driven by a change request that passes through one or more processing steps before the actions in the request are distributed and executed against target systems. The following is a brief, functional discussion of how these components interact. Additionally, an application developer is able to design and develop their application using any of a variety of development systems, such as the Visual Studio® development system. As the developer defines components of the application and how these components relate to one another, the developer is able to validate the application description against the datacenter description 602.
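Logical placement, as described above, is a matter of matching a guest component's required relationships against candidate hosts' capabilities. A simplified sketch in which capabilities are plain sets and the matching rule is set containment (both are simplifying assumptions):

```python
def logical_placement(guest_requirements, hosts):
    """Bind a guest component to the first host whose capabilities
    satisfy all of the guest's requirements (illustrative sketch)."""
    for host, capabilities in hosts.items():
        if guest_requirements <= capabilities:
            return host
    return None   # no host in the data center can satisfy the guest

hosts = {
    "iis_host": {"http", "clr"},
    "sql_host": {"tds"},
}
print(logical_placement({"http", "clr"}, hosts))  # -> iis_host
```

Physical placement would then position instances of the guest on instances of the chosen host at deployment time.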
This is also referred to as “Design Time Validation”. Once the application is complete, the developer saves the description in an SDM and requests that the application be packaged for deployment as an SDU 604. The SDU includes the application SDM as well as the application binaries and other referenced files used to install the application. The LIM 602 and SDU 604 are fed to deployment tool 606 of a controller device 620 for deployment. Deployment tool 606 includes a user interface (UI) to enable an operator to load the desired SDU 604. Deployment tool 606 works with create CR module 630 to install the application associated with the SDU 604 in accordance with the information in the SDM within SDU 604. Additionally, SDM definitions and instances from SDU 604 are populated in a store 608 of the SDM runtime 610. SDUs are managed in SDM runtime 610 by SDU management module 640, which makes the appropriate portions of the SDUs available to other components of runtime 610 and target(s) 622. The operator can also specify what actions he or she wants to take on the targets 622 (e.g., target computing devices) on which the application is being deployed. The operator can do this via a deployment file, which is also referred to herein as a Change Request (CR). The CR is run through one or more engines 612, 614, 616, and 618. Generally, expand CR engine 612 expands the CR to identify all associated components as well as their connections and actions, flow values engine 614 flows values for the components (such as connection strings), check constraints engine 616 checks constraints between the environment and the application, and order actions engine 618 specifies the order for all of the necessary actions for the CR. To initiate change to the system (including deploying an application) or validation of a model, an operator or process submits a CR. The CR contains a set of actions that the operator wants performed over the instances in the runtime 610. 
These actions can be, for example, create actions, update actions, and/or delete actions. In addition to user or operator initiated change requests, there may also be expansion/automatically generated change requests that are generated as part of the expansion process, discussed in more detail below. Regardless of their source, the change requests, once fully expanded and checked, are executed by sending actions to the targets 622, such as: discover, install, uninstall and change a target instance. The CR is treated as an atomic set of actions that complete or fail as a group. This allows, for example, the constraint checking engine 616 to consider all actions when testing validity. In design time validation, the CR will be created by the SDM Compiler 628 and will contain one instance (or the minimum number of instances) of each SDM component in the SDM file. This CR of create instance commands will flow through the expansion engine 612, the flow values engine 614, and the constraint checking engine 616. Errors found in these three phases will be returned to the user via the development system he or she is using. In deployment, the operator will create a CR with the UI presented by deployment tool 606. The CR will flow through all the engines 612, 614, 616, and 618 in the SDM runtime 610, and the appropriate actions and information will be sent by CR module 632 to the appropriate target(s) 622, where the request is executed (e.g., the application is installed). The appropriate target(s) 622 for a particular installation are typically those target(s) on which the application is to be installed. When beginning to process a CR, in a definition resolution phase, create CR module 630 resolves all definitions and members that are referenced in the change request. The change request will assume that these are already loaded by the runtime 610; create CR module 630 initiates a load/compile action if they do not exist.
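The atomic complete-or-fail semantics of a change request described above can be sketched with compensating undo actions. This is an illustration of the idea, not the runtime's actual rollback mechanism; the action tuples are an invented format:

```python
def apply_change_request(actions, target):
    """Apply a change request atomically: if any action fails, undo the
    actions already applied so the target is unchanged (sketch)."""
    applied = []
    try:
        for name, do, undo in actions:
            do(target)
            applied.append((name, undo))
    except Exception:
        for name, undo in reversed(applied):
            undo(target)           # roll back in reverse order
        raise

# A successful single-action request:
target = {"installed": []}
apply_change_request(
    [("install_a",
      lambda t: t["installed"].append("a"),
      lambda t: t["installed"].remove("a"))],
    target)
print(target["installed"])  # -> ['a']
```

If a later action in the group raised, the already-applied actions would be undone in reverse order, leaving the target as it was before the request began.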
Create CR module 630 also implements a path resolution phase where references to existing instances and instances defined by create actions within the change request are resolved. The expansion performed by expansion engine 612 is a process where, given a change request, all the remaining actions required to execute the request are populated. In general, these actions are construction and destruction actions for definition and relationship instances. The operator could optionally provide details for all the actions required to construct or destroy an instance, or alternatively portions of the process can be automated: e.g., the operator provides key information about the changes he or she wants by identifying actions on members (e.g., byReference members), and the remainder of the actions are filled in on nested members (e.g., byReference and byValue members) and relationships. By way of another example, automated expansion can also refer to external resource managers that may make deployment decisions based on choosing devices with available resources, locating the application close to the data it requires, and so forth. Expansion engine 612 also performs “auto wiring”. During auto wiring, engine 612 analyzes the scale invariant grouping of components and compound components specified in the SDM and determines how the components should be grouped and interconnected when scaled to the requested level. Expansion engine 612 also performs value member expansion, reference member expansion (discovery), and relationship expansion. Value member expansion refers to identification of all of the non-reference definition members. The cardinality of these members is noted and, since all the required parameters are known, for each member create requests are added to the change request for those members whose parent is being created. If the change request contains destruction operations, then destruction operations are added for all their contained instances.
Reference member expansion refers to reference members (as opposed to non-reference definition members). The cardinality of reference members is often undefined and they can have deployment time settings that require values in order for the instance to be constructed. So the process of expanding a reference member (e.g., a byReference member) can require more information about the instance than the runtime is in a position to provide. Related to reference member expansion is a process referred to as discovery, which is a process used to find instances that have already been deployed. Discovery is an action typically initiated by an operator of the environment. For example, during an install request, expansion engine 612 determines whether the instance already exists; if so, it determines what exists, and if not, it creates the instance. An instance manager (IM) 634 on the controller 620 communicates with the instance managers 626 on the target device 622 to initiate a discovery process. The discovery process returns data regarding the instance from the target device 622 to the controller 620. The process of discovery populates reference definition members as part of a construction or update action. Typically, only reference members with object managers (instance managers that also do discovery) that support discovery participate in this process. When a new instance is discovered, a check is made that the instance does not already exist in the SDM database using instance specific key values. Once it is known that it is a new instance, the instance is classified according to the definitions of the members being discovered. If the instance does not match a member or there is an ambiguous match then the member reference is left blank and the instance is marked as offline and incomplete. Relationship expansion refers to, once all the definition instances that will be constructed are known, creating relationship instances that bind the definition instances together.
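The discovery bookkeeping described above — keying newly found instances against the SDM database, classifying them against member definitions, and marking ambiguous or unmatched instances offline and incomplete — might look like the following sketch. The predicates standing in for member definitions, and the return values, are invented for illustration:

```python
def classify_discovered(instance, database, member_definitions):
    """Record a discovered instance (sketch). 'database' maps instance
    key values to the member each instance was classified under."""
    if instance["key"] in database:
        return "already-known"       # duplicate: key values already recorded
    matches = [name for name, predicate in member_definitions.items()
               if predicate(instance)]
    if len(matches) == 1:
        database[instance["key"]] = matches[0]
        return matches[0]
    # No match, or an ambiguous match: leave the member reference blank
    # and mark the instance offline and incomplete.
    instance["offline"] = instance["incomplete"] = True
    return "unclassified"

defs = {
    "web": lambda i: i["kind"] == "iis",
    "db":  lambda i: i["kind"] == "sql",
}
db = {}
print(classify_discovered({"key": "m1", "kind": "iis"}, db, defs))  # -> web
print(classify_discovered({"key": "m1", "kind": "iis"}, db, defs))  # -> already-known
```

The key-value check is what keeps repeated discovery runs from double-counting instances that are already in the database.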
If definition instances are being destroyed, all relationship instances that reference the definition instances are removed. To create the relationships, the member space is used to identify the configurations of the relationships that should exist between the instances. Where the definition members have cardinality greater than one, the topology of the relationships is inferred from the base relationship definition. For example, for communication relationships an “auto wiring” can be done, and for host relationships a host is picked based on the algorithm associated with the hosting relationship. During a flow stage, flow values engine 614 evaluates flow across all the relationship instances. Flow values engine 614 may add update requests to the change request for instances that were affected by any altered parameter flow. Engine 614 evaluates flow by determining the set of instances that have updated settings as a result of the change request. For each of these, any outgoing settings flows that depend on the modified settings are evaluated and the target nodes added to the set of changed instances. The process continues until the set is empty or the set contains a cycle. After the flow stage, a process of duplicate detection is performed. The duplicate detection may be performed by one of the engines discussed above. Check constraints engine 616 implements a constraint evaluation phase in which all the constraints in the model are checked to see if they will still be valid after the change request has been processed. After check constraints engine 616 finishes the constraint evaluation phase, a complete list of actions is available. So, order actions engine 618 can use the relationships between components to determine a valid change ordering. Any of a variety of algorithms can be used to make this determination.
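The flow evaluation loop described above — repeatedly adding the targets of affected settings flows until the set of changed instances stops growing — is essentially a fixed-point computation. A sketch, where a cycle simply stops producing new members once the set saturates; the flow-graph format is an assumption:

```python
def evaluate_flow(changed, flows, max_rounds=100):
    """Propagate setting changes along flow edges until no new
    instances are affected (illustrative sketch). 'flows' maps an
    instance to the instances whose settings depend on it."""
    affected = set(changed)
    frontier = set(changed)
    rounds = 0
    while frontier and rounds < max_rounds:
        rounds += 1
        # Targets of outgoing flows from the instances changed last round,
        # minus anything already known to be affected.
        frontier = {t for src in frontier for t in flows.get(src, [])} - affected
        affected |= frontier
    return affected

# A config change flows to the web and db tiers, and from web to a cache:
flows = {"config": ["web", "db"], "web": ["cache"]}
print(sorted(evaluate_flow({"config"}, flows)))  # -> ['cache', 'config', 'db', 'web']
```

Every instance in the resulting set is a candidate for an update request added to the change request, matching the engine's behavior described above.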
Once order actions engine 618 is finished determining the ordering, deployment can be carried out by distributing subsets of the ordered set of actions that are machine specific. Once the actions have been ordered and grouped by machine, the actions as well as a copy of the necessary portion of the SDM runtime store 608 with instance information are sent to a target computing device 622. The SDM can be stored temporarily at the target device in a store cache 638. The target computing device includes a target portion 636 of the SDM runtime that communicates with SDM runtime 610. The target computing device 622 also includes an agent that contains an execution engine 624 and can communicate with the appropriate instance managers (IMs) 626 on the target device to make changes on the target, such as create, update, and delete actions. Each action is sent as an atomic call to the instance manager 626 and the instance manager 626 returns a status message and, for some actions, also returns data (e.g., for discovery). Once all the actions are completed on target 622, the target's agent returns any errors and status to the controller 620. The controller 620 then uses this information to update the SDM runtime store 608. As discussed above, change is carried out by breaking the change requests down into distributable parts based on the relationships that are affected. Once all the parts are completed (or after one or more has failed) the results are collated in the runtime 610 and a summary returned to the operator. In the event of a failure, all the actions can be “rolled back” and the system returned to the state it was in before the change was initiated. In certain embodiments, during design time validation discussed above, an SDM Compiler 628 receives an SDM file, creates a test CR, runs the test CR through the expand, flow values and check constraints engines of the SDM runtime, and returns any errors to the development system.
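Determining a valid change ordering from inter-component relationships and then distributing machine-specific subsets can be sketched with a topological sort. The action format and function names below are assumptions; the text says only that "any of a variety of algorithms" may be used:

```python
from graphlib import TopologicalSorter

def order_and_group(actions, depends_on):
    """Order actions so dependencies run first, then group the ordered
    list by target machine for distribution (illustrative sketch)."""
    order = list(TopologicalSorter(depends_on).static_order())
    ranked = sorted(actions, key=lambda a: order.index(a["name"]))
    groups = {}
    for action in ranked:
        groups.setdefault(action["machine"], []).append(action["name"])
    return groups

actions = [
    {"name": "install_app", "machine": "web01"},
    {"name": "install_sql", "machine": "db01"},
    {"name": "create_db",  "machine": "db01"},
]
# install_app depends on create_db, which depends on install_sql:
deps = {"install_app": {"create_db"}, "create_db": {"install_sql"}}
print(order_and_group(actions, deps))
```

Each per-machine list preserves the global ordering, so the machine-specific subsets can be executed independently while still respecting cross-machine dependencies that were ordered centrally.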
This process provides SDM validation for deployment during design time for the developer. The public interface to SDM runtime 610 and/or controller 620 is through an object model (APIs) library. The library is a managed code object model and allows the following to be performed: The SDM runtime engine performs the reasoning on the SDM model and the functions surfaced by the APIs. The library communicates to the runtime engine as a web service with fairly coarse calls such as load SDM, create component instance and get entire SDM (for reflecting on SDM entities). The format of many of the parameters for this web service is XML with the same schema for SDM files. The engine may also perform checks on permissions. The controller 620 can make use of Instance Managers (IMs), which can be associated with any definition or relationship in the model. IMs may perform one or more of the following roles: For deployment, an instance manager (IM) plug-in on controller 620 is associated with a class host relation and is separate from the plug-in used in the development system that provides the design experience for the classes and produces the associated binaries in the SDU 604 and the settings schema. Instance managers are supplied to the SDM runtime 610 as CLR classes (e.g., in a dll assembly) that implement an instance manager interface or inherit from abstract class. An SDM Instance Manager, also referred to as an Instance Manager (IM) plug-in, provides the following functions to the controller 620: The SDM model provides a separation of concerns between the developers of applications, the designers of the software infrastructure and the architects of the data center. Each of these groups focuses on particular services and has a differing set of concerns. For example, developers may be primarily concerned with the configuration and connectivity between the hosts that they utilize, such as SQL, IIS and the CLR. 
Designers of the host configuration may be primarily concerned with the network topology and the OS configuration. The architects developing the network topology, OS configuration and storage mapping may be primarily concerned with the hardware that exists in the data center. The SDM enables the functional composition of systems across a horizontal and vertical axis. Composition along the horizontal axis is done with systems and subsystems. Composition along the vertical axis is done with “layers”. Applications, services, network topologies, and hardware fulfill a role in a distributed system, but are typically defined independently and owned by different teams or organizations. Layering is accomplished by components defining a set of constraints on a host and vice versa. To support this separation of concerns, the SDM exposes a concept of layering. Layering refers to using hosting relationships to bind an application to the services on which it depends without declaring those services as part of the containment structure of the application. Layering allows systems to be developed by different individuals at different times and at different levels. Different systems and subsystems within a layer can interact with one another, and also can interact with systems and subsystems of different layers. For example, a subsystem 710 in layer 708 can interact with a subsystem 712 in layer 708, as well as a subsystem 714 in layer 706. Additionally, each layer can be viewed as the environment for the next higher layer. For example layer 706 is the environment for systems and subsystems in layer 708, while layer 704 is the environment for systems and subsystems in layer 706. Each layer 702, 704, 706, and 708 has its own associated SDM document. The different layers 702, 704, 706, and 708 can represent different content. 
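Layering via mutual constraints — a component defining constraints on its host, and the host defining constraints on its guests — can be sketched as a two-way predicate check. The example settings (CLR version, trust level) are invented for illustration:

```python
def can_host(guest_constraints, host_settings, host_constraints, guest_settings):
    """A guest may bind to a host only if the guest's constraints hold
    on the host's settings and the host's constraints hold on the
    guest's settings (sketch of the mutual-constraint layering check)."""
    return (all(check(host_settings) for check in guest_constraints) and
            all(check(guest_settings) for check in host_constraints))

# An application-layer component requiring a particular CLR version, on a
# host that only accepts guests at a particular trust level:
guest_needs = [lambda h: h.get("clr_version") == "2.0"]
host_needs = [lambda g: g.get("trust_level") == "medium"]
print(can_host(guest_needs, {"clr_version": "2.0"},
               host_needs, {"trust_level": "medium"}))  # -> True
```

Because the dependency is expressed through the hosting relationship rather than containment, the application and the host environment can be authored and validated by different teams, as the text describes.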
In certain embodiments, layer 702 is a hardware layer, layer 704 is a network topology and operating systems layer, layer 706 is an application hosts layer, and layer 708 is an applications layer. The hardware layer represents the physical devices (e.g., computing devices) on which the layered system is built (e.g., devices 102 of

Example SDM Implementation

The following discussion describes an embodiment of the schema that defines the elements of the SDM. The System Definition Model (SDM) is designed to support description of the configuration, interaction and changes to the components in a distributed system (the modeled system). SDM is based on an object-relational model. We use objects to describe entities that exist in the system and relationships to identify the links between them. The SDM further refines objects and relationships to capture semantics that are important to the SDM. In particular, we divide objects into systems, endpoints and resources, and we divide relationships into communication, containment, hosting, delegation, and reference. We use abstract definitions to provide a common categorization of system parts, allowing tool support for a wide range of applications and providing the basis for type checking at design time. We expect the set of abstract definitions to provide a comprehensive basis for system design and we expect that they will change slowly over time. We build concrete object definitions that represent parts of an actual application or datacenter design. We take an abstract object definition and provide an implementation that defines the concrete type's members and setting values for its properties. We then build systems from collections of these definitions. Constraints are used to model restrictions over the allowed set of relationships that an instance can participate in. We use constraints to capture fine-grained requirements that depend on the configuration of objects involved in a relationship.
For example, a constraint may be used to validate that participants on each end of a communication protocol are using compatible security settings. In order to effect change on the target system, SDM uses a declarative description of the required changes called a change request. SDM defines the process that is used to expand, validate and execute a change request as part of the SDM execution model. The instance space captures both the desired and current state of the managed application. We track changes in the instance space and associate them with the change request that initiated the change. The following UML diagrams capture the broad interactions between the objects in the SDM model. For simplicity, some of these interactions have been defined between base types where the actual interaction exists between derived types and as a result is more specialized. For example, communication relationships may only reference abstract endpoint definitions. An SDM document contains information that describes the document, managers for the definitions in the document, import statements that reference other documents and a set of definitions. All SDM definitions derive from a common base definition and may contain members, as shown in the figures. Members are divided by the kind of definition that they reference, as shown in the figures. Setting declarations reference a setting definition. Setting values and value lists provide values for settings, as shown in the figures.

2.1 The Lifecycle of an SDM Application

An example lifecycle of an SDM application in accordance with certain embodiments is shown in the figures. The application is designed and implemented within the Visual Studio environment (block 1202). Developers implement components and then combine them within compound components. The application is described within an SDM file.
In order to verify that their application will deploy within a particular datacenter, a developer will bind their application to a representation of the datacenter, also described in an SDM file (block 1204). This representation will include definitions for the hosts of their application components and constraints on the configuration of their application. If the binding fails, then the developer can revise their application design. Once a developer is happy with their application, they can sign and publish the application so that there is now a strong name and version associated with the application (block 1206). The published form of an application is called a Software Distribution Unit (SDU). The operator takes the SDU from the developer and loads the application into the SDM runtime (block 1208). In the process of loading the application, the operator chooses the model of the datacenter to which they want to bind the application. When the operator chooses to deploy an application, they supply deployment-time parameters to the application and they determine the scale of the application (block 1210). This is done using a change request. Once an application is deployed, the operator can interact with the runtime to determine the configuration of the application and the settings for each part of the application (block 1212). The runtime can also verify that the actual configuration of the application matches the desired configuration as recorded in the runtime. The operator can remove a deployed application by submitting a change request (block 1214). The operator can also roll back individual changes made to the running application, such as removing a service pack. In block 1216, the configuration of a running application can be changed by adding or removing parts of the deployed application, such as web frontends. The application can also be upgraded by installing newer versions of one or more of the application components.
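The lifecycle in blocks 1202-1216 is essentially a linear pipeline with a possible loop back to design when binding fails. The following encoding is purely illustrative; the stage names paraphrase the text and are not part of any real SDM API.

```python
# Lifecycle stages paraphrasing blocks 1202-1212 of the text.
STAGES = [
    "design_and_implement",      # block 1202
    "bind_to_datacenter",        # block 1204 (loops back on failure)
    "sign_and_publish_sdu",      # block 1206
    "load_into_runtime",         # block 1208
    "deploy_via_change_request", # block 1210
    "monitor_configuration",     # block 1212
]

def stage_after(stage: str) -> str:
    """Return the next lifecycle stage; the last stage repeats."""
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]
```

Removal, rollback and reconfiguration (blocks 1214-1216) are all expressed as further change requests against the running application.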
2.2 Abstract Object and Relationship Definitions

Abstract object definitions define the building blocks that we need in order to check application configuration at design time and then to deploy and manage an application at run time. These building blocks represent entities that exist in the modeled system. For example, we use abstract object definitions to model files and directories, the configuration inside a web server or the databases inside a SQL server. We use abstract relationship definitions to model the interactions that can occur between abstract object definitions. Relationships are binary and directed, identifying the object definitions that define the instances that participate in manifestations of the relationship. Relationships provide a way of tying objects together so that we can model containment, construction and communication links between objects. Constraints are then used by objects to constrain the relationships they participate in and by relationships to constrain the objects that can be linked. These constraints can target both the definition and the settings of participants in a relationship. This allows a constraint to narrow the participants in a relationship to instances that are derived from a particular definition and to require that the instances have setting values that fall in a particular range. We divide object definitions into three categories: systems, endpoints and resources. Abstract system definitions are used to describe self-contained, independently deployable parts of an application. These definitions represent parts of an application that interact through well-defined communication channels that can cross process and machine boundaries. Abstract endpoint definitions are used to describe the communication endpoints that a system may expose. These are used to model all forms of communication that the system should be aware of in order to verify system connectivity at design time and to enable connections at runtime.
Abstract resource definitions describe behavior that is contained within a system. Resource definitions may have strong dependencies on other resource definitions. These dependencies can include requiring a specific installation order and initiating runtime interaction through undocumented communication mechanisms. All abstract object definitions share the ability to expose settings. These settings are simple name-value pairs that use XML schema to define the type of the setting. Settings can be dynamic or static: if they are static, then they can only be set during the deployment process; if they are dynamic, then they can be changed after deployment. The code responsible for applying setting values to the running system is hosted in the SDM runtime. The SDM supports inheritance over abstract object definitions. A derived definition can extend the properties exposed by its parent and can set values for its parent's properties. A derived definition can participate in any of the relationships that identify its parent as a participant. Relationship definitions are divided into five categories: communication, containment, delegation, hosting, and reference. Communication relationships are used to capture potential communication interactions between abstract endpoint definitions. The existence of a communication relationship indicates that it may be possible for systems that expose endpoints of the identified definition to communicate. The actual establishment of the link is subject to constraints on the endpoints and the exposure of the endpoints. Containment relationships describe the ability of an abstract object definition to contain members of another abstract object definition. More specifically, a containment relationship between two abstract object definitions A and B allows a concrete object definition that implements A to contain a member of a concrete object definition that implements B.
We use containment to model the natural nesting structures that occur when developers build applications. By containing a member object, the parent is able to control the lifetime and visibility of the contained object. All object instances in the runtime space exist as members of other object instances, forming a completely connected set of instances. Thus, the set of containment relationships describes the allowed containment patterns that occur in the instance space. Delegation relationships are used to selectively expose contained object members; in particular, we use delegation to expose endpoint members from system definitions. By delegating an endpoint from a subsystem, the outer system exposes the ability to communicate on a particular protocol without exposing the implementation behind the protocol. Hosting and reference relationships are two forms of dependency relationship. A hosting relationship describes a primary dependency between abstract objects that should exist before an instance of a concrete object can be created. Every instance should participate as a guest in exactly one hosting relationship, resulting in the hosting relationships also forming a completely connected tree over the instance space. Reference relationships capture additional dependencies that can be used for parameter flow and for construction ordering.

2.3 Concrete Object and Relationship Definitions

We build concrete object definitions from abstract object definitions and concrete relationship definitions from abstract relationship definitions. The combination of abstract object definitions and abstract relationship definitions defines a schema for modeling the target system. The role of a concrete object definition is to use a subset of the abstract definition space to create a reusable configuration based on one or more abstract definitions.
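The invariant stated above, that every instance participates as a guest in exactly one hosting relationship, means the guest-to-host links must form a single tree over the instance space. A minimal checker for that invariant might look like this (instance names are made up for illustration):

```python
# Verify that guest->host links form one tree: exactly one root,
# no cycles, and no dangling host references.
def is_hosting_tree(hosts: dict) -> bool:
    """hosts maps each instance to its host instance (None for the root)."""
    roots = [g for g, h in hosts.items() if h is None]
    if len(roots) != 1:
        return False                      # exactly one root instance
    for start in hosts:
        seen, node = set(), start
        while node is not None:
            if node in seen or node not in hosts:
                return False              # cycle or unknown host
            seen.add(node)
            node = hosts[node]
    return True
```

A well-formed space such as application-on-IIS-on-OS passes; mutual hosting or an orphaned instance fails.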
As a simple analogy, the abstract definition space can be compared to the schema for a database; the concrete object definition would then represent a reusable template for a set of rows in the database. The rows are only created in the database when an instance of the concrete object is created. To perform design-time validation, we can validate a concrete object definition against the abstract definition space in the same way that we would validate the rows in the database against the constraints of the schema (for example, foreign keys, etc.). Each concrete object definition provides an implementation for a specific abstract object definition. The implementation includes extensions to the settings schema, values for settings, and declarations for object members, relationship members, constraint members and flow members. The behavior of the concrete object follows the definition of the abstract object: abstract system definitions become concrete system definitions, abstract endpoint definitions become concrete endpoint definitions and abstract resource definitions become concrete resource definitions. Each concrete relationship definition provides an implementation for a specific abstract relationship definition. The implementation can include settings declarations and values, nested members of the same relationship category (hosting, containment, communication, etc.), and constraints on the types that can participate in the relationship. Concrete hosting relationships are used to define a mapping of the members of one concrete object onto another concrete object. For example, a concrete hosting relationship can be used to identify the bindings between a web application and the IIS host that it will be deployed to.
More than one concrete hosting relationship can exist for a given type, allowing the developer to define different deployment configurations for specific topologies. A concrete type can declare members of other concrete or abstract objects; we call these object members. These members are then referenced from relationship members that define the relationships between the object members. Object members are used to create instances of a particular object definition. Settings flow can be used to provide values for the object. When declaring an object member, the user can decide whether the object member is created at the same time the outer system is created (value semantics) or is created by an explicit new operation that occurs at some later time (reference semantics). Relationship members define the relationships that object members will participate in when they are created. If an object member is contained by its parent, then a containment relationship member will be declared between the type member and the outer type. If the object member is delegated, then a delegation relationship member would be defined between the object member and a nested object member. Communication relationship members can be declared between endpoints on object members, and dependency relationship members (reference and hosting) can be declared between object members or nested object members. Relationship constraints are used to narrow the set of relationships that a particular object is willing to participate in. They identify constraints on a particular relationship and on the participants at the other end of the relationship.

2.5 Instance Space

The instance space stored in the SDM runtime reflects the current state of the modeled system. The runtime contains a complete record of the instances that have been created and the relationships between these instances. Each instance has an associated version history where each version is linked to a change request.
The process of creating new instances is initiated by a change request. The change request defines a set of create, update and delete requests for types and relationships associated with specific members of an existing instance; the root is a special case. The change request is expanded by the runtime, verified against all constraints, and then constructed. The expansion process identifies object and relationship instances that are constructed implicitly as part of the construction request of the containing object, and settings flow is then evaluated across all relationships. The verification step checks that all required relationships exist and that the relationships fulfill all constraints. Finally, the construction process determines an appropriate ordering over the deployment, update or removal of each instance and then, in the correct sequence, passes each instance to an instance manager to perform the appropriate action. The goal of the SDM model is to allow a separation of concerns between the developers of applications, the designers of the software infrastructure and the architects of the datacenter. Each of these groups focuses on particular services and has a differing set of dependencies. For example, developers mainly care about the configuration and connectivity between the hosts that they depend on, such as SQL, IIS and the CLR. Designers of the host configuration care about the network topology and the OS configuration, while the architects developing the network topology, OS configuration and storage mapping need to know about the hardware that exists in the datacenter. To support this separation of concerns, SDM exposes a concept of layering. Layering is the use of hosting relationships to bind an application to the services that it depends on without declaring those services as part of the containment structure of the application. We identify four layers as part of the SDM model . . .
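The construction step above must order actions so that each instance's dependencies (for example, its host) are handled first. One natural way to sketch that ordering is a topological sort over the dependency edges; the instance names below are hypothetical.

```python
from graphlib import TopologicalSorter

# Sketch of the construction-ordering step: each instance is deployed
# only after everything it depends on. Edges here are illustrative.
def construction_order(depends_on: dict) -> list:
    """depends_on maps an instance to the set of instances it requires first."""
    return list(TopologicalSorter(depends_on).static_order())
```

For a web application hosted on IIS hosted on an OS, the OS instance comes out first and the web application last.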
The hardware layer identifies the types of machines that exist in the datacenter and the physical connections that exist between these machines. In order to satisfy the relationships required of a system, we bind that system to a host system that has matching capabilities. We call this process placement. At design time, we construct a concrete hosting relationship that represents a possible placement. At deployment time, we instantiate an instance of the concrete hosting relationship to bind the guest system instance to the host system instance.

2.7 Model Evaluation

Associated with the SDM model is a well-defined process for managing change to a distributed system. Each change is driven by a declarative change request that passes through several processing steps before the actions in the request are distributed and then executed against target systems.

3 Implementation Details

There are a number of places in the SDM where we need a strong naming system for identifying objects. The following naming system allows the creator of a type to sign the definition in such a way that the user of the definition can be sure that it is the same as the one that the developer originally published. The following header is an example of an identifier for an SDM namespace: To reference a type in another namespace, you need to import the namespace: Then you can use the alias to refer to types within the namespace: SDM names are scoped by the namespace in which they are defined. A namespace is identified by a name, version, language and a public key token and is contained within a single file. The base form of identity includes name, version, culture, platform and a public key token. The base identity can be used to reference an existing identity or, in conjunction with a signature and a public key, to create a new strong identity. The document will be signed using the private key, allowing the user of the document to verify its contents using the public key.
A public key token is a 16-character hex string that identifies the public part of a public/private key pair. This is not the public key; it is simply a 64-bit hash of the public key. A file version is defined by a four-part number of the form N.N.N.N where 0<=N<65535. By convention, the numbers refer to Major.Minor.Build.Revision.

3.1.3 Simple Names

Simple names are made up of alphanumeric characters and limited punctuation. The name should start with a non-numeric character. We plan to conform to the C# definition for identifiers; the appropriate section (2.4.2) has been inserted below. The spec can be found at: Note that we will not support “@”-prefixed names in the SDM model.

3.1.4 Reserved Names

The following is a list of reserved names that we will prevent users from using when creating names for objects in an SDM model. Within certain contexts, certain names will be reserved. These names are reserved because of our integration with the CLR.

3.1.5 References to Other Namespaces

We allow namespaces to reference other namespaces by importing them into the current namespace and then associating an alias with the namespace. The imported namespace is referenced by name, version and public key token. Versioning will be described in section 3.16.

3.1.6 Qualified Paths

Qualified paths are names that refer to definitions or managers defined in the current namespace or in an aliased namespace. The alias is defined in an import statement. The following simple names identify a type or, in the case of a path, a nested type.

3.1.7 Definition and Member Paths

A path is a sequence of names that identifies a member or setting. A path should begin with a well-known name or member name that is defined by the object or relationship associated with the path.

3.1.8 Instance Paths

Paths in the instance space are based on XPaths, where the element names in the XPath correspond to member names and attributes in the XPath correspond to settings.
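The identity pieces described above can be sketched concretely. The text only says the public key token is a 64-bit hash of the public key rendered as 16 hex characters; the specific choice of SHA-1 with the last 8 bytes reversed follows the .NET strong-naming convention and is an assumption here, not something the text specifies.

```python
import hashlib

def public_key_token(public_key: bytes) -> str:
    """64-bit hash of the public key as 16 hex chars (assumed SHA-1 scheme)."""
    digest = hashlib.sha1(public_key).digest()
    return digest[-8:][::-1].hex()       # 8 bytes -> 16 hex characters

def parse_version(text: str) -> tuple:
    """Parse an N.N.N.N file version with 0 <= N < 65535."""
    parts = tuple(int(p) for p in text.split("."))
    if len(parts) != 4 or not all(0 <= p < 65535 for p in parts):
        raise ValueError("version must be Major.Minor.Build.Revision")
    return parts
```

Parsing versions into tuples also gives the conventional ordering: 1.10.0.0 compares greater than 1.2.3.4 numerically, not lexically.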
3.1.9 Name Resolution

Names that do not begin with an alias are not fully qualified. This means that the scope in which they are evaluated can change the resulting binding. An example of this is nested definitions. When resolving a nested definition name, definitions in local scope hide definitions in a broader scope. All definitions can expose settings declarations. These settings are used to describe the values that can be provided when a concrete definition is created from an abstract definition, or when a definition is referenced from a member within another definition. To define a setting, you first need to define the definition of the setting using XSD. You can then declare a setting that uses the definition and includes a set of attributes to define the behavior of the setting. Once you have a setting declaration, you can provide a value for the setting.

3.2.1 Setting Definitions

We use XSD schemas to define the setting definitions used by setting declarations. We support the use of simple and complex types from a schema, though other schema elements may exist to support the definition of those types. The settings definition section should contain a complete XML schema, including namespace declaration and namespace imports. We will check that the imports in the XSD schema match the imports in the SDM file, with the exception of the XSD schema namespace. This means that all referenced types should be defined in another SDM file; the schema cannot reference types that are defined in arbitrary XSD files.
Settings should be resolvable from three separate namespaces: For this to work, we should place a number of restrictions on the way we declare settings: XSD types from imported SDM documents are accessible using QNames: Hence, for example, if Foo.sdm imports Bar.sdm, the setting types of Bar.sdm may be referenced in the settingTypes element of Foo.sdm as this example illustrates:

3.2.2 Built-in Simple Data Types

The SDM supports a limited set of built-in data types that are an intersection of the XSD and C# namespaces. These types are supported natively by the SDM runtime and are defined in the following table. In addition to these types, users are free to construct and use their own mapping between XSD and CLS types. These types can be flowed to compatible derivations of these types in the C# and XSD type spaces. For example, a value for string can be flowed to an XSD type that defines a restriction on string, and any value can be flowed to a setting that accepts type=“any”.

3.2.2.1 XSD Built-in Types

3.2.2.2 C# Data Types

3.2.2.3 Supported Conversions

These are the conversions that exist between XSD types and CLS types.

3.2.3 Setting Declaration

The settings declaration section uses the setting definitions from the previous section to create named settings. Attributes are used to provide further information about each setting.

3.2.4 List Support

In order to support manipulation of multivalued settings, we support simple lists of setting values. A list is a sequence of values of the same definition as the setting declaration. Lists can be flowed to other lists, which they can either replace or merge with. We do not support duplicate detection when merging values into a list, as this can be done more flexibly using settings flow, and we do not guarantee any form of ordering. A list declaration includes an attribute list set to true: Values are then provided using a settingValueList.
When providing the list, the user can specify whether to replace or merge with previous values. The SDM supports simple manipulation of lists of values. When a path from a flow member targets a setting declaration, the resulting behavior is dependent on the definitions at either end of the path.

3.2.5 Setting Attributes

Setting attributes are used by the runtime to describe the behavior of a particular setting.

3.2.6 Setting Values

Depending on whether the setting has been declared as a single value or a list, the value for the setting can be provided using either a setting value element or a setting value list element.

3.2.6.1 Setting Value

A setting value is used to provide a value for a particular setting declaration. The value should match the definition associated with the declaration. If the value is declared fixed, then the provided value will be used in all derived definitions or referencing members, depending on the point at which the value is fixed. Once a value is fixed, it cannot be overridden.

3.2.6.2 Setting Value List

A setting value list is used to provide one or more values for a setting declared as a list. When declaring the values, the user can decide to merge with previous values or to overwrite all previous values.

3.2.7 Settings Inheritance

Settings inheritance means that a derived definition implicitly contains all the settings declarations from the base definition. Some important aspects of settings inheritance are:

3.2.8 Type Conversions

We support lossless conversions between the built-in types. Other type conversions require flow in order to execute the appropriate conversions. Many of the objects in the SDM can be attributed to capture behavior that is orthogonal to the core behavior of the object. We use a general attribution model defined as follows:

3.4 Definitions and Members

Definition is the base from which object, relationship, constraint and flow definitions are derived.
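The settingValueList semantics above reduce to a simple rule: a new list of values either replaces the previous values or merges with them. Per the text, merging does no duplicate detection and guarantees no ordering; this sketch makes that behavior concrete (the function name is illustrative).

```python
# Replace-or-merge behavior for a multivalued setting.
def apply_value_list(previous: list, new: list, replace: bool) -> list:
    if replace:
        return list(new)      # overwrite all previous values
    return previous + new     # merge: duplicates are kept, order incidental
```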
All definitions can include a settings schema and design surface data. Each definition is identified by a simple name and references a manager. The manager is responsible for providing extension support to the SDM runtime for this particular definition. The settings schema defines the values that can be found on an instance of this definition. The DesignData element is used to contain data that is specific to the display and editing of this definition on the design surface. Members are used to identify definition instances that can exist at runtime. All members are identified by a unique name within the scope of the type, can provide settings for the definition they reference and can contain design-surface-specific data.

3.5 Settings Flow

Settings flow is used to pass parameters between members of an object definition and between participants in relationships. As part of a flow, the user can use transformations to combine or separate setting values and to calculate new setting values. All settings flow members use a flow definition to implement the transform. A flow definition is declared in the SDM file. The following is a flow type that parses a URL. A flow member is then declared within an object or relationship. The flow member provides the input for the flow definition and then directs the output from the flow to the target settings.

3.5.1 Flow Definition

We use a flow definition to define a particular transform that we wish to apply to a set of setting values. The flow definition exposes a settings schema that defines the input settings (write-only settings) and the output settings (read-only settings), a DesignData section for design-surface-specific information such as an input interface for defining the transform, and a description for use when browsing the SDM file. The flow definition is identified by name within the namespace in which it is defined. The definition also identifies a manager that will support the runtime when it evaluates the flow.
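A URL-parsing flow of the kind mentioned above can be sketched as a pure transform: the write-only input setting is a URL, and the read-only outputs are its parts. The setting names here are illustrative, not taken from any actual SDM flow definition.

```python
from urllib.parse import urlparse

# Illustrative flow transform: one input setting ("url") flows out as
# four output settings. A real flow definition would declare these in
# its settings schema.
def url_flow(inputs: dict) -> dict:
    parts = urlparse(inputs["url"])
    return {"scheme": parts.scheme,
            "host": parts.hostname,
            "port": parts.port,
            "path": parts.path}
```

A flow member would then bind "url" to a source setting on one member and direct the outputs to target settings on others.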
We expect that the runtime will include several standard flow definitions to simplify the construction of flow elements where straightforward transformations are required. Examples might include copy, merge and string substitution. Since flow definitions can be parameterized, we also expect there to be one or more simple transformations that perform different actions based on configuration parameters.

3.5.2 Flow Member

Each flow element identifies one or more source nodes, one or more destination nodes, some static settings and a flow definition. When the flow is evaluated, source data is collected from the source nodes, combined with settings from the flow element and passed to the flow definition for transformation. The output data is passed to the destination nodes. Re-evaluation of the flow will be triggered whenever one of the source values changes. For this reason, we need to avoid circular flows that cause values to flip-flop. If the value remains constant, then the loop will terminate. The runtime will detect and terminate infinite loops by keeping track of the stack depth.

3.5.3 Setting Target

A settings target identifies a path to a setting value in a member or nested member that is relative to a well-known name in the context in which the flow is defined. Examples of well-known names include this in a definition or reference declaration, host and guest in a hosting relationship declaration, or a target defined within a constraint declaration. The setting target also identifies the setting on the associated flow definition that will be used as either the source value or the destination setting for the setting identified by the path. Output path is a variation on the settingTarget that supports the semantics for fixing and replacing the target values.

3.6 Settings Constraints

Constraints are used to identify restrictions on setting values of members of a definition or on the participants in a relationship.
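The re-evaluation rule above (re-run a flow whenever a source value changes, terminate when values stop changing, and cut off circular flows by tracking depth) can be sketched as a small fixed-point loop. Everything here is illustrative; the real runtime tracks stack depth rather than an explicit counter.

```python
# Re-evaluate flows until values stop changing; a depth bound stands
# in for the runtime's stack-depth guard against circular flows.
MAX_DEPTH = 100

def propagate(values: dict, flows: list, depth: int = 0) -> dict:
    """flows is a list of (source_key, dest_key, transform) triples."""
    if depth > MAX_DEPTH:
        raise RuntimeError("circular flow did not converge")
    changed = False
    for src, dst, fn in flows:
        new = fn(values[src])
        if values.get(dst) != new:
            values[dst] = new
            changed = True
    # a constant value terminates the loop, as described in the text
    return propagate(values, flows, depth + 1) if changed else values
```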
These restrictions are evaluated in the instance space at both design time and deployment time. All setting constraints use a constraint definition to evaluate the setting values. The constraint definition uses settings declarations to identify the values it constrains. The following constraint definition implements a simple comparison function that takes two arguments and an operator, then evaluates the constraint and finally returns success or error. A constraint member is then used to provide the values to the constraint type for evaluation.

3.6.1 Constraint Definition

A constraint definition defines a constraint that acts on a set of input values. The constraint can be parameterized to select custom behavior or to support a simple constraint engine that uses parameters to define its behavior. We expect that a set of standard constraint definitions will be written for simple parameter value constraints and a set of complex constraints to support known relationships between abstract objects.

3.6.2 Constraint Member

A constraint member identifies a set of input values for a particular constraint definition. The member can identify static values for settings and can use input statements to bind a constraint setting to a path.

3.7 System, Endpoint and Resource Definitions

This section describes the schema for abstract and concrete object definitions. An abstract object definition exposes a set of setting declarations, can contain constraints on the relationships that it participates in and has an associated manager in the runtime. The following is an abstract system definition for a web server. The web server has two settings and has a relationship constraint that requires it to contain at least one vsite. The vsite is an abstract endpoint definition that contains server binding information.
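The simple comparison constraint described above (two arguments, an operator, and a success-or-error result) can be sketched directly. The operator spellings are assumptions; the text does not list them.

```python
import operator

# Parameterized comparison constraint: two input values plus an
# operator parameter; returns success (True) or error (False).
OPS = {"==": operator.eq, "!=": operator.ne,
       "<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

def evaluate_constraint(left, op: str, right) -> bool:
    return OPS[op](left, right)
```

A constraint member would bind `left` and `right` to setting paths and supply `op` as a static parameter.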
A concrete system definition for a frontend webserver identifies the webserver category as static content, and contains a single byReference endpoint member which can represent between 1 and 100 endpoint instances. The concrete endpoint definition for the endpoint is nested inside the system definition, and it defines the IP endpoint for the vsite to be port 80.

3.7.1 Object Definition

Abstract and concrete object definitions extend the following base object definition. In addition to the elements of the base type Definition, they share the ability to constrain the relationships that the objects participate in.

3.7.2 Abstract Object Definitions

Abstract object definitions are used to define the building blocks that the design surface exposes and from which all concrete objects are derived: a concrete object definition should implement an abstract object definition. Abstract object definitions extend SDM object by adding simple inheritance: the extends attribute is used to identify a base object definition for an abstract object definition. The abstract object definition then inherits the settings and relationship constraints from that base object definition. Through inheritance, the object definition can extend the settings and constraints of the abstract object definition by adding new settings and constraints. Abstract object definitions can also add constraints on the relationships that they wish to participate in. For example, an abstract object definition may require the existence of certain relationships, may constrain the object definitions that may be placed on the other end of the relationship or may constrain the settings on the instances that participate in a given relationship.

3.7.2.1 Abstract Object Definition

All abstract objects can identify the layer with which they wish to be associated. If this is not provided it is assumed that the object definition can be used at any layer.
Abstract object definitions can identify a base object definition that they extend, in which case they inherit the settings and constraints of that object definition and can be substituted for the base object definition in the relationships in which the base object definition participates.

3.7.2.2 Abstract Endpoint, System and Resource Object Definitions

There are three classifications of abstract object definition in the SDM model: abstract endpoint definition, abstract system definition and abstract resource definition. Each of these is a simple rename of abstract object definition. Endpoint definitions represent communication endpoints. The settings on the endpoint relate to its use in the binding process. For example, with a client-server protocol, server endpoint definitions might use the settings schema to identify settings that are required to bind to the endpoint, while client endpoint definitions might expose client-specific connection attributes. System definitions are used to represent collections of data, software or hardware elements. Examples include web services, databases and switches. Resource definitions are used to capture specific elements that can be identified as part of a system definition.

3.7.3 Implicit Base Definitions

All abstract object definitions that do not extend another abstract object definition implicitly extend one of the endpoint, system or resource base definitions as illustrated in The definitions of these types include base constraints that control their instantiation within the model; they can be found in System.sdm.

3.7.4 Concrete Object Definitions

Concrete object definitions provide an implementation for an abstract object definition. The implementation is constructed from object and relationship members, values for the settings of the implemented abstract definition, new settings declarations, flow between members and constraints on members. Concrete definitions can also contain declarations of nested definitions.
These definitions can be used for members within the scope of the containing definitions and referenced in constraints outside the scope of the definition.

3.7.4.1 Base Concrete Object Definition

The base concrete type extends object definition, inheriting setting declarations, design data, an optional manager reference, a name, constraints on the relationships that it can participate in, the ability to provide values for the abstract definition's settings and the ability to describe flow between its settings and its members' settings. The concrete definition then adds the ability to identify the abstract definition that it implements, and several optional attributes add the ability to customize the binding behavior of the definition.

3.7.4.2 Object Member

Object members should reference either an abstract or concrete object definition. They can represent an array of instances, in which case they can define the upper and lower bounds for the array. If they are a reference member, then the user instantiating the object should explicitly construct an instance for the member. If they are not a reference member, then the runtime will create an instance at the same time as the outer object is created. In an sdm model we need to differentiate members that get created when the parent is constructed and destroyed when the parent is destroyed from those that may have lifetimes independent of the parent. We use the IsReference attribute for this purpose. A simple analogy is with C++ declarations that allow stack-based and heap-based construction based on whether new is used to create an instance. If a member is marked as IsReference then an explicit new operation is required on the part of the operator to create an instance and associate it with the member. There are a number of reasons that we do this:

3.7.4.3 Relationship Member

Relationship members identify the relationships that will exist between object members when they are created.
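The IsReference distinction described above — members created and destroyed with their parent versus members that require an explicit new operation by the operator — can be illustrated with a small sketch. The classes and the vdir/app member names are hypothetical, chosen only to echo the examples used elsewhere in this document.

```python
# Illustrative sketch (hypothetical classes) of IsReference semantics:
# non-reference members are constructed implicitly with the parent,
# while reference members wait for an explicit "new", much like C++
# stack vs heap construction.

class Member:
    def __init__(self, name, definition, is_reference):
        self.name = name
        self.definition = definition
        self.is_reference = is_reference

class Instance:
    def __init__(self, definition, members=()):
        self.definition = definition
        self.children = {}
        for m in members:
            if not m.is_reference:
                # created with the parent; destroyed with the parent
                self.children[m.name] = Instance(m.definition)

    def new_member(self, member):
        # explicit construction by the operator (the "new" operation)
        if not member.is_reference:
            raise ValueError("non-reference members are created implicitly")
        inst = Instance(member.definition)
        self.children[member.name] = inst
        return inst

vdir = Member("vdir", "Vdir", is_reference=False)
app = Member("app", "WebApp", is_reference=True)
site = Instance("WebSite", [vdir, app])
```

Immediately after construction, `site` already contains `vdir` but not `app`; the operator must call `site.new_member(app)` to bring the reference member to life.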
Relationship instances are either explicitly created by the operator or implicitly created by the runtime. Examples of the former are hosting relationships between instances; of the latter, communication relationships between systems.

3.7.4.3.1 Hosting Member

Hosting members are used to declare a hosting relationship between two object members. The object members may be direct members of the containing definition or nested members that have a membership relationship with the definition. There should be a membership chain between the referenced member and the containing definition.

3.7.4.3.2 Communication Member

A communication member is used to declare a communication relationship between endpoint members of immediate system members of the definition.

3.7.4.3.3 Containment Member

A containment member is used to declare that a type member is contained by the type. Each type member can either be contained or delegated. The containment member automatically sets the parent value of the containment relationship to be the this pointer of the relationship.

3.7.4.3.4 Delegation Member

A delegation member is used to set up a delegation relationship between an endpoint definition member on the outer type and an endpoint definition member on an immediate system member of the outer type.

3.7.4.3.5 Reference Member

A reference member is used to set up a reference relationship between two immediate or nested members of the outer system.

3.7.4.4 Endpoint Definition

Endpoint definitions extend the base object definition by adding the ability to declare nested resource types, resource members and host, containment and reference relationship members.

3.7.4.5 Service Definition

A system type extends the base type by adding support for: nested endpoint, system and resource types; endpoint, system and resource members; and host, containment, connection, delegation and reference relationships.
3.7.4.6 Resource Definition

A resource type may contain nested resource type definitions, resource members, and host, containment and reference relationship members.

3.7.4.7 Relationship Rules

For a particular instance of an object definition the following tables identify the cardinality associated with each of the roles that the instance can play.

3.7.4.7.1 System Rules

3.7.4.7.2 Endpoint Rules

3.7.4.7.3 Resource Rules

Every instance should participate in exactly one containment relationship and at least one hosting relationship. This means that:

3.8 Relationships

Relationships are used to identify possible interactions between types. They are binary and directed, each identifying the type of the instances that can participate in the relationship. Relationships can also constrain the settings of the instances that participate in the relationship and can flow setting values across the relationship. The following is a possible hosting relationship for a webApplication on the webserver described in the types section. The relationship contains a constraint that verifies that the security models of the two systems are compatible, and it contains a settings flow member that copies the server name from the vsite to the vdir. A relationship is used by declaring a relationship member that identifies the type members that will participate in the relationship.

3.8.1 Relationship Definition

The base relationship definition adds object constraints and flow to definitions. Object constraints are statements about the setting values for the object instances that participate in an instance of this relationship. For example, a communication relationship that represents a DCOM connection may check that the security settings for client and server are compatible.
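The participation rule stated above — exactly one containment relationship and at least one hosting relationship per instance — can be checked mechanically. The following sketch uses hypothetical relationship tuples and instance names; it is not the runtime's validation code.

```python
# Sketch of the instance-participation rule: every instance takes part
# in exactly one containment relationship (as the member) and at least
# one hosting relationship (as the guest). Names are illustrative.

def check_participation(instance, relationships):
    """relationships: list of (kind, parent_or_host, member_or_guest)."""
    errors = []
    containments = [r for r in relationships
                    if r[0] == "containment" and r[2] == instance]
    hostings = [r for r in relationships
                if r[0] == "hosting" and r[2] == instance]
    if len(containments) != 1:
        errors.append(f"{instance}: expected exactly 1 containment, "
                      f"found {len(containments)}")
    if not hostings:
        errors.append(f"{instance}: expected at least 1 hosting relationship")
    return errors

rels = [("containment", "frontend", "vsite1"),
        ("hosting", "iis1", "vsite1")]
good = check_participation("vsite1", rels)
bad = check_participation("orphan", rels)
```

`good` comes back empty, while `orphan` fails both checks since it is neither contained nor hosted.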
In this case, there is a strict relationship between settings that could easily be captured as part of the design process; there are four factorial setting combinations over the relationship but a much smaller number of valid combinations. Flow gives the relationship developer the ability to forward values from one instance to another. This allows the object definitions to be developed separately from their possible interactions, and allows the instance to stand alone as a reference point for information rather than requiring a subset of the relationship graph in order to fully describe a particular instance. The name for the relationship should be unique within the namespace that contains the relationship.

3.8.2 Abstract Relationships

Abstract relationships are relationships that are defined between two abstract object definitions. They represent possible interactions between the two definitions.

3.8.2.1 Abstract Communication Relationship

Communication relationships are used to capture possible communication links between endpoint definitions. They are used to describe interaction between independently deployed software elements. The communication relationship schema extends the base relationship schema by adding client and server endpoint references. The following combinations of abstract type pairs are valid for communication relationships:

3.8.2.2 Abstract Hosting Relationship

Hosting relationships are used to capture the fact that a guest requires a host in order to be constructed. Since there can be more than one possible host for a guest, this implies that the hosting relationship is also responsible for the construction of the guest on a host. So in order to create an instance of an object, a hosting relationship should exist from a guest to a compatible host. For example, a hosting relationship may exist between a Webservice object definition and an IIS object definition.
In this case, the relationship indicates that it may be possible to create an instance of system MyWebservice on an instance of system MyIIS using the manager on the hosting relationship, assuming MyWebservice and MyIIS implement webservice and IIS respectively. We do not know whether it will be possible to create the relationship until we have evaluated constraints that exist on both the systems and the relationship. The following combinations of abstract definition pairs are valid for hosting relationships:

3.8.2.3 Abstract Containment Relationship

A containment relationship between two abstract objects captures the fact that a concrete type based on the parentType can contain members based on the memberType. Containment implies that the parent instance can control the lifetime of the member instance and can delegate behavior to the member instance. The following combinations of abstract definition pairs are valid for containment relationships:

3.8.2.4 Abstract Delegation Relationship

Delegation is used to forward behavior from an outer system to a contained system. The way we do this is by delegating the endpoints on the outer system to endpoints on the inner system. This effectively forwards all interaction that would have been directed to the outer system to the endpoint on the inner system. Delegation can be chained, allowing the inner system to further delegate its behavior to another system. A delegation relationship defines pairs of abstract endpoint definitions that can participate in the delegation. Each relationship identifies an abstract endpoint definition that can act as a proxy and an abstract endpoint definition to which it can delegate behavior. The following combinations of abstract type pairs are valid for delegation relationships: We may allow resource and system delegation to support binding between layers. For example, to allow IIS to expose part of the file system without having to deploy it.
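The delegation chaining described above — an outer endpoint forwarding to an inner endpoint, which may itself delegate further — amounts to following a chain of proxies until the endpoint that finally provides the behavior is reached. A minimal sketch, with hypothetical endpoint names:

```python
# Sketch of endpoint delegation resolution: follow the proxy chain to
# the endpoint that ultimately provides the behavior, guarding against
# accidental delegation cycles.

def resolve_delegation(endpoint, delegations):
    """delegations: {proxy endpoint: endpoint it delegates to}."""
    seen = set()
    while endpoint in delegations:
        if endpoint in seen:
            raise ValueError("delegation cycle detected")
        seen.add(endpoint)
        endpoint = delegations[endpoint]
    return endpoint

# An outer system delegates its http endpoint to a middle system,
# which further delegates to an inner system:
chain = {"outer.http": "middle.http", "middle.http": "inner.http"}
target = resolve_delegation("outer.http", chain)
```

Interaction directed at `outer.http` therefore lands on `inner.http`, mirroring the forwarding semantics in the text.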
3.8.2.5 Abstract Reference Relationship

We use reference relationships to capture strong dependencies between instances that exist in addition to the hosting relationship dependency. These dependencies are used to control construction order during deployment and to flow parameters between systems during installation and update. Because reference relationships indicate a strong dependency, we cannot allow a reference relationship to cross a system boundary. This means that resources within one system cannot have dependencies on resources in another system; that would make the system no longer an independent unit of deployment. Where dependencies exist between systems, we use communication relationships. Communication relationships can change over time without requiring reinstallation of the system. The following combinations of abstract type pairs are valid for reference relationships:

3.8.3 Implicit Base Relationships

All abstract relationships implicitly extend one of the base relationship definitions as illustrated in

3.8.4 Concrete Relationships

Concrete relationships are relationships between two concrete object definitions. Each concrete relationship should implement an abstract relationship. The abstract relationship should be between a matching pair of abstract object definitions that are directly or indirectly (through inheritance) implemented by the concrete object definitions.

3.8.4.1 Hosting Relationship

When we deploy an application to a datacenter we need to resolve all the outstanding hosting relationships for the systems within the application. To do this the operator would need to create hosting members for each of the required hosting relationships. To simplify the task of the operator and to allow the developer to guide the deployment process, the developer can instead create a concrete hosting relationship.
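Since reference relationships control construction order during deployment, ordering the create actions reduces to a topological sort over the dependency edges. A sketch using Python's standard library; the system names are illustrative.

```python
# Reference relationships imply a construction order: a system's
# referenced targets must be constructed before the system itself.
# A topological sort over the dependency graph yields a valid order.
from graphlib import TopologicalSorter

def construction_order(dependencies):
    """dependencies: {system: set of systems it references}."""
    return list(TopologicalSorter(dependencies).static_order())

# For one application to be installed, another should already exist —
# e.g. a database that a web service references:
order = construction_order({"webservice": {"database"}, "database": set()})
```

`static_order` emits dependency-free nodes first, so `database` precedes `webservice` in the resulting list; a cycle of references would raise `graphlib.CycleError`, flagging an invalid change request.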
The concrete hosting relationship is used to group a set of hosting relationship members in such a way that the operator need only declare a single hosting member when deploying the application. The following combinations of concrete type pairs are valid for hosting relationships: For example, the following concrete relationship binds a layer-three system (Bike) to a layer-two host (operating system). In this case, we define a setting for the hosting relationship with the default value of "system folder". We flow this setting to one of three hosting members that define the hosting relationship between systems of the layer 3 application and systems of the layer 2 host.

3.8.4.2 Reference Relationship

We can use a concrete reference relationship between two concrete types to capture specific dependencies between systems that do not involve communication relationships. For example, we can capture the fact that for one application to be installed, another should already exist. The following combinations of concrete type pairs are valid for reference relationships:

3.9 Object and Relationship Constraints

We use object and relationship constraints to define the topology of the concrete space and to constrain the settings of objects when used in particular relationships. For example, within an abstract object definition (A) we may want to identify that implementations of this abstract definition should contain one instance of another abstract object definition (B). Assuming that at least one appropriate containment relationship already exists, to do this we would use a relationship constraint within A that looked as follows: The constraint identifies that there should exist a containment relationship in which the implementation of A plays the role of parent and the type at the other end of the relationship (the member) is of type B.
If we want more control over the configuration of B we can add a constraint on the settings of type B as follows: In this case, we added a constraint that required the name of the member to equal the string "myPort". We can also add constraints to relationships; we call these object constraints. From within a relationship we constrain the objects that participate in the relationship. For each role in the relationship, we can identify an object definition and then we can add setting constraints to those object definitions. From the relationship perspective the cardinality is always minOccurs=1 and maxOccurs=1, so this does not appear in the constraint declaration. Finally, we can nest constraints. This gives us the ability to chain constraints together; the outer constraint sets the context for the inner constraint. The following is an example of an IIS system that hosts webapp systems and then constrains the webApp to only contain endpoints of a specific type. In this case, we use a group of object constraints to specify a set of possibilities of which at least one should be true. The nested constraints form a path that we can evaluate from the outside in. Each constraint on the path can access the settings of previous instances on the path as well as the current instance. The evaluation of nested constraints is conducted as if the constraint had been defined within the identified system. From the perspective of foo the following two scenarios should be equivalent: in the first, foo places a nested constraint on a contained system bar; in the second, the type bar already contains the constraint.

3.9.1 Constraint Model

There are two parts to the constraint model: guards and predicates. We use guards to define the context in which we execute the predicate. For example, within a relationship, we use guards to identify a particular combination of types for which we want to execute a predicate.
Within an object, we use guards to identify a set of relationships to other objects. Predicates are then executed when the requirements of their guards have been met. We have two forms of predicate: setting constraints that validate setting values and group constraints that validate a set of constraints. We can nest guards within guards, in which case the inner guard is only checked when the outer guard is satisfied. This allows us to build paths that support verification of a relationship structure. The combination of a guard and its predicates can have a cardinality that indicates the number of times that the guard should match and the predicate evaluate to true. A guard is defined as either an ObjectConstraint or a RelationshipConstraint. Object constraints identify two object definitions that are associated with either end of the relationship. Relationship constraints identify a relationship definition and a target object definition. An object constraint can be optional or required, while a relationship constraint has a lower bound and an upper bound. This difference in cardinality reflects the fact that a relationship can only ever identify two types while a type can participate in multiple relationships. A predicate is either a settings constraint that contains a rule or a group that contains a set of guards. The predicate is evaluated in the context of the guard. In the case of a settings constraint, the predicate can identify settings from the owner of the root guard and the context identified by each nested guard. Groups are used to identify a set of guards of which at least one should match and evaluate to true. This example shows a guard that evaluates to true whenever there is a containment relationship to a webapp. This guard can evaluate to true at most once; further matches will result in the return of an error to the user. This example adds a predicate to the guard.
The guard will only evaluate to true when the relationship and target definitions match and the setting constraint evaluates to true. If the relationship and target definition match and the setting constraint is not true, then an error will be returned to the user. If the relationship and target type match and the setting constraint evaluates to true more than once, then an error is returned to the user. In this example, we nest a guard within a guard. When the outer guard is true (the type that contains the constraint also contains a webapp), we then evaluate the inner guard in the context of the outer guard. That means the inner relationship constraint will be evaluated in the context of a webapp instance. The inner constraint will return true if the webApp contains zero or one vdirs; if it contains more than one vdir then the constraint will return an error to the user. The context of the object constraint is the primary object definition (the first object definition). This means that the relationship constraint will be evaluated in the context of webapp. The relationship constraint defines two possible contexts: the first is the relationship, which will be the context for object constraints, and the second is the target object definition, which is the context for relationship constraints. In this example, we use a group to contain two relationship constraints that will both be evaluated in the context of the Webapp. The group will raise an error unless at least one of the relationship constraints fires and returns true. In this case, the Webapp should contain either a Vdir or a directory.

3.9.2 Base Constraint

3.9.3 Object Constraint

An object constraint describes a constraint on one or both of the roles of a relationship.
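The guard-and-predicate mechanics described above can be sketched as a small evaluator: a relationship constraint acts as a guard matching a relationship kind and target definition, its setting-constraint predicate runs only on matches, and the match count is checked against the minOccurs/maxOccurs cardinality. All data shapes and names here are hypothetical.

```python
# Sketch of guard/predicate evaluation for a relationship constraint:
# count the relationships that match the guard, run the predicate on
# each match, and enforce the cardinality bounds.

def evaluate_relationship_constraint(relationships, kind, target_def,
                                     predicate, min_occurs, max_occurs):
    errors, matches = [], 0
    for rel_kind, target in relationships:
        # the guard: relationship definition and target definition match
        if rel_kind == kind and target["definition"] == target_def:
            matches += 1
            # the predicate: a setting constraint on the matched target
            if not predicate(target):
                errors.append(f"predicate failed on {target['name']}")
    if matches < min_occurs:
        errors.append(f"expected at least {min_occurs} match(es), got {matches}")
    if matches > max_occurs:
        errors.append(f"expected at most {max_occurs} match(es), got {matches}")
    return errors

rels = [("containment", {"definition": "webapp", "name": "app1",
                         "securityModel": "windows"})]
errs = evaluate_relationship_constraint(
    rels, "containment", "webapp",
    lambda t: t["securityModel"] == "windows",
    min_occurs=1, max_occurs=1)
```

With one matching containment relationship whose setting satisfies the predicate, `errs` is empty; a second matching webapp would trip the maxOccurs bound, echoing the "at most once" behavior in the text.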
The constraint has a name to aid identification of the constraint in the case that it fails; it contains a list of settings constraints targeted at the types associated with the roles; and it may further constrain the instance to be of an object derived from the definition associated with the role.

3.9.4 Object Constraint Group

An object constraint group allows sets of object constraints to be grouped together so that they can be evaluated using at-least-one semantics. The group will return an error unless at least one of the object constraints matches the objects on the relationship and its contained predicates evaluate to true. We ignore the required attribute for type constraints if the constraint is a direct member of the group.

3.9.5 Relationship Constraint

Relationship constraints are used to constrain the relationships in which an object can participate. A relationship constraint identifies the relationship definition, optionally the object definition of the instance at the other end of the relationship, and the cardinality of the relationship. The constraint is given a name so that it can be identified in error messages. The body of the relationship constraint contains predicates about both the relationship and the instances at the other end of the relationship. Relationship constraints can be used for a number of purposes: simply using the cardinality without additional predicates, they can be used to identify relationships that should be provided for an instance to operate correctly; with predicates, they can be used to narrow the set of configurations for instances that this object is willing to interact with.

3.9.6 Relationship Constraint Group

A relationship constraint group allows sets of relationship constraints to be grouped together so that they can be evaluated as a predicate with at-least-one semantics.
The group will return an error unless at least one of the contained relationship constraints matches a relationship definition and target object and its contained predicates return true. If any of the predicates in the contained constraints return an error, then these errors are propagated to the user. The minOccurs cardinality of the contained relationship constraints is ignored, but if the maxOccurs cardinality is violated then an error will be returned to the user.

3.10 Object Manager

Object managers are the mechanism by which types and relationships insert custom behavior into the runtime environment. There are several roles that a manager can support for each type that it manages: it can participate in the installation of the type, it can provide a CLR representation of the type, it can be involved in policy decisions about how bindings between types are resolved and it can provide the implementation for complex constraints and flow. All object manager roles are exposed through the CLR as entry points into strongly named classes. Object managers are packaged and versioned in the same manner as other types in the sdm; they are distributed in system distribution units and their version and strong name are derived from the sdm file in which they are declared. An object manager can support one or more roles for each type that it supports. These roles include:

3.11 SDM Document Structure

An sdm document provides a strong identity, versioning and localization information for a set of relationships, objects and managers. The information section of an SDM document contains human-readable information to support identification and management of sdm documents.

3.12 Change Request

The initial request contains a single group of actions. As the request is processed by the runtime, more structure is added through nested grouping and more actions are added as a result of the expansion and flow process.
A change request that has been through this evaluation process and is now ready for execution against the target machines is called a fully qualified change request. See section 3.13 for more information.

3.12.1 Consistency Rules

When actions are performed on the SDM instance space we validate that, after the actions are complete, all the instances in the SDM instance space are still in a consistent state. By consistent state we mean that all constraints that apply to the instance are still valid. For example, if we create an instance of a client that requires a connection to a server, then when the sequence of actions used to create and connect the client is complete, the connection should exist between the client and a server. The constraints used to evaluate model consistency can be evaluated either on a per-action basis or on the conclusion of a set of actions. We call these two types of consistency operational consistency and transactional consistency. If an object will be in an inconsistent state after the transaction is complete, we allow the user to explicitly mark that instance as offline. When an instance is offline we do not evaluate constraints that apply to the instance and the instance will not appear to exist from the perspective of other instances. This may mean that in turn all those instances should also be marked as offline. Offline is propagated from parent to child and from host to guest, so marking a system as offline will mark all its owned instances as offline and all instances that are hosted on it as offline.

3.13 Model Evaluation

In this section we describe the behavior of the SDM model within the scope of the SDM runtime.

3.13.1 Definition Space

The definition space contains all the definitions that are known to the sdm runtime. An sdm document is presented to the runtime either as part of an sdu or as a stand-alone document. We will attempt to load the file from the disk.
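The offline propagation rule described above — offline spreads from parent to child and from host to guest — is a transitive closure over two edge sets. A sketch with illustrative instance names:

```python
# Sketch of offline propagation: marking an instance offline
# transitively marks everything it contains (parent -> child) and
# everything hosted on it (host -> guest).

def mark_offline(root, children, guests):
    """children/guests: dicts mapping an instance to the instances it
    contains / hosts. Returns the full set taken offline."""
    offline, stack = set(), [root]
    while stack:
        inst = stack.pop()
        if inst in offline:
            continue
        offline.add(inst)
        stack.extend(children.get(inst, []))
        stack.extend(guests.get(inst, []))
    return offline

children = {"frontend": ["vsite"]}
guests = {"frontend": ["webapp"], "webapp": ["vdir"]}
taken_offline = mark_offline("frontend", children, guests)
```

Taking the frontend offline pulls its contained vsite and the hosted webapp offline, and the webapp in turn takes its hosted vdir offline, matching the chained behavior the text describes.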
3.13.1.1 Schema Validation

The first step is to validate that the sdm document matches the sdm schema. At this point we will return errors for all unknown elements, types that are missing required elements or attributes, or types that contain invalid data.

3.13.1.2 Setting Value and Type Resolution

In the type resolution phase we resolve all references to types within the sdm file (anywhere a qualified name is used in the schema). First we validate that all type references that are within the scope of the document are valid. These are all type references that do not contain an alias. We then try to resolve all import statements. If we cannot resolve an import statement we create a namespace load error; if we can resolve an import statement we try to locate the type within the namespace. The namespace resolution process may generate other errors if we are forced to load the namespace from an sdm file.

3.13.1.3 Path Resolution

During the path resolution phase, we try to resolve all paths to members and settings that are defined in the document. Paths that refer to members or settings with unresolved types will not raise an error.

3.13.1.4 Relationship Participation

In the type space we check that a type declaration does not violate any of the constraints with respect to the participation of its members in relationships. To do this we evaluate all type and relationship constraints that have no associated settings constraints.

3.13.1.5 Instance Simulation

In the instance simulation we attempt to flow values and evaluate constraints in such a way that we can identify constraints that we know should fail, without flagging constraints that may or may not fail based on user input. To do this we construct a model of the instance space and evaluate flow and constraints based on this instance space. If the flow or constraint is known to result in an error then we raise an error; if it could possibly result in an error then we raise a warning.
We build an instance space change request using the minOccurs constraint on all byReference systems. When the minOccurs is 0 we create a single instance and mark it as optional. We then pass the change request through the same expansion and flow processes as we use for a standard change request. We then evaluate all flows that have fully defined input values. If the input values are not fixed and could be changed by a user, then we mark the output of the flow as provisional. A provisional input will chain through any flow operations that consume it. If a flow does not have complete input values and a user could provide values, then we mark all the outputs of the flow as undefined. Flow from optional systems also results in provisional values. Once we have flowed values we evaluate the constraints based on these values. Constraints that fail on provisional values will be raised as warnings; a warning will also be raised when a constraint could not be evaluated due to undefined values.

3.13.2 Instance Space

The model evaluation process is initiated by the submission of a declarative change request. This request will contain a set of create, update or delete operations that target instances within the runtime. We then pass the request through a series of pipeline stages before enacting the required changes on the target system as illustrated in The following sections outline the responsibilities of each expansion step.

3.13.2.1 Request Submission

In order to initiate a change to the system, an operator or process should submit a change request. The change request contains a set of actions that the operator wants performed over the instances in the runtime; these actions fall into three groups: create actions, update actions and delete actions. The request is then treated as an atomic set of actions that should either complete or fail as a group.
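The value model used by the instance simulation above — fixed values, provisional values a user could still change, and undefined values where an input is missing — determines whether a failed constraint becomes an error or a warning. A sketch of that three-way classification, with hypothetical names:

```python
# Sketch of the instance-simulation value model: a flow output is
# FIXED, PROVISIONAL (a user could change an input) or UNDEFINED (an
# input is missing); constraint failures on provisional or undefined
# values surface as warnings rather than errors.
from enum import Enum

class Certainty(Enum):
    FIXED = 1
    PROVISIONAL = 2
    UNDEFINED = 3

def flow_output(input_certainties):
    """The weakest input certainty chains through the flow."""
    return max(input_certainties, key=lambda c: c.value)

def check(constraint_holds, certainty):
    if certainty is Certainty.UNDEFINED:
        return "warning: constraint could not be evaluated"
    if constraint_holds:
        return "ok"
    if certainty is Certainty.PROVISIONAL:
        return "warning: constraint failed on provisional value"
    return "error: constraint failed"

result_fixed = check(False, Certainty.FIXED)
result_prov = check(False, Certainty.PROVISIONAL)
```

A constraint known to fail on fixed inputs yields an error, while the same failure on a provisional input only warns, since the user may yet supply a value that satisfies it.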
This allows the constraint validation process to consider all actions in the request when evaluating whether the set of actions will result in a valid change to the model.

3.13.2.1.1 Type Resolution
In the type resolution phase we resolve all types and members that are referenced in the change request. The change request will assume that these are already loaded by the runtime; the runtime will need to initiate a load/compile action if they do not exist.

3.13.2.1.2 Path Resolution
During the path resolution phase we resolve references to existing instances and instances defined by create actions within the change request.

3.13.2.2 Expansion
Expansion is the process where we take a change request and populate all the remaining actions required to execute the request: in general these actions are construction and destruction actions for type and relationship instances. In theory the operator could provide details for all the actions required to construct or destroy an instance, but we don't require this because it would make the change request authoring process very complex. Instead we try to automate as much of this process as possible: the operator provides key information about the changes they want by identifying actions on byReference members; we then fill in the rest of the actions on nested byReference and byValue members and relationships.

3.13.2.2.1 Value Member
During the expansion stage we identify all the non-reference type members. We know the cardinality of these members and we know all the required parameters, so for each member we add create requests to the change request for those members whose parent is being created. If the change request contains destruction operations, we add destruction operations for all their contained instances.

3.13.2.2.2 Reference Member Expansion (Discovery)
In general, reference members require more information to construct than value members.
Their cardinality is often undefined and they can have deployment time settings that require values in order for the instance to be constructed. So the process of expanding a byReference member can require more information about the instance than the runtime is in a position to provide. The process by which we obtain this information is called Discovery. The process of discovery will populate reference type members as part of a construction or update action. Only reference members with object managers that support discovery will participate in this process. When a new instance is discovered, we first check that the instance does not already exist in the SDM database using instance specific key values. Once we know it is a new instance, we then classify the instance according to the types of the members we are discovering. If the instance does not match a member, or there is an ambiguous match, then we leave the member reference blank and mark the instance as offline and incomplete.

3.13.2.2.3 Relationship Expansion
Once we know all the type instances that will be constructed, we create relationship instances that bind the type instances together. If type instances are being destroyed, we remove all relationship instances that reference the type instances. To create the relationships we turn to the member space to identify the configurations of the relationships that should exist between the instances. Where the type members have cardinality greater than one, we have to infer the topology of the relationships. We will discuss how we do this in detail in section XX.

3.13.2.3 Flow
During the flow stage we evaluate flow across all the relationship instances. This stage may add update requests to the change request for instances that were affected by the altered parameter flow. Flow is evaluated by determining the set of instances that have updated settings as a result of the change request.
For each of these, any outgoing settings flows that depend on the modified settings are evaluated and the target nodes added to the set of changed instances. The process continues until the set is empty or the set contains a cycle.

3.13.2.4 Duplicate Detection
The process of duplicate detection matches expanded instances against instances that already exist in the sdm data store. For example, we will detect if another application has installed a shared file. When we detect that an instance already exists, we can take one of several actions depending on the version of the existing instance: a) we can fail the install; b) we can reference count the instance; c) we can upgrade the instance; d) we can install side-by-side.

3.13.2.5 Constraint Evaluation
During the constraint evaluation phase we check that all the constraints in the model will still be valid after the change request has been processed.

3.13.2.6 Request Ordering
We now have a complete list of actions, so we can use the relationships between systems to determine a valid change ordering. We distribute subsets of the ordered set of actions that are machine specific. We should support cross machine synchronization of these machine specific sets.

3.13.2.7 Request Return
Change is carried out by breaking the change requests down into distributable parts based on the hosting relationships that are affected. Once all the parts are completed (or failed), the results are collated in the runtime and a summary returned to the user.

3.13.3 Expansion in Depth
In this section we go into detail on the expansion process for types and relationships.

3.13.3.1 Reference Member Expansion (Discovery)
In the same way that the hosting relationship is responsible for constructing new instances of a type, we also use the hosting relationship to discover existing type instances. The hosting relationship is uniquely placed to do this, as it alone is aware of the way a type instance is represented on a host.
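In outline, the discovery step described in this section might look as follows; all names here are illustrative stand-ins, since the runtime's actual interfaces are not given in this text:

```python
# A toy sketch of discovery. The hosting relationship is modeled as a
# callable that, given a host, reports (type name, key values) pairs for
# the guests it finds there.

class SdmStore:
    """Stand-in for the SDM database, keyed by (type name, key values)."""
    def __init__(self):
        self._known = set()

    def contains(self, type_name, key_values):
        return (type_name, key_values) in self._known

    def add(self, type_name, key_values):
        self._known.add((type_name, key_values))

def discover(hosting_relationship, host, store, change_request):
    """Ask the hosting relationship for the guests it finds on `host`,
    adding construction actions only for instances not already known."""
    for type_name, key_values in hosting_relationship(host):
        if store.contains(type_name, key_values):
            continue  # already tracked: no construction action needed
        change_request.append(("create", type_name, key_values))
        store.add(type_name, key_values)
```

For example, a hosting relationship for a web server might report the sites it finds under that server; only previously unknown sites would produce create actions in the change request.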
When a reference member is marked for discovery, we check to see if the hosting relationship supports discovery. If it does, we pass the host instance to the relationship and ask it to return construction actions for the guest instances that it finds on the host. We use verification to discover that instances no longer exist. This again uses the hosting relationship, to verify the existence of a guest on a host. If the guest no longer exists, then the hosting relationship adds a destruction action to the change request.

3.13.3.2 Non Reference Member Expansion
The runtime handles all non-reference member expansions by simply adding construction or destruction actions for each non-reference member of a type that has already been identified for construction or destruction within the change request.

3.13.3.3 Communication Relationship Expansion
If the operator has not specified an instance of a communication relationship where a communication relationship member exists between two type members, then we expand the communication relationship by assuming a fully connected mesh between the members. What does this mean? If two members are connected in the member space, then all the instances of each member should be able to see each other. Given two such members, the possible instance space topologies are constrained by the cardinality of the members, as shown in the accompanying figure. When we construct communication links, delegate endpoints become transparent, so that we end up with connections that match all the communication relationships that would exist if the delegate endpoints were removed.

3.13.3.4 Hosting Relationship Expansion
Where hosting relationships are ambiguous, we require either the operator or the manager of the hosting relationship to determine the correct topology. If the hosting relationship supports expansion, then we pass the set of hosts and the guest to the relationship manager and ask the manager to return the correct construction action.
If the manager does not support expansion, then we return the change request to the operator so that they can provide more information.

3.13.3.5 Reference Relationship Expansion

3.13.3.6 Containment Relationship Expansion
Containment relationships are never ambiguous, so the runtime can always add the appropriate construction action to the change request.

3.13.3.7 Delegation Relationship Expansion
For expansion, delegation relationships follow the same rules as communication relationships.

3.14 SDM Instance Space
The following section defines an object model for the instance space of the sdm runtime. The instance space is used to track changes to the configuration of the system that are modeled by the sdm. The instance space is structured around versioned changes initiated by change requests. Each instance can have a linear series of versions that represent atomic changes that were made to the running instance. Future versions can also exist in the runtime before they have been propagated to the running system. For this version of the SDM model we only allow linear changes for a given instance. In the future we may allow version branches and introduce a version resolution model. This would allow more than one change to be outstanding against a particular instance. Since we do allow linear versioning, we can load a series of change requests that build on previous changes. This supports prior validation of a sequence of actions that may be taken during a process such as a rolling upgrade.

3.14.1 SDM Instance
All instances derive from sdm instance. They share elements that define values for the settings schema and a list of members that match the members on the instance's definition. They also share a set of attributes that define a unique identifier for the instance, a version number for the instance, a name for the instance, and a flag that indicates whether this version represents the running state of the system.
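The shared shape of all instances described above can be sketched as a simple record; field names here are illustrative, since the actual representation is an XML schema:

```python
from dataclasses import dataclass, field

@dataclass
class SdmInstance:
    """Attributes and elements shared by all SDM instances, per the text."""
    instance_id: str  # unique identifier for the instance
    version: int      # position in the instance's linear version series
    name: str
    is_current: bool  # does this version represent the running state?
    settings: dict = field(default_factory=dict)  # values for the settings schema
    members: dict = field(default_factory=dict)   # member name -> referenced instance ids
```

Each atomic change to a running instance would append a new version of such a record, with at most one version per instance flagged as current.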
A member is used to associate a member of an instance with a set of referenced instances. The members of an instance are defined by the instance's definition. The referenced instances are the instances that have been created for the members, or the instances to which the members are delegated. A member may represent an array, in which case there may be more than one referenced instance.

A change represents a change to the instance state. It associates a change request with the set of affected instances. It also identifies the status of the change (see section XXX) and the change response if the change has been executed.

3.14.3.1 Change Status
A change request can be in one of the following states:

3.14.4 Concrete Object Instance
A concrete object instance represents an instance of the concrete type identified by the type attribute. Since there can be a real world representation for the instance, we need to track whether the instance is in sync with its real world counterpart. We also want to know whether the instance will be online as a result of this change. An online instance should be valid with respect to all its constraints. An offline instance is not visible to the other participants of the communication relationships that it participates in. If the instance is incomplete, then further change requests are required before the instance can be taken online.

3.14.5 Relationship Instances
A relationship instance represents an instance of the identified relationship type. Since relationships have no direct real-world representation, we do not need to keep information about whether the relationship is in sync or online. Also, since relationships are relatively simple, we do not expect them to be incomplete, though they can fail their constraints.

3.14.5.1 Containment Instance
This represents an instance of a containment relationship.

3.14.5.2 Communication Instance
This represents an instance of a communication relationship.
3.14.5.3 Delegation Instance
This represents an instance of a delegation relationship.

3.14.5.4 Hosting Instance
This represents an instance of a hosting relationship.

3.14.5.5 Reference Instance
This represents an instance of a reference relationship.

The Instances group represents the set of instance elements that can exist in an sdminstance file.

3.14.7 Instance References

3.14.7.1 Instance Ref
Instance ref is a simple reference to an instance. It will default to the current version of the instance unless the reference is made in the context of a change request and the instance is affected by the change request.

3.14.7.2 Instance Version Ref
Instance version ref identifies a particular version of an instance.

3.15 Deployment Unit Structure
We need to decide what parts of the SDM model support localization and how we support localization through design and deployment of systems. One approach: we leave localization completely up to individual types to manage; localization is implicit through constraints; localization is not a first class citizen. The other approach: localization is a first class citizen of identity, along with name and version. This means that localization should be taken into account anywhere a reference is made to a type. The second approach has the potential to get very complicated from a design/ui perspective if locale is widely used as a constraint. For example, if endpoints are localized, or if hosts localize their guests, then finding a connection/placement just got a lot more complex. If the second approach is used with b) from the first approach as the suggested mechanism, then the complexity may be easier to manage, but somebody has to identify, package and ship the localized resources.

3.17 Versioning and Change Management

3.17.1 General Comments

Example Computer Environment
Computer environment 2300 includes a general-purpose computing device in the form of a computer 2302.
Computer 2302 can be, for example, a computing device 102 of the figures described above. The system bus 2308 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnects (PCI) bus also known as a Mezzanine bus. Computer 2302 typically includes a variety of computer readable media. Such media can be any available media that is accessible by computer 2302 and includes both volatile and non-volatile media, removable and non-removable media. The system memory 2306 includes computer readable media in the form of volatile memory, such as random access memory (RAM) 2310, and/or non-volatile memory, such as read only memory (ROM) 2312. A basic input/output system (BIOS) 2314, containing the basic routines that help to transfer information between elements within computer 2302, such as during start-up, is stored in ROM 2312. RAM 2310 typically contains data and/or program modules that are immediately accessible to and/or presently operated on by the processing unit 2304. Computer 2302 may also include other removable/non-removable, volatile/non-volatile computer storage media. By way of example, the disk drives and their associated computer-readable media provide non-volatile storage of computer readable instructions, data structures, program modules, and other data for computer 2302.
Although the example illustrates a hard disk 2316, a removable magnetic disk 2320, and a removable optical disk 2324, it is to be appreciated that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like, can also be utilized to implement the exemplary computing system and environment. Any number of program modules can be stored on the hard disk 2316, magnetic disk 2320, optical disk 2324, ROM 2312, and/or RAM 2310, including by way of example, an operating system 2326, one or more application programs 2328, other program modules 2330, and program data 2332. Each of such operating system 2326, one or more application programs 2328, other program modules 2330, and program data 2332 (or some combination thereof) may implement all or part of the resident components that support the distributed file system. A user can enter commands and information into computer 2302 via input devices such as a keyboard 2334 and a pointing device 2336 (e.g., a “mouse”). Other input devices 2338 (not shown specifically) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, and/or the like. These and other input devices are connected to the processing unit 2304 via input/output interfaces 2340 that are coupled to the system bus 2308, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB). A monitor 2342 or other type of display device can also be connected to the system bus 2308 via an interface, such as a video adapter 2344. 
In addition to the monitor 2342, other output peripheral devices can include components such as speakers (not shown) and a printer 2346 which can be connected to computer 2302 via the input/output interfaces 2340. Computer 2302 can operate in a networked environment using logical connections to one or more remote computers, such as a remote computing device 2348. By way of example, the remote computing device 2348 can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and the like. The remote computing device 2348 is illustrated as a portable computer that can include many or all of the elements and features described herein relative to computer 2302. Logical connections between computer 2302 and the remote computer 2348 are depicted as a local area network (LAN) 2350 and a general wide area network (WAN) 2352. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. When implemented in a LAN networking environment, the computer 2302 is connected to a local network 2350 via a network interface or adapter 2354. When implemented in a WAN networking environment, the computer 2302 typically includes a modem 2356 or other means for establishing communications over the wide network 2352. The modem 2356, which can be internal or external to computer 2302, can be connected to the system bus 2308 via the input/output interfaces 2340 or other appropriate mechanisms. It is to be appreciated that the illustrated network connections are exemplary and that other means of establishing communication link(s) between the computers 2302 and 2348 can be employed. In a networked environment, such as that illustrated with computing environment 2300, program modules depicted relative to the computer 2302, or portions thereof, may be stored in a remote memory storage device. 
By way of example, remote application programs 2358 reside on a memory device of remote computer 2348. For purposes of illustration, application programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 2302, and are executed by the data processor(s) of the computer. Various modules and techniques may be described herein in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. An implementation of these modules and techniques may be stored on or transmitted across some form of computer readable media. Computer readable media can be any available media that can be accessed by a computer. By way of example, and not limitation, computer readable media may comprise “computer storage media” and “communications media.” “Computer storage media” includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. 
“Communication media” typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism. Communication media also includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media. Alternatively, portions of the framework may be implemented in hardware or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) or programmable logic devices (PLDs) could be designed or programmed to implement one or more portions of the framework. Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the exemplary appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed invention. Moreover, these claims are exemplary in terms of scope and subject matter. Many other combinations and sub-combinations of the features described herein may later be claimed in patent applications claiming priority to this application.
What makes for effective reading instruction? A new study indicates that an important contributor is integrating material from other subjects into reading instruction. An important international comparison test for reading is the PIRLS, administered to ten-year-olds. Hong Kong ranked 14th among 35 participating countries in the 2001 administration of the test. In 2006, Hong Kong students ranked second among 44 nations. This improvement coincided with significant changes to the reading curriculum instituted by the Curriculum Development Council of the Hong Kong government. These two changes spurred a group of researchers at the University of Hong Kong to analyze the data from the 2006 PIRLS to determine which instructional factors were associated with student reading achievement. They used a single outcome measure — reading achievement on the 2006 PIRLS by the 4,712 Hong Kong students in 144 schools who took the test. There were 39 predictor variables in several categories, including the teacher’s professional preparation, the time spent per week on reading, the types of reading materials used during instruction, the varieties of assessment and their purposes, the teacher’s perception of the class, and instructional strategies. Not surprisingly, many of these variables were themselves correlated, so the researchers conducted a step-wise multiple regression to determine which were the most important. This analysis showed that four of the 39 predictor variables were critical: - the frequency with which the teacher used materials from other subjects in reading instruction. - using assessment to assign grades. - the frequency with which students took a quiz or test after reading. - using assessment to provide data for national or local monitoring. These four sources taken together accounted for about 30% of the total variance in reading scores. Most of this came from the first factor, which accounted for almost two-thirds of the predictive power of the total model.
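For readers unfamiliar with the method, a step-wise (forward) regression adds predictors one at a time, at each step keeping the variable that most increases the variance explained. A minimal sketch with made-up data (not the PIRLS dataset):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    return 1 - residuals @ residuals / ((y - y.mean()) @ (y - y.mean()))

def forward_stepwise(X, y, n_steps):
    """Greedily add the predictor that most improves R^2 at each step."""
    selected, remaining, history = [], list(range(X.shape[1])), []
    for _ in range(n_steps):
        best = max(remaining, key=lambda j: r_squared(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
        history.append(r_squared(X[:, selected], y))
    return selected, history

# Synthetic example: of six correlated-looking predictors, only 0 and 3 matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500)
selected, history = forward_stepwise(X, y, 3)
```

The `history` list plays the role of the cumulative "variance explained" figure reported in the study: in the study, four variables together reached about 30%, with the first variable contributing most of it.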
It should be noted that this study doesn’t account for the rise in Hong Kong scores, nor did it examine other predictor variables that are likely important: variables related to the school, parents, or pupils themselves. And it is always possible that other variables could be important to reading success yet do not appear significant in this analysis because they vary so little, or were not measured. Still, the results are impressive in their clarity, and important because they dovetail so well with theories of reading comprehension, described here. Once students can decode, background knowledge is crucial to reading comprehension. Ensuring that students have wide-ranging knowledge of the world ideally begins at birth, through a rich home environment. Schools must do everything possible to support and expand that knowledge base, and integrating material from other subjects into the reading curriculum is an important step in the right direction. Sources: Cheung, W. M., Tse, S. K., Lam, J. W. I. & Loh, E. K. Y. (2009). Progress in international reading literacy study 2006 (PIRLS): Pedagogical correlates of fourth-grade students in Hong Kong. Journal of Research in Reading, 32, 293-308. * * * Dan Willingham, author of Why Don’t Students Like School? A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for Your Classroom, typically posts on the first and third Mondays of each month.
Although testosterone has been isolated, synthesized and actively experimented with for many decades now, at this point the primary mode of anabolic action of all anabolic/androgenic steroids is understood to be direct activation of the cellular androgen receptor and an increase in protein synthesis. As external anabolic use raises hormone levels, androgen receptor activation increases, and with it the protein synthesis effect. An indirect mechanism is one that is not directly brought about by activation of the androgen receptor, but by the effect androgens may have on other hormones, or on the release of locally acting hormones or growth promoters inside cells. Muscle mass disposition involves not only protein synthesis but other factors as well, such as protein breakdown and tissue nutrient transport. In recent years attention has turned to insulin, a hormone that increases transport of nutrients into muscle cells, as a second, indirect route to muscle growth through reduced protein breakdown. Anti-Glucocorticoid Effect of Testosterone: Opposition to glucocorticoids is one of the most important mechanisms of androgen action; these hormones actually have the exact opposite effect on the muscle cell than androgens, namely sending an order to release stored protein. This process is called catabolism (breakdown of muscle tissue). Muscle growth is achieved when the anabolic effects of testosterone are more pronounced overall than the degenerative effects of cortisol. When steroids are administered, a much higher androgen level can place glucocorticoids at a notable disadvantage. In other words, during AAS use the catabolic effect is reduced, leaving room for more muscle adaptation through protein synthesis. With this effect reduced, fewer cells will be given a message to release protein, and in the long run more will be accumulated. The primary mechanism believed to bring this effect about is androgen displacement of glucocorticoids bound to the glucocorticoid receptor site.
It is also suggested that androgens may indirectly interfere with the DNA binding to the glucocorticoid response element. Testosterone and Creatine Effect: Creatine, as creatine phosphate (CP), plays a crucial role in the manufacture of ATP (adenosine triphosphate), which is a main store of energy for the muscle tissues. As the muscle cells are stimulated to contract, ATP molecules are broken down into ADP (adenosine diphosphate), which releases energy. The cells then undergo a process using creatine phosphate to rapidly restore ADP to its original structure, in order to replenish ATP concentrations. With increased levels of CP available to the cells, ATP is replenished at an enhanced rate and the muscle is both stronger and more enduring. This effect will account for some portion of the early strength increases seen during steroid therapy. Personally, I believe that creatine and anabolic/androgenic steroids used in combination would result in boosted strength and size gains, as well as improved recovery between exercises. Testosterone and IGF-1: Insulin-Like Growth Factor 1 (IGF-1) provides another indirect mechanism of testosterone action on muscle mass. It has been demonstrated that increases in IGF-1 receptor concentrations in skeletal muscles are noted when elderly men are given replacement doses of testosterone. It looks like androgens are necessary for the local production and function of IGF-1 in skeletal muscle cells, independent of circulating growth hormone and IGF-1 levels. IGF-1 does have a minor effect on muscle tissue growth in conjunction with steroid cycles.
Let me start with this: We need poetry. We really do. Poetry promotes literacy, builds community, and fosters emotional resilience. It can cross boundaries that little else can. April is National Poetry Month. Bring some poetry into your hearts, homes, classrooms and schools. Here are five reasons why we need poetry in our schools. Reason #1: Poetry helps us know each other and build community. In this blog, I described how poetry can be used at the start of the year to learn about where students come from and who they are. Poetry can allow kids to paint sketches of their lives, using metaphor, imagery and symbolic language to describe painful experiences, or parts of themselves that they're not ready to share. Poetry allows kids to put language to use: to make it serve a deep internal purpose, to break rules along the way (grammar, punctuation, capitalization -- think of e.e. cummings) and to find voice, representation, community perhaps. Reason #2: When read aloud, poetry is rhythm and music and sounds and beats. Young children -- babies and preschoolers included -- may not understand all the words or meaning, but they'll feel the rhythms, get curious about what the sounds mean and perhaps want to create their own. Contrary to popular belief amongst kids, boys get really into poetry when brought in through rhythm and rhyme. It's the most kinesthetic of all literature: physical and full-bodied, it activates your heart and soul and sometimes bypasses the traps of our minds. The outcome is that poetry moves us. Boys, too. Reason #3: Poetry opens venues for speaking and listening, much-neglected domains of a robust English Language Arts curriculum. Think spoken word and poetry slams. Visit this Edutopia article for more ideas. Shared in this way, poetry brings an audience, an authentic audience, which motivates reluctant writers (or most writers, for that matter). Reason #4: Poetry has space for English Language Learners.
Because poems defy rules, poetry can be made accessible for ELLs -- poems can be easily scaffolded and students can find ways of expressing their voices while being limited in their vocabulary. Furthermore, poetry is universal. ELLs can learn about or read poetry in their primary language, helping them bridge their worlds. (This is not quite so true for genres such as nonfiction text that get a lot of airtime these days.) Reason #5: Poetry builds resilience in kids and adults; it fosters Social and Emotional Learning. A well-crafted phrase or two in a poem can help us see an experience in an entirely new way. We can gain insight that had evaded us many times, that gives us new understanding and strength. William Butler Yeats said this about poetry: "It is blood, imagination, intellect running together...It bids us to touch and taste and hear and see the world, and shrink from all that is of the brain only." Our schools are places of too much "brain only;" we must find ways to surface other ways of being, other modes of learning. And we must find ways to talk about the difficult and unexplainable things in life -- death and suffering and even profound joy and transformation. On this topic, Jeanette Winterson, a poet and writer, says this: "...When people say that poetry is a luxury, or an option, or for the educated middle classes, or that it shouldn't be read in school because it is irrelevant, or any of the strange and stupid things that are said about poetry and its place in our lives, I suspect that the people doing the saying have had things pretty easy. A tough life needs a tough language - and that is what poetry is. That is what literature offers -- a language powerful enough to say how it is. It isn't a hiding place. It is a finding place." A final suggestion about bringing poetry into your lives: don't analyze it, don't ask others to analyze it. Don't deconstruct it or try to make meaning of it. 
Find the poems that wake you up, that make you feel as if you've submerged yourself in a mineral hot spring or an ice bath; find the poems that make you feel (almost) irrational joy or sadness or delight. Find the poems that make you want to roll around in them or paint their colors all over your bedroom ceiling. Those are the poems you want to play with -- forget the ones that don't make sense. Find those poems that communicate with the deepest parts of your being and welcome them in.

If you don't already have these two books, get them now!
- Teaching With Fire: Poetry that Sustains the Courage to Teach
- Leading from Within: Poetry that Sustains the Courage to Lead

Rethinking Schools also has fantastic resources:
- Linda Christensen's Reading, Writing, and Rising Up provides a wealth of ideas to link poetry and social justice teaching.
- Aquí y Allá/Here and There: Exploring Our Lives Through Poetry, by Elizabeth Schlessman. An elementary teacher uses the poetry of Jorge Argueta to help students express their feelings about leaving one country for another.
- "Talking Back to the World: Turning Poetic Lines into Visual Poetry", by Renée Watson. Student poetry about "what raised me" is woven into graphic art.
- "Remembering Mahmoud Darwish" by Naomi Shihab Nye. The Palestinian poet's richly descriptive style resonated with displaced peoples everywhere.
The human race is a very diverse species in terms of the number of races, yet Homo sapiens have not undergone the process of speciation. Since all males and females can reproduce together regardless of race, it has been theorized that the entire human race can be traced back to a common ancestor, most commonly called Mitochondrial Eve. She is believed to have had a medium brown complexion, dark hair, and strong features resembling those of a Native American female. Mitochondrial Eve is suspected to have lived in the region of East Africa. The theory states that as groups began to radiate off from the original group with Mitochondrial Eve, these different groups became isolated from each other. As they became isolated, the genetic makeup of the groups began to change according to the areas where they were located and the ecological niches that the different groups developed. Two African tribes, the Maasai and the Igbo, provide examples. The Maasai tribe is a semi-nomadic tribe found around Kenya and Northern Tanzania. Their niche is in herding and hunting, so the people are taller and more slender in order to tend cattle and hunt other animals. The Igbo tribe is found in Nigeria. The people are of medium stature with skin of darker pigment due to the excessive amount of sun. Their niche is in agriculture; therefore they are not as tall as some tribes like the Maasai. Even more groups became isolated as a result of radiation. Hawaiian people had to become accustomed to dealing with tropical climates. They have a complexion somewhat resembling that of the Native Americans and Mitochondrial Eve. Hawaiian people have more melanin production because of more exposure to the sun. They also lived off the land like many of the other isolated groups and had to be adept at hunting and learning the land. The Alaskan Inuit had little exposure to the sun and developed a very fair complexion with darker hair.
They had to become accustomed to living in freezing temperatures and to fishing and hunting whatever game could be captured because of the rough terrain. As groups began to radiate beyond the continent of Africa, many more features and niches began to change. Take, for instance, the individuals who migrated to America, specifically the Native Americans. They encountered a climate much less intimidating than that of Africa, so their skin produced less melanin. To adapt to the terrain, though, the Native Americans developed distinct, prominent features to withstand the environment and acquired more height. They made a living from hunting, gathering, and learning how to use the land. Now look at a group that radiated to Korea. Even less melanin was produced because the people had to deal with a more humid climate with heavy rains. The people were of average height, and their niche evolved over thousands of years to become geared toward industrialization.
A grade of an Incomplete ("I") may be granted by an instructor only if:
- A Petition for Course Extension is submitted after the final date for withdrawal from the course with a "W" but before the date final grades are due.
- The student is passing at the time of the request.
- The student has satisfactorily completed a substantial part of the coursework (i.e., approximately 2/3).

Even when the above criteria have been met, whether to grant the Incomplete or not is left to the discretion of the instructor. An Incomplete may not be appropriate in courses requiring a high degree of class participation/attendance.
- The Petition for Course Extension is initiated by the student in consultation with the instructor. If the student is incapacitated, the instructor may work with their Dean's Office on behalf of the student.
- The maximum period of time to complete Incomplete coursework is one year. Instructors may set a deadline of less than one year.
- A student will be dropped from all courses for which an incomplete course is a prerequisite if a grade is not submitted prior to the first day of the course's term.
- An "I" grade converts to a grade of "F" if coursework is not completed by the deadline indicated on the Petition for Course Extension. Once an "I" grade has converted to an "F," the "F" may not be revised by the instructor but must be appealed through the Academic Records Revision Committee. NOTE: Courses with approved extensions for a graduating student must be completed and graded within 30 calendar days of the published conferral date.
- Students should not request, nor should an instructor grant, an Incomplete if the student needs to "sit in on" or retake the entire course or the majority of the coursework. Such students should withdraw from the course in order to retake it later. Students who do not officially withdraw from a course must be assigned a grade to reflect the amount of coursework submitted relative to all course requirements.
For example, if a student misses a final exam they will have earned 0 points for the final exam. - Students should consider the potential difficulties in completing Incomplete coursework in a timely manner. - Students should consider the potentially negative implications of an Incomplete on financial aid, scholarship eligibility, visa status, athletic standing, military benefit eligibility, overall academic standing, etc. - Graduating Students should request an Incomplete only under extreme circumstances. Degree conferral will not occur until the “I” is resolved. - Students on Academic Probation should discuss the implications of an Incomplete with their Academic Advisor in advance of submitting the Petition for Course Extension.
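The zero-for-missing-work rule above can be illustrated with a small calculation. The component weights below are hypothetical, invented only for this sketch; the policy itself does not prescribe any weighting:

```python
# Illustration (hypothetical weights): a student who skips the final is
# graded as if the final earned 0 points, rather than receiving an Incomplete.

def course_grade(scores, weights):
    """Weighted average of component scores; missing work is entered as 0."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

# Homework 85%, midterm 78%, final missed (counted as 0):
grade = course_grade([85, 78, 0], [0.4, 0.3, 0.3])
print(round(grade, 1))  # 57.4
```

The point of the sketch is simply that a zero on a large component pulls the overall grade down sharply, which is why the policy directs such students to withdraw and retake the course instead.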
Large dams worldwide have gates within the structure to assist in controlling water flow, and the effectiveness of these gates is largely determined by the performance of the rubber hydraulic seals. Trelleborg seals are available in a variety of rubber compounds and can be supplied with Kevlar® reinforcement, fabric reinforcement or Teflon coatings. Trelleborg Engineered Products operations has a quality system certified to ISO 9001:2000 and is committed to supplying high-quality, durable products which conform to the customer's specified requirements and operating conditions. To ensure superior construction, all corner sections of seals are usually fully moulded, with any vulcanised joints placed in the less stressed straight sections. Trelleborg will design and manufacture rubber seals to meet your unique requirements. The brochure provides technical aspects of rubber hydraulic seals for dams and tunnels, and an extensive profile listing of our seals.

As a result of thermal expansion, creep, shrinkage and traffic loading, the dimensions of structures can vary. To make this movement possible, several joints can be implemented. The dimensions of the joints vary. To keep the joint flush at all times and prevent leakage, an expansion joint is applied. Examples where expansion joints are used are bridges, viaducts, concrete road surfaces, airfield runways and the inner side of tunnels. The expansion joint can be used in old as well as in new constructions.

In general the Omega is designed to allow axial and radial movements and/or rotation of the two bridged structures. Gap closure, an axial movement, will compress the arc of the Omega. An increase in the gap is limited by the circumference of the Omega's arc. The reinforcement of the Omega prevents elongation of the rubber. Vertical movements of the joint cause lateral deformation in the Omega seal.
Rotation around the vertical axis of the structures causes compression in one vertical section and elongation in the other vertical section. To increase the movement capacity, the Omega seal can be mounted with pre-compression.

Many large concrete structures are too big to be poured as one monolithic unit and therefore have a number of construction joints. Two different types of joints can be identified:
- The work joint, which has to be watertight without having to accommodate movement between the adjacent concrete pours.
- The expansion joint, which has to accommodate water pressure as well as the possible movement between the individually poured concrete sections.

In a joint, waterstops are generally used.

Music note seals
Trelleborg Music Note seals are manufactured with either a solid or hollow bulb depending on required load deflection criteria. The solid profile is less prone to compression set, while the hollow profile is more suitable for low hydrostatic pressures. Trelleborg Music Note seals are often used for side seals and can be supplied with a PTFE coating. The bulb on the music note seal is designed to be forced against the seal seat when water pressure is applied. Sealing can be achieved by either bulb deflection or stem deflection. Seals under high compression loads are usually designed with bulb deflection. Stem deflection is more suitable for:
- Low compression loads
- Sealing irregular surfaces
- Large tolerances in the gates.

Hump seals
Trelleborg Hump seals can be manufactured with either a single or double hump and can be supplied with a PTFE coating. Double hump seals are usually installed for sealing against a reversal of head, such as tidal river gates. Hump seals are commonly used for sealing the top edge of submerged vertical-lift gates and radial gates. The seal can be supplied with a hollow profile for low hydrostatic pressures.

Flat seals
Trelleborg Flat seals can be supplied with either flat, chamfered or radiused seal faces.
Chamfered and radiused seals reduce the seal's contact area for ready compression and provide space for the rubber to displace when deflecting. Flat seals are commonly installed as bottom seals. Flat bottom seals on high head gates should project no more than the deflection required to seal (e.g. 3-5mm).

Teflon coated seals
Trelleborg Engineered Products operations has extensive experience in manufacturing rubber seals with Teflon coatings. The PTFE is bonded to the rubber seal surface during the vulcanisation process. The inclusion of Teflon on the sealing surface:
- Significantly reduces the friction coefficient
- Reduces the potential for sticking or "contact bonding" to the seal plate, especially when the seal is under high compression for prolonged periods
- Assists in reducing abrasive wear and increases the life of the seal

The friction coefficient for rubber to metal is typically in the range of 0.6 to 1.4, compared to a friction coefficient for Teflon to metal of typically 0.1. The friction is dependent on the seal hardness (IRHD), the surface finish of the contact face, the average surface contact pressure, the sliding speed, and the wetness/dryness of the seal. Trelleborg recommends carbon-filled PTFE because of its superior U.V. resistance properties.

Speciality seals and dry dock seals
Trelleborg seals for dry docks are usually designed as a lip profile with a steel baseplate vulcanised into the seal. The seal is formed by the water pressure acting on the lip. These seals are designed to respond to large movements in the gate's position on a regular basis. The steel-reinforced baseplate ensures a rigid and watertight seal to the dock wall.
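The friction coefficients quoted above translate directly into sliding force through the classic relation F = μN. A minimal sketch, using a made-up contact load purely for illustration (no seal load figures appear in the brochure):

```python
# Rough comparison of sliding friction for the coefficients quoted above:
# rubber-to-metal ~0.6-1.4 versus PTFE-to-metal ~0.1.
# The 5000 N contact load is a hypothetical figure for illustration only.

def friction_force(mu, normal_force_n):
    """Classic sliding friction: F = mu * N (newtons)."""
    return mu * normal_force_n

load = 5000.0  # N, assumed contact load on the seal
print(friction_force(1.0, load))  # rubber on metal: 5000.0 N
print(friction_force(0.1, load))  # PTFE coating:     500.0 N
```

Under the same load, a PTFE-coated face would see roughly a tenth of the sliding force of plain rubber, which is why the coating reduces gate operating effort and seal wear.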
What are specific types of learning disabilities? A specific learning disability is unique to the individual and can appear in a variety of ways. It may be difficult to diagnose, to determine impact, and to accommodate. Generally speaking, someone may be diagnosed with a learning disability if he or she is of average or above-average intelligence and there is a lack of achievement at age and ability level or there is a large discrepancy between achievement and intellectual ability. An untrained observer may conclude that a person with a learning disability is "lazy" or "just not trying hard enough." He may have a difficult time understanding the large discrepancy between reading comprehension and proficiency in verbal ability. The observer sees only the input and output, not the processing of the information. Deficiencies in the processing of information make learning and expressing ideas difficult or impossible tasks. Learning disabilities usually fall within four broad categories: - Spoken language-listening and speaking - Written language-reading, writing, and spelling - Arithmetic-calculation and concepts - Reasoning-organization and integration of ideas and thoughts A person with a learning disability may have discrepancies in one or all of these categories. The effects of an LD are manifested differently for different individuals and range from mild to severe. Learning disabilities may also be present along with other disabilities, such as mobility or sensory impairments. Often people with Attention-Deficit Disorder (ADD) or Attention-Deficit/Hyperactivity Disorder (ADHD) also have learning disabilities. Specific types of learning disabilities include the following: - Dysgraphia-An individual with Dysgraphia has a difficult time with the physical task of forming letters and words with a pen and paper and has difficulty producing legible handwriting. - Dyscalculia-A person with Dyscalculia has difficulty understanding and using math concepts and symbols. 
- Dyspraxia-Language comprehension of a person with Dyspraxia does not match language production. She may mix up words and sentences while talking. - Nonverbal Learning Disorder-A Nonverbal Learning Disorder is demonstrated by below-average motor coordination, visual-spatial organization, and social skills. - Dyslexia-An individual with Dyslexia may mix up letters within words and words within sentences while reading. He may also have difficulty spelling words correctly while writing; letter reversals are common. Some individuals with Dyslexia may also have a difficult time with navigating and route finding using right/left and/or compass directions. For information about how technology can benefit individuals with learning disabilities, consult Working Together: Computers and People with Learning Disabilities.
from The Century Dictionary and Cyclopedia - n. In zoology: A large genus of common, harmless colubriform serpents; the garter-snakes, so called from their characteristic striped coloration. - n. A genus of cerambycid beetles: synonymous with Rhaphidopsis. - n. A genus of arctiid moths, having as type E. scapulosa from the Transvaal.
Khan Academy for Android Also Khan Academy is included in these Apps collections: More About Khan Academy App Khan Academy Description: You can learn anything. For free. Spend an afternoon brushing up on statistics. Discover how the Krebs cycle works. Learn about the fundamentals of music notation. Get a head start on next semester's geometry fundamentals. Prepare for the SAT, GMAT, LSAT, MCAT, or NCLEX-RN. Or, if you're feeling particularly adventurous, learn how fire-stick farming changes the landscape of Australia. Whether you're a student, teacher, homeschooler, principal, adult returning to the classroom after 20 years, or a friendly alien trying to get a leg up in earthly biology — Khan Academy's learning library is available to you, for free. - Learn anything, for free: Thousands of interactive exercises, videos, and articles at your fingertips. Study math, science, economics, finance, grammar, history, government, politics, and much, much more. - Sharpen your skills: Practice exercises, quizzes, and tests with instant feedback and step-by-step hints. Follow along with what you're learning in school, or practice at your own pace. - Keep learning when you're offline: Bookmark and download your favorite content to watch videos without an internet connection. - Pick up where you left off: Your learning syncs with khanacademy.org, so your progress is always up-to-date. Learn using videos, interactive exercises, and in-depth articles in math (arithmetic, pre-algebra, algebra, geometry, trigonometry, statistics, calculus, linear algebra) science (biology, chemistry, physics), economics, humanities (art history, civics, finance), and more! (For our Indian learners, we have aligned math and science content in our app to NCERT/CBSE curriculum. Study more effectively for your 6-to-12-grade math classes with our quizzes and tests. You'll also build a strong foundation for your boards, CAT, GMAT, IIT-JEE, and other exams.) 
Khan Academy is a 501(c)(3) nonprofit organization, with the mission of providing a free, world-class education for anyone, anywhere.

What's New in Khan Academy 7.5.2
∙ Bug fixes and performance improvements.
There's an old proverb of King Solomon that goes something like this...As iron sharpens iron, so one friend sharpens another. Proverbs something something. (translation, mine) May morning be astir with the harvest of night; Your mind quickening to the eros of a new question. Your eyes seduced by some unintended glimpse that cut right through the surface to a source. May this be a morning of innocent beginning, When the gift within you slips clear Of the sticky web of the personal With its hurt and its hauntings, And fixed fortress corners, A morning when you become a pure vessel For what wants to ascend from silence. May your imagination know The grace of perfect danger, To reach beyond imitation, And the wheel of repetition, Deep into the call of all The unfinished and unsolved Until the veil of the unknown yields And something original begins To stir toward your senses And grow stronger in your heart In order to come to birth In a clear line of form That claims from time A rhythm not yet heard, That calls space to A different shape. May it be its own force field And dwell uniquely Between the heart and the light To surprise the hungry eye By how deftly it fits About its secret loss. As you enter into the rhythms of Fall (and for many - a season of work, study or busyness), may you become "a pure vessel for what wants to ascend from silence." And may your senses be stirred and your light shine brighter as you partake in our church community.
There are many ways to join the Indian Navy. One has the option to join the service as an Engineer, Officer, Sailor or technician, etc. Here in this blog, we will cover the complete details of the INET Exam 2021. The INET exam is conducted by the Indian Navy itself to select officers and engineers. We will go through the complete selection process, the required eligibility criteria, the availability of vacancies and how to apply online, etc. Check out the complete blog below to get all the details regarding the Indian Navy Entrance Test 2021.

INET Exam 2021: The INET Exam is held twice a year (February and June) and is conducted by the Indian Navy itself. The last notification was released in December 2019, but due to the ongoing pandemic the exam could not be held as scheduled. All the candidates who applied for the INET 2019 exam will now appear for the Written Examination in February 2021. Here are some important dates regarding the exam:
- Application form released in November 2019
- Admit Card Download: 28 January 2021
- Written Exam Date: 20 February 2021
- Written Exam Centres: Bangalore, Kolkata, Bhopal, Coimbatore, and Vishakhapatnam, etc.
- INET 2021 result date: 30 March 2021

After this exam, the next vacancy will be available in June 2021, but no official announcement has been made yet regarding the June 2021 notification. The selection process and the syllabus of the Written Exam will be exactly the same as earlier. Here is the complete selection process for INET 2021.

INET 2021 Selection Process: In order to qualify for selection, one will need to clear three phases:
- Written Examination
- SSB Interview
- Medical Test

The Written Exam will be conducted in the following cities: Bangalore, Kolkata, Bhopal, Coimbatore, and Vishakhapatnam, etc.
All the candidates who clear the Written Examination will proceed to the SSB Interview and then a Medical Exam at a Military Hospital. Selection is based on the SSB Interview; it is the toughest phase of selection. The shortlisting of candidates for the SSB Interview is done through the Written Examination. Now let's see the eligibility criteria to apply online, and later we will see the syllabus and exam pattern of the INET 2021 Exam.

Eligibility Criteria for INET 2021: Only Indian citizens are allowed to apply for the INET (Indian Navy Entrance Test) Examination. Here are the criteria for INET 2021:
- Only male candidates are eligible to apply for INET.
- The candidate must have completed a B.Tech (Bachelor of Technology) or BE (Bachelor of Engineering) degree from a government-recognized institute or university.
- The candidate must have scored at least 60% in the graduation degree.
- The candidate's age must not be less than 18 years and not more than 23 years on the date of commencement of the course.
- No relaxation is given to any candidate on the basis of caste, community or religion, etc.
- Height: at least 157 cm. There is some relaxation in height for candidates who belong to hilly areas. For exact details about the relaxation, check the official website of the Indian Navy, because it keeps changing.
- The candidate must be physically fit.
- The candidate's body must be free from diseases.

This is the complete eligibility that is mandatory for applying for the INET vacancy 2021. Now let's see the syllabus of the INET 2021 Written Exam that one has to prepare.

INET 2021 Exam Syllabus: For the Written Examination, one will need to prepare four subjects. Each subject will be for 100 marks and the total number of questions will be 100, with 25 questions from each subject. Each question carries 4 marks, and 1 mark is deducted for each wrong attempt.
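The marking scheme just described (25 questions per subject, +4 marks for a correct answer, -1 for a wrong attempt, 0 for an unattempted question) works out like this:

```python
# Sketch of the INET marking scheme described above.

def subject_score(correct, wrong):
    """Score for one 25-question subject: +4 per correct, -1 per wrong."""
    assert correct + wrong <= 25, "only 25 questions per subject"
    return 4 * correct - 1 * wrong

# e.g. 18 correct, 4 wrong, 3 left unattempted:
print(subject_score(18, 4))  # 68 out of a possible 100
```

Because of the negative marking, blindly guessing every unattempted question only pays off if you can do better than one right answer in five, which is why time management and selective attempting matter in this exam.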
Let's see the subjects that one has to prepare:
- English
- Logical Reasoning & Numerical Ability
- General Science & Mathematical Aptitude
- General Knowledge

English
- Usage of Words
- Sentence Completion and Corrections
- Antonyms and Synonyms
- Parts of Speech
- Direct and Indirect Speech
- Idioms and Phrases
- Active and Passive Voice, etc.

Logical Reasoning & Numerical Ability
- Reasoning and Associative Ability
- Coding and Decoding
- Missing Numbers and Series Completion
- Decimal Fractions
- Ratio and Proportion
- Averages and Volume
- Time and Work
- Speed and Distance
- Market Price
- Cash Price
- Expenditure Problems
- Profit and Loss
- Percentage, Factoring (LCM and HCF)
- Simple Interest
- Compound Interest
- Mensuration Formulas, etc.

General Science & Mathematical Aptitude
- Nature of Matter
- Electricity and its Applications
- Force and Gravitation
- Newton's Laws of Motion
- Work, Energy and Power
- Metals and Non-Metals
- Sound and Wave Motion
- Atomic Structure
- Chemistry: Carbon and its Compounds
- Periodic Table; Acids, Bases & Salts
- Nutrition and Health Physiology
- Human Diseases
- Basic Computer Science
- Arithmetic Ability: Number Systems, Algebra, Basic Trigonometry, Probability and Set Theory

General Knowledge
- History of India
- Geography: Climate / Environment
- Civics: Constitution of India
- Books and Authors
- Defense and Wars
- Important National Facts
- Freedom Movement
- Geographical Neighbors
- Sports and Championships, etc.

The Written Exam is not easy. There are many books available online and offline that can help you cover the entire syllabus and give you better preparation as well. Choose the right study material and then start your preparation.

How to Prepare for the INET Written Exam: The preparation can easily be done at home if you are familiar with competitive examinations.
If you are new to competitive exams, you should join a coaching institute in your city and learn the basic techniques and tips for solving questions in less time, as well as how to manage time in exams, etc. However you prepare, remember these common points:
- Choose the right study material.
- Make a proper study routine.
- Always make short notes while studying.
- Take short breaks while studying.
- Do not over-study.
- Give equal time to each subject.
- Once you are done with the syllabus, start solving previous years' question papers, sample papers, model papers, etc.
- Manage your time properly so that you can attempt the maximum number of questions in the exam.
- Prepare well and do your best in the examination.

How to Apply for the INET 2021 Exam: Application forms are accepted online only. One will need to visit the official website of the Indian Navy and follow the step-by-step procedure below:
- Go to the official website.
- Click on the INET 2021 notification.
- If you are applying for the first time, register first.
- After registration, log in to your account and click on the application form.
- Fill in the form. You will need some scanned documents, such as your high school mark sheet, Aadhaar card, signature and passport-size photograph, etc.
- After all this, click on the submit button and submit your form.
- You will need to pay an application fee of 215/- for General candidates; it is completely free for OBC and SC/ST candidates. You can pay via UPI ID, debit card or credit card, etc.

Download INET 2021 Admit Card: In order to download the INET 2021 admit card, you will need the login ID and password you created while applying online. You can also download the admit card using your name and date of birth, etc. Check the official website and click on the admit card notification. Enter your details and press enter; you will get your admit card on your screen. Take a printout of the admit card.
Carry your admit card to the examination center along with a valid ID proof such as an Aadhaar card, driving license, PAN card, etc.

FAQs for the INET Exam

As per the recent announcement, the exam will be held on 20 February 2021. All the candidates who have applied for the Written Exam can download the admit card from the official website from 28 January 2021 onwards.

No official announcement has been made regarding INET June 2021. As per some news sources, the notification may be released in May 2021.

For the INET entry, a candidate must not be less than 18 years old and not more than 23 years old. No relaxation is given to any candidate on the basis of caste, community or religion, etc.
Gocha Better Excerpt

The Gocha Better Excerpt plugin is all about better control over the excerpts displayed on your website.
- choose the range unit: chars or words
- set your own strings or chars used to define the end of a sentence
- set your own excerpt ending
- tags to leave – these are tags that should not be removed
- includes a .pot language file
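The kind of excerpt control the feature list describes can be sketched roughly as follows. This is a minimal illustration in Python, not the plugin's actual (PHP) code; the function name, defaults, and tag whitelist are all invented for the example:

```python
# Hypothetical sketch of "better excerpt" trimming: a chars/words range unit,
# a custom excerpt ending, and a whitelist of tags that survive stripping.
import re

def make_excerpt(html, limit=20, unit="words", ending="...",
                 keep_tags=("em", "strong")):
    # Strip all tags except the whitelisted ones ("tags to leave").
    keep = "|".join(keep_tags)
    text = re.sub(rf"</?(?!(?:{keep})\b)[a-zA-Z][^>]*>", "", html)
    if unit == "words":
        words = text.split()
        if len(words) <= limit:
            return text
        return " ".join(words[:limit]) + ending
    # unit == "chars"
    return text if len(text) <= limit else text[:limit].rstrip() + ending

print(make_excerpt("<p>Hello <strong>world</strong>, this is a post.</p>",
                   limit=3))
```

A real implementation would also need to handle sentence-end detection (the plugin's custom end-of-sentence strings) and reclose any tags left open by the cut, which this sketch omits.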
We use only Grade A Rocky Mountain antlers. Grade A means these antlers have been collected from natural sheds within a reasonable amount of time. We stay away from the white, chalky-looking antlers, as they have been sitting out in the sun for too long, which makes them brittle. We also sand the edges to make sure there are no dangerous edges that can do damage. Elk antlers are very beneficial for dogs for a few reasons:

They help clean the teeth with all their edges. When dogs chew on this product, it scrapes plaque and tartar from the surface of the teeth.

Long lasting and durable
The whole antlers are geared toward aggressive chewers with jaws that can chomp through anything.
By David Narrett

Narrett examines the conflict of empires and nationalities from various perspectives. He weighs the challenges facing Native Americans as well as the contest among Spanish, French, British, and U.S. interests. In a turbulent era, the Louisiana and Florida borderlands were shaken by tremors from the American Revolutionary War and the French Revolution. By demonstrating pervasive intrigue and subterfuge in borderland rivalries, Narrett shows that U.S. manifest destiny was not a linear or inevitable progression. He offers a fresh interpretation of how events in the Louisiana and Florida borderlands altered the North American balance of power and affected the history of the Atlantic world.

Read Online or Download Adventurism and Empire: The Struggle for Mastery in the Louisiana-Florida Borderlands, 1762-1803 PDF

Similar UK history books

European imperialism was extraordinarily far-reaching: a key world-historical process of the last 500 years. It locked disparate human societies together over a wider area than any previous imperial expansion; it underpinned the repopulation of the Americas and Australasia; it was the precursor of globalization as we now recognize it.

Colin Palmer, one of the foremost chroniclers of twentieth-century British and U.S. imperialism in the Caribbean, here tells the story of British Guiana's struggle for independence. At the center of the story is Cheddi Jagan, who was the colony's first premier following the institution of universal adult suffrage in 1953.

This narrative describes Thatcher's rise from the depths of her prime ministership to a place in modern history. Her support came largely from the Corps of the Royal Marines.

What does it mean to live in the modern world? How different is that world from those that preceded it, and when did we become modern?
In Distant Strangers, James Vernon argues that the world was made modern not by revolution, industrialization, or the Enlightenment. Instead, he shows how in Britain, a place long held to be the crucible of modernity, a new and distinctly modern social condition emerged by the middle of the nineteenth century.
- The Official History of the UK Strategic Nuclear Deterrent: Volume I: From the V-Bomber Era to the Arrival of Polaris, 1945-1964 (Government Official History Series)
- The Private History of the Court of England: by Sarah Green (Chawton House Library: Women's Novels)
- The Nature of the English Revolution: Essays by John Morrill
- Twentieth-Century Diplomacy: A Case Study of British Practice, 1963–1976
- Victory in Europe?: Britain and Germany since 1945 (Studies in Modern History)

Additional resources for Adventurism and Empire: The Struggle for Mastery in the Louisiana-Florida Borderlands, 1762-1803

Adventurism and Empire: The Struggle for Mastery in the Louisiana-Florida Borderlands, 1762-1803 by David Narrett
Fun Facts about Jet Planes

16 Jet Plane Fun Facts
- In the 18th century, Isaac Newton was the first to theorize that a rearward-channeled explosion could propel a machine forward at a great rate of speed.
- A jet propulsion system which used piston-engine exhaust gases to add heat to an otherwise pure air stream compressed by rotating fan blades in a duct was patented by Henri Coanda in 1910.
- A jet aircraft flies much faster at higher altitudes, as high as 33,000–49,000 ft, than a propeller-powered aircraft.
- A jet engine consists of a rotary air compressor powered by a turbine, with the leftover power providing thrust via a propelling nozzle.
- The Italian Caproni Campini N.1 motorjet prototype that flew on August 27, 1940 was the first flight of a jet-engined aircraft to come to popular attention.
- During late World War II, Germany fielded the first operational jet fighter, which was the fastest conventional aircraft of its time.
- The first commercial jet airliner was the de Havilland Comet, produced in 1949 by Britain.
- In 1963, Boeing produced the 727, which remained in widespread use for decades, and in 1969 came out with its first jumbo jet, the 747.
- Modern jet fighters commonly fly in excess of 1,000 mph and up to about 1,600 mph.
- The noise in jet planes is due to shockwaves that form when the exhaust jet interacts with the external air.
- Unmanned scramjets and military fighters are now being designed for stealth and payload.
- The world's fastest manned jet airplane, which flies at about Mach 3.5 (more than 2,000 mph), is the U.S. Air Force's SR-71 Blackbird.
- The famous supersonic commercial jet airplane Concorde made its first commercial flight in 1976. It was produced by Aerospatiale, a French/British consortium.
- Prince Alwaleed bin Talal Al Saud owns the world's most expensive private jet, an Airbus A380, which also has a two-car garage, a stable for horses and camels, and a prayer room that rotates so it always faces Mecca.
- The US makes up 49.7% of the world market for private jets; Europe 20.8%; Asia Pacific 11.8%; Latin and South America 11.6%; Africa and the Middle East: 6.1%. - Some celebrities even fly jet planes themselves, like Tom Cruise (Gulfstream IV), Jimmy Buffett (Dassault Falcon 900), and John Travolta who owns eleven jets, including a Boeing 707. Note: 12 Things You Never Knew about Drones, visit: http://mocomi.com/drones/
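The Mach figures quoted in the list above only translate into mph relative to the local speed of sound, which falls with altitude: roughly 761 mph at sea level versus about 660 mph in the cold stratosphere where the SR-71 cruised. A small conversion sketch (the speed-of-sound values are approximations):

```python
# Convert a Mach number to mph, given the local speed of sound.
# ~660 mph is an approximate speed of sound at SR-71 cruise altitude;
# ~761 mph is the approximate sea-level value.

def mach_to_mph(mach, speed_of_sound_mph=660.0):
    return mach * speed_of_sound_mph

# SR-71 at about Mach 3.5 in the high, cold stratosphere:
print(mach_to_mph(3.5))         # 2310.0 -> "more than 2,000 mph"
# The same Mach number referenced to sea level would read higher:
print(mach_to_mph(3.5, 761.0))  # 2663.5
```

This is why the list's "Mach 3.5 (more than 2,000 mph)" is consistent even though Mach 1 at sea level is itself well over 700 mph.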
Why Do I Love the Letter Y | Sesame Street

Description: This song answers the question: why is "Y" so great? With the letter "Y," you can do yoga in your yard and eat yogurt on a yam. Or yell "Yippee" on a rollercoaster. This resource teaches letter "Y" vocabulary.