query_id (string, 32 chars) | query (string, 6 to 4.09k chars) | positive_passages (list, 1-22 items) | negative_passages (list, 10-100 items) | subset (string, 7 classes) |
---|---|---|---|---|
b23f3e838b7782e4446348826ff8188a | Is it better to buy US stocks on US stock exchanges as a European? | [
{
"docid": "415fd28ed28d8a9133db8d9f3e29968c",
"text": "Liquidity on dual listed equities is rarely the same on both exchanges. More liquidity means you would typically get a better price assuming you execute the trades using the same order types. It's recommended to trade where the liquidity is greater unless your trading method benefits somehow from it being lower. It's important to remember that some ADRs (some European companies listed in US) have ADR fees which vary. USD/EUR transaction fees are low when using a decent broker but you're obviously participating in the currency risk.",
"title": ""
},
{
"docid": "8f65e96af1e26f3449880727069e817d",
"text": "\"No, there are neither advantages nor disadvantages. I'll take on this question from an accounting standpoint. Financial statements, the tools at which the market determines (amongst other things) the value of a stock, are converted at year end to the home currency (see 1.1.3).If Company A has revenue of 100,000 USD and the conversion to EUR is .89, revenue in the European market will be reported as 89,000 EUR. These valuations, along with ratios, analysis, and \"\"expert\"\" opinions determine if a person should own shares in Company A. Now, if we're talking about comparing markets this is a entirely different question. Example: Should I buy stock of Company A, who is in the American market (as an European)? Should I buy stock of Company B, who is in the European market (as an American)? I would recommend this as additional level of diversification of your portfolio to inlcude possible large inflation of either the currency. The possible gains of this foreign exchange may be greater if one or the other currency becomes weak.\"",
"title": ""
}
] | [
{
"docid": "45fcc03a66afb144a4c38e299b8f4796",
"text": "\"Theoretically, it shouldn't matter which one you use. Your return should only depend on the stock returns in SGD and the ATS/SGD exchange rate (Austrian Schillings? is this an question from a textbook?). Whether you do the purchase \"\"through\"\" EUR or USD shouldn't matter as the fluctuations in either currency \"\"cancel\"\" when you do the two part exchange SGD/XXX then XXX/ATS. Now, in practice, the cost of exchanging currencies might be higher in one currency or the other. Likely a tiny, tiny amount higher in EUR. There is some risk as well as you will likely have to exchange the money and then wait a day or two to buy the stock, but the risk should be broadly similar between USD and EUR.\"",
"title": ""
},
{
"docid": "e6c723d9270816257b82bf1b4ecf93d7",
"text": "\"If I buy the one from NSY, is it the \"\"real\"\" Sinopec? No - you are buying an American Depository Receipt. Essentially some American bank or other entity holds a bunch of Sinopec stock and issues certificates to the American exchange that American investors can trade. This insulates the American investors from the cost of international transactions. The price of these ADRs should mimic the price of the underlying stock (including changes the currency exchange rate) otherwise an arbitrage opportunity would exist. Other than that, the main difference between holding the ADR and the actual stock is that ADRs do not have voting rights. So if that is not important to you then for all intents and purposes trading the ADR would be the same as trading the underlying stock.\"",
"title": ""
},
{
"docid": "f18f367b4b8b041cb81a43befb98db03",
"text": "I'm not aware of any method to own US stocks, but you can trade them as contract for difference, or CFDs as they are commonly known. Since you're hoping to invest around $1000 this might be a better option since you can use leverage.",
"title": ""
},
{
"docid": "184b63bf1790b8e69ca079b62aebdbb5",
"text": "Open an account with a US discount online broker, or with a European broker with access to the US market. I think ETRADE allow non-resident accounts, for instance, amongst others. The brokerage will be about $10, and there is no annual fee. (So you're ~1% down out of the gate, but that's not so much.) Brokers may have a minimum transaction value but very few exchanges care about the number of shares anymore, and there is no per-share fee. As lecrank notes, putting all your savings into a single company is not prudent, but having a flutter with fun money on Apple is harmless. Paul is correct that dividend cheques may be a slight problem for non-residents. Apple don't pay dividends so there's no problem in this specific case. More generally your broker will give you a cash account into which the dividends can go. You may have to deal with US tax which is more of an annoyance than a cost.",
"title": ""
},
{
"docid": "28409171ea6205d636f9f30e07fba1f0",
"text": "\"Yes and no. There are two primary ways to do this. The first is known as \"\"cross listing\"\". Basically, this means that shares are listed in the home country are the primary shares, but are also traded on secondary markets using mechanisms like ADRs or Globally Registered Shares. Examples of this method include Vodafone and Research in Motion. The second is \"\"dual listing\"\". This is when two corporations that function as a single business are listed in multiple places. Examples of this include Royal Dutch Shell and Unilever. Usually companies choose this method for tax purposes when they merge or acquire an international company. Generally speaking, you can safely buy shares in whichever market makes sense to you.\"",
"title": ""
},
{
"docid": "edc378b948cee79cb0c04d4cec76667f",
"text": "The NYSE is not the only exchange in the world (or even the only one in the USA). Amazingly, the London stock exchange works on London time, the Shanghai exchange works on Shanghai time and the Australian stock exchange works on Sydney time. In addition futures exchanges work overnight.",
"title": ""
},
{
"docid": "2ebc7fc2fe6982e3c3c583336b0bc7fb",
"text": "There's a possibility to lose money in exchange rate shifts, but just as much chance to gain money (Efficient Market Hypothesis and all that). If you're worried about it, you should buy a stock in Canada and short sell the US version at the same time. Then journal the Canadian stock over to the US stock exchange and use it to settle your short sell. Or you can use derivatives to accomplish the same thing.",
"title": ""
},
{
"docid": "b8bc5ac6fc7eafb3ec03c29d82e651ec",
"text": "\"The London Stock Exchange offers a wealth of exchange traded products whose variety matches those offered in the US. Here is a link to a list of exchange traded products listed on the LSE. The link will take you to the list of Vanguard offerings. To view those offered by other managers, click on the letter choices at the top of the page. For example, to view the iShares offerings, click on \"\"I\"\". In the case of Vanguard, the LSE listed S&P500 ETF is traded under the code VUSA. Similarly, the Vanguard All World ETF trades under the code VWRL. You will need to be patient viewing iShares offerings since there are over ten pages of them, and their description is given by the abbreviation \"\"ISH name\"\". Almost all of these funds are traded in GBP. Some offer both currency hedged and currency unhedged versions. Obviously, with the unhedged version you are taking on additional currency risk, so if you wish to avoid currency risk then choose a currency hedged version. Vanguard does not appear to offer currency hedged products in London while iShares does. Here is a list of iShares currency hedged products. As you can see, the S&P500 currency hedged trades under the code IGUS while the unhedged version trades under the code IUSA. The effects of BREXIT on UK markets and currency are a matter of opinion and difficult to quantify currently. The doom and gloom warnings of some do not appear to have materialised, however the potential for near-term volatility remains so longs as the exit agreement is not formalised. In the long-term, I personally believe that BREXIT will, on balance, be a positive for the UK, but that is just my opinion.\"",
"title": ""
},
{
"docid": "d1a109c26a029ec8504ceeeeb3d37240",
"text": "As other people have said they should register with a broker in the country they reside in that can deal in US stocks, then fill out a W8-BEN form. I have personally done this as I am from the Uk, it's not a very complicated process. I would assume that most US brokers don't allow foreign customers due to the person having to pay tax where they reside and the US brokers don't want to have to keep approximately 200 different tax codes in track.",
"title": ""
},
{
"docid": "1bb0e529a8f9a98d69d5d1581916f030",
"text": "Investors who are themselves Canadian and already hold Canadian dollars (CAD) would be more likely to purchase the TSX-listed shares that are quoted in CAD, thus avoiding the currency exchange fees that would be required to buy USD-quoted shares listed on the NYSE. Assuming Shopify is only offering a single class of shares to the public in the IPO (and Shopify's form F-1 only mentions Class A subordinate voting shares as being offered) then the shares that will trade on the TSX and NYSE will be the same class, i.e. identical. Consequently, the primary difference will be the currency in which they are quoted and trade. This adds another dimension to possible arbitrage, where not only the bare price could deviate between exchanges, but also due to currency fluctuation. An additional implication for a company to maintain such a dual listing is that they'll need to adhere to the requirements of both the TSX and NYSE. While this may have a hard cost in terms of additional filing requirements etc., in theory they will benefit from the additional liquidity provided by having the multiple listings. Canadians, in particular, are more likely to invest in a Canadian company when it has a TSX listing quoted in CAD. Also, for a company listed on both the TSX and NYSE, I would expect the TSX listing would be more likely to yield inclusion in a significant market index—say, one based on market capitalization, and thus benefit the company by having its shares purchased by index ETFs and index mutual funds that track the index. I'll also remark that this dual U.S./Canadian exchange listing is not uncommon when it comes to Canadian companies that have significant business outside of Canada.",
"title": ""
},
{
"docid": "5e8494e54f4125111114c7361174730d",
"text": "\"Am I wrong? Yes. The exchanges are most definitely not \"\"good ole boys clubs\"\". They provide a service (a huge, liquid and very fast market), and they want to be paid for it. Additionally, since direct participants in their system can cause serious and expensive disruptions, they allow only organizations that know what they're doing and can pay for any damages the cause. Is there a way to invest without an intermediary? Certainly, but if you have to ask this question, it's the last thing you should do. Typically such offers are only superior to people who have large investments sums and know what they're doing - as an inexperienced investor, chances are that you'll end up losing everything to some fraudster. Honestly, large exchanges have become so cheap (e.g. XETRA costs 2.52 EUR + 0.0504% per trade) that if you're actually investing, then exchange fees are completely irrelevant. The only exception may be if you want to use a dollar-cost averaging strategy and don't have a lot of cash every month - fixed fees can be significant then. Many banks offer investments plans that cover this case.\"",
"title": ""
},
{
"docid": "65a80f2facea4fe99eb9be9f03da3d0d",
"text": "Does the Spanish market, or any other market in euroland, have the equivalent of ETF's? If so there ought to be one that is based on something like the US S&P500 or Russell 3000. Otherwise you might check for local offices of large mutual fund companies such as Vanguard, Schwab etc to see it they have funds for sale there in Spain that invest in the US markets. I know for example Schwab has something for Swiss residents to invest in the US market. Do bear in mind that while the US has a stated policy of a 'strong dollar', that's not really what we've seen in practice. So there is substantial 'currency risk' of the dollar falling vs the euro, which could result in a loss for you. (otoh, if the Euro falls out of bed, you'd be sitting pretty.) Guess it all depends on how good your crystal ball is.",
"title": ""
},
{
"docid": "55094532cddaab9387ee3ea1019fb387",
"text": "First thing to consider is that getting your hands on an IPO is very difficult unless you have some serious clout. This might help a bit in that department (http://www.sec.gov/answers/ipoelig.htm) However, assuming you accept all that risk and requirements, YES - you can buy stocks of any kind in the US even if you are a foreigner. There are no laws prohibiting investment/buying in the US stock market. What you need is to get an online trading account from a registered brokerage house in the US. Once you are registered, you can buy whatever that is offered.",
"title": ""
},
{
"docid": "feea3c7cd647080a887e72b9affeb790",
"text": "\"Others have mentioned the exchange rate, but this can play out in various ways. One thing we've seen since the \"\"Brexit\"\" vote is that the GBP/USD has fallen dramatically, but the value of the FTSE has gone up. This is partly due to many the companies listed there operating largely outside the UK, so their value is more linked to the dollar than the pound. It can definitely make sense to invest in stocks in a country more stable than your own, if feasible and not too expensive. Some years ago I took the 50/50 UK/US option for my (UK) pension, and it's worked out very well so far.\"",
"title": ""
},
{
"docid": "5ae06451df0a095d66d02dd73776f07a",
"text": "\"Trading on specific ECNs is the easy part - you simply specify the order routing in advance. You are not buying or selling the *exact* same shares. Shares are fungible - so if I simultaneously buy one share and sell another share, my net share position is zero - even if those trades don't settle until T+3. PS \"\"The Nasdaq\"\" isn't really an exchange in the way that the CME, or other order-driven markets are. It's really just a venue to bring market makers together. It's almost like \"\"the internet,\"\" as in, when you buy something from Amazon, you're not buying it from \"\"the internet,\"\" but it was the internet that made your transaction with Amazon possible.\"",
"title": ""
}
] | fiqa |
7cfecda12a4ae570f3780dfcef43699c | Is gold subject to inflation? [duplicate] | [
{
"docid": "edf4fba292caeb83937280fef7ca1934",
"text": "\"The general argument put forward by gold lovers isn't that you get the same gold per dollar (or dollars per ounce of gold), but that you get the same consumable product per ounce of gold. In other words the claim is that the inflation-adjusted price of gold is more-or-less constant. See zerohedge.com link for a chart of gold in 2010 GBP all the way from 1265. (\"\"In 2010 GBP\"\" means its an inflation adjusted chart.) As you can see there is plenty of fluctuation in there, but it just so happens that gold is worth about the same now as it was in 1265. See caseyresearch.com link for a series of anecdotes of the buying power of gold and silver going back some 3000 years. What this means to you: If you think the stock market is volatile and want to de-risk your holdings for the next 2 years, gold is just as risky If you want to invest some wealth such that it will be worth more (in real terms) when you take it out in 40 years time than today, the stock market has historically given better returns than gold If you want to put money aside, and it to not lose value, for a few hundred years, then gold might be a sensible place to store your wealth (as per comment from @Michael Kjörling) It might be possible to use gold as a partial hedge against the stock market, as the two supposedly have very low correlation\"",
"title": ""
},
{
"docid": "500707114934997f55ec17ae6020bf57",
"text": "Gold isn't constant in value. If you look at the high price of $800 in January of 1980 and the low of $291 in 2001, you lost a lot of purchasing power, especially since money in 2001 was worth less than in 1980. People claim gold is a stable store of value but it isn't.",
"title": ""
},
{
"docid": "9c84d0cd8ba4ce0d23663e0591844911",
"text": "Gold is a risky and volatile investment. If you want an investment that's inflation-proof, you should buy index-linked government bonds in the currency that you plan to be spending the money in, assuming that government controls its own currency and has a good credit rating.",
"title": ""
},
{
"docid": "38aa011258eb268a60e1affa22392333",
"text": "No. If you have to ignore a price spike, obviously its value is not constant. Gold is a commodity, just like every other commodity.",
"title": ""
}
] | [
{
"docid": "cf90b0dcaa1f707395818029b671ef11",
"text": "\"Over time, gold has mainly a hedge against inflation, based on its scarcity value. That is, unless finds some \"\"killer app\"\" for it that would also make it a good investment. The \"\"usual\"\" ones, metallurgical, electronic, medicine, dental, don't really do the trick. It should be noted that gold performs its inflation hedge function over a long period of time, say $50-$100 years. Over shorter periods of time, it will spike for other reasons. The latest classic example was in 1979-80, and the main reason, in my opinion, was the Iranian hostage crisis (inflation was secondary.) This was a POLITICAL risk situation, but one that was not unwarranted. An attack on 52 U.S. hostages (diplomats, no less), was potenially an attack on the U.S. dollar. But gold got so pricey that it lost its \"\"inflation hedge\"\" function for some two decades (until about 2000). Inflation has not been a notable factor in 2011. But Mideastern political risk has been. Witness Egypt, Libya, and potentially Syria and other countries. Put another way, gold is less of an investment that a \"\"hedge.\"\" And not just against inflation.\"",
"title": ""
},
{
"docid": "828994998ff09473195549c23b5df865",
"text": "According to the US Mint, the Government does still have a gold reserve stored mostly in Fort Knox in Kentucky, but there is some in New York and Colorado too. Some facts from their site: That last point is an interesting one. They are basically saying, yes we have it, and no you can't see it. Some conspiracy buffs claim no one has been allowed in there to audit how much they have in over 50 years leading them to speculate that they are bluffing. Although the dollar is no longer tied to the gold standard, throwing that much gold into the market would definitely add fuel the volatility of the finance world, which already has it's share of volatility and isn't hungry for more.The impact on the price of the dollar would be quite complicated and hard to predict.",
"title": ""
},
{
"docid": "10e85a6a36f037c99bad011486f28da6",
"text": "\"Monetary policy has always been in play. The Romans frequently debased (that is, increased the alloy/base metal ratio) their coinage and the U.S. went off the gold standard numerous times, especially during wars. Really the gold standard was only adhered to when it was convenient, so historically it did not play the role that Austrians wish it would. Basically just because gold was historically used as currency does not mean that it controlled the money supply. For example, most sovereign transactions were simply recorded by symbolic \"\"money things.\"\"* And even if the gold standard did automatically protect the value of money, inflation is a hell of a lot better than deflation anyway. You want to reward people for putting their money to work, not sticking it in a mattress. *For further information on the history of gold currency and money in general, see section 2 of this article from the LSE: http://www.sciencedirect.com/science/article/pii/S0176268098000159\"",
"title": ""
},
{
"docid": "79752a2b1b328ba97110cab8bb396afd",
"text": "\"1) link? 2) It doesn't matter if they can or do, what matters is if they are *investing* (not trading) in it *more* than banks are investing in businesses. If that is the case (it is) then businesses as a whole will see the inflation first, and commodities will be playing catchup the entire time, but mostly when the investments hit recession. I will invest in the market again after they lose their mal-invested value, In historical terms the best time was to invest in them was 1980, 1938 and 1900, and the best time to get out was 1929 1960's and 2000. But to bet against them right now, with the dollar, is meaningless because the FED is deflating the currency as they go down, so it's like running on a treadmill. By holding silver I am essentially short the market, only difference is instead of holding a devaluing currency (cash) I'm holding a real money which is increasing in value. There's nothing simpler than \"\"investing\"\" in commodities for the long-term, people lose when they are making monthly/daily trades in them. Anyone who bought and held on to gold in 2000 did it for $300, and they've made infinitely more than the majority of people investing in blue chips (because they've lost value) and much more than those who invested in bonds. And that trend isn't going to stop unless the government lets the dollar deflate in which case the dollar will come to gold instead of the other way around. Until they are in equilibrium again. Historically the dow has an average of being 2 ounces of gold, peaking at 50 and trophing at .5. If it hits .5 again like it has everytime this occurs in the past. Then either gold will be $20000 or the dow will be 4000. You pick.\"",
"title": ""
},
{
"docid": "0c8627953291d60451d67d6a78b00468",
"text": "\"The \"\"conventional wisdom\"\" is that you should have about 5% of your portfolio in gold. But that's an AVERAGE. Meaning that you might want to have 10% at some times (like now) and 0% in the 1980s. Right now, the price of gold has been rising, because of fears of \"\"easing\"\" Fed monetary policy (for the past decade), culminating in recent \"\"quantitative easing.\"\" In the 1980s, you should have had 0% in gold given the fall of gold in 1981 because of Paul Volcker's monetary tightening policies, and other reasons. Why did gold prices drop in 1981? And a word of caution: If you don't understand the impact of \"\"quantitative easing\"\" or \"\"Paul Volcker\"\" on gold prices, you probably shouldn't be buying it.\"",
"title": ""
},
{
"docid": "861a9d04974ce6c228e125c840a8f454",
"text": "Mining/discovery of gold can be inflationary -- the Spanish looting of Central America for a few hundred years or the gold rush in the 19th century US are examples of that phenomenon. The difference between printing currency and mining is that you have to ability to print money on demand, while mining is limited to whatever is available to extract at a given time. The rising price of gold may be contributing to increased production, as low-grade ore that wasn't economically viable to work with in the 1980's are now affordable.",
"title": ""
},
{
"docid": "04d4827d726ea7bf03eb32ae11d2012b",
"text": "Typically in a developed / developing economy if there is high overall inflation, then it means everything will rise including property/real estate. The cost of funds is low [too much money chasing too few goods causes inflation] which means more companies borrow money cheaply and more business florish and hence the stock market should also go up. So if you are looking at a situation where industry is doing badly and the inflation is high, then it means there are larger issues. The best bet would be Gold and parking the funds into other currency.",
"title": ""
},
{
"docid": "65ee28372de3872e9a359166613cfa9a",
"text": "Money is no longer backed by gold. It's backed by the faith and credit of the issuing government. A new country,say, will first trade goods for dollars or other currency, so its ownership of gold is irrelevant. Its currency will trade at a value based on supply/demand for that currency. If it's an unstable currency, inflating too quickly, the exchange rate will reflect that as well. More than that your question kind of mixes a number of issues, loosely related. First is the gold question, second, the question of currency exchange rates and they are derived, with an example of a new country. Both interesting, but distinct processes.",
"title": ""
},
{
"docid": "dfafcc92da76fa7f7ae4390603830f17",
"text": "There is inflation, but it's hidden through various mechanisms. What do you call housing price increases and wage declines? What do you call the fed essentially paying down the inflation with free money and prices still pressuring upwards? I get the sense there is a great underlying pressure for inflation to burst out from the fed's free money pressure chamber. For all our sake, I really hope the pressure chamber holds or I'm totally wrong in the first place.",
"title": ""
},
{
"docid": "3f53751a09601e4815ee181201e20979",
"text": "\"Over on Quantitative Finance Stack Exchange, I asked and answered a more technical and broader version of this question, Should the average investor hold commodities as part of a broadly diversified portfolio? In short, I believe the answer to your question is that gold is neither an investment nor a hedge against inflation. Although many studies claim that commodities (such as gold) do offer some diversification benefit, the most credible academic study I have seen to date, Should Investors Include Commodities in Their Portfolios After All? New Evidence, shows that a mean-variance investor would not want to allocate any of their portfolio to commodities (this would include gold, presumably). Nevertheless, many asset managers, such as PIMCO, offer funds that are marketed as \"\"real return\"\" or \"\"inflation-managed\"\" and include commodities (including gold) in their portfolios. PIMCO has also commissioned some research, Strategic Asset Allocation and Commodities, claiming that holding some commodities offers both diversification and inflation hedging benefits.\"",
"title": ""
},
{
"docid": "d242c87b6a5d3359e28cd15a6f25e144",
"text": "\"No, it isn't generally believed that inflation is caused by individual banks printing money. Governments manage money supply through Central Banks (which may, or may not, be independent of the state). There are a number of theories about money supply and inflation (from Monetarist, to Keynesian, and so on). The Quantity Theory of Inflation says that long-term inflation is the result of money-supply but short-term inflation is related to events/local conditions. Short-term inflation is a symptom of economic change. It's like a cough for a doctor. It simply indicates an underlying event. When prices go up it encourages new producers to enter the market, create new supply which will then act to lower prices. In this way inflation is managed by ensuring that information travels throughout the economy. If prices go up for specific goods, then - all things being equal - supply should go up since the increase implies increasing demand. If prices go down then this implies demand has gone down and so producers will reduce supply. Obviously this isn't a perfect relationship. There is \"\"stickiness\"\" which can be caused by a whole bunch of market conditions (from banning of short-selling, to inelasticity of demand/supply). Your question isn't about quantitative easing (which is a state-led way of increasing money-supply and which could increase inflation but is hoped to increase expenditure and investment) so I won't cover that here. The important take-away is that inflation is an essential price signal to investors and business people so that they can assess market cycles. Without it we would end up with vast over- or under-supply and much greater economic disruption.\"",
"title": ""
},
{
"docid": "ac7828370d866a6e91c3a456e08d6155",
"text": "So after you learn some basics about bubbles you might then see that interest rates kept at their lowest since the days they were backed with gold may allow a bubble to form in housing. You know the bond purchases increased real estate prices right? What is it about the magic $2 Trillion that makes you think the FED hit the spot right on?",
"title": ""
},
{
"docid": "931efdb6af74a7feffd7a87fd30575f2",
"text": "Inflation is not applicable in the said example. You are better off paying 300 every month as the balance when invested will return you income.",
"title": ""
},
{
"docid": "7e87e09f896a04c14120e70119a514d9",
"text": "The United States is no longer on a gold standard, and the value of its currency is solely founded on the productivity of its economy. So I don't think there's any practical reason for the United States government to explicitly sell off a lot of gold to force the price to crash. In fact I would expect that the price of gold has very little interest for the Fed, or anyone else in a position of economic power in the government. I believe that we still have large reserves of it, but I have no idea what they are intended for, aside from being a relic of the gold standard. Best guess is that they'll be held on to just in case of an international trend back towards the gold standard, although that is unlikely on any time frame we would care about.",
"title": ""
},
{
"docid": "d3b43cf3295733598b990a5018066188",
"text": "I was being sarcastic in response to you saying that hyperinflation happens every 30-50 years in a finance subreddit.. where the second lesson (right after time value of money) is that past results in general tell us nothing about the future.",
"title": ""
}
] | fiqa |
a5cb40bb97a3118c7ad8d4b16d795c45 | Stock not available at home country nor at their local market - where should I buy it | [
{
"docid": "45fcc03a66afb144a4c38e299b8f4796",
"text": "\"Theoretically, it shouldn't matter which one you use. Your return should only depend on the stock returns in SGD and the ATS/SGD exchange rate (Austrian Schillings? is this an question from a textbook?). Whether you do the purchase \"\"through\"\" EUR or USD shouldn't matter as the fluctuations in either currency \"\"cancel\"\" when you do the two part exchange SGD/XXX then XXX/ATS. Now, in practice, the cost of exchanging currencies might be higher in one currency or the other. Likely a tiny, tiny amount higher in EUR. There is some risk as well as you will likely have to exchange the money and then wait a day or two to buy the stock, but the risk should be broadly similar between USD and EUR.\"",
"title": ""
}
] | [
{
"docid": "db571656437f699d18b3d7941b386abd",
"text": "Any large stockbroker will offer trading in US securities. As a foreign national you will be required to register with the US tax authorities (IRS) by completing and filing a W-8BEN form and pay US withholding taxes on any dividend income you receive. US dividends are paid net of withholding taxes, so you do not need to file a US tax return. Capital gains are not subject to US taxes. Also, each year you are holding US securities, you will receive a form from the IRS which you are required to complete and return. You will also be required to complete and file forms for each of the exchanges you wish to received market price data from. Trading will be restricted to US trading hours, which I believe is 6 hours ahead of Denmark for the New York markets. You will simply submit an order to the desired market using your broker's online trading software or your broker's telephone dealing service. You can expect to pay significantly higher commissions for trading US securities when compared to domestic securities. You will also face potentially large foreign exchange fees when exchaning your funds from EUR to USD. All in all, you will probably be better off using your local market to trade US index or sector ETFs.",
"title": ""
},
{
"docid": "42a6227caae2ab12663e34c5bcc7f38b",
"text": "Check out WorldCap.org. They provide fundamental data for Hong Kong stocks in combination with an iPad app. Disclosure: I am affiliated with WorldCap.",
"title": ""
},
{
"docid": "b891f946fa4bcd62c8d9379a78d169d9",
"text": "I agree that a random page on the internet is not always a good source, but at the same time I will use Google or Yahoo Finance to look up US/EU equities, even though those sites are not authoritative and offer zero guarantees as to the accuracy of their data. In the same vein you could try a website devoted to warrants in your market. For example, I Googled toronto stock exchange warrants and the very first link took me to a site with all the information you mentioned. The authoritative source for the information would be the listing exchange, but I've spent five minutes on the TSX website and couldn't find even a fraction of the information about that warrant that I found on the non-authoritative site.",
"title": ""
},
{
"docid": "351caceff65bf83be90d557d5c8a94f5",
"text": "I stock is only worth what someone will pay for it. If you want to sell it you will get market price which is the bid.",
"title": ""
},
{
"docid": "f744364c976f38ef461e3449e043a277",
"text": "You seem to think that stock exchanges are much more than they actually are. But it's right there in the name: stock exchange. It's a place where people exchange (i.e. trade) stocks, no more and no less. All it does is enable the trading (and thereby price finding). Supposedly they went into mysterious bankruptcy then what will happen to the listed companies Absolutely nothing. They may have to use a different exchange if they're planning an IPO or stock buyback, that's all. and to the shareholder's stock who invested in companies that were listed in these markets ? Absolutley nothing. It still belongs to them. Trades that were in progress at the moment the exchange went down might be problematic, but usually the shutdown would happen in a manner that takes care of it, and ultimately the trade either went through or it didn't (and you still have the money). It might take some time to establish this. Let's suppose I am an investor and I bought stocks from a listed company in NYSE and NYSE went into bankruptcy, even though NYSE is a unique business, meaning it doesn't have to do anything with that firm which I invested in. How would I know the stock price of that firm Look at a different stock exchange. There are dozens even within the USA, hundreds internationally. and will I lose my purchased stocks ? Of course not, they will still be listed as yours at your broker. In general, what will happen after that ? People will use different stock exchanges, and some of them migth get overloaded from the additional volume. Expect some inconveniences but no huge problems.",
"title": ""
},
{
"docid": "c0882afa2daa5a742a7c8776b1dfbe50",
"text": "No, you shouldn't buy it. The advice here is to keep any existing holdings but not make new purchases of the stock.",
"title": ""
},
{
"docid": "3a5e26a54c14df9789647c1dea47ee96",
"text": "There are some brokers in the US who would be happy to open an account for non-US residents, allowing you to trade stocks at NYSE and other US Exchanges. Some of them, along with some facts: DriveWealth Has support in Portuguese Website TD Ameritrade Has support in Portuguese Website Interactive Brokers Account opening is not that straightforward Website",
"title": ""
},
{
"docid": "05d0b4242ad67dfe15d9e25e4266cc40",
"text": "One risk not mentioned is that foreign stock might be thinly traded on your local stock market, so you will find it harder to buy and sell, and you will be late to the game if there is some sudden change in the share price in the original country.",
"title": ""
},
{
"docid": "4f90586bfcfdc4185d30d01836631f40",
"text": "The easiest route for you to go down will be to consult wikipedia, which will provide a comprehensive list of all US stock exchanges (there are plenty more than the ones you list!). Then visit the websites for those that are of interest to you, where you will find a list of holiday dates along with the trading schedule for specific products and the settlement dates where relevant. In answer to the other part of your question, yes, a stock can trade on multiple exchanges. Typically (unless you instruct otherwise), your broker will route your order to the exchange where it can be matched at the most favorable price to you at that time.",
"title": ""
},
{
"docid": "6150cd134f4e7c7e266d5fe0ce92ef87",
"text": "The essential difference b/n ADR and a common share is that ADR do not have Voting rights. Common share has. There are some ADR that would in certain conditions get converted to common stock, but by and large most ADR's would remain ADR's without any voting rights. If you are an individual investor, this difference should not matter as rarely one would hold such a large number of shares to vote on General meeting on various issues. The other difference is that since many countries have regulations on who can buy common shares, for example in India an Non Resident cannot directly buy any share, hence he would buy ADR. Thus ADR would be priced more in the respective market if there is demand. For example Infosys Technologies, an India Company has ADR on NYSE. This is more expensive around 1.5 times the price of the common share available in India (at current exchange rate). Thus if you are able to invest with equal ease in HK (have broker / trading account etc), consider the taxation of the gains in HK as well the tax treatment in US for overseas gains then its recommended that you go for Common Stock in HK. Else it would make sense to buy in US.",
"title": ""
},
{
"docid": "d666c38057c10de0df25b0b819739a26",
"text": "It doesn't matter which exchange a share was purchased through (or if it was even purchased on an exchange at all--physical share certificates can be bought and sold outside of any exchange). A share is a share, and any share available for purchase in New York is available to be purchased in London. Buying all of a company's stock is not something that can generally be done through the stock market. The practical way to accomplish buying a company out is to purchase a controlling interest, or enough shares to have enough votes to bind the board to a specific course of action. Then vote to sell all outstanding shares to another company at a particular fixed price per share. Market capitalization is an inaccurate measure of the size of a company in the first place, but if you want to quantify it, you can take the number of outstanding shares (anywhere and everywhere) and multiply them by the price on any of the exchanges that sell it. That will give you the market capitalization in the currency that is used by whatever exchange you chose.",
"title": ""
},
{
"docid": "899f4d3246f1f739b5e7d07e75a5f20d",
"text": "yes, there does need to be demand. on heavily traded stocks, there is no reason to be concerned. on thinly traded equities, you will want to check the market depth before placing a sell. the company is likely not the one that is buying your shares on the open market.",
"title": ""
},
{
"docid": "90da52d0db0ff30eb04f78eb18a7a3d0",
"text": "While most all Canadian brokers allow us access to all the US stocks, the reverse is not true. But some US brokers DO allow trading on foreign exchanges. (e.g. Interactive Brokers at which I have an account). You have to look and be prepared to switch brokers. Americans cannot use Canadian brokers (and vice versa). Trading of shares happens where-ever two people get together - hence the pink sheets. These work well for Americans who want to buy-sell foreign stocks using USD without the hassle of FX conversions. You get the same economic exposure as if the actual stock were bought. But the exchanges are barely policed, and liquidity can dry up, and FX moves are not necessarily arbitraged away by 'the market'. You don't have the same safety as ADRs because there is no bank holding any stash of 'actual' stocks to backstop those traded on the pink sheets.",
"title": ""
},
{
"docid": "ad0238d88d414fea8b5afbebfdffccf9",
"text": "What I ended up doing was finding where each ticker of Novo was registered (what exchange), then individually looking up the foreign taxation rules of the containing country. Luckily, most companies only have a few tickers so this wasn't too hard in the end.",
"title": ""
},
{
"docid": "9a1ad7c42d95f740cc786d3707e3ce4d",
"text": "You might have to pay a premium for the stocks on the dividend tax–free exchanges. For example, HSBC on the NYSE yields 4.71% versus HSBC on the LSE which yields only 4.56%. Assuming the shares are truly identical, the only reason for this (aside from market fluctuations) is if the taxes are more favorable in the UK versus the US, thus increasing demand for HSBC on the LSE, raising the price, and reducing the yield. A difference of 0.15% in yield is pretty insignificant relative to a 30% versus 0% dividend tax. But a key question is, does your country have a foreign tax credit like the US does? If so you (usually) end up getting that 30% back, just delayed until you get your tax return, and the question of which exchange to buy on becomes not so clear cut. If your country doesn't have such a tax credit, then yes, you'll want to buy on an exchange where you won't get hit with the dividend tax. Note that I got this information from a great article I read several months back (site requires free registration to see it all unfortunately). They discuss the case of UN versus UL--both on the NYSE but ADRs for Unilever in the Netherlands and the UK, respectively. The logic is very similar to your situation.",
"title": ""
}
] | fiqa |
832be3bba6225e04d89c5a1088b954ff | How to invest in Japan's stock market from the UK | [
{
"docid": "6b6e8cff3e2d1fc406184a1a836df5a9",
"text": "Use an exchange traded fund ETF, namely SPDR MSCI Japan EUR Hdg Ucits ETF. It is hedged and can be bought in the UK by this broker State Street Global Advisors on the London Stock Exchange LSE. Link here. Article on JAPAN ETF hedged in Sterling Pound here.",
"title": ""
}
] | [
{
"docid": "a0adec367236de98979284b2f06191c3",
"text": "I'm in the US as well, but some basic things are still the same. You need to trade through a broker, but the need for a full service broker is no longer necessary. You may be able to get by with a web based brokerage that charges less fees. If you are nervous, look for a big name, and avoid a fly by night company. Stick with non-exotic investments. don't do options, or futures or Forex. You may even want to skip shares all together and see if UK offers something akin to an index fund which tracks broad markets (like the whole of the FTSE 100 or the S&P 500) as a whole.",
"title": ""
},
{
"docid": "6e1a49099026facd9c7a976bb9804035",
"text": "I searched for FTSE 100 fund on Yahoo Finance and found POW FTSE RAF UK 100 (PSRU.L), among many others. Google Finance is another possible source that immediately comes to mind.",
"title": ""
},
{
"docid": "2b1a8a2a609b0f853660a8786305f123",
"text": "just pick a good bond and invest all your money there (since they're fairly low risk) No. That is basically throwing away your money and why would you do that. And who told you they are low risk. That is a very wrong premise. What factors should I consider in picking a bond and how would they weigh against each other? Quite a number of them to say, assuming these aren't government bonds(US, UK etc) How safe is the institution issuing the bond. Their income, business they are in, their past performance business wise and the bonds issued by them, if any. Check for the bond ratings issued by the rating agencies. Read the prospectus and check for any specific conditions i.e. bonds are callable, bonds can be retired under certain conditions, what happens if they default and what order will you be reimbursed(senior debt take priority). Where are interest rates heading, which will decide the price you are paying for the bond. And also the yield you will derive from the bond. How do you intend to invest the income, coupon, you will derive from the bonds. What is your time horizon to invest in bonds and similarly the bond's life. I have invested in stocks previously but realized that it isn't for me Bonds are much more difficult than equities. Stick to government bonds if you can, but they don't generate much income, considering the low interest rates environment. Now that QE is over you might expect interest rates to rise, but you can only wait. Or go for bonds from stable companies i.e. GE, Walmart. And no I am not saying you buy their bonds in any imaginable way.",
"title": ""
},
{
"docid": "3167b26b3d85953e30d252c7ae9aa5d5",
"text": "You can look into specific market targeted mutual funds or ETF's. For Norway, for example, look at NORW. If you want to purchase specific stocks, then you'd better be ready to trade on local stock exchanges in local currency. ETrade allows trading on some of the international stock exchanges (in Asia they have Hong Kong and Japan, in Europe they have the UK, Germany and France, and in the Americas they have the US and Canada). Some of the companies you're interested in might be trading there.",
"title": ""
},
{
"docid": "bf3a242bd5867bd1ea8d2adf71e360c9",
"text": "It's called carry-trade. They can borrow from governments that have 0% int rates, exchange it for dollars, and then buy u.s. treasuries. Japan would never ever raise their interest rates as their economy runs on keynesian fumes.",
"title": ""
},
{
"docid": "6f551685dbb152c7bc454f5022cfba94",
"text": "The link provided by DumbCoder (below) is only relevant to UK resident investors and does not apply if you live in Malaysia. I noticed that in a much older question you asked a similar question about taxes on US stocks, so I'll try and answer both situations here. The answer is almost the same for any country you decide to invest in. As a foreign investor, the country from which you purchase stock cannot charge you tax on either income or capital gains. Taxation is based on residency, so even when you purchase foreign stock its the tax laws of Malaysia (as your country of residence) that matter. At the time of writing, Malaysia does not levy any capital gains tax and there is no income tax charged on dividends so you won't have to declare or pay any tax on your stocks regardless of where you buy them from. The only exception to this is Dividend Withholding Tax, which is a special tax taken by the government of the country you bought the stock from before it is paid to your account. You do not need to declare this tax as it his already been taken by the time you receive your dividend. The rate of DWT that will be withheld is unique to each country. The UK does not have any withholding tax so you will always receive the full dividend on UK stocks. The withholding tax rate for the US is 30%. Other countries vary. For most countries that do charge a withholding tax, it is possible to have this reduced to 15% if there is a double taxation treaty in place between the two countries and all of the following are true: Note: Although the taxation rules of both countries are similar, I am a resident of Singapore not Malaysia so I can't speak from first hand experience, but current Malaysia tax rates are easy to find online. The rest of this information is common to any non-US/UK resident investor (as long as you're not a US person).",
"title": ""
},
{
"docid": "432563b151d2e6afcfa8c7f9f577f54b",
"text": "I use and recommend barchart.com. Again you have to register but it's free. Although it's a US system it has a full listing of UK stocks and ETFs under International > London. The big advantage of barchart.com is that you can do advanced technical screening with Stochastics and RS, new highs and lows, moving averages etc. You're not stuck with just fundamentals, which in my opinion belong to a previous era. Even if you don't share that opinion you'd still find barchart.com useful for UK stocks.",
"title": ""
},
{
"docid": "ea024c1c19d8d8a040dd4a8b2cba45b4",
"text": "The Japanese stock market offers a wide selection of popular ETFs tracking the various indices and sub-indices of the Tokyo Stock Exchange. See this page from the Japan Exchange Group site for a detailed listing of the ETFs being offered on the Tokyo exchange. As you have suggested, one would expect that Japanese investors would be reluctant to track the local market indices because of the relatively poor performance of the Japanese markets over the last couple of decades. However, this does not appear to be the case. In fact, there seems to be a heavy bias towards Tokyo indices as measured by the NAV/Market Cap of listed ETFs. The main Tokyo indices - the broad TOPIX and the large cap Nikkei - dominate investor choice. The big five ETFs tracking the Nikkei 225 have a total net asset value of 8.5Trillion Yen (72Billion USD), while the big four ETFs tracking the TOPIX have a total net asset value of 8.0Trillion Yen (68Billion USD). Compare this to the small net asset values of those Tokyo listed ETFs tracking the S&P500 or the EURO STOXX 50. For example, the largest S&P500 tracker is the Nikko Asset Management S&P500 ETF with net asset value of just 67Million USD and almost zero liquidity. If I remember my stereotypes correctly, it is the Japanese housewife that controls the household budget and investment decisions, and the Japanese housewife is famously conservative and patriotic with their investment choices. Japanese government bonds have yielded next to nothing for as long as I can remember, yet they remain the first choice amongst housewives. The 1.3% yield on a Nikkei 225 ETF looks positively generous by comparison and so will carry some temptations.",
"title": ""
},
{
"docid": "e1690ef048e092b7227b71e406ca5b96",
"text": "If you have such a long term investment goal there really is no reason to try time the markets, 1990s market high was nothing compared to 1999s market high which was nothing to 2006 etc and so on(years quoted as example). Also consider cost of opportunity missed by holding back investing your immediately available investment capital and have it sit in a bank account for 18-24months, collecting meager returns instead of a 5-10% potential return for example(which isnt a strech by any means). Now if you re really hell-bent on timing the market, since you re in the UK, if you really want to attempt it, I would pay close attention to Brexit news and talks that are scheduled for 2018 onwards. Any delays on that deal and/or potential bad development may lead to speculation and temporary lows for you to buy in. If thats worth the effort and cost of opportunity mentioned before is up to you.",
"title": ""
},
{
"docid": "c75297b62f73553ec352cda7a9fff1b6",
"text": "\"I've done exactly what you say at one of my brokers. With the restriction that I have to deposit the money in the \"\"right\"\" way, and I don't do it too often. The broker is meant to be a trading firm and not a currency exchange house after all. I usually do the exchange the opposite of you, so I do USD -> GBP, but that shouldn't make any difference. I put \"\"right\"\" in quotes not to indicate there is anything illegal going on, but to indicate the broker does put restrictions on transferring out for some forms of deposits. So the key is to not ACH the money in, nor send a check, nor bill pay it, but rather to wire it in. A wire deposit with them has no holds and no time limits on withdrawal locations. My US bank originates a wire, I trade at spot in the opposite direction of you (USD -> GBP), wait 2 days for the trade to settle, then wire the money out to my UK bank. Commissions and fees for this process are low. All told, I pay about $20 USD per xfer and get spot rates, though it does take approx 3 trading days for the whole process (assuming you don't try to wait for a target rate but rather take market rate.)\"",
"title": ""
},
{
"docid": "686c79bee148b44dfd8d5893636b200c",
"text": "Does this make sense? I'm concerned that by buying shares with post tax income, I'll have ended up being taxed twice or have increased my taxable income. ... The company will then re-reimburse me for the difference in stock price between the vesting and the purchase share price. Sure. Assuming you received a 100-share RSU for shares worth $10, and your marginal tax rate is 30% (all made up numbers), either: or So you're in the same spot either way. You paid $300 to get $1,000 worth of stock. Taxes are the same as well. The full value of the RSU will count as income either way, and you'll either pay tax on the gains of the 100 shares in your RSU our you'll pay tax on gains on the 70 shares in your RSU and the 30 shares you bought. Since they're reimbursing you for any difference the cost basis will be the same (although you might get taxed on the reimbursement, but that should be a relatively small amount). This first year I wanted to keep all of the shares, due to tax reasons and because believe the share price will go up. I don't see how this would make a difference from a tax standpoint. You're going to pay tax on the RSU either way - either in shares or in cash. how does the value of the shares going up make a difference in tax? Additionally I'm concerned that by doing this I'm going to be hit by my bank for GBP->USD exchange fees, foreign money transfer charges, broker purchase fees etc. That might be true - if that's the case then you need to decide whether to keep fighting or decide if it's worth the transaction costs.",
"title": ""
},
{
"docid": "8b97d4bf72e0dd05ddcdc5bb3403b6ae",
"text": "There is no country tag, so I will answer the question generally. Is it possible...? Yes, it's possible and common. Is it wise? Ask Barings Bank whether it's a good idea to allow speculative investing.",
"title": ""
},
{
"docid": "0dbe36e7d333c6d096f12c3665f9261e",
"text": "I think I have a better answer for this since I have been an investor in the stock markets since a decade and most of my money is either made through investing or trading the financial markets. Yes you can start investing with as low as 50 GBP or even less. If you are talking about stocks there is no restriction on the amount of shares you can purchase the price of which can be as low as a penny. I stared investing in stocks when I was 18. With the money saved from my pocket money which was not much. But I made investments on a regular period no matter how less I could but I would make regular investments on a long term. Remember one thing, never trade stock markets always invest in it on a long term. The stock markets will give you the best return on a long term as shown on the graph below and will also save you money on commission the broker charge on every transaction. The brokers to make money for themselves will ask you to trade stocks on short term but stock market were always made to invest on a long term as Warren Buffet rightly says. And if you want to trade try commodities or forex. Forex brokers will offer you accounts with as low as 25 USD with no commissions. The commission here are all inclusive in spreads. Is this true? Can the average Joe become involved? Yes anyone who wants has an interest in the financial markets can get involved. Knowledge is the key not money. Is it worth investing £50 here and there? Or is that a laughable idea? 50 GBP is a lot. I started with a few Indian Rupees. If people laugh let them laugh. Only morons who don't understand the true concept of financial markets laugh. There are fees/rules involved, is it worth the effort if you just want to see? The problem with today's generation of people is that they fear a lot. Unless you crawl you dont walk. Unless you try something you dont learn. The only difference between a successful person and a not successful person is his ability to try, fail/fall, get back on feet, again try untill he succeeds. I know its not instant money, but I'd like to get a few shares here and there, to follow the news and see how companies do. I hear that BRIC (brasil, russia, india and china) is a good share to invest in Brazil India the good thing is share prices are relatively low even the commissions. Mostly ROI (return on investment) on a long term would almost be the same. Can anyone share their experiences? (maybe best for community wiki?) Always up for sharing. Please ask questions no matter how stupid they are. I love people who ask for when I started I asked and people were generous enough to answer and so would I be.",
"title": ""
},
{
"docid": "b61508d8f8e827dbf949055ad91010b6",
"text": "\"Ultimately you are as stuck as all other investors with low returns which get taxed. However there are a few possible mitigations. You can put up to 15k p.a. into a \"\"normal\"\" ISA (either cash or stocks & shares, or a combination) if your target is to generate the depost over 5 years you should maximise the amount you put in an ISA. Then when you come to buy, you cash in that part needed to top up your other savings for a deposit - i.e. keep the rest in for long term savings. The help to buy ISA might be helpful, but yes there is a limit on the purchase price which in London will restrict you. Several banks are offering good interest on limited sums in current accounts - Santander is probably the best you can get 3% (taxed) on up to 20K - this is a good \"\"safe\"\" return. Just open a 123 Account, arrange to pay out a couple of DDs and pay in £500 a month (you can take the £500 straight out again). I think Lloyds and TSB also offer similar but on much smaller ammounts. Be warned this strategy taken to the limit will involve some complexity checking your various accounts each month. After that you will end up trading better returns for greater risk by using more volatile stock market investments rather than cash deposits.\"",
"title": ""
},
{
"docid": "c06c8d85dd054196a8f18a0c3aee6c83",
"text": "\"Shares are for investors. Most of the rich are investors. Unfortunately, the reverse is not true. But if you want to get rich, the first step is to become an investor. (The second is to become a SUCCESSFUL investor. 50 pounds might be too little. Try to start with at least 500 at a time. You can ADD amounts of 50 pounds. There are definitely fees involved. You will \"\"pay for lessons.\"\" But it will be worth it, if you become even a moderately successful investor. As for rules, they'll teach you the rules. Everyone wants your business. People have gotten (modestly) rich, buying shares here and there. One man told me of investing $600 in a company called Limited, and ending up with $12,000 some years later. BRIC is not a \"\"share.\"\" It is an acronym for four countries \"\"of the future.\"\" High risk, high reward here.\"",
"title": ""
}
] | fiqa |
51173ae59f3d25a1f7080292c6faf77d | How to Explain “efficient frontier” to child? | [
{
"docid": "ad8a6813ffead5acedb9417d1db3f382",
"text": "\"I would let them get their hands dirty, learn by practicing. Below you can find a simple program to generate your own efficient frontier, just 29 lines' python. Depending on the age, adult could help in the activity but I would not make it too lecturing. With child-parent relationship, I would make it a challenge, no easy money anymore -- let-your-money work-for-you -attitude, create the efficient portfolio! If there are many children, I would do a competition over years' time-span or make many small competitions. Winner is the one whose portfolio is closest to some efficient portfolio such as lowest-variance-portfolio, I have the code to calculate things like that but it is trivial so build on the code below. Because the efficient frontier is a good way to let participants to investigate different returns and risk between assets classes like stocks, bonds and money, I would make the thing more serious. The winner could get his/her designed portfolio (to keep it fair in your budget, you could limit choices to index funds starting with 1EUR investment or to ask bottle-price-participation-fee, bring me a bottle and you are in. No money issue.). Since they probably don't have much money, I would choose free software. Have fun! Step-by-step instructions for your own Efficient Frontier Copy and run the Python script with $ python simple.py > .datSimple Plot the data with $ gnuplot -e \"\"set ylabel 'Return'; set xlabel 'Risk'; set terminal png; set output 'yourEffFrontier.png'; plot '.datSimple'\"\" or any spreadsheet program. Your first \"\"assets\"\" could well be low-risk candies and some easy-to-stale products like bananas -- but beware, notice the PS. Simple Efficient-frontier generator P.s. do not stagnate with collectibles, such as candies and toys, and retailer products, such as mangos, because they are not really good \"\"investments\"\" per se, a bit more like speculation. The retailer gets a huge percentage, for further information consult Bogleheads.org like here about collectible items.\"",
"title": ""
},
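The 29-line script the passage refers to is not reproduced here. Purely as a stand-in, the following sketch prints (risk, return) pairs for random mixes of two assets so the output can be plotted with the quoted gnuplot command or a spreadsheet; the expected returns, volatilities and correlation are made-up assumptions, not values from the original answer.

    # Stand-in efficient-frontier generator: prints "risk return" pairs for random two-asset weights.
    import math
    import random

    mu = [0.07, 0.02]      # assumed expected returns: a stock-like and a cash-like asset
    sigma = [0.20, 0.01]   # assumed volatilities
    rho = 0.1              # assumed correlation between the two assets

    random.seed(1)
    for _ in range(500):
        w = random.random()                      # weight in the first asset
        ret = w * mu[0] + (1 - w) * mu[1]
        var = (w ** 2 * sigma[0] ** 2
               + (1 - w) ** 2 * sigma[1] ** 2
               + 2 * w * (1 - w) * rho * sigma[0] * sigma[1])
        print(math.sqrt(var), ret)               # x = risk (std dev), y = expected return

Redirect the output to a file and the upper-left edge of the plotted cloud traces the efficient frontier for these two assets.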
{
"docid": "2365d7e4f4e52609e7a63b7d46330c4b",
"text": "I know you really like bananas, but don't you think you would get tired of them after a while? Better stock up on some kiwi and mango just to mix it up a bit. I wouldn't want to risk eating only banana sandwiches, banana ice cream and banana bread for the rest of my life. I have don't think I could take it. Same goes for mango and kiwi, but I think if I had all three I could probably get along just fine.",
"title": ""
}
] | [
{
"docid": "edc8f5acc6acb7c172f0f6631a96b3aa",
"text": "You do know that since the shale gas boom started the cost of energy declined significantly, don't you? Your theory is a bit simplistic and had more than a few holes. GDP growth is not that contingent on energy prices. How do you factor in increasing fuel efficiency in your theory?",
"title": ""
},
{
"docid": "9da812b213cc5c3e9363a2765e600f72",
"text": "This creates high levels of efficiency, but is efficiency really what we need? A robot is very efficient, and increases income, but for who? When a corporation absorbs another, it does so *with the expectation* that redundant jobs will be shed, and that the income from those jobs will go to the shareholders. And so one can see where income inequality has it's roots.",
"title": ""
},
{
"docid": "9fa77761a09cfe9d1742dd2a47672057",
"text": "\"From his other work he certainly buys the truism that the markets are efficient at a 0-th approximation. But not entirely efficient. See e.g. Schleifer \"\"The limits to arbitrage\"\". Most if not all the money made by managed funds is at the expense of customers, not from outperforming the market. Ripping customers' faces off is a lot easier than beating the market.\"",
"title": ""
},
{
"docid": "fb0927a7b7d0b22ddb6786217aef90d2",
"text": "I don't know what you are asking. Can you give me an example of what fits the question? That you use phrases like profit extraction make me think we have a different assumption base, so I think we have to find common syntactic ground before we can exchange ideas in a meaningful way. I would like to do so, though, so I hope you respond.",
"title": ""
},
{
"docid": "3749bd9223d2080c026d8c67c9ac9201",
"text": "\"Translation : Funds managers that use traditionnal methods to select stocks will have less success than those who use artificial intelligence and computer programs to select stocks. Meaning : The use of computer programs and artificial intelligence is THE way to go for hedge fund managers in the future because they give better results. \"\"No man is better than a machine, but no machine is better than a man with a machine.\"\" Alternative article : Hedge-fund firms, Wall Street Journal. A little humour : \"\"Whatever is well conceived is clearly said, And the words to say it flow with ease.\"\" wrote Nicolas Boileau in 1674.\"",
"title": ""
},
{
"docid": "f6565dd0aa33decf3ce5cdb619b40921",
"text": "Another suggestion I heard on the radio was to give the child the difference between the name brand they want, and the store brand they settle on. Then that money can be accumulated as savings. Saving money is as important a feature of the family economy as earning money. Be careful with what you have a child do for reward vs what you have them do as a responsibility. Don't set a dangerous precedent that certain work does not need to be done unless compensation is on the table. You might have a child who relies on external motivations only to do things, which can make school work and future employment hard. I would instead have my child do yard work, but while doing it explain opportunity costs of doing the work yourself vs hiring out. I would show my kid how saving money earns interest, and how that is essentially free money.",
"title": ""
},
{
"docid": "100c16089b98c6da4bdec9e3d52ba91b",
"text": "\"The raw question is as follows: \"\"You will be recommending a purposed portfolio to an investment committee (my class). The committee runs a foundation that has an asset base of $4,000,000. The foundations' dual mandates are to (a) preserve capital and (b) to fund $200,000 worth of scholarships. The foundation has a third objective, which is to grow its asset base over time.\"\" The rest of the assignment lays out the format and headings for the sections of the presentation. Thanks, by the way - it's an 8 week accelerated course and I've been out sick for two weeks. I've been trying to teach myself this stuff, including the excel calculations for the past few weeks.\"",
"title": ""
},
{
"docid": "a65efd5afe866ce7d2d86bb59793098c",
"text": "\"In the UK there is a School Rewards System used in many schools to teach kids and teens about finance and economy. In the UK there is a framework for schools called \"\"Every Child Matters\"\" in which ‘achieving economic well-being’ is an important element. I think is important to offer to offer a real-life vehicle for financial learning beyond the theory.\"",
"title": ""
},
{
"docid": "ffd72d35894bb567068dfa4974d92543",
"text": "Well if your looking to explain inflation to children, I would use this example. Take two fruits they like IE: Apples and Oranges. Give them both 2 of each. Ask them how many of your apples would you give for 1 orange and how many apples would you want to get 1 orange(most likely they will say 1). Now give them 5 more apples each. Then ask them the same question. In economics and finance many things can not be proven, so to tell you what QE will do for a fact can't be said, you can only be told theories. There are to many variables.",
"title": ""
},
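A tiny worked version of the fruit example, with the loose assumption that the "price" of an orange is simply how many of the now-plentiful apples a child would part with for it; this is only meant to make the before/after comparison concrete.

    # Illustrative only: more apples in circulation means each apple buys less orange.
    def apples_per_orange(apples_each, oranges_each):
        return apples_each / oranges_each   # naive scarcity-based "exchange rate"

    print(apples_per_orange(2, 2))   # 1.0 apple per orange before the handout
    print(apples_per_orange(7, 2))   # 3.5 apples per orange after 5 extra apples each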
{
"docid": "1eaa8459652b36589027858764a119e6",
"text": "\"This is the best tl;dr I could make, [original](http://blogs.lse.ac.uk/politicsandpolicy/book-review-after-piketty/) reduced by 93%. (I'm a bot) ***** > In After Piketty: The Agenda for Economics and Inequality, editors Heather Boushey, J. Bradford DeLong and Marshall Steinbaum bring together contributors to reflect on the influence of Thomas Piketty's Capital in the Twenty-First Century and to draw attention to topics less explored in Piketty's analysis. > After Piketty: The Agenda for Economics and Inequality, edited by Heather Boushey, J. Bradford De Long and Marshall Steinbaum, further explores the 'process by which wealth is accumulated' and the 'powerful forces' that shape the divergence. > Even if, as described by Piketty, the process of accumulation and the forces of power that render wealth inequality prove correct - that is, r > g - even then, for Branko Milanovic, high inequality is avoidable. ***** [**Extended Summary**](http://np.reddit.com/r/autotldr/comments/758ftg/after_piketty_the_agenda_for_economics_and/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ \"\"Version 1.65, ~224779 tl;drs so far.\"\") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr \"\"PM's and comments are monitored, constructive feedback is welcome.\"\") | *Top* *keywords*: **Piketty**^#1 **Capital**^#2 **Inequality**^#3 **wealth**^#4 **work**^#5\"",
"title": ""
},
{
"docid": "a7b912d294a04211b50ef946ef32180e",
"text": "Context: assessed project as part of a tech internship in an investment bank, working in small groups, task: to design an app using a financial solution that has an ethical impact. We came up with (what I hope) is the smart sounding idea of an app for impoverished farmers in developing countries who lack the information and market access that larger or unionised farmers would have. The app would use some kind of decision tree (which may grow with machine learning) to assess the best kind of crops to grow given the region and local conditions for max profitability, and would provide access to some kind of derivatives market, and insurance, allowing the farmers a consistent income. Struggling with the specifics of how the business/finance side of things would work. Let's assume the bank builds the app and puts it on the app-store as part of some charitable initiative. 1) Would the farmers go directly to the bank to price and purchase derivatives on their crops? 2) Options/futures/forwards - which is most appropriate, or would it make sense to offer all three? 3) Is there any way this could be adjusted to provide regular payments, with a lump sum at the end? Would this be something the bank chooses to do, or is there an existing financial instrument for this kind of setup? 4) How does this tie in with the commodities markets (or is that what the derivatives already are?) 5) Probably a long-shot but any chance of know-your-customer implementations for countries with terrible infrastructure? 6) (probably a dumb question) Say the farmer sells an option on his crops, would this be sold directly to the bank or a 3rd party who'd be more interested in actually buying the crop? Or is this a far too simplistic view? Basically who would the farmer actually deliver the crops to? 7) Any other issues/oversights? Any ideas to make this sound somewhat viable? It doesn't have to strictly be realistic in any sense, but it also can't be flat-out incorrect! tyvm in advance, edited to make questions clearer",
"title": ""
},
{
"docid": "37f50f2c22c9d006a2e8b44a7fadccb5",
"text": "Thanks for the correction it was just a story my dad would tell me so the details have likely blurred. My point is that even if we could automate all physical jobs there could still be work to do, distribution of resources would likely need to be handled differently then now some sort of utopian communism or something not really the main point though.",
"title": ""
},
{
"docid": "993d8ae63cd270d4ad5880f4866f600e",
"text": "\"No, there's no justification for saying that the resource \"\"needs to be used in a way which is most productive\"\". That's not consistent with either capitalism (which does not take a moral stance, but observes that it goes to the highest bidder) or with social welfare (which is concerned with maintaining a reasonable rate of employment). And we were not discussing govt employment.\"",
"title": ""
},
{
"docid": "0805a7b927cefad4bf4b37891f454293",
"text": "\"A kid can lose everything he owns in a crap shoot and live. But a senior citizen might not afford medical treatment if interest rates turn and their bonds underperform. In modern portfolio theory, risk/\"\"aggression\"\" is measured by beta and you get more return by increasing risk. Risk-adjusted return is measured by the Sharpe ratio and the efficient frontier shows how much return you get for each level of risk. For simplicity, we will assume that choosing beta is the only investment choice you make. You are buying a house tomorrow all cash, you should set aside that much in liquid assets today. (Return = who cares, Beta = 0) Your kids go to college in 5 years, so you invest funds now with a 5 year investment horizon to produce, with a reasonable level of certainty, the needed cash then. (Beta = low) You wish to leave money in your estate. Invest for the highest return with a horizon of your lifetime. (Return = maximum, Beta = who cares) In other words, you set risk based on how important your expenses are now or later. And your portfolio is a weighted average. On paper, let's say you have sold yourself into indentured servitude. In return you have received a paid-up-front annuity which pays dividends and increases annually. For someone in their twenties: This adds up to a present value of $1 million. When young, the value of lifetime remaining wages is high. It is also low risk, you will probably find a job eventually in any market condition. If your portfolio is significantly smaller than $1 million this means that the low risk of future wages pulls down your beta, and therefore: Youth invest aggressively with available funds because they compensate large, low-risk future earnings to meet their desired risk appetite.\"",
"title": ""
},
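A small sketch of the closing argument, keeping the answer's made-up $1 million present value for future wages and adding the further assumption that those wages behave like an asset with a very low beta (0.1 here); all figures are illustrative.

    # Blended beta of "human capital" (future wages) plus a financial portfolio, as a weighted average.
    def blended_beta(wages_pv, wages_beta, portfolio_value, portfolio_beta):
        total = wages_pv + portfolio_value
        return (wages_pv * wages_beta + portfolio_value * portfolio_beta) / total

    # A young saver with $50k invested very aggressively (beta 2.0):
    print(blended_beta(1_000_000, 0.1, 50_000, 2.0))   # ~0.19 overall - still low risk
    # The same person holding only cash-like assets (beta 0.0):
    print(blended_beta(1_000_000, 0.1, 50_000, 0.0))   # ~0.10 - barely different

This is the sense in which the young can invest their available funds aggressively without pushing their overall risk far above that of their future earnings.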
{
"docid": "5da547d40bc58ce938dde6001001f3f8",
"text": "Growth and efficiency can occur independently of each other. For instance, if an economy consists of one inefficient business and then a second more efficient business opens to compete agains the first the overall efficiency increases while the economy grows. New industries tend to be inefficient at the beginning (since initiation is more important than optimisation) and then become more efficient over time. Agriculture is an amazingly efficient business if you consider how many people now produce the amount of food we consume in comparison to only 100 years ago. Plus, efficiency is not only about producing extra widgets. You could produce the same number of widgets for lower cost. Outsourcing to China (taking advantage of their lower cost of production) increases the efficiency of the US economy, but also increases the efficiency of the Chinese economy (since extra work is created producing more things). Lower costs in the US lead to increased investment in other production. Increased production in China leads to the rising wages there. Growth can be achieved in both places for very different reasons. So, no, growth doesn't have to come about through less efficiency.",
"title": ""
}
] | fiqa |
c730a4a23ad7a70b6b223c7321a6d0da | How to decide if I should take my money with me or leave it invested in my home country? | [
{
"docid": "a6840bb77480d78d9db4803102ba102e",
"text": "I will attempt to answer three separate questions here: The standard answer is that an emergency fund should not be in an investment that can lose value. The safest course of action is to put it in a savings account or other very low risk investment somewhere. This question becomes: can a reasonable and low risk investment in Sweden be comparable to or better than a low risk investment in Brazil? Inflation in Brazil has averaged a little less than 6% over the last 10 years with a recent spike up above 8%. A cursory search indicates interest rates on savings accounts in Brazil are outpacing inflation so you might still expect a positive return on money in a savings account there. By contrast, Sweden's inflation rate has been around 1% over the last 10 years and has hovered around 0 or even deflation in recent years. Swedish interest rates for savings accounts right now are very low, nearly 0%. Putting money in a savings account in Sweden would likely hold its value or lose a slight amount of value. Based on this, you might be better off leaving your emergency fund invested in BRL in Brazil. The answer to this a little unclear. The Brazilian stock market has been all over the place in the last 10 years, with a slight downard trend in recent years. In comparison, Sweden's stock market has shown fairly consistent growth in spite of the big dip in 2008. Given this, it seems like the fairest comparison would your current 13% ROI investment in Brazil vs. a fund or ETF that tracks the Swedish stock market index. If we assume a consistent 13% ROI on your investment in Brazil and a consistent inflation rate of 6%, your adjusted ROI there would be around 7% per year. The XACT OMS30 ETF that tracks the Swedish OMS 30 Index has a 10 year annualized return of 9.81%. If you subtract 0.8% inflation, you get an adjusted ROI 9%. Based on this, Sweden may be a safer place for longer term, moderate risk investments right now.",
"title": ""
},
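A quick check of the arithmetic above, using the figures quoted in the answer (13% nominal vs roughly 6% Brazilian inflation, and 9.81% vs roughly 0.8% Swedish inflation). The simple subtraction used in the text is shown next to the exact Fisher-style adjustment; nothing here is new data.

    # Real (inflation-adjusted) return two ways: simple subtraction vs the exact adjustment.
    def real_return(nominal, inflation):
        approx = nominal - inflation
        exact = (1 + nominal) / (1 + inflation) - 1
        return approx, exact

    print(real_return(0.13, 0.06))      # Brazil example: (0.07, ~0.066)
    print(real_return(0.0981, 0.008))   # Sweden example: (~0.0901, ~0.0894)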
{
"docid": "7a7f9c6a3108ffd71e5572a253d49803",
"text": "The key is whether you plan to stay in Sweden forever, or plan to move back to Brazil after completion of 2 years. If you have not decided, best is stay invested in Brazil. Generally markets factor in currency prices so if you move the money into Krona and try and move it back it would in ideal market be more or less same. In reality it may be more or less and can't be predicted.",
"title": ""
}
] | [
{
"docid": "28fed650e9e4cc59a4dba20e8648f303",
"text": "Typically, the higher interest rates in local currency cover about the potential gain from the currency exchange rate change - if not, people would make money out of it. However, you only know this after the fact, so either way you are taking a risk. Depending on where the local economy goes, it is more secure to go with US$, or more risky. Your guess is as good as anyone. If you see a chance for a serious meltdown of the local economy, with 100+% inflation ratios and possibly new money, you are probably better off with US$. On the other hand, if the economy develops better than expected, you might have lost some percentage of gain. Generally, investing in a more stable currency gets you slightly less, but for less risk.",
"title": ""
},
{
"docid": "44b8a72d907e3394b395de649fd6c6d4",
"text": "\"If you \"\"have no immediate plans for the money and will probably not return to Switzerland for a long time or at all\"\" then it might be best just to exchange the money so then you can use/invest it in the UK. Maybe keep a bill or two for memory-sake - I do that whenever I travel to a foreign country.\"",
"title": ""
},
{
"docid": "bfd53d833372fd0defb4861f75c8925e",
"text": "If your country of residence is going to be Germany, it is advisable to move money to Germany at the earliest opportunity. It is hard to predict what will happen in future, i.e. whether Reais will rise or fall compared to Euro. The question of whether to leave the funds in Brazil or not, should be looked at: If you had money in Euro, would you have moved it to Brazil or kept it in Germany?",
"title": ""
},
{
"docid": "22eb978738fd1c98a3ff89e48dc890fb",
"text": "One way of looking at this (just expanding on my comment on Dheer's answer): If the funds were in EUR in Germany already and not in the UK, would you be choosing to move them to the UK (or a GBP denominated bank account) and engage in currency speculation, betting that the pound will improve? If you would... great, that's effectively exactly what you're doing: leave the money in GBP and hope the gamble pays off. But if you wouldn't do that, well you probably shouldn't be leaving the funds in GBP just because they originated there; bring them back to Germany and do whatever you'd do with them there.",
"title": ""
},
{
"docid": "35ed04b2dace3b1397574bc03dc60917",
"text": "\"As for the letting the \"\"wise\"\" people only make the decisions, I guess that would be a bit odd in the long run. Especially when you get more experienced or when you don't agree with their decision. What you could do, is make an agreement that always 3/4 (+/-) of the partners must agree with an investment. This promotes your involvement in the investments and it will also make the debate about where to invest more alive, fun and educational). As for the taxes I can't give you any good advice as I don't know how tax / business stuff works in the US. Here in The Netherlands we have several business forms that each have their own tax savings. The savings mostly depend on the amount of money that is involved. Some forms are better for small earnings (80k or less), other forms only get interesting with large amounts of money (100k or more). Apart from the tax savings, there could also be some legal / technical reasons to choose a specific form. Again, I don't know the situation in your country, so maybe some other folks can help. A final tip if your also doing this for fun, try to use this investment company to learn from. This might come in handy later.\"",
"title": ""
},
{
"docid": "aa90a5bbfd6d0baf7ace26b24986c434",
"text": "\"The topic you are apparently describing is \"\"safe withdrawal rates\"\", more here. Please, note that the asset allocation is crucial decision with your rates. If you continue to keep a lot in cash, you cannot withdraw too much money \"\"to live and to travel\"\" because the expected return from cash is too low in the long run. In contrast, if you moved to more sensible decision like 30% bonds and 70% world portfolio -- the rates will me a lot different. As you are 30 years old, you could pessimist suppose to live next 100 years -- then your possible withdrawal rates would be much lower than let say over 50 years. Anyway besides deciding asset allocation, you need to estimate the time over which you need your assets. You have currently 24% in liquid cash and 12% in bonds but wait you use the word \"\"variety of funds\"\" with about 150k USD, what are they? Do you have any short-term bonds or TIPS as inflation hedge? Do you miss small and value? What is your sector allocation between small-med-large and value-blend-growth? If you are risk-averse, you could add some value small. Read the site, it does much better job than any question-answer site can do (the link above).\"",
"title": ""
},
{
"docid": "7e2700c8f97122b868a4a0ebfbcc9257",
"text": "Which of these two factors is likely to be more significant? There is long term trend that puts one favourable with other. .... I realise that I could just as easily have lost 5% on the LSE and made 5% back on the currency, leaving me with my original investment minus various fees; or to have lost 5% on both. Yes that is true. Either of the 3 scenarios are possible. Those issues aside, am I looking at this in remotely the right way? Yes. You are looking at it the right way. Generally one invests in Foreign markets for;",
"title": ""
},
{
"docid": "89e762cfa1ea779ab51e8ebebce04405",
"text": "There contracts called an FX Forwards where you can get a feel for what the market thinks an exchange rate will be in the future. Now exchange rates are notoriously uncertain, but it is worth noting that at current prices market believes your Krona will be worth only 0.0003 Euro less three years from now than it is worth now. So, if you are considering taking money out of your investments and converting it to Euro and missing out on three years of dividends and hopefully capital gains its certainly possible this may work out for you but this is unlikely. If you are at all uncertain that you will actually move this is an even worse idea as paying to convert money twice would be an additional expense on top of the missed returns. There are FX financial products (futures and forwards) where you can get exposure to FX without having to put the full amount down. This could help hedge your house value but this can be extremely expensive over time for individual investors and would almost certainly not work in your favor. Something that could help reduce your risk a bit would be to invest more heavily in European even Irish (and British?) stocks which will move along with the currency and economy. You can lose some diversification doing this, but it can help a little.",
"title": ""
},
{
"docid": "8400613fe1604536e0f9484699465382",
"text": "You should check this with a tax accountant or tax preparation expert, but I encountered a similar situation in Canada. Your ISA income does count as income in a foreign country, and it is not tax exempt (the tax exemption is only because the British government specifically says so). You would need to declare the income to the foreign government who would almost certainly charge you tax on it. There are a couple of reasons why you should probably keep the funds in the ISA, especially if you are looking to return. First contribution limits are per year, so if you took the money out now you would have to use future contribution room to put it back. Second almost all UK savings accounts deduct tax at source, and its frankly a pain to get it back. Leaving the money in an ISA saves you that hassle, or the equal hassle of transferring it to an offshore account.",
"title": ""
},
{
"docid": "0a493da20b1cbd404298095c658da479",
"text": "My 0,02€ - I probably live in the same country as you. Stop worrying. The Euro zone has a 100.000€ guaranty deposit. So if any bank should fail, that's the amount you'll receive back. This applies to all bank accounts and deposits. Not to any investments. You should not have more than 100.000€ in any bank. So, lucky you, if you have more than that money, divide between a number of banks. As for the Euro, there might be an inflation, but at this moment the USA and China are in a currency battle that 'benefits' the Euro. Meaning you should not invest in dollars or yuan at this time. Look for undervalued currency to invest in as they should rise against the Euro.",
"title": ""
},
{
"docid": "4d9f05f39288a85e40d0d2571f7e15c5",
"text": "\"You are in your mid 30's and have 250,000 to put aside for investments- that is a fantastic position to be in. First, let's evaluate all the options you listed. Option 1 I could buy two studio apartments in the center of a European capital city and rent out one apartment on short-term rental and live in the other. Occasionally I could Airbnb the apartment I live in to allow me to travel more (one of my life goals). To say \"\"European capital city\"\" is such a massive generalization, I would disregard this point based on that alone. Athens is a European capital city and so is Berlin but they have very different economies at this point. Let's put that aside for now. You have to beware of the following costs when using property as an investment (this list is non-exhaustive): The positive: you have someone paying the mortgage or allowing you to recoup what you paid for the apartment. But can you guarantee an ROI of 10-15% ? Far from it. If investing in real estate yielded guaranteed results, everyone would do it. This is where we go back to my initial point about \"\"European capital city\"\" being a massive generalization. Option 2 Take a loan at very low interest rate (probably 2-2.5% fixed for 15 years) and buy something a little nicer and bigger. This would be incase I decide to have a family in say, 5 years time. I would need to service the loan at up to EUR 800 / USD 1100 per month. If your life plan is taking you down the path of having a family and needed the larger space for your family, then you need the space to live in and you shouldn't be looking at it as an investment that will give you at least 10% returns. Buying property you intend to live in is as much a life choice as it is an investment. You will treat the property much different from the way something you rent out gets treated. It means you'll be in a better position when you decide to sell but don't go in to this because you think a return is guaranteed. Do it if you think it is what you need to achieve your life goals. Option 3 Buy bonds and shares. But I haven't the faintest idea about how to do that and/or manage a portfolio. If I was to go down that route how do I proceed with some confidence I won't lose all the money? Let's say you are 35 years old. The general rule is that 100 minus your age is what you should put in to equities and the rest in something more conservative. Consider this: This strategy is long term and the finer details are beyond the scope of an answer like this. You have quite some money to invest so you would get preferential treatment at many financial institutions. I want to address your point of having a goal of 10-15% return. Since you mentioned Europe, take a look at this chart for FTSE 100 (one of the more prominent indexes in Europe). You can do the math- the return is no where close to your goals. My objective in mentioning this: your goals might warrant going to much riskier markets (emerging markets). Again, it is beyond the scope of this answer.\"",
"title": ""
},
{
"docid": "d4cf54d625f0e9e8aa173e10c8e25d23",
"text": "You would need to look at all aspects: - Current Rate of Interest in US compare to China - Current Exchange rate and the rate in Future when you want the money back - Any tax / Regulatory implications of keeping the money in US - Any furture regulations that may hamper your access to these funds If you are planning to stay in China and at some point in time want to get the money back, In my opinion it would make more sense to do it today like you are doing, rather than take the risk of exchange rate and regulations. Further the current low risk returns in US are near zero. The inflation in US is of no concern to you. On the other hand you have a decent return in China. If you know that at some point in future you would need USD [either moving to US, or large purchase in USD], then it would make sense to keep the funds in USD.",
"title": ""
},
{
"docid": "62769608f166b86eac37da984ac5e9f8",
"text": "\"Nobody has mentioned your \"\"risk tolerance\"\" and \"\"investment horizon\"\" for this money. Any answer should take into account whether you can afford to lose it all, and how soon you'll need your investment to be both liquid and above water. You can't make any investment decision at all and might as well leave it in a deposit-insured, zero-return account until you inderstand those two terms and have answers for your own situation.\"",
"title": ""
},
{
"docid": "d51a448fad7717083cd1dff308d57a4c",
"text": "\"I agree with Grade 'Eh' Bacon's answer, but there are a couple of ideas that are relevant to your particular situation: If I were you, I would invest at least half of the cash in growth ETFs because you're young enough that market variability doesn't affect you and long term growth is important. The rest should be invested in safer investments (value and dividend ETFs, bonds, cash) so that you have something to live off in the near term. You said you wanted to invest ethically. The keyword to search is \"\"socially responsible ETFs\"\". There are many, and if this is important to you, you'll have to read their prospectus to find one that matches your ethics. Since you're American, the way I understand it, you need to file taxes on income; selling stocks at a gain is income. You want to make sure that as your stocks appreciate, you sell some every year and immediately rebuy them so that you pay a small tax bill every year rather than one huge tax bill 20 years from now. Claiming about $20600 of capital gains every year would be tax free assuming you are not earning any other money. I would claim a bit more in years where you make a lot. You can mitigate your long term capital gains tax exposure by opening a Roth IRA and maxing that out. Capital gains in the Roth IRA are not taxable. Even if you don't have income from working, you can have some income if you invest in stocks that pay dividends, which would allow you to contribute to a Roth IRA. You should figure where you're going to be living because you will want to minimize the currency risk of having your money in USD while you're living abroad. If the exchange rate were to change by a lot, you might find yourself a lot poorer. There are various hedging strategies, but the easiest one is to invest some of your money in securities of the country you'll be living in. You should look into how you'll be converting money into the foreign currency. There are sometimes way of minimizing the spread when converting large amounts of money, e.g., Norbert's gambit. Shaving off 1.5% when exchanging $100k saves $1500.\"",
"title": ""
},
{
"docid": "b0e4bd48a4341838e9c01b29e8b6da44",
"text": "\"Gold has value because for the most of the history of mankind's use of money, Gold and Silver have repeatedly been chosen by free markets as the best form of money. Gold is durable, portable, homogeneous, fungible, divisible, rare, and recognizable. Until 1971, most of the world's currencies were backed by Gold. In 1971, the US government defaulted on its obligation to redeem US Dollars (by which most other currencies were backed) in Gold, as agreed to by the Bretton Woods agreement of 1944. We didn't choose to go off the Gold Standard, we had no choice - Foreign Central Banks were demanding redeption in Gold, and the US didn't have enough - we inflated too much. I think that the current swell of interest in Gold is due to the recent massive increase in the Federal Reserve's balance sheet, plus the fast growing National debt, plus a looming Social Security / Medicare crisis. People are looking for protection of their savings, and they wish to \"\"opt-out\"\" of the government bail-outs, government deficits, government run health-care, and government money printing. They are looking for a currency that doesn't have a counter-party. \"\"Gold is money and nothing else\"\" - JP Morgan \"\"In the absence of the gold standard, there is no way to protect savings from confiscation through inflation. There is no safe store of value. If there were, the government would have to make its holding illegal, as was done in the case of gold. If everyone decided, for example, to convert all his bank deposits to silver or copper or any other good, and thereafter declined to accept checks as payment for goods, bank deposits would lose their purchasing power and government-created bank credit would be worthless as a claim on goods. The financial policy of the welfare state requires that there be no way for the owners of wealth to protect themselves. This is the shabby secret of the welfare statists' tirades against gold. Deficit spending is simply a scheme for the confiscation of wealth. Gold stands in the way of this insidious process. It stands as a protector of property rights. If one grasps this, one has no difficulty in understanding the statists' antagonism toward the gold standard.\"\" - Alan Greenspan\"",
"title": ""
}
] | fiqa |
83e91ac3632312f41f9f59d70738568a | Iraqi Dinars. Bad Investment, or Worst Investment? | [
{
"docid": "b641857f44222cf4077614e121bfed37",
"text": "\"Iraq is a US vassal/puppet state. I'm not sure what 500 South Vietnamese Dong were worth in 1972, but today the paper currency is worth $10 in mint condition. I'd suggest blackjack or craps as an alternate \"\"investment\"\".\"",
"title": ""
},
{
"docid": "40f4b295402b38de190ba9198138eea9",
"text": "\"Currency, like gold and other commodities, isn't really much of an investment at all. It doesn't actually generate any return. Its value might fluctuate at a different rate than that of the US dollar or Euro, but that's about it. It might have a place as a very small slice of a basket of global currencies, but most US / European households don't actually need that sort of basket; it's really more of a risk-management strategy than an investment strategy and it doesn't really reflect the risks faced by an ordinary family in the US (or Europe or similar). Investments shouldn't generally be particularly \"\"exciting\"\". Generally, \"\"exciting\"\" opportunities mean that you're speculating on the market, not really investing in it. If you have a few thousand dollars you don't need and don't mind losing, you can make some good money speculating some of the time, but you can also just lose it all too. (Maybe there's a little room for excitement if you find amazing deals on ordinary investments at the very bottom of a stock market crash when decent, solid companies are on sale much cheaper than they ordinarily are.)\"",
"title": ""
},
{
"docid": "b4940a56597daa47fcd9f02797c22a8e",
"text": "\"Once a currency loses value, it never regains it. Period. Granted there have been short term periods of deflation, as well as periods where, due to relative value fluctuation, a currency may temporarily gain value against the U.S. dollar (or Euro, Franc, whatever) but the prospect of a currency that's lost 99.99% of its value will reclaim any of that value is an impossibility. Currency is paper. It's not stock. It's not a hard commodity. It has no intrinsic value, and no government in history has ever been motivated to \"\"re-value\"\" its currency. Mind you, there have been plenty of \"\"reverse splits\"\" where a government will knock off the extraneous zeroes to make handling units of the currency more practical.\"",
"title": ""
}
] | [
{
"docid": "56430c0f9c7afce78e726fbb7b8e8cfc",
"text": "\"For real. AAA treasury bonds are used a safe investment vehicle for the reason of \"\"its the US government, its safe\"\", which is pretty similar to the \"\"dude, who doesnt pay their mortgage?!\"\" line of thinking. You got people dumping money into these derivatives and suddenly someone goes \"\"oh yeah you just bought a bunch of bad debt that should be rated 'junk'. Oops.\"\"\"",
"title": ""
},
{
"docid": "43dc85864d4e91c60c56b2e9969d2747",
"text": "You have stumbled upon a classic trading strategy known as the carry trade. Theoretically you'd expect the exchange rate to move against you enough to make this a bad investment. In reality this doesn't happen (on average). There are even ETFs that automate the process for you (and get better transaction costs and lending/borrowing rates than you ever could): DBV and ICI.",
"title": ""
},
{
"docid": "6d87984f8fd0b68c76fb7161190f20fd",
"text": "\"The risk is that greece defaults on it's debts and the rest of the eurozone chose to punish it by kicking it out of the Eurozone and cutting off it's banks from ECB funds. Since the greek government and banks are already in pretty dire straits this would leave greece with little choice but to forciblly convert deposits in those banks to a \"\"new drachma\"\". The exchange rate used for the forced conversions would almost certainly be unfavorable compared to market rates soon after the conversion. There would likely be capital controls to prevent people pulling their money out in the runup to the forced conversion. While I guess they could theoretically perform the forced conversion only on Euro deposits this seems politically unlikely to me.\"",
"title": ""
},
{
"docid": "bac039338b7d35deb88310614fc1cdde",
"text": "Swaps form backstop to a shit load of int'l trade. Liquidity of currency is a huge factor in being a govt reserve currency, which USD currently has the VAST majority of holdings. This agreement is a shove against USE dominance in trade settlements, which is negative. Also challenges us general capital markets dominance a bit",
"title": ""
},
{
"docid": "3a3d14531b68486b10dd13d0e195dd6c",
"text": "Surely there's more to the picture than this. Breaking the borrowing limit by going to 3.5% multiple times is surely no where near as bad as going to 18% once. On top of this there was a fundamental difference in the use of capital. Borrowing to invest (as Germany did) has been ridiculously beneficial for them.",
"title": ""
},
{
"docid": "82e1f714bcf875df2343789d9907506a",
"text": "\"I think you're confusing risk analysis (that is what you quoted as \"\"Taleb Distribution\"\") with arguments against taking risks altogether. You need to understand that not taking a risk - is by itself a risk. You can lose money by not investing it, because of the very same Taleb Distribution: an unpredictable catastrophic event. Take an example of keeping cash in your house and not investing it anywhere. In the 1998 default of the Russian Federation, people lost money by not investing it. Why? Because had they invested the money - they would have the investments/properties, but since they only had cash - it became worthless overnight. There's no argument for or against investing on its own. The arguments are always related to the investment goals and the risk analysis. You're looking for something that doesn't exist.\"",
"title": ""
},
{
"docid": "bf5cff032b3c606fd579c65de622fc8c",
"text": "\"This is the best tl;dr I could make, [original](https://www.iea.org/newsroom/news/2017/july/global-energy-investment-fell-for-a-second-year-in-2016-as-oil-and-gas-spending-c.html) reduced by 86%. (I'm a bot) ***** > Global energy investment fell by 12% in 2016, the second consecutive year of decline, as increased spending on energy efficiency and electricity networks was more than offset by a continued drop in upstream oil and gas spending, according to the International Energy Agency's annual World Energy Investment report. > Global energy investment amounted to USD 1.7 trillion in 2016, or 2.2% of global GDP. For the first time, spending on the electricity sector around the world exceeded the combined spending on oil, gas and coal supply. > The United States saw a sharp decline in oil and gas investment, and accounted for 16% of global spending. ***** [**Extended Summary**](http://np.reddit.com/r/autotldr/comments/6nqnvn/july_global_energy_investment_fell_for_a/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ \"\"Version 1.65, ~168423 tl;drs so far.\"\") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr \"\"PM's and comments are monitored, constructive feedback is welcome.\"\") | *Top* *keywords*: **energy**^#1 **investment**^#2 **spender**^#3 **year**^#4 **Global**^#5\"",
"title": ""
},
{
"docid": "30fa168edb336ccfa1b078fa57141eac",
"text": "It's a failed state. The belief that people are going to hunker down in austerity and pay off national debt was always silly. Sucks for the bond holders, but all these people who bet against sovereign debt defaults have bet wrong.",
"title": ""
},
{
"docid": "dbc64f14b943f2e06cf0c2f73f84da57",
"text": "\"It's a risky \"\"investment\"\" given how capricious the government can be, and like another poster said... if you don't understand it and can't explain proof of work, cryptographic hashes, and how and why there's a limit to total BTC ever, as well as changing difficulty to PoW, and what implications that those things *might* have...\"",
"title": ""
},
{
"docid": "6207d6f6b6c4c84fc02c0153c0fc89f6",
"text": "I would strongly recommend investing in assets and commodities. I personally believe fiat money is losing its value because of a rising inflation and the price of oil. The collapse of the euro should considerably affect the US currency and shake up other regions of the world in forex markets. In my opinion, safest investment these days are hard assets and commodities. Real estate, land, gold, silver(my favorite) and food could provide some lucrative benefits. GL mate!",
"title": ""
},
{
"docid": "8bb3af5a8fa64af758bd62a10abb09a3",
"text": "It looks like the advice the rep is giving is based primarily on the sunk cost fallacy; advice based on a fallacy is poor advice. Bob has recognised this trap and is explicitly avoiding it. It is possible that the advice that the rep is trying to give is that Fund #1 is presently undervalued but, if so, that is a good investment irrespective if Bob has lost money there before or even if he has ever had funds in it.",
"title": ""
},
{
"docid": "d6e1ebfd5858d23a9a1a36033872c622",
"text": "Damn. I just looked to confirm. Without reinvestment (but just holding dividends), looks like the return through 2013 is 0 and then there have been decent returns since 2013, but holy shit, it does not look good at all. All of that not adjusted for inflation. I wonder how much of that shitty performance can be reasonably classified as 'necessary' due to the spin-off of GE Capital, which generated a lot of the profit for GE but really was a huge problem for them when liquidity became scarce in 2008.",
"title": ""
},
{
"docid": "05314110242eda6406d27e495479cf4a",
"text": "I asked a followup question on the Islam site. The issue with Islam seems to be that exchanging money for other money is 'riba' (roughly speaking usury). There are different opinions, but it seems that in general exchanging money for 'something else' is fine, but exchanging money for other money is forbidden. The physicality of either the things or the money is not relevant (though again, opinions may differ). It's allowed to buy a piece of software for download, even though nothing physical is ever bought. Speculating on currency is therefore forbidden, and that's true whether or not a pile of banknotes gets moved around at any point. But that's my interpretation of what was said on the Islam site. I'm sure they would answer more detailed questions.",
"title": ""
},
{
"docid": "e20f3e4ce98f05d239b4bab297a2d671",
"text": "\"I think the issues you have listed will definitely be some of the larger ones over the next few months even. In relation to interest rates though I think inflation/deflation will aslo start to become an increasinly debated topic, especially if they decide on more QE. Over the next few months into the election I expect to see a flight to quality from the big money and then from retail investors (aren't we already seeing this?). This would bear negative for most stock averages and indexs and positive for \"\"safe\"\" assets such as gold, treasuries, what else? In my opinion the industry that stands to take the biggest losses are the financials, particulary the TBTF banks. This will be a large issue in the election and there is really no way they can walk away from the Europe situation unharmed. In the event of a war (Israel/Iran I assume you mean), I would imagine oil would come up from the relative low it is at now. This would then increase the appeal of Nat Gas which I don't think can stay at the price level it is at now. tl;dr - bearish for the stock markets, bullish on safe assets as doubt in the system increases significantly\"",
"title": ""
},
{
"docid": "a44435b3dddd0316af46719e2b187580",
"text": "\"This is the best tl;dr I could make, [original](http://www.imf.org/en/Publications/WP/Issues/2017/10/20/The-Macroeconomic-and-Distributional-Effects-of-Public-Investment-in-Developing-Economies-45222) reduced by 48%. (I'm a bot) ***** > This paper provides new empirical evidence of the macroeconomic effects of public investment in developing economies. > Using public investment forecast errors to identify unanticipated changes in public investment, the paper finds that increased public investment raises output in the short and medium term, with an average short-term fiscal multiplier of about 0.2. > We find some evidence that the effects are larger: during periods of slack; in economies operating with fixed exchange rate regimes; in more closed economies; in countries with lower public debt; and in countries with higher investment efficiency. ***** [**Extended Summary**](http://np.reddit.com/r/autotldr/comments/77ql6o/imfthe_macroeconomic_and_distributional_effects/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ \"\"Version 1.65, ~232230 tl;drs so far.\"\") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr \"\"PM's and comments are monitored, constructive feedback is welcome.\"\") | *Top* *keywords*: **public**^#1 **investment**^#2 **IMF**^#3 **paper**^#4 **view**^#5\"",
"title": ""
}
] | fiqa |
dff3ac7c7d7360c3b65fb0f675b91bcc | What can a CPA do that an EA cannot, and vice versa? | [
{
"docid": "db7066668be87ff24036ca2a1d7be378",
"text": "Enrolled Agents typically specialize only in tax matters. Their status allows them to represent clients before the IRS (which a CPA can also do) See the IRS site regarding Enrolled Agents Their focus is much narrower than a CPA and you would only hire them for advice or representation with tax related matters. (e.g. you'd not hire an enrolled agent to do an external audit) A CPA is a much broader certification, covering accounting in general, of which taxes are only a portion. A CPA may or may not specialize in tax matters, so if you have a tax related issue, especially an audit, review or appeal, you may want to query a prospective CPA as to their experience with tax matters and representing clients, appeals, etc. You would likely be better off with an EA than a CPA who eschews tax work and specializes in other things such as financial auditsOn the other hand if you have need of advice that is more generalized to accounting, audits, etc then you'd want to talk with a CPA as opposed to an EA",
"title": ""
},
{
"docid": "d7a6eff56f3a33ccc3d36c129fba03cd",
"text": "\"Although they may have some similar functions, CPAs and Enrolled Agents operate in two rather different areas of the accounting \"\"space.\"\" CPAs deal with financial statements, usually of corporations. They're the people you want to go to if you are making an investment, or if you own your own business, and need statements of pretax profit and loss prepared. Although a few of them are competent in taxation, the one thing many of them are weak at is tax rules, and this is where enrolled agents come in. Enrolled agents are more concerned with personal tax liability. They can 1) calculate your income taxes, and 2) represent you in hearings with the IRS because they've taken courses with IRS agents, and are considered by them to be almost \"\"one of us.\"\" Many enrolled agents are former IRS agents, actually. But they are less involved with corporate accounting, including things that might be of interest to stock holders. That's the CPA's province.\"",
"title": ""
}
] | [
{
"docid": "741bc6bbd2701d09e7f33b2bdc87bc33",
"text": "I've done my taxes using turbotax for years and they were not simple, Schedule C (self-employed), rental properties, ESPP, stock options, you name it. It's a lot of work and occasionally i did find bugs in TurboTax. ESPP were the biggest pain surprisingly. The hardest part is to get all the paperwork together and you'd have to do it when you hire an accountant anyway. That said this year i am using an accountant as i incorporated and it's a whole new area for me that i don't have time to research. Also in case of an audit i'd rather be represented by a pro. I think the chance of getting audited is smaller when a CPA prepares your return.",
"title": ""
},
{
"docid": "6da3ec1ab9296aa05074dbbc608a1c5c",
"text": "\"Exactly what accounts are affected by any given transaction is not a fixed thing. Just for example, in a simple accounting system you might have one account for \"\"stock on hand\"\". In a more complex system you might have this broken out into many accounts for different types of stock, stock in different locations, etc. So I can only suggest example specific accounts. But account type -- asset, liability, capital (or \"\"equity\"\"), income, expense -- should be universal. Debit and credit rules should be universal. 1: Sold product on account: You say it cost you $500 to produce. You don't say the selling price, but let's say it's, oh, $700. Credit (decrease) Asset \"\"Stock on hand\"\" by $500. Debit (increase) Asset \"\"Accounts receivable\"\" by $700. Credit (increase) Income \"\"Sales\"\" by $700. Debit (increase) Expense \"\"Cost of goods sold\"\" by $500. 2: $1000 spent on wedding party by friend I'm not sure how your friend's expenses affect your accounts. Are you asking how he would record this expense? Did you pay it for him? Are you expecting him to pay you back? Did he pay with cash, check, a credit card, bought on credit? I just don't know what's happening here. But just for example, if you're asking how your friend would record this in his own records, and if he paid by check: Credit (decrease) Asset \"\"checking account\"\" by $1000. Debit (increase) Expense \"\"wedding expenses\"\" by $1000. If he paid with a credit card: Credit (increase) Liability \"\"credit card\"\" by $1000. Debit (increase) Expense \"\"wedding expenses\"\" by $1000. When he pays off the credit card: Debit (decrease) Liability \"\"credit card\"\" by $1000. Credit (decrease) Asset \"\"cash\"\" by $1000. (Or more realistically, there are other expenses on the credit card and the amount would be higher.) 3: Issue $3000 in stock to partner company I'm a little shakier on this, I haven't worked with the stock side of accounting. But here's my best stab: Well, did you get anything in return? Like did they pay you for the stock? I wouldn't think you would just give someone stock as a present. If they paid you cash for the stock: Debit (increase) Asset \"\"cash\"\". Credit (decrease) Capital \"\"shareholder equity\"\". Anyone else want to chime in on that one, I'm a little shaky there. Here, let me give you the general rules. My boss years ago described it to me this way: You only need to know three things to understand double-entry accounting: 1: There are five types of accounts: Assets: anything you have that has value, like cash, buildings, equipment, and merchandise. Includes things you may not actually have in your hands but that are rightly yours, like money people owe you but haven't yet paid. Liabilities: Anything you owe to someone else. Debts, merchandise paid for but not yet delivered, and taxes due. Capital (some call it \"\"capital\"\", others call it \"\"equity\"\"): The difference between Assets and Liabilities. The owners investment in the company, retained earnings, etc. Income: Money coming in, the biggest being sales. Expenses: Money going out, like salaries to employees, cost of purchasing merchandise for resale, rent, electric bill, taxes, etc. Okay, that's a big \"\"one thing\"\". 2: Every transaction must update two or more accounts. Each update is either a \"\"debit\"\" or a \"\"credit\"\". The total of the debits must equal the total of the credits. 3: A dollar bill in your pocket is a debit. 
With a little thought (okay, sometimes a lot of thought) you can figure out everything else from there.\"",
"title": ""
},
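The debit/credit rules described in the passage above lend themselves to a quick sanity check in code. The following is only a minimal sketch, not part of the original answer: the account names and amounts come from the passage's first example (the $700 sale of goods that cost $500), and the `post` helper and `NORMAL_SIDE` table are hypothetical names introduced for illustration.

```python
# Minimal double-entry ledger illustrating the passage's rules:
# debits increase assets/expenses, credits increase liabilities/capital/income,
# and every transaction's debits must equal its credits.

from collections import defaultdict

# Normal balance side for each of the five account types.
NORMAL_SIDE = {"asset": "debit", "expense": "debit",
               "liability": "credit", "capital": "credit", "income": "credit"}

ACCOUNT_TYPES = {"Stock on hand": "asset", "Accounts receivable": "asset",
                 "Sales": "income", "Cost of goods sold": "expense"}

balances = defaultdict(float)  # positive value = balance on the account's normal side

def post(entries):
    """Post one transaction; entries are (account, side, amount) tuples."""
    debits = sum(a for _, side, a in entries if side == "debit")
    credits = sum(a for _, side, a in entries if side == "credit")
    assert abs(debits - credits) < 1e-9, "transaction does not balance"
    for account, side, amount in entries:
        sign = 1 if side == NORMAL_SIDE[ACCOUNT_TYPES[account]] else -1
        balances[account] += sign * amount

# Transaction 1 from the passage: sold goods costing $500 for $700 on account.
post([("Accounts receivable", "debit", 700),
      ("Sales", "credit", 700),
      ("Cost of goods sold", "debit", 500),
      ("Stock on hand", "credit", 500)])

for account, balance in balances.items():
    print(f"{account}: {balance:+.2f}")
```

Running this prints a +700 receivable and sale, a +500 expense, and a -500 stock balance, matching the passage's entries; the assert is the "debits must equal credits" rule from point 2.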
{
"docid": "32b44a14f4784baafbf92a7751d9d834",
"text": "You're correct, there's always a conflict of interest in private professions whether you're a CPA, doctor or lawyer. There's always a possibility of backroom dealings. The only true response is that governmental bodies like the SEC, IRS and otherwise affiliated private organizations like the AICPA can take away your license to practice, send you to jail, or fine you thousands of dollars and ruin your life - if you're caught. I would personally draw a line between publicly traded corporations, amoral as you said, and public accounting. A CPA firm's responsibility is to the public even though they aren't a governmental body. Accounting records are required to do business with banks and the IRS. Without public confidence in the profession, CPA firms wouldn't exist. It's truly an incentive to do a good job and continually gain confidence. They incidentally make money along the way.",
"title": ""
},
{
"docid": "feea0eff339a0989ce65653ff1c2e360",
"text": "how many transactions per year do you intend? Mixing the funds is an issue for the reasons stated. But. I have a similar situation managing money for others, and the solution was a power of attorney. When I sign into my brokerage account, I see these other accounts and can trade them, but the owners get their own tax reporting.",
"title": ""
},
{
"docid": "f469aad776f005ed531a025b282f05ad",
"text": "This is great! I'm not a CPA, but work in finance. As such, my course/professional work is focused more on the economic and profitability aspects of transfer pricing. As you might imagine, it tended to analyze corporate strategy decisions under various cost allocation models, which you thoroughly discuss. I would agree with the statement that it is based on the matching principle but would like to add that transfer pricing is interesting as it falls under several fields: accounting, finance, and economics. Fundamentally it is based on the matching principal, but it's real world applications are based on all three (it's often used to determine divisional and even individual sales peoples profitability; as is the case with bank related funds transfer pricing on stuff like time deposits). In this case, the correct accounting principal allows you to, when done properly, better understand the economics, strategy, and operations of an organization. In effect, when done correctly, it provides transparency for strategic decision making to executives. As I said, since my coursework tended to focus more on that aspect, I definitely have a natural tendency towards it. This is an amazing explanation (esp. about interest on M&A bridge loans, I get that) of the more detailed stuff! Truthfully, I'm not as familiar with it and was just trying to show more of the conceptual than nitty-gritty. Thanks for the reply!",
"title": ""
},
{
"docid": "a9408501dd90d771fa160500d54ae2e4",
"text": "I agree with what you said regarding not wanting to put the work in to get it. When I was coming out of school and deciding whether or not to pursue my CPA (I went into industry (Private Accounting), not Public Accounting) my manager told me this. What would you think of someone who went to law school and didn't take the bar? Seems ridiculous doesn't it? Well the same, kind of, can be applied to the CPA/CA - if you have a degree in accounting... why not take the CPA exam?",
"title": ""
},
{
"docid": "a5280605812e2385284b2802e8d5a509",
"text": "Do Alice and Bob have to figure out the fair market value of their services and report that as income or something? Yes, exactly that. See Topic 420. Note that if the computer program is for Bob's business, Bob might be able to deduct it on his taxes. Similarly, if the remodeling is on Alice's business property, she might be able to deduct it. There might also be other tax advantages in certain circumstances.",
"title": ""
},
{
"docid": "8abf2a3264e7f55ce05a5f76d498e9b4",
"text": "After completing 4 years of undergrad in accounting, I was planning to complete my CA. But I haven't been able to get a job with one of the big four here in Canada. Currently working in corporate sales. I think having a CPA/CA/CMA/CGA whatever is quite beneficial in the finance industry. Of course it's not mandatory for the most part and CFA makes a lot more sense, but it gives you a better understanding of how numbers/ratios are actually created. If anything it will give you a leg up from other people. It certainly won't hurt to have. Do you mind me asking which firm you're with?",
"title": ""
},
{
"docid": "0426f28fe3338906029840877b17c603",
"text": "I think the OP is getting lost in designations. Sounds to me that what he wants is a 'financial advisor' not an 'investment advisor'. Does he even have investments? Does he want to be told which securities to buy? Or is he wanting advice on overall savings, insurance, tax-shelters, retirement planning, mortgages, etc. Which is a different set of skills - the financial advisor skill set. Accountants don't have that skill set. They know operating business reporting, taxes and generally how to keep it healthy and growing. They can do personal tax returns (as a favour to only the owners of the business they keep track of usually). IMO they can deal with the reporting but not the planning or optimization. But IMO the OP should just read up and learn this stuff for himself. Accreditation mean nothing. Eg. the major 'planner' brand teaches factually wrong stuff about RRSPs - which are the backbone of Canadian's finances.",
"title": ""
},
{
"docid": "1aeb7f4002099cbc82dbdf8af140fe4a",
"text": "\"Here's another perspective - I work in FP&A at a large Chicago bank. I originally was interested in treasury, but here it is mostly cash management/reserve requirements/etc. - not really my cup of tea. A lot of our M&A activity and fundraising roles are held by former I-Bankers. It might be hard to make it into that role. I don't have any \"\"Accounting Functions\"\" - the accounting teams do that. While I am involved in planning and forecasting, I also provide a lot of strategic analysis on projects and ad-hoc/pro-forma reporting. Here, FP&A is more the voice of the financial function in strategic decision making. One other piece of advice - see if you can talk to some people on the team and see what their day-to-day is like. If you're interested in FP&A in the true sense and not accounting/GLs, try looking for a larger firm.\"",
"title": ""
},
{
"docid": "e00600b8c9b513bf47e9ac9b44d2d07a",
"text": "To give you an idea, HBS will often do interviews at McKinsey offices. Accounting has nothing to do with what we're talking about and the pay grades are completely different. A partner at KPMG or PWC is going to make as much as some 5 years out in a good investment bank.",
"title": ""
},
{
"docid": "558829fd0ec87b6d76150683dfaed01f",
"text": "Finance and accounting go together like peanut butter and jelly. Having said that, you really should (read: need [to]) determine what part of finance you're interested in, because that's the only way to give you an informed answer as to whether you should pursue a CPA / CFA / MBA. With all of that in mind, your post -- in my opinion -- really comes across as you sounding like you don't want to put in the work to pass the CPA. If that's the case, finance is *really* not the field you want to be in. Lastly, experience is not a substitute for having your CPA license, but rather a compliment.",
"title": ""
},
{
"docid": "3e22751def8b89bb10e4d0bed0c140c5",
"text": "\"In June 2016 the American Institute of CPAs sent a letter to the IRS requesting guidance on this question. Quoting from section 4 of this letter, which is available at https://www.aicpa.org/advocacy/tax/downloadabledocuments/aicpa-comment-letter-on-notice-2014-21-virtual-currency-6-10-16.pdf If the IRS believes any property transaction rules should apply differently to virtual currency than to other types of property, taxpayers will need additional guidance in order to properly distinguish the rules and regulations. Section 4, Q&A-1 of Notice 2014-21 states that “general tax principles applicable to property transactions apply to transactions using virtual currency,” which is guidance that is generally helpful in determining the tax consequences of most virtual currency transactions. However, if there are particular factors that distinguish one virtual currency as like-kind to another virtual currency for section 1031 purposes, the IRS should clarify these details (e.g., allowing the treatment of virtual currency held for investment or business as like-kind to another virtual currency) in the form of published guidance. Similarly, taxpayers need specific guidance of special rules or statutory interpretations if the IRS determines that the installment method of section 453 is applied differently for virtual currency than for other types of property. So, at the very least, a peer-reviewed committee of CPAs finds like-kind treatment to have possible grounds for allowance. I would disagree with calling this a \"\"loophole,\"\" however (edit: at least from the viewpoint of the taxpayer.) At a base technological level, a virtual currency-to-virtual currency exchange consists of exchanging knowledge of one sequence of binary digits (private key) for another. What could be more \"\"like-kind\"\" than this?\"",
"title": ""
},
{
"docid": "986c9acc7c40e3a524b8ef9cff81fbe9",
"text": "I just scanned in a single sheet summary of my last two years tax returns. It is something our CPA does for us. How would I post it? Don't worry, I marked out all the personal information. What is says is I paid over $50K in taxes in 2015. Last year we had one of our biggest contracts put on hold, so I only paid $20K. I won't have this years figures, because we don't submit them to our CPA until the end of the year. However, this year, we just bought out two other owners at $1.2M, which makes me a 33% owner. The contract is getting restarted (knock on wood), which all together means my personal tax liability is going to be well over $100K. My company is a commercial company, but we work with the government, and matter of fact some of the stuff we produce was designed and developed by the government (as is many of today's modern inventions - I think you would be surprised). So lets tackle it one at a time. Pick one of those things that commercial does better than government. P.s. Higher taxes doesn't mean higher for you, a lot of times it means higher for guys like me or way better than me (which I am perfectly fine with, and matter of fact would support). People who use infastructure more - like large corporations - should pay more for it...",
"title": ""
},
{
"docid": "4f03d187b00e10733007a280dd18faf3",
"text": "\"Create a meaningful goal for yourself which would distract you from impulsively spending all your money and help you to direct it towards something more meaningful. Maybe you're curious about just how little money you can live off of in one year and you're up for a challenge. Maybe you want to take a whole year off from work. A trip around the world. Or create a financial independence account, the money that is put into this account should NEVER be touched, the idea is to live off of the interest that it throws off. I strongly suggest that you listen to the audio book \"\"PROSPERITY CONSCIOUSNESS\"\" by Fredric Lehrman. You can probably find a copy at your local library, or buy if off of amazon.\"",
"title": ""
}
] | fiqa |
66f016a9a52b2e7042bce8659380f504 | Can I lose more on Forex than I deposit? | [
{
"docid": "d62e3a39316e279e4ee8a1655d33359f",
"text": "\"If you don't use leverage you can't lose more than you invested because you \"\"play\"\" with your own money. But even with leverage when you reach a certain limit (maintenance margin) you will receive a margin call from your broker to add more funds to your account. If you don't comply with this (meaning you don't add funds) the broker will liquidate some of the assets (in this case the currency) and it will restore the balance of the account to meet with his/her maintenance margin. At least, this is valid for assets like stocks and derivatives. Hope it helps! Edit: I should mention that\"",
"title": ""
},
{
"docid": "427085ec3144fea0f18f8ce045d8159b",
"text": "It's the same as with equities. If you're just buying foreign currencies to hold, you can't lose more than you invest. But if you're buying derivatives (e.g. forward contracts or spread bets), or borrowing to buy on margin, you can certainly lose more than you invest.",
"title": ""
},
{
"docid": "7ddfae851426da2f8a259924a8dc6188",
"text": "FX is often purchased with leverage by both retail and wholesale speculators on the assumption daily movements are typically more restrained than a number of other asset classes. When volatility picks up unexpectedly these leveraged accounts can absolutely be wiped out. While these events are relatively rare, one happened as recently as 2016 when the Swiss National Bank unleashed the Swiss Franc from its Euro mooring. You can read about it here: http://www.reuters.com/article/us-swiss-snb-brokers-idUSKBN0KP1EH20150116",
"title": ""
},
{
"docid": "494d72c2a2f9d9d50116da81823b8b82",
"text": "Contrary to what other people said I believe that even without leverage you can lose more that you invest when you short a FX. Why? because the amount it can go down is alwasy limited to zero but it can, potentially, go up without limit. See This question for a mored detailed information.",
"title": ""
}
] | [
{
"docid": "9c18093cba429319b80d538cd41a3589",
"text": "> Theoretically you'd expect the exchange rate to move against you enough to make this a bad investment. Actually, the theoretical and intentional expectation is that the currency with the highest interest rate should appreciate even more. Canada has traditionally offered an interest rate premium over the US specifically to help the strength of its currency and attract capital to stay there. > In reality this doesn't happen Because carry trades/fx have so little margin requirements, and so many speculators on one side of the trade, there is a significant short squeeze risk any time there is a de-risking shock to the economy. Any unwinding impulse, scares other carry trade participants to unwind, and then forces many more to unwind.",
"title": ""
},
{
"docid": "69ac9022804733592e6acd79726b8624",
"text": "You are losing something - interest on your deposit. That money you are giving to the bank is not earning interest so you are losing money considering inflation is eating into it.",
"title": ""
},
{
"docid": "84b5b8c8ef42cad5494a1aef39fc1fab",
"text": "\"how can I get started knowing that my strategy opportunities are limited and that my capital is low, but the success rate is relatively high? A margin account can help you \"\"leverage\"\" a small amount of capital to make decent profits. Beware, it can also wipe out your capital very quickly. Forex trading is already high-risk. Leveraged Forex trading can be downright speculative. I'm curious how you arrived at the 96% success ratio. As Jason R has pointed out, 1-2 trades a year for 7 years would only give you 7-14 trades. In order to get a success rate of 96% you would have had to successful exploit this \"\"irregularity\"\" at 24 out of 25 times. I recommend you proceed cautiously. Make the transition from a paper trader to a profit-seeking trader slowly. Use a low leverage ratio until you can make several more successful trades and then slowly increase your leverage as you gain confidence. Again, be very careful with leverage: it can either greatly increase or decrease the relatively small amount of capital you have.\"",
"title": ""
},
{
"docid": "a2faa57a75bcfd515df2e8d966c4416e",
"text": "In the UK there are spread betting firms (essentially financial bookmakers) that will take large bets 24x7. Plus, interbank forex is open 24x7 anyway. And there are a wide array of futures markets in different jurisdictions. There are plenty of ways to find organizations who are willing to take the opposite position that you do, day or night, provided that you qualify.",
"title": ""
},
{
"docid": "50601f94359f51fc0159db8c6d469f19",
"text": "Interactive Brokers advertises the percent of profitable forex accounts for its own customers and for competitors. They say they have 46.9% profitable accounts which is higher than the other brokers listed. It's hard to say exactly how this data was compiled- but I think the main takeaway is that if a broker actually advertises that most accounts lose money, it is probably difficult to make money. It may be better for other securities because forex is considered a very tough market for retail traders to compete in. https://www.interactivebrokers.com/en/?f=%2Fen%2Ftrading%2Fpdfhighlights%2FPDF-Forex.php",
"title": ""
},
{
"docid": "37ecbc9531e92ed178dd05f3ac000953",
"text": "Swiss Central bank has a floor of 1.20, the reason why I have this pair is that my downside is limited. The actual differential is about 20 bps but im leveraged 50:1, which gives me a fun 10% annualized. I bought in at 1.2002, meaning my max downside on an investment of 20K (gives 1mn of exposure), is 200$ and my potential profit is 2000. Creating a risk to return of 10:1.",
"title": ""
},
{
"docid": "f24297fb61becba24d76ac71c8ec800e",
"text": "\"This is an old post I feel requires some more love for completeness. Though several responses have mentioned the inherent risks that currency speculation, leverage, and frequent trading of stocks or currencies bring about, more information, and possibly a combination of answers, is necessary to fully answer this question. My answer should probably not be the answer, just some additional information to help aid your (and others') decision(s). Firstly, as a retail investor, don't trade forex. Period. Major currency pairs arguably make up the most efficient market in the world, and as a layman, that puts you at a severe disadvantage. You mentioned you were a student—since you have something else to do other than trade currencies, implicitly you cannot spend all of your time researching, monitoring, and investigating the various (infinite) drivers of currency return. Since major financial institutions such as banks, broker-dealers, hedge-funds, brokerages, inter-dealer-brokers, mutual funds, ETF companies, etc..., do have highly intelligent people researching, monitoring, and investigating the various drivers of currency return at all times, you're unlikely to win against the opposing trader. Not impossible to win, just improbable; over time, that probability will rob you clean. Secondly, investing in individual businesses can be a worthwhile endeavor and, especially as a young student, one that could pay dividends (pun intended!) for a very long time. That being said, what I mentioned above also holds true for many large-capitalization equities—there are thousands, maybe millions, of very intelligent people who do nothing other than research a few individual stocks and are often paid quite handsomely to do so. As with forex, you will often be at a severe informational disadvantage when trading. So, view any purchase of a stock as a very long-term commitment—at least five years. And if you're going to invest in a stock, you must review the company's financial history—that means poring through 10-K/Q for several years (I typically examine a minimum ten years of financial statements) and reading the notes to the financial statements. Read the yearly MD&A (quarterly is usually too volatile to be useful for long term investors) – management discussion and analysis – but remember, management pays themselves with your money. I assure you: management will always place a cherry on top, even if that cherry does not exist. If you are a shareholder, any expense the company pays is partially an expense of yours—never forget that no matter how small a position, you have partial ownership of the business in which you're invested. Thirdly, I need to address the stark contrast and often (but not always!) deep conflict between the concepts of investment and speculation. According to Seth Klarman, written on page 21 in his famous Margin of Safety, \"\"both investments and speculations can be bought and sold. Both typically fluctuate in price and can thus appear to generate investment returns. But there is one critical difference: investments throw off cash flow for the benefit of the owners; speculations do not. The return to the owners of speculations depends exclusively on the vagaries of the resale market.\"\" This seems simple and it is; but do not underestimate the profound distinction Mr. Klarman makes here. (and ask yourself—will forex pay you cash flows while you have a position on?) 
A simple litmus test prior to purchasing a stock might help to differentiate between investment and speculation: at what price are you willing to sell, and why? I typically require the answer to be at least 50% higher than the current salable price (so that I have a margin of safety) and that I will never sell unless there is a material operating change, accounting fraud, or more generally, regime change within the industry in which my company operates. Furthermore, I then research what types of operating changes will alter my opinion and how severe they need to be prior to a liquidation. I then write this in a journal to keep myself honest. This is the personal aspect to investing, the kind of thing you learn only by doing yourself—and it takes a lifetime to master. You can try various methodologies (there are tons of books) but overall just be cautious. Money lost does not return on its own. I've just scratched the surface of a 200,000 page investing book you need to read if you'd like to do this professionally or as a hobbyist. If this seems like too much or you want to wait until you've more time to research, consider index investing strategies (I won't delve into these here). And because I'm an investment professional: please do not interpret anything you've read here as personal advice or as a solicitation to buy or sell any securities or types of securities, whatsoever. This has been provided for general informational purposes only. Contact a financial advisor to review your personal circumstances such as time horizon, risk tolerance, liquidity needs, and asset allocation strategies. Again, nothing written herein should be construed as individual advice.\"",
"title": ""
},
{
"docid": "1d2e532cb2e72389086f1be14335fde0",
"text": "Yes. So? Are you saying that OP was just unlucky because he didn't realize that forex wasn't covered under SIPC? I would agree with that, but then, had he read the terms and conditions and considerable paperwork that he was required to sign, he would've known.",
"title": ""
},
{
"docid": "f37da9c64177f790479271443715f132",
"text": "\"It is not clear to me why you believe you can lose more than you put in, without margin. It is difficult and the chances are virtually nil. However, I can think of a few ways. Lets say you are an American, and deposit $1000. Now lets say you think the Indian rupee is going to devalue relative to the Euro. So that means you want to go long EURINR. Going long EURINR, without margin, is still different than converting your INRs into Euros. Assume USDINR = 72. Whats actually happening is your broker is taking out a 72,000 rupee loan, and using it to buy Euros, with your $1000 acting as collateral. You will need to pay interest on this loan (about 7% annualized if I remember correctly). You will earn interest on the Euros you hold in the meantime (for simplicity lets say its 1%). The difference between interest you earn and interest you pay is called the cost of carry, or commonly referred to as 'swap'. So your annualized cost of carry is $60 ($10-$70). Lets say you have this position open for 1 year, and the exchange rate doesnt move. Your total equity is $940. Now lets say an asteroid destroys all of Europe, your Euros instantly become worthless. You now must repay the rupee loan to close the trade, the cost of which is $1000 but you only have $940 in your account. You have lost more than you deposited, using \"\"no margin\"\". I would actually say that all buying and selling of currency pairs is inherently using margin, because they all involve a short sale. I do note that depending on your broker, you can convert to another currency. But thats not what forex traders do most of the time.\"",
"title": ""
},
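To make the arithmetic in the preceding answer easier to follow, here is a small sketch of that EURINR carry example. The 7% and 1% rates, the 72,000-rupee loan, the USDINR rate of 72, and the $1,000 collateral are the figures assumed in the answer itself, not authoritative market data, and the variable names are introduced only for illustration.

```python
# Sketch of the carry-cost arithmetic from the EURINR example above.

collateral_usd = 1_000.0               # trader's deposit
usdinr = 72.0                          # assumed exchange rate
inr_loan = collateral_usd * usdinr     # 72,000 rupees borrowed to buy euros

borrow_rate_inr = 0.07                 # interest paid on the rupee loan
deposit_rate_eur = 0.01                # interest earned on the euros held

# Annual carry, expressed in dollars on a $1,000 notional.
interest_paid = collateral_usd * borrow_rate_inr      # about $70
interest_earned = collateral_usd * deposit_rate_eur   # about $10
carry_cost = interest_paid - interest_earned          # about $60

equity_after_flat_year = collateral_usd - carry_cost  # about $940
print(f"Cost of carry: ${carry_cost:.0f}, equity after a flat year: ${equity_after_flat_year:.0f}")

# The answer's extreme scenario: the euros become worthless, but the
# 72,000-rupee loan (worth $1,000) must still be repaid.
account_if_eur_worthless = equity_after_flat_year - inr_loan / usdinr
print(f"Account value if the euros go to zero: {account_if_eur_worthless:.0f} dollars")  # -60
```

The final line reproduces the answer's point: the account ends up about $60 below zero, i.e. more than the original deposit is lost even though no margin multiplier was used.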
{
"docid": "d3207224e410452dea55c68e15e4aaf4",
"text": "Whether it's historically stronger or weaker isn't going to have an impact on you; the forex exposure you have is going forward if the exchange rates change you will have missed out on having more or less value by leaving it in a certain currency. (Ignoring fees) Say you exchange €85 for $100, if while you're in the US the Euro gets stronger than it currently is, and the exchange rate changes to €8:$10; then you will lose out on €5 if you try to change it back, and the opposite is true if the euro gets weaker than it currently is you would gain money on exchanging it back. Just look at it as though you're buying dollars like it were a commodity. If the euro gets stronger it buys more dollars and you should've held onto it in euros, if it gets weaker it buys less dollars and you were better off having it in dollars. You would want to use whichever currency you think will be weaker or gain the least against the dollar while you're here.",
"title": ""
},
{
"docid": "3eb8a9c983ff88ae23bb3a03f78f8179",
"text": "Greek bank deposits are backed by the Greek government and by the European Central Bank. So in order to lose money under the insurance limits of 100k euros the ECB would need to fail in which case deposit insurance would be the least of most peoples worries. On the other hand I have no idea how easy or hard it is to get to money from a failed bank in Greece. In the US FDIC insurance will usually have your money available in a couple of days. If there isn't a compelling reason to keep the money in a Greek bank I wouldn't do it.",
"title": ""
},
{
"docid": "b2c9c9f9ca946c1c69d9e4d37512428b",
"text": "No, it's not a good idea. You started by saying you'd like to invest, but then mentioned something that's not an investment, it's a speculation. Both Forex and CFDs are not really investments. They are a zero sum game where over time, it's a pool of your money, the other trader's money, and the broker, redistributed over time. If you truly wish to invest, you'll read up on the process, understand your own long term goals, and put aside X% (say 5-15) of your monthly income. You should look into investments that are long term, and will fund your retirement 30-40 years hence.",
"title": ""
},
{
"docid": "7c5e4cc3f975021d306cac2f5730af64",
"text": "It's very simple. Use USDSGD. Here's why: Presenting profits/losses in other currencies or denominations can be useful if you want to sketch out the profit/loss you made due to foreign currency exposure but depending on the audience of your app this may sometimes confuse people (like yourself).",
"title": ""
},
{
"docid": "61cce25bf7d6e1960d57634868b4996f",
"text": "\"You've asked eleven different questions here. Therefore, The first thing I'd recommend is this: Don't panic. Seek answers to your questions systematically, one at a time. Search this site (and others) to see if there are answers to some of them. You're in good shape if for no other reason than you're asking these when you're young. Investing and saving are great things to do, but you also have time going for you. I recommend that you use your \"\"other eight hours per day\"\" to build up other income streams. That potentially will get you far more than a 2% deposit. Any investment can be risky or safe. It depends on both your personal context and that of the larger economy. The best answers will come from your own research and from your advisors (since they will be able to see where you are financially, and in life).\"",
"title": ""
},
{
"docid": "cbef79be90e2e82d24e6214699fd271e",
"text": "No free lunch You cannot receive risk-free interest on more money than you actually put down. The construct you are proposing is called 'Carry Trade', and will yield you the interest-difference in exchange for assuming currency risk. Negative expectation In the long run one would expect the higher-yielding currency to devalue faster, at a rate that exactly negates the difference in interest. Net profit is therefore zero in the long run. Now factor in the premium that a (forex) broker charges, and now you may expect losses the size of which depends on the leverage chosen. If there was any way that this could reliably produce a profit even without friction (i.e. roll-over, transaction costs, spread), quants would have already arbitraged it away. Intransparancy Additionaly, in my experience true long-term roll-over costs in relation to interest are a lot harder to compute than, for example, the cost of a stock transaction. This makes the whole deal very intransparant. As to the idea of artificially constructing a USD/USD pair: I regret to tell you that such a construct is not possible. For further info, see this question on Carry Trade: Why does Currency Carry Trade work?",
"title": ""
}
] | fiqa |
e9529326c199290453e411dfa9508fb4 | How to invest in gold at market value, i.e. without paying a markup? | [
{
"docid": "96a7f25ee20dc1b974b4c5e296b433dd",
"text": "if you bought gold in late '79, it would have taken 30 years to break even. Of all this time it was two brief periods the returns were great, but long term, not so much. Look at the ETF GLD if you wish to buy gold, and avoid most of the buy/sell spread issues. Edit - I suggest looking at Compound Annual Growth Rate and decide whether long term gold actually makes sense for you as an investor. It's sold with the same enthusiasm as snake oil was in the 1800's, and the suggestion that it's a storehouse of value seems nonsensical to me.",
"title": ""
},
{
"docid": "afddbbed11db47d06d77751f3d76f112",
"text": "\"And you have hit the nail on the head of holding gold as an alternative to liquid currency. There is simply no way to reliably buy and sell physical gold at the spot price unless you have millions of dollars. Exhibit A) The stock symbol GLD is an ETF backed by gold. Its shares are redeemable for gold if you have more than 100,000 shares then you can be assisted by an \"\"Authorized Participant\"\". Read the fund's details. Less than 100,000 shares? no physical gold for you. With GLD's share price being $155.55 this would mean you need to have over 15 million dollars, and be financially solvent enough to be willing to exchange the liquidity of shares and dollars for illiquid gold, that you wouldn't be able to sell at a fair price in smaller denominations. The ETF trades at a different price than the gold spot market, so you technically are dealing with a spread here too. Exhibit B) The futures market. Accepting delivery of a gold futures contract also requires that you get 1000 units of the underlying asset. This means 1000 gold bars which are currently $1,610.70 each. This means you would need $1,610,700 that you would be comfortable with exchanging for gold bars, which: In contrast, securitized gold (gold in an ETF, for instance) can be hedged very easily, and one can sell covered calls to negate transaction fees, hedge, and collect dividends from the fund. quickly recuperating any \"\"spread tax\"\" that you encounter from opening the position. Also, leverage: no bank would grant you a loan to buy 4 to 20 times more gold than you can actually afford, but in the stock market 4 - 20 times your account value on margin is possible and in the futures market 20 times is pretty normal (\"\"initial margin and maintenance margin\"\"), effectively bringing your access to the spot market for physical gold more so within reach. caveat emptor.\"",
"title": ""
},
{
"docid": "e2ad1073731e8909e52ab00388e1e62a",
"text": "ETF's are great products for investing in GOLD. Depending on where you are there are also leveraged products such as CFD's (Contracts For Difference) which may be more suitable for your budget. I would stick with the big CFD providers as they offer very liquid products with tight spreads. Some CFD providers are MarketMakers whilst others provide DMA products. Futures contracts are great leveraged products but can be very volatile and like any leveraged product (such as some ETF's and most CFD's), you must be aware of the risks involved in controlling such a large position for such a small outlay. There also ETN's (Exchange Traded Notes) which are debt products issued by banks (or an underwriter), but these are subject to fees when the note matures. You will also find pooled (unallocated to physical bullion) certificates sold through many gold institutions although you will often pay a small premium for their services (some are very attractive, others have a markup worse than the example of your gold coin). (Note from JoeT - CFDs are not authorized for trading in the US)",
"title": ""
},
{
"docid": "6739f7b487afcbf39fc92d7f5b1b0c3d",
"text": "I agree that there is no reliable way to buy gold for less than spot, no more than there is for any other commodity. However, you can buy many things below market from motivated sellers. That is why you see so many stores buying gold now. It will be hard to find such sellers now with the saturation of buyers, but if you keep an eye on private sales and auctions you may be able to pick up something others miss.",
"title": ""
},
{
"docid": "8c507717d9501648c82e19ba942fa209",
"text": "This is an excellent question; kudos for asking it. How much a person pays over spot with gold can be negotiated in person at a coin shop or in an individual transaction, though many shops will refuse to negotiate. You have to be a clever and tough negotiator to make this work and you won't have any success online. However, in researching your question, I dug for some information on one gold ETF OUNZ - which is physically backed by gold that you can redeem. It appears that you only pay the spot price if you redeem your shares for physical gold: But aren't those fees exorbitant? After all, redeeming for 50 ounces of Gold Eagles would result in a $3,000 fee on a $65,000 transaction. That's 4.6 percent! Actually, the fee simply reflects the convenience premium that gold coins command in the market. Here are the exchange fees compared with the premiums over spot charged by two major online gold retailers: Investors do pay an annual expense ratio, but the trade-off is that as an investor, you don't have to worry about a thief breaking in and stealing your gold.",
"title": ""
}
] | [
{
"docid": "a8f1abe5d6acad4a5681cbee71690432",
"text": "\"Invest in other currencies and assets that have \"\"real\"\" value. And personally I don't count gold as something of real value. Of course its used in the industry but besides that its a pretty useless metal and only worth something because everybody else thinks that everybody thinks its worth something. So I would buy land, houses, stocks, ...\"",
"title": ""
},
{
"docid": "da970b33c88bfcf180ba2e428bd05130",
"text": "\"There are gold index funds. I'm not sure what you mean by \"\"real gold\"\". If you mean you want to buy physical gold, you don't need to. The gold index funds will track the price of gold and will keep you from filling your basement up with gold bars. Gold index funds will buy gold and then issue shares for the gold they hold. You can then buy and sell these just like you would buy and sell any share. GLD and IAU are the ticker symbols of some of these funds. I think it is also worth pointing out that historically gold has a been a poor investment.\"",
"title": ""
},
{
"docid": "d2f7b297afb74669d216bbe219f2ae73",
"text": "There are various exchanges around the world that handle spot precious metal trading; for the most part these are also the primary spot foreign exchange markets, like EBS, Thomson Reuters, Currenex (website seems to be down), etc. You can trade on these markets through brokers just like you can trade on stock markets. However, the vast majority of traders on these exchanges do not intend to hold any bullion ownership at the end of the day; they want to buy as much as they sell each day. A minority of traders do intend to hold metal positions for longer periods, but I doubt any of them intend to actually go collect bullion from the exchange. I don't think it's even possible. Really the only way to get bullion is to pay a service fee to a dealer like you mentioned. But on an exchange like the ones above you have to pay three different fees: So in the end you can't even get the spot price on the exchanges where the spot prices are determined. You might even come out ahead by going to a dealer. You should try to find a reputable dealer, and go in knowing the latest trade prices. An honest dealer will have a website showing you the current trade prices, so you know that they expect you to know the prices when you come in. For example, here's a well-known dealer in Chicago that happily shows you the spot prices from KITCO so you can decide whether their service fee is worth it or not.",
"title": ""
},
{
"docid": "08cec8c13d6cc51c6f85f6b481c17691",
"text": "Owning physical gold (assuming coins): Owning gold through a fund:",
"title": ""
},
{
"docid": "cdffb915d0dd1bd742154da933a60b2b",
"text": "The points given by DumbCoder are very valid. Diversifying portfolio is always a good idea. Including Metals is also a good idea. Investing in single metal though may not be a good idea. •Silver is pretty cheap now, hopefully it will be for a while. •Silver is undervalued compared to gold. World reserve ratio is around 1 to 11, while price is around 1 to 60. Both the above are iffy statements. Cheap is relative term ... there are quite a few metals more cheaper than Silver [Copper for example]. Undervalued doesn't make sense. Its a quesiton of demand and supply. Today Industrial use of Silver is more widespread, and its predecting future what would happen. If you are saying Silver will appreciate more than other metals, it again depends on country and time period. There are times when even metals like Copper have given more returns than Silver and Gold. There is also Platinum to consider. In my opinion quite a bit of stuff is put in undervalued ... i.e. comparing reserve ratio to price in absolute isn't right comparing it over relative years is right. What the ratio says is for every 11 gms of silver, there is 1 gm of Gold and the price of this 1 gm is 60 times more than silver. True. And nobody tell is the demand of Silver 60 times more than Gold or 11 times more than Gold. i.e. the consumption. What is also not told is the cost to extract the 11 gms of silver is less than cost of 1 gm of Gold. So the cheapness you are thinking is not 100% true.",
"title": ""
},
{
"docid": "4d7d32aa6bacabb609be5bda2008d0c4",
"text": "By mentioning GLD, I presume therefore you are referring to the SPRD Gold Exchange Traded Fund that is intended to mirror the price of gold without you having to personally hold bullion, or even gold certificates. While how much is a distinctly personal choice, there are seemingly (at least) three camps of people in the investment world. First would be traditional bond/fixed income and equity people. Gold would play no direct role in their portfolio, other than perhaps holding gold company shares in some other vehicle, but they would not hold much gold directly. Secondly, at the mid-range would be someone like yourself, that believes that is in and of itself a worthy investment and makes it a non-trivial, but not-overriding part of their portfolio. Your 5-10% range seems to fit in well here. Lastly, and to my taste, over-the-top, are the gold-gold-gold investors, that seem to believe it is the panacea for all market woes. I always suspect that investment gurus that are pushing this, however, have large positions that they are trying to run up so they can unload. Given all this, I am not aware of any general rule about gold, but anything less than 10% would seem like at least a not over-concentration in the one area. Once any one holding gets much beyond that, you should really examine why you believe that it should represent such a large part of your holdings. Good Luck",
"title": ""
},
{
"docid": "0b1b4d9b1b9d014f7d6ce32132da3509",
"text": "You are really tangling up two questions here: Q1: Given I fear a dissolution of the Euro, is buying physical gold a good response and if so, how much should I buy? I see you separately asked about real estate, and cash, and perhaps other things. Perhaps it would be better to just say: what is the right asset allocation, rather than asking about every thing individually, which will get you partial and perhaps contradictory answers. The short answer, knowing very little about your case, is that some moderate amount of gold (maybe 5-10%, at most 25%) could be a counterbalance to other assets. If you're concerned about government and market stability, you might like Harry Browne's Permanent Portfolio, which has equal parts stocks, bonds, cash, and gold. Q2: If I want to buy physical gold, what size should I get? One-ounce bullion (about 10 x 10 x 5mm, 30g) is a reasonably small physical size and a reasonable monetary granularity: about $1700 today. I think buying $50 pieces of gold is pointless: However much you want to have in physical gold, buy that many ounces.",
"title": ""
},
{
"docid": "8cc918d7d360e8385f3ff962b9230f3a",
"text": "\"The difficulty with investing in mining and gold company stocks is that they are subject to the same market forces as any other stocks, although they may whether those forces better in a crisis than other stocks do because they are related to gold, which has always been a \"\"flight to safety\"\" move for investors. Some investors buy physical gold, although you don't have to take actual delivery of the metal itself. You can leave it with the broker-dealer you buy it from, much the way you don't have your broker send you stock certificates. That way, if you leave the gold with the broker-dealer (someone reputable, of course, like APMEX or Monex) then you can sell it quickly if you choose, just like when you want to sell a stock. If you take delivery of a security (share certificate) or commodity (gold, oil, etc.) then before you can sell it, you have to return it to broker, which takes time. The decision has much to do with your investing objectives and willingness to absorb risk. The reason people choose mutual funds is because their money gets spread around a basket of stocks, so if one company in the fund takes a hit it doesn't wipe out their entire investment. If you buy gold, you run the risk (low, in my opinion) of seeing big losses if, for some reason, gold prices plummet. You're \"\"all in\"\" on one thing, which can be risky. It's a judgment call on your part, but that's my two cents' worth.\"",
"title": ""
},
{
"docid": "bad177efac3dfd6b41b35d802005ab10",
"text": "Without getting into whether you should invest in Gold or Not ... 1.Where do I go and make this purchase. I would like to get the best possible price. If you are talking about Physical Gold then Banks, Leading Jewelry store in your city. Other options are buying Gold Mutual Fund or ETF from leading fund houses. 2.How do I assure myself of quality. Is there some certificate of quality/purity? This is mostly on trust. Generally Banks and leading Jewelry stores will not sell of inferior purity. There are certain branded stores that give you certificate of authenticity 3.When I do choose to sell this commodity, when and where will I get the best cost? If you are talking about selling physical gold, Jewelry store is the only place. Banks do not buy back the gold they sold you. Jewelry stores will buy back any gold, however note there is a buy price and sell price. So if you buy 10 g and sell it back immediately you will not get the same price. If you have purchased Mutual Funds / ETF you can sell in the market.",
"title": ""
},
{
"docid": "257a8f93e0de0801f8797cea3e791f6e",
"text": "Buy gold, real coins not paper. And do not keep it in a bank.",
"title": ""
},
{
"docid": "31d6992cf6ec96afe2148aa04cd54d57",
"text": "I agree with buying gold, as this is truly the worldwide currency and will only increase in value if the Euro fails. The only issue will be if your country confiscates all citizen's gold ( it has happened many times throughout history. As for ETFs, be careful because unless you purchase these in terms of other currencies (I am assuming you aren't), than the ETF you own is still in terms of Euros, making the whole investment worthless if you are trying to avoid Euro currency risk.",
"title": ""
},
{
"docid": "c63f60ca1d9b71a71bf801ba065460cb",
"text": "There are bullion dealers who will buy gold no matter its form. You won't get the spot price as it's probably being bought same as junk jewelry or any other gold needing to be melted and recast. If this is your concern, you should buy a fireproof safe, the kind people use to store their important documents, and add the gold value to home insurance policy. Do not get a safe deposit box at the bank, see mbhunter's comment and link.",
"title": ""
},
{
"docid": "ab6cc8d9826ecf75e8add750017c25d1",
"text": "\"Don't put all your eggs in one basket and don't assume that you know more than the market does. The probability of gold prices rising again in the near future is already \"\"priced in\"\" as it were. Unless you are privy to some reliable information that no one else knows (given that you are asking here, I'm guessing not), stay away. Invest in a globally diversified low cost portfolio of primarily stocks and bonds and don't try to predict the future. Also I would kill for a 4.5% interest rate on my savings. In the USA, 1% is on the high side of what you can get right now. What is inflation like over there?\"",
"title": ""
},
{
"docid": "2234ad152a94b06edf2086f30592fe80",
"text": "I am not interested in watching stock exchange rates all day long. I just want to place it somewhere and let it grow Your intuition is spot on! To buy & hold is the sensible thing to do. There is no need to constantly monitor the stock market. To invest successfully you only need some basic pointers. People make it look like it's more complicated than it actually is for individual investors. You might find useful some wisdom pearls I wish I had learned even earlier. Stocks & Bonds are the best passive investment available. Stocks offer the best return, while bonds are reduce risk. The stock/bond allocation depends of your risk tolerance. Since you're as young as it gets, I would forget about bonds until later and go with a full stock portfolio. Banks are glorified money mausoleums; the interest you can get from them is rarely noticeable. Index investing is the best alternative. How so? Because 'you can't beat the market'. Nobody can; but people like to try and fail. So instead of trying, some fund managers simply track a market index (always successfully) while others try to beat it (consistently failing). Actively managed mutual funds have higher costs for the extra work involved. Avoid them like the plague. Look for a diversified index fund with low TER (Total Expense Ratio). These are the most important factors. Diversification will increase safety, while low costs guarantee that you get the most out of your money. Vanguard has truly good index funds, as well as Blackrock (iShares). Since you can't simply buy equity by yourself, you need a broker to buy and sell. Luckily, there are many good online brokers in Europe. What we're looking for in a broker is safety (run background checks, ask other wise individual investors that have taken time out of their schedules to read the small print) and that charges us with low fees. You probably can do this through the bank, but... well, it defeats its own purpose. US citizens have their 401(k) accounts. Very neat stuff. Check your country's law to see if you can make use of something similar to reduce the tax cost of investing. Your government will want a slice of those juicy dividends. An alternative is to buy an index fund on which dividends are not distributed, but are automatically reinvested instead. Some links for further reference: Investment 101, and why index investment rocks: However the author is based in the US, so you might find the next link useful. Investment for Europeans: Very useful to check specific information regarding European investing. Portfolio Ideas: You'll realise you don't actually need many equities, since the diversification is built-in the index funds. I hope this helps! There's not much more, but it's all condensed in a handful of blogs.",
"title": ""
},
{
"docid": "a336e432920f71cf5cf7ca918fa8eb41",
"text": "I have a bank account in the US from some time spent there a while back. When I wanted to move most of the money to the UK (in about 2006), I used XEtrade who withdrew the money from my US account and sent me a UK cheque. They might also offer direct deposit to the UK account now. It was a bit of hassle getting the account set up and linked to my US account, but the transaction itself was straightforward. I don't think there was a specific fee, just spread on the FX rate, but I can't remember for certain now - I was transfering a few thousand dollars, so a relatively small fixed fee would probably not have bothered me too much.",
"title": ""
}
] | fiqa |
63a695c353796268065fcf3650a4df09 | How to reach an apt going against inflation | [
{
"docid": "0c4147d5a2bd6edd3b1cc6c2f729528f",
"text": "Inflation of the type currently experienced in Argentina is particularly hard to deal with. Also, real estate prices in global cities such as Buenos Aires and even secondary cities have grown significantly. There are no full solutions to this problem, but there are a few things that can really help.",
"title": ""
}
] | [
{
"docid": "50473990a1f2b82126d6e9f61a574282",
"text": "Inflation protected securities (i-bonds or TIPS). TIPS stands for Treasury Inflation Protected Securities. By very definition, they tend to protect your savings against inflation. They won't beat inflation, but will keep up with it. TIPS or iBonds have two parts. A fixed interest part and a variable interest portion which varies depending upon the current rates. The combined rate would match the inflation rate. They can be bought directly from the treasury (or from a broker or bank who might charge a commission)",
"title": ""
},
{
"docid": "8eb29fc32076f8d336f8e79cecafdc86",
"text": "There are two industrial sectors with a recent history of raising revenue and profit faster than inflation: education and health care. While there is indeed some political risk, my assumption is these sectors would continue to beat inflation even under a theoretical socialist President Bernie Sanders. There are several such sector funds available from popular low ER mutual fund companies; I don't believe this forum likes specific commercial investment touting so I decline to name specific ones.",
"title": ""
},
{
"docid": "af9e3804fe0ba09f7a01b49f444fe670",
"text": "\"The classic definition of inflation is \"\"too much money chasing too few goods.\"\" Low rates and QE were intended to help revive a stalled economy, but unfortunately, demand has not risen, but rather, the velocity of money has dropped like a rock. At some point, we will see the economy recover and the excess money in the system will need to be removed to avoid the inflation you suggest may occur. Of course, as rates rise to a more normal level, the price of all debt will adjust. This question may not be on topic for this board, but if we avoid politics, and keep it close to PF, it might remain.\"",
"title": ""
},
{
"docid": "8aca5ab77ad9a7c18b6ceeb4300f23be",
"text": "$10k isn't really enough to make enough money to offset the extremely high risks in investing in options in this area. Taking risks is great, but a sure losing proposition isn't a risk -- it's a gamble. You're likely to get wiped out with leveraged options, since you don't have enough money to hedge your bets. Timing is critical... look at the swings in valuation in the stock market between the Bear Sterns and Lehman collapses in 2009. If you were highly leveraged in QQQQ that you bought in June 2009, you would have $0 in November. With $10k, I'd diversify into a mixture of foreign cash (maybe ETFs like FXF, FXC, FXY), emerging markets equities and commodities. Your goal should be to preserve investment value until buying opportunities for depressed assets come around. Higher interest rates that come with inflation will be devastating to the US economy, so if I'm betting on high inflation, I want to wait for a 2009-like buying opportunity. Then you buy depressed non-cyclical equities with easy to predict cash flows like utilities (ConEd), food manufacturers (General Mills), consumer non-durables (P&G) and alcohol/tobacco. If they look solvent, buying commodity ETFs like the new Copper ETFs or interests in physical commodities like copper, timber, oil or other raw materials with intrinsic value are good too. I personally don't like gold for this purpose because it doesn't have alot of industrial utility. Silver is a little better, but copper and oil are things with high intrinsic value that are always needed. As far as leverage goes, proceed with caution. What happens when you get high inflation? High cost of capital.",
"title": ""
},
{
"docid": "991a3c3f2d868d20ef41153c719b87fe",
"text": "Recessions are prolonged by less spending and wages being 'sticky' downward. My currency, the 'wallark', allows a company to pay its workers in it's own scrip instead of dollars which they can use to purchase its goods, thus reducing it's labor costs and allowing prices to fall faster. While scrip in the past purposely devalued to discourage hoarding, the wallark hold's it's purchasing power. The difference is, a worker can only use it to purchase their company's good *on the date the wallark was earned or before*. In other words, each good is labeled with a date it was put on display for sale, if a worker earns scrip on that same day, they can trade the scrip for that good, or any good that was on the shelves on that day it was earned *or before that date*. Any good that comes onto market after the date that particular wallark was earned cannot be purchased with that wallark(which is dated), and must be purchased either with dollars or with wallark that was earned on that good's date or after. This incentivizes spending without creating inflation, and allows costs to fall which helps businesses during rough economic times. Please feel free to read it, and comment on my site! Any feedback is welcome!",
"title": ""
},
{
"docid": "4b47b1fed185fd92a2718eccc810c8dc",
"text": "\"So, what's your actual plan/strategy/suggestion to combat this, again? Are you planning on buying physical gold, other precious metals (again, tangible--not paper), and buying & investing in real estate? This isn't a sarcastic question; I want to go down this hypothetical path in the thought experiment a bit further. For example: for a US investor, could *part* of the strategy be to \"\"move to a state with no state income tax\"\" to preserve as much income as possible in order to invest that income in one of the target categories? Is careful selection of primary residence (real estate) in a location most likely to appreciate part of the strategy? Is moving your investment accounts offshore to a tax haven part of the strategy?\"",
"title": ""
},
{
"docid": "f968ac77c114449dadf53ee74f7830b8",
"text": "You can't get there from here. This isn't the right data. Consider the following five-year history: 2%, 16%, 32%, 14%, 1%. That would give a 13% average annual return. Now compare to -37%, 26%, 15%, 2%, 16%. That would give a 4% average annual return. Notice anything about those numbers? Two of them are in both series. This isn't an accident. The first set of five numbers are actual stock market returns from the last five years while the latter five start three years earlier. The critical thing is that five years of returns aren't enough. You'd need to know not just how you can handle a bull market but how you do in a bear market as well. Because there will be bear markets. Also consider whether average annual returns are what you want. Consider what actually happens in the second set of numbers: But if you had had a steady 4% return, you would have had a total return of 21%, not the 8% that would have really happened. The point being that calculating from averages gives misleading results. This gets even worse if you remove money from your principal for living expenses every year. The usual way to compensate for that is to do a 70% stock/30% bond mix (or 75%/25%) with five years of expenses in cash-equivalent savings. With cash-equivalents, you won't even keep up with inflation. The stock/bond mix might give you a 7% return after inflation. So the five years of expenses are more and more problematic as your nest egg shrinks. It's better to live off the interest if you can. You don't know how long you'll live or how the market will do. From there, it's just about how much risk you want to take. A current nest egg of twenty times expenses might be enough, but thirty times would be better. Since the 1970s, the stock market hasn't had a long bad patch relative to inflation. Maybe you could squeak through with ten. But if the 2020s are like the 1970s, you'd be in trouble.",
"title": ""
},
{
"docid": "dfafcc92da76fa7f7ae4390603830f17",
"text": "There is inflation, but it's hidden through various mechanisms. What do you call housing price increases and wage declines? What do you call the fed essentially paying down the inflation with free money and prices still pressuring upwards? I get the sense there is a great underlying pressure for inflation to burst out from the fed's free money pressure chamber. For all our sake, I really hope the pressure chamber holds or I'm totally wrong in the first place.",
"title": ""
},
{
"docid": "0339acde124bc7d1ff0f4bbec49f66dc",
"text": "\"To begin with, bear in mind that over the time horizon you are talking about, the practical impact of inflation will be quite limited. Inflation for 2017 is forecast at 2.7%, and since you are talking about a bit less than all of 2017, and on average you'll be withdrawing your money halfway through, the overall impact will be <1.3% of your savings. You should consider whether the effort and risk involved in an alternative is worth a few hundred pounds. If you still want to beat inflation, the best suggestion I have is to look at peer-to-peer lending. That comes with some risk, but I think over the course of 1 year, it's quite limited. For example, Zopa is currently offering 3.1% on their \"\"Access\"\" product, and RateSetter are offering 2.9% on the \"\"Everyday\"\" product. Both of these are advertised as instant access, albeit with some caveats. These aren't FSCS-guaranteed bank deposits, and they do come with some risk. Firstly, although both RateSetter and Zopa have a significant level of provision against bad debt, it's always possible that this won't be enough and you'll lose some of your money. I think this is quite unlikely over a one-year time horizon, as there's no sign of trouble yet. Secondly, there's \"\"liquidity\"\" risk. Although the products are advertised as instant access, they are actually backed by longer-duration loans made to people who want to borrow money. For you to be able to cash out, someone else has to be there ready to take your place. Again, this is very likely to be possible in practice, but there's no absolute guarantee.\"",
"title": ""
},
{
"docid": "7f36c05d8eff0f82f58f3cdf2cc742d0",
"text": "The safest investment in the United States is Treasures. The Federal Reserve just increased the short term rate for the first time in about seven years. But the banks are under no obligation to increase the rate they pay. So you (or rather they) can loan money directly to the United States Government by buying Bills, Notes, or Bonds. To do this you set up an account with Treasury Direct. You print off a form (available at the website) and take the filled out form to the bank. At the bank their identity and citizenship will be verified and the bank will complete the form. The form is then mailed into Treasury Direct. There are at least two investments you can make at Treasury Direct that guarantee a rate of return better than the inflation rate. They are I-series bonds and Treasury Inflation Protected Securities (TIPS). Personally, I prefer the I-series bonds to TIPS. Here is a link to the Treasury Direct website for information on I-series bonds. this link takes you to information on TIPS. Edit: To the best of my understanding, the Federal Reserve has no ability to set the rate for notes and bonds. It is my understanding that they can only directly control the overnight rate. Which is the rate the banks get for parking their money with the Fed overnight. I believe that the rates for longer term instruments are set by the market and are not mandated by the Fed (or anyone else in government). It is only by indirect influence that the Fed tries to change long term rates.",
"title": ""
},
{
"docid": "e8853208397ad305d6c63eab911edbb7",
"text": "You can look at TIPS (which have some inflation protection built in). Generally short term bonds are better than long if you expect rates to rise soon. Other ways that you can protect yourself are to choose higher yield corporate bonds instead of government bonds, or to use foreign bonds. There are plenty of bond funds like Templeton Global or ETFs that offer such features. Find one that will work for you.",
"title": ""
},
{
"docid": "9c84d0cd8ba4ce0d23663e0591844911",
"text": "Gold is a risky and volatile investment. If you want an investment that's inflation-proof, you should buy index-linked government bonds in the currency that you plan to be spending the money in, assuming that government controls its own currency and has a good credit rating.",
"title": ""
},
{
"docid": "e0560dc94d403ac5117bf22eb7e78265",
"text": "Inflation is already impacting hard to reach places: Sakhalin, Yakutsk, and Kaliningrad (Russia's small enclave in the middle of Europe). While major metropolitan areas can easily deflect inflation, better distribution networks, major distributors, faster access to bureaucratic machinary, isolated regions are already fealing the impact of the sanctions.",
"title": ""
},
{
"docid": "47d7e6b46352b8e46c514f9e74f02502",
"text": "There are several local currency initiatives in the US list here. Most are attempts to normalize a value as a living wage, or encourage local consumption networks. If you are in the catchment region of one of these, see if you can get a grant or loan to get started (if you are willing to buy into the philosophy of the group such as a $10 minimum wage) m",
"title": ""
},
{
"docid": "511d0076eb13439460e7ae3d17d7bec1",
"text": "\"Inflation means that the more money you create, the less it has value. To that I say, \"\"Meh.\"\" A funnier way of gaining wealth, which is the ultimate goal to stealing currency, would be to gain a great deal of money (through robbery or other means) then attempt to trigger a deflationary spiral while sitting on the cash. Sure it might be difficult, but I'm pretty sure the key is jacking up Fed interest rates and blowing up money printers.\"",
"title": ""
}
] | fiqa |
29d1b1406a9c5d15e16396f5e323c56b | Is a currency “hedged” ETF actually a more speculative instrument than an unhedged version? | [
{
"docid": "1e640c432f9c2f8e44e602ec3db6e3b1",
"text": "Currency hedge means that you are somewhat protected from movements in currency as your investment is in gold not currency. So this then becomes less speculative and concentrates more on your intended investment. EDIT The purpose of the GBSE ETF is aimed for investors living in Europe wanting to invest in USD Gold and not be effected by movements in the EUR/USD. The GBSE ETF aims to hedge against the effects of the currency movements in the EUR/USD and more closely track the USD Gold price. The 3 charts below demonstrate this over the past 5 years. So as is demonstrated the performance of the GBSE ETF closely matches the performance of the USD Gold price rather than the EUR Gold price, meaning someone in Europe can invest in the fund and get the appropriate similar performance as investing directly into the USD Gold without being affected by currency exchange when changing back to EUR. This is by no way speculative as the OP suggests but is in fact serving the purpose as per the ETF details.",
"title": ""
},
{
"docid": "26ceaf89f25dc15d761e3c7c15c56718",
"text": "\"The risk of any investment is measured by its incremental effect on the volatility of your overall personal wealth, including your other investments. The usual example is that adding a volatile stock to your portfolio may actually reduce the risk of your portfolio if it is negatively correlated with the other stuff in your portfolio. Common measures of risk, such as beta, assume that you have whole-market diversified portfolio. In the case of an investment that may or may not be hedged against currency movements, we can't say whether the hedge adds or removes risk for you without knowing what else is in your portfolio. If you are an EU citizen with nominally delimited savings or otherwise stand to lose buying power if the Euro depreciates relative to the dollar, than the \"\"hedged\"\" ETF is less risky than the \"\"unhedged\"\" version. On the other hand, if your background risk is such that you benefit from that depreciation, then the reverse is true. \"\"Hedging\"\" means reducing the risk already present in your portfolio. In this case it does not refer to reducing the individual volatility of the ETF. It may or may not do that but individual asset volatility and risk are two very different things.\"",
"title": ""
},
{
"docid": "20f5e8dda815a97019c151c8a937f3d1",
"text": "\"Overall, since gold has value in any currency (and is sort of the ultimate reserve currency), why would anyone want to currency hedge it? Because gold is (mostly) priced in USD. You currency hedge it to avoid currency risk and be exposed to only the price risk of Gold in USD. Hedging it doesn't mean \"\"less speculative\"\". It just means you won't take currency risk. EDIT: Responding to OP's questions in comment what happens if the USD drops in value versus other major currencies? Do you think that the gold price in USD would not be affected by this drop in dollar value? Use the ETF $GLD as a proxy of gold price in USD, the correlation between weekly returns of $GLD and US dollar index (measured by major world currencies) since the ETF's inception is around -47%. What this says is that gold may or may not be affected by USD movement. It's certainly not a one-way movement. There are times where both USD and gold rise and fall simultaneously. Isn't a drop in dollar value fundamentally currency risk? Per Investopedia, currency risk arises from the change in price of one currency in relation to another. In this context, it's referring to the EUR/USD movement. The bottom line is that, if gold price in dollar goes up 2%, this ETF gives the European investor a way to bring home that 2% (or as close to that as possible).\"",
"title": ""
},
{
"docid": "4716c4aba4846bb7b7f17bbdd83f777e",
"text": "I will just try to come up with a totally made up example, that should explain the dynamics of the hedge. Consider this (completely made up) relationship between USD, EUR and Gold: Now lets say you are a european wanting to by 20 grams of Gold with EUR. Equally lets say some american by 20 grams of Gold with USD. Their investment will have the following values: See how the europeans return is -15.0% while the american only has a -9.4% return? Now lets consider that the european are aware that his currency may be against him with this investment, so he decides to hedge his currency. He now enters a currency-swap contract with another person who has the opposite view, locking in his EUR/USD at t2 to be the same as at t0. He now goes ahead and buys gold in USD, knowing that he needs to convert it to EUR in the end - but he has fixed his interestrate, so that doesn't worry him. Now let's take a look at the investment: See how the european now suddenly has the same return as the American of -9.4% instead of -15.0% ? It is hard in real life to create a perfect hedge, therefore you will most often see that the are not totally the same, as per Victors answer - but they do come rather close.",
"title": ""
}
] | [
{
"docid": "7dba0900e0c8d2e5b352741e92b3abfd",
"text": "Equal weighted indexes are not theoretically meant to be less volatile or less risky; they're just a different way to weigh stocks in an index. If you had a problem that hurt small caps more than large caps, an equal weighted index will be hurt more than a market-cap weighted one. On the other hand, if you consider that second rung companies have come up to replace the top layer, it makes sense to weigh them on par. History changes on a per-country basis - in India, for instance, the market's so small at the lower-cap end that big money chases only the large caps, which go up more in a liquidity driven move. But in a more secular period (like the last 18 months) we see that smaller caps have outperformed.",
"title": ""
},
{
"docid": "a3b870bb13360f8f4603071e8bca3010",
"text": "I use the following allocation in my retirement portfolio: I prefer these because: Expense Ratios Oh, and by their very definition, ETFs are very liquid. EDIT: The remaining 10% is the speculative portion of my portfolio. Currently, I own shares in HAP (as a hedge against rising commodity prices) and TIP (as a hedge against hyperinflation).",
"title": ""
},
{
"docid": "a4fec76dba1c60a221c6dc87a3037197",
"text": "The opposite of a hedge is nothing. Because if you don't want to hedge you bets, you don't, therefore you merely have the original bet. The opposite state of being hedged, is being unhedged.",
"title": ""
},
{
"docid": "625c51b04a0f46376f261af653ae8fa1",
"text": "If you do not understand the volatility of the fx market, you need to stop trading it, immediately. There are many reasons that fx is riskier than other types of investing, and you bear those risks whether you understand them or not. Below are a number of reasons why fx trading has high levels of risk: 1) FX trades on the relative exchange rate between currencies. That means it is a zero-sum game. Over time, the global fx market cannot 'grow'. If the US economy doubles in size, and the European economy doubles in size, then the exchange rate between the USD and the EUR will be the same as it is today (in an extreme example, all else being equal, yes I know that value of currency /= value of total economy, but the general point stands). Compare that with the stock market - if the US economy doubles in size, then effectively the value of your stock investments will double in size. That means that stocks, bonds, etc. tied to real world economies generally increase when the global economy increases - it is a positive sum game, where many players can be winners. On the long term, on average, most people earn value, without needing to get into 'timing' of trades. This allows many people to consider long-term equity investing to be lower risk than 'day-trading'. With FX, because the value of a currency is in its relative position compared with another currency, 1 player is a winner, 1 player is a loser. By this token, most fx trading is necessarily short-term 'day-trading', which by itself carries inherent risk. 2) Fx markets are insanely efficient (I will lightly state that this is my opinion, but one that I am not alone in holding firmly). This means that public information about a currency [ie: economic news, political news, etc.] is nearly immediately acted upon by many, many people, so that the revised fx price of that currency will quickly adjust. The more efficient a market is, the harder it is to 'time a trade'. As an example, if you see on a news feed that the head of a central bank authority made an announcement about interest rates in that country [a common driver of fx prices], you have only moments to make a trade before the large institutional investors already factor it into their bid/ask prices. Keep in mind that the large fx players are dealing with millions and billions of dollars; markets can move very quickly because of this. Note that some currencies trade more frequently than others. The main currency 'pairs' are typically between USD and / or other G10 country-currencies [JPY, EUR, etc.]. As you get into currencies of smaller countries, trading of those currencies happens less frequently. This means that there may be some additional time before public information is 'priced in' to the market value of that currency, making that currency 'less efficient'. On the flip side, if something is infrequently traded, pricing can be more volatile, as a few relatively smaller trades can have a big impact on the market. 3) Uncertainty of political news. If you make an fx trade based on what you believe will happen after an expected political event, you are taking risk that the event actually happens. Politics and world events can be very hard to predict, and there is a high element of chance involved [see recent 'expected' election results across the world for evidence of this]. 
For something like the stock market, a particular industry may get hit every once in a while with unexpected news, but the fx market is inherently tied to politics in a way that may impact exchange rates multiple times a day. 4) Leveraging. It is very common for fx traders to borrow money to invest in fx. This creates additional risk because it amplifies the impact of your (positive or negative) returns. This applies to other investments as well, but I mention it because high degrees of debt leveraging is extremely common in FX. To answer your direct question: There are no single individual traders who spike fx prices - that is the impact you see of a very efficient market, with large value traders, reacting to frequent, surprising news. I reiterate: If you do not understand the risks associated with fx trade, I recommend that you stop this activity immediately, at least until you understand it better [and I would recommend personally that any amateur investor never get involved in fx at all, regardless of how informed you believe you are].",
"title": ""
},
{
"docid": "e9479291259074533e355387dc6805eb",
"text": "\"The difference is in the interrelation between the varied investments you make. Hedging is about specifically offsetting a possible loss in an investment by making another related investment that will increase in value for the same reasons that the original investment would lose value. Gold, for instance, is often regarded as the ultimate hedge. Its value is typically inversely correlated to the rest of the market as a whole, because its status as a material, durable store of value makes it a preferred \"\"safe haven\"\" to move money into in times of economic downturn, when stock prices, bond yields and similar investments are losing value. That specific behavior makes investing in gold alongside stocks and bonds a \"\"hedge\"\"; the increase in value of gold as stock prices and bond yields fall limits losses in those other areas. Investment of cash in gold is also specifically a hedge against currency inflation; paper money, account balances, and even debt instruments like bonds and CDs can lose real value over time in a \"\"hot\"\" economy where there's more money than things to buy with it. By keeping a store of value in something other than currency, the price of that good will rise as the currencies used to buy it decrease in real value, maintaining your level of real wealth. Other hedges are more localized. One might, for example, trade oil futures as a hedge on a position in transportation stocks; when oil prices rise, trucking and airline companies suffer in the short term as their margins get squeezed due to fuel costs. Currency futures are another popular hedge; a company in international business will often trade options on the currencies of the companies it does business in, to limit the \"\"jitters\"\" seen in the FOREX spot market caused by speculation and other transient changes in market demand. Diversification, by contrast, is about choosing multiple unrelated investments, the idea being to limit losses due to a localized change in the market. Companies' stocks gain and lose value every day, and those companies can also go out of business without bringing the entire economy to its knees. By spreading your wealth among investments in multiple industries and companies of various sizes and global locations, you insulate yourself against the risk that any one of them will fail. If, tomorrow, Kroger grocery stores went bankrupt and shuttered all its stores, people in the regions it serves might be inconvenienced, but the market as a whole will move on. You, however, would have lost everything if you'd bet your retirement on that one stock. Nobody does that in the real world; instead, you put some of your money in Kroger, some in Microsoft, some in Home Depot, some in ALCOA, some in PG&E, etc etc. By investing in stocks that would be more or less unaffected by a downturn in another, if Kroger went bankrupt tomorrow you would still have, say, 95% of your investment next egg still alive, well and continuing to pay you dividends. The flip side is that if tomorrow, Kroger announced an exclusive deal with the Girl Scouts to sell their cookies, making them the only place in the country you can get them, you would miss out on the full possible amount of gains you'd get from the price spike if you had bet everything on Kroger. Hindsight's always 20/20; I could have spent some beer money to buy Bitcoins when they were changing hands for pennies apiece, and I'd be a multi-millionaire right now. 
You can't think that way when investing, because it's \"\"survivor bias\"\"; you see the successes topping the index charts, not the failures. You could just as easily have invested in any of the hundreds of Internet startups that don't last a year.\"",
"title": ""
},
{
"docid": "482154794dd04f56b16ebffc9084f877",
"text": "How is it possible that long term treasury bonds, which the government has never defaulted on, can hold more risk as an ETF then the stock market index? The risk from long-term bonds isn't that the government defaults, but that interest rates go up before you get paid, so investors want bonds issued more recently at higher interest rates, rather than your older bonds that pay at a lower rate (so the price for your bonds goes down). This is usually caused by higher inflation rates which reduce the value of the interest that you will be paid. Do you assume more risk investing in bond ETFs than you would investing in individual bonds? If you are choosing the right ETFs, there should be a lower amount of risk because the ETFs are taking care of the difficult work of buying a variety of bonds. Are bond ETFs an appropriate investment vehicle for risk diversification? Yes, if you are investing in bonds, exchange traded funds are an appropriate way to buy them. The markets for ETFs are usually very liquid.",
"title": ""
},
{
"docid": "1a5261fd35e60a67b52827496240db6b",
"text": "\"Like Jeremy T said above, silver is a value store and is to be used as a hedge against sovereign currency revaluations. Since every single currency in the world right now is a free-floating fiat currency, you need silver (or some other firm, easily store-able, protect-able, transportable asset class; e.g. gold, platinum, ... whatever...) in order to protect yourself against government currency devaluations, since the metal will hold its value regardless of the valuation of the currency which you are denominating it in (Euro, in your case). Since the ECB has been hesitant to \"\"print\"\" large amounts of currency (which causes other problems unrelated to precious metals), the necessity of hedging against a plummeting currency exchange rate is less important and should accordingly take a lower percentage in your diversification strategy. However, if you were in.. say... Argentina, for example, you would want to have a much larger percentage of your assets in precious metals. The EU has a lot of issues, and depreciation of hard assets courtesy of a lack of fluid currency/capital (and overspending on a lot of EU governments' parts in the past), in my opinion, lessens the preservative value of holding precious metals. You want to diversify more heavily into precious metals just prior to government sovereign currency devaluations, whether by \"\"printing\"\" (by the ECB in your case) or by hot capital flows into/out of your country. Since Eurozone is not an emerging market, and the current trend seems to be capital flowing back into the developed economies, I think that diversifying away from silver (at least in overall % of your portfolio) is the order of the day. That said, do I have silver/gold in my retirement portfolio? Absolutely. Is it a huge percentage of my portfolio? Not right now. However, if the U.S. government fails to resolve the next budget crisis and forces the Federal Reserve to \"\"print\"\" money to creatively fund their expenses, then I will be trading out of soft assets classes and into precious metals in order to preserve the \"\"real value\"\" of my portfolio in the face of a depreciating USD. As for what to diversify into? Like the folks above say: ETFs(NOT precious metal ETFs and read all of the fine print, since a number of ETFs cheat), Indexes, Dividend-paying stocks (a favorite of mine, assuming they maintain the dividend), or bonds (after they raise the interest rates). Once you have your diversification percentages decided, then you just adjust that based on macro-economic trends, in order to avoid pitfalls. If you want to know more, look through: http://www.mauldineconomics.com/ < Austrian-type economist/investor http://pragcap.com/ < Neo-Keynsian economist/investor with huge focus on fiat currency effects\"",
"title": ""
},
{
"docid": "041ce37bd0f111523e88e92d4ce75aaf",
"text": "\"Large multinationals who do business in multiple locales hedge even \"\"stable\"\" currencies like the Euro, Yen and Pound - because a 5-10% adverse move in an exchange rate is highly consequential to the bottom line. I doubt any of them are going to be doing significant amounts of business accepting a currency with a 400% annual range. And why should they? It's nothing more than another unit of payment - one with its own problems.\"",
"title": ""
},
{
"docid": "555223a44e7e0de664852d58805003da",
"text": "You can, and people do. More a Japanese thing than a US thing but I guess they've had super low interest rates for longer. Its called 'the carry trade' and is the reason the NZD is artificially high (which as an NZ exporter I find kinda annoying). Particularly popular with the so called 'japanese housewife' investor. It also causes the NZD to plunge every time the US stock market dips - because the NZD is held mostly as a moderately risky investment, not for trade purposes. Presumably in a down market hedge funds need to cash in their carry trades to cover margins or something? As another person said the primary risk is currency fluctuations. Unfortunately such currencies are highly volatile and tied to stock market volatility. tl;dr It'd be nice if you all quit treating my national currency as an investment opportunity - then i could get on with my business as an New Zealand exporter ;-)",
"title": ""
},
{
"docid": "505ca7e221596c6b8fd0ab08c320d875",
"text": "Your assumption that funds sold in GBP trade in GBP is incorrect. In general funds purchase their constituent stocks in the fund currency which may be different to the subscription currency. Where the subscription currency is different from the fund currency subscriptions are converted into the fund currency before the extra money is used to increase holdings. An ETF, on the other hand, does not take subscriptions directly but by creation (and redemption) of shares. The principle is the same however; monies received from creation of ETF shares are converted into the fund currency and then used to buy stock. This ensures that only one currency transaction is done. In your specific example the fund currency will be USD so your purchase of the shares (assuming there are no sellers and creation occurs) will be converted from GBP to USD and held in that currency in the fund. The fund then trades entirely in USD to avoid currency risk. When you want to sell your exposure (supposing redemption occurs) enough holdings required to redeem your money are sold to get cash in USD and then converted to GBP before paying you. This means that trading activity where there is no need to convert to GBP (or any other currency) does not incur currency conversion costs. In practice funds will always have some cash (or cash equivalents) on hand to pay out redemptions and will have an idea of the number and size of redemptions each calendar period so will use futures and swaps to mitigate FX risk. Where the same firm has two funds traded in different currencies with the same objectives it is likely that one is a wrapper for the other such that one simply converts the currency and buys the other currency denominated ETF. As these are exchange traded funds with a price in GBP the amount you pay for the ETF or gain on selling it is the price given and you will not have to consider currency exchange as that should be done internally as explained above. However, there can be a (temporary) arbitrage opportunity if the price in GBP does not reflect the price in USD and the exchange rate put together.",
"title": ""
},
{
"docid": "96347bc9f864460e64c7d4b3f9adb866",
"text": "My understanding is that all ETF options are American style, meaning they can be exercised before expiration, and so you could do the staggered exercises as you described.",
"title": ""
},
{
"docid": "b9584a6f6554b2d2367ec417532961f0",
"text": "e.g. a European company has to pay 1 million USD exactly one year from now While that is theoretically possible, that is not a very common case. Mostly likely if they had to make a 1 million USD payment a year from now and they had the cash on hand they would be able to just make the payment today. A more common scenario for currency forwards is for investment hedging. Say that European company wants to buy into a mutual fund of some sort, say FUSEX. That is a USD based mutual fund. You can't buy into it directly with Euros. So if the company wants to buy into the fund they would need to convert their Euros to to USD. But now they have an extra risk parameter. They are not just exposed to the fluctuations of the fund, they are also exposed to the fluctuations of the currency market. Perhaps that fund will make a killing, but the exchange rate will tank and they will lose all their gains. By creating a forward to hedge their currency exposure risk they do not face this risk (flip side: if the exchange rate rises in a favorable rate they also don't get that benefit, unless they use an FX Option, but that is generally more expensive and complicated).",
"title": ""
},
{
"docid": "865240ab604dba7ad74efcc5a828f86a",
"text": "\"I was in a similar situation, and used FX trading to hedge against currency fluctuations. I bought the \"\"new\"\" currency when the PPP implied valuation of my \"\"old\"\" currency was high, and was able to protect quite a bit of purchasing power that I would have lost without the hedge. Unfortunately you get taxed for the \"\"gain\"\" you made, but still helpful. In terms of housing market, you could look into a Ireland REIT index, but it may not correlate well with the actual house prices you are looking for.\"",
"title": ""
},
{
"docid": "8ad24bc70d108b7e7ae6c9e178439439",
"text": "I've wondered why anyone thinks it will be more than a speculation instrument. I'd love to hear arguments for it but they always fall short as soon as I think about countries inflating/deflating the value of their own currency.",
"title": ""
},
{
"docid": "b28010f24ba136e0758ff60ec4c89ee2",
"text": "\"While I'm sure there's some truth to the argument that unsophisticated retail investors index against the S&P 500 thinking that they're tracking \"\"the market,\"\" I don't think it makes sense to steer the S&P 500 in that direction to cater to that lowest common denominator. The *ad absurdum* conclusion of that course of action, of course, would be to abolish the S&P 500 entirely and move those assets into the S&P Total Market Index. But clearly there's value in having an index that tracks US large-caps with single share classes, just as there's value in having an index that tracks US large-caps in general. As for whether it will be a loss for passive investors...it will be interesting to see how that pans out. Maybe good corporate governance and direct accountability by managers really do contribute positively to returns in the long run and investors will benefit from this change as a result. Or maybe this will result in companies with multiple share classes being undervalued and create an opportunity to earn outsized returns by investing in them, and indexes that omit those companies will underperform. Only time will tell.\"",
"title": ""
}
] | fiqa |
f35fb93160b28a7119a109c20e88b3ac | How to make use of EUR/USD fluctuations in my specific case? | [
{
"docid": "eda543db876b5d150a730688db867bef",
"text": "This is called currency speculation, and it's one of the more risky forms of investing. Unless you have a crystal ball that tells you the Euro will move up (or down) relative to the Dollar, it's purely speculation, even if it seems like it's on an upswing. You have to remember that the people who are speculating (professionally) on currency are the reason that the amount changed, and it's because something caused them to believe the correct value is the current one - not another value in one direction or the other. This is not to say people don't make money on currency speculation; but unless you're a professional investor, who has a very good understanding of why currencies move one way or the other, or know someone who is (and gives free advice!), it's not a particularly good idea to engage in it - while stock trading is typically win-win, currency speculation is always zero-sum. That said, you could hedge your funds at this point (or any other) by keeping some money in both accounts - that is often safer than having all in one or the other, as you will tend to break even when one falls against the other, and not suffer significant losses if one or the other has a major downturn.",
"title": ""
},
{
"docid": "60f3356747247bc63b7afea2a1b05324",
"text": "Remember that converting from EU to USD and the other way around always costs you money, at least 0.5% per conversion. Additionally, savings accounts in EU and USA have different yields, you may want to compare which country offers you the best yields and move your money to the highest yielding account.",
"title": ""
},
{
"docid": "3aeef25d59c01d9382647746f9d7cada",
"text": "\"I would make this a comment but I am not allowed apparently. Unless your continent blows up, you'll never lost all your money. Google \"\"EUR USD\"\" if you want news stories or graphs on this topic. If you're rooting for your 10k USD (but not your neighbors), you want that graph to trend downward.\"",
"title": ""
}
] | [
{
"docid": "db7a27bf0afb30d12a004f760578f6a8",
"text": "\"is there anything I can do now to protect this currency advantage from future volatility? Generally not much. There are Fx hedges available, however these are for specialist like FI's and Large Corporates, traders. I've considered simply moving my funds to an Australian bank to \"\"lock-in\"\" the current rate, but I worry that this will put me at risk of a substantial loss (due to exchange rates, transfer fees, etc) when I move my funds back into the US in 6 months. If you know for sure you are going to spend 6 months in Australia. It would be wise to money certain amount of money that you need. So this way, there is no need to move back funds from Australia to US. Again whether this will be beneficial or not is speculative and to an extent can't be predicted.\"",
"title": ""
},
{
"docid": "c4928107daac55e5455a1f8a674e89ce",
"text": "Use other currencies, if available. I'm not familiar with the banking system in South Africa; if they haven't placed any currency freezes or restrictions, you might want to do this sooner than later. In full crises, like Russian and Ukraine, once the crisis worsened, they started limiting purchases of foreign currencies. PayPal might allow currency swaps (it implies that it does at the bottom of this page); if not, I know Uphold does. Short the currency Brokerage in the US allow us to short the US Dollar. If banks allow you to short the ZAR, you can always use that for protection. I looked at the interest rates in the ZAR to see how the central bank is offsetting this currency crisis - WOW - I'd be running, not walking toward the nearest exit. A USA analogy during the late 70s/early 80s would be Paul Volcker holding interest rates at 2.5%, thinking that would contain 10% inflation. Bitcoin Comes with significant risks itself, but if you use it as a temporary medium of exchange for swaps - like Uphold or with some bitcoin exchanges like BTC-e - you can get other currencies by converting to bitcoin then swapping for other assets. Bitcoin's strength is remitting and swapping; holding on to it is high risk. Commodities I think these are higher risk right now as part of the ZAR's problem is that it's heavily reliant on commodities. I looked at your stock market to see how well it's done, and I also see that it's done poorly too and I think the commodity bloodbath has something to do with that. If you know of any commodity that can stay stable during uncertainty, like food that doesn't expire, you can at least buy without worrying about costs rising in the future. I always joke that if hyperinflation happened in the United States, everyone would wish they lived in Utah.",
"title": ""
},
{
"docid": "b9584a6f6554b2d2367ec417532961f0",
"text": "e.g. a European company has to pay 1 million USD exactly one year from now While that is theoretically possible, that is not a very common case. Mostly likely if they had to make a 1 million USD payment a year from now and they had the cash on hand they would be able to just make the payment today. A more common scenario for currency forwards is for investment hedging. Say that European company wants to buy into a mutual fund of some sort, say FUSEX. That is a USD based mutual fund. You can't buy into it directly with Euros. So if the company wants to buy into the fund they would need to convert their Euros to to USD. But now they have an extra risk parameter. They are not just exposed to the fluctuations of the fund, they are also exposed to the fluctuations of the currency market. Perhaps that fund will make a killing, but the exchange rate will tank and they will lose all their gains. By creating a forward to hedge their currency exposure risk they do not face this risk (flip side: if the exchange rate rises in a favorable rate they also don't get that benefit, unless they use an FX Option, but that is generally more expensive and complicated).",
"title": ""
},
{
"docid": "15404acf93f7162857cc0bc696e09b11",
"text": "\"There are firms that let you do this. I believe that Saxo Bank is one such firm (note that I'm not endorsing the company at all, and have no experience with it) Keep in mind that the reason that these currencies are \"\"exotic\"\" is because the markets for trading are small. Small markets are generally really bad for retail/non-professional investors. (Also note: I'm not trying to insult Brazil or Thailand, which are major economies. In this context, I'm specifically concerned with currency trading volume.)\"",
"title": ""
},
{
"docid": "1045b2db53cd0bc42ef37ebd4f8aad91",
"text": "About the inflation or low interest rates in both the countries is out of the equation especially since rupee is always a low currency compared to Euro. You cannot make profit in Euros using rupee or vice-versa. It all depends on where you want to use the money, in India or Europe? If you want use the money from fixed deposit in Europe, then buy fixed deposit in euros from Europe. If you want to use the money in India, then convert the euros and buy FD in India.",
"title": ""
},
{
"docid": "057c8941ff4fd43be95685dd3b8b1374",
"text": "I'm sorry I guess what i meant to say was, what's the downside here? Why isn't everyone doing this, what am i missing? Someone clarified that i'm completely exposed to FX risk if I bring it back. What if I am IN australia, how would I do this, short USD's?",
"title": ""
},
{
"docid": "ffed5c7119959ba1d41c3d6541485cca",
"text": "You could buy some call options on the USD/INR. That way if the dollar goes up, you'll make the difference, and if the dollar goes down, then you'll lose the premium you paid. I found some details on USD/INR options here Looks like the furthest out you can go is 3 months. Note they're european style options, so they can only be exercised on the expiration date (as opposed to american style, which can be exercised at any time up to the expiration date). Alternatively, you could buy into some futures contracts for the USD/INR. Those go out to 12 months. With futures if the dollar goes up, you get the difference, if the dollar goes down, you pay the difference. I'd say if you were going to do something like this, stick with the options, since the most you could lose there is the premium you put up for the option contracts. With futures, if it suddenly moved against you you could find yourself with huge losses. Note that playing in the futures and options markets are an easy way to get burned -- it's not for the faint of heart.",
"title": ""
},
{
"docid": "71973b471b6779c847e78549ccae7fb6",
"text": "Rather than screwing around with foreign currencies, hop over to Germany and open an account at the first branch of Deutsche or Commerzbank you see. If the euro really does disintegrate, you want to have your money in the strongest country of the lot. Edit: and what I meant to say is that if the euro implodes, you'll end up with deutschmarks, which, unlike the new IEP, will *not* need to devalue. (And in the meantime, you've still got euros, so you have no FX risk.)",
"title": ""
},
{
"docid": "83d9ae6ad60870a09c431cbe4c9498a1",
"text": "\"I suggest that you're really asking questions surrounding three topics: (1) what allocation hedges your risks but also allows for upside? (2) How do you time your purchases so you're not getting hammered by exchange rates? (3) How do you know if you're doing ok? Allocations Your questions concerning allocation are really \"\"what if\"\" questions, as DoubleVu points out. Only you can really answer those. I would suggest building an excel sheet and thinking through the scenarios of at least 3 what-ifs. A) What if you keep your current allocations and anything in local currency gets cut in half in value? Could you live with that? B) What if you allocate more to \"\"stable economies\"\" and your economy recovers... so stable items grow at 5% per year, but your local investments grow 50% for the next 3 years? Could you live with that missed opportunity? C) What if you allocate more to \"\"stable economies\"\" and they grow at 5%... while SA continues a gradual slide? Remember that slow or flat growth in a stable currency is the same as higher returns in a declining currency. I would trust your own insights as a local, but I would recommend thinking more about how this plays out for your current investments. Timing You bring up concerns about \"\"timing\"\" of buying expensive foreign currencies... you can't time the market. If you knew how to do this with forex trading, you wouldn't be here :). Read up on dollar cost averaging. For most people, and most companies with international exposure, it may not beat the market in the short term, but it nets out positive in the long term. Rebalancing For you there will be two questions to ask regularly: is the allocation still correct as political and international issues play out? Have any returns or losses thrown your planned allocation out of alignment? Put your investment goals in writing, and revisit it at least once a year to evaluate whether any adjustments would be wise to make. And of course, I am not a registered financial professional, especially not in SA, so I obviously recommend taking what I say with a large dose of salt.\"",
"title": ""
},
{
"docid": "cb4539d14a460c05bbedaebb6a7be667",
"text": "Trying to engage in arbitrage with the metal in nickels (which was actually worth more than a nickel already, last I checked) is cute but illegal, and would be more effective at an industrial scale anyway (I don't think you could make it cost-effective at an individual level). There are more effective inflation hedges than nickels and booze. Some of them even earn you interest. You could at least consider a more traditional commodities play - it's certainly a popular strategy these days. A lot of people shoot for gold, as it's a traditional hedge in a crisis, but there are concerns that particular market is overheated, so you might consider alternatives to that. Normal equities (i.e. the stock market) usually work out okay in an inflationary environment, and can earn you a return as they're doing so.... and it's not like commodities aren't volatile and subject to the whims of the world economy too. TIPs (inflation-indexed Treasury bonds) are another option with less risk, but also a weaker return (and still have interest rate risks involved, since those aren't directly tied to inflation either).",
"title": ""
},
{
"docid": "6207d6f6b6c4c84fc02c0153c0fc89f6",
"text": "I would strongly recommend investing in assets and commodities. I personally believe fiat money is losing its value because of a rising inflation and the price of oil. The collapse of the euro should considerably affect the US currency and shake up other regions of the world in forex markets. In my opinion, safest investment these days are hard assets and commodities. Real estate, land, gold, silver(my favorite) and food could provide some lucrative benefits. GL mate!",
"title": ""
},
{
"docid": "889b617c42eb36f14a26d3441f38a8f3",
"text": "Have you tried calling a Forex broker and asking them if you can take delivery on currency? Their spreads are likely to be much lower than banks/ATMs.",
"title": ""
},
{
"docid": "898ce44c82eb87251d3e0d36b6907dda",
"text": "You could go further and do a carry trade by borrowing EUR at 2% and depositing INR at 10%. All the notes above apply, and see the link there.",
"title": ""
},
{
"docid": "1cfa763eb7329a1cea601b1c91dda9c7",
"text": "\"In short, yes. By \"\"forward selling\"\", you enter into a futures contract by which you agree to trade Euros for dollars (US or Singapore) at a set rate agreed to by both parties, at some future time. You are basically making a bet; you think that the dollar will gain on the Euro and thus you'd pay a higher rate on the spot than you've locked in with the future. The other party to the contract is betting against you; he thinks the dollar will weaken, and so the dollars he'll sell you will be worth less than the Euros he gets for them at the agreed rate. Now, in a traditional futures contract, you are obligated to execute it, whether it ends up good or bad for you. You can, to avoid this, buy an \"\"option\"\". By buying the option, you pay the other party to the deal for the right to say \"\"no, thanks\"\". That way, if the dollar weakens and you'd rather pay spot price at time of delivery, you simply let the contract expire un-executed. The tradeoff is that options cost money up-front which is now sunk; whether you exercise the option or not, the other party gets the option price. That basically creates a \"\"point spread\"\"; you \"\"win\"\" if the dollar appreciates against the Euro enough that you still save money even after buying the option, or if the dollar depreciates against the Euro enough that again you still save money after subtracting the option price, while you \"\"lose\"\" if the exchange rates are close enough to what was agreed on that it cost you more to buy the option than you gained by being able to choose to use it.\"",
"title": ""
},
{
"docid": "bcbd96d50a6f159f56b3bc04413bca94",
"text": "\"We're in agreement, I just want retail investors to understand that in most of these types of discussions, the unspoken reality is the retail sector trading the market is *over*. This includes the mutual funds you mentioned, and even most index funds (most are so narrowly focused they lose their relevance for the retail investor). In the retail investment markets I'm familiar with, there are market makers of some sort or another for specified ranges. I'm perfectly fine with no market makers; but retail investors should be told the naked truth as well, and not sold a bunch of come-ons. What upsets me is seeing that just as computers really start to make an orderly market possible (you are right, the classic NYSE specialist structure was outrageously corrupt), regulators turned a blind eye to implementing better controls for retail investors. The financial services industry has to come to terms whether they want AUM from retail or not, and having heard messaging much like yours from other professionals, I've concluded that the industry does *not* want the constraints with accepting those funds, but neither do they want to disabuse retail investors of how tilted the game is against them. Luring them in with deceptively suggestive marketing and then taking money from those naturally ill-prepared for the rigors of the setting is like beating up the Downs' Syndrome kid on the short bus and boasting about it back on the campus about how clever and strong one is. If there was as stringent truth in marketing in financial services as cigarettes, like \"\"this service makes their profit by encouraging the churning of trades\"\", there would be a lot of kvetching from so-called \"\"pros\"\" as well. If all retail financial services were described like \"\"dead cold cow meat\"\" describes \"\"steak\"\", a lot of retail investors would be better off. As it stands today, you'd have to squint mighty hard to see the faintly-inscribed \"\"caveat emptor\"\" on financial services offerings to the retail sector. Note that depending upon the market setting, the definition of retail differs. I'm surprised the herd hasn't been spooked more by the MF Global disaster, for example, and yet there are some surprisingly large accounts detrimentally affected by that incident, which in a conventional equities setting would be considered \"\"pros\"\".\"",
"title": ""
}
] | fiqa |
1b0602c270f461024239600ff5778522 | Understanding SEC Filings | [
{
"docid": "2639dfbfda29a4b457a716086b92953d",
"text": "The most important filings are: Form 10-K, which is the annual report required by the U.S. Securities and Exchange Commission (SEC) and Form 10-Q, for the interim quarters.",
"title": ""
},
{
"docid": "49f2eb68845aafe0cfeda952031ae99d",
"text": "There are a whole host of types of filings. Some of them are only relevant to companies that are publicly traded, and other types are general to just registered corps in general. ... and many more: http://reportstream.io/explore/has-form Overall, reading SEC filings is hard, and for some, the explanations of those filings is worth paying for. Source: I am currently trying to build a product that solves this problem.",
"title": ""
}
] | [
{
"docid": "57fb897c059fe117bf76781c5306adb8",
"text": "\"Thanks for the response. I am using WRDS database and we are currently filtering through various variables like operating income, free cash flow etc. Main issue right now is that the database seems to only go up to 2015...is there a similar database that has 2016 info? filtering out the \"\"recent equity issuance or M&A activity exceeding 10% of total assets\"\" is another story, namely, how can I identify M&A activity? I suppose we can filter it with algorithm stating if company's equity suddenly jumps 10% or more, it get's flagged\"",
"title": ""
},
{
"docid": "4f9c71289d37594b5040af9865061a3a",
"text": "\"You can infer some of the answers to your questions from the BATS exchange's market data page and its associated help page. (I'm pretty sure a page like this exists on each stock exchange's website; BATS just happens to be the one I'm used to looking at.) The Matched Volume section refers to all trades on a given date that took place on \"\"lit\"\" exchanges; that is, where a public protected US stock exchange's matching engine helped a buyer and a seller find each other. Because there are exactly 11 such exchanges in existence, it's easy to show 100% of the matched volume broken down into 11 rows. The FINRA & TRF Volume section refers to all trades on a given date that took place on \"\"non-lit\"\" exchanges. These types of trades include dark pool volume and any other trade that is not required to take place in public but is required to be reported (the R in TRF) to FINRA. There are three venues via which these trades may be reported to FINRA -- NASDAQ's, NYSE's, and FINRA's own ADF. They're all operated under the purview of FINRA, so the fact that they're \"\"located at\"\" NASDAQ or NYSE is a red herring. (For example, from the volume data it's clear that the NASDAQ facility does not only handle NASDAQ-listed (Tape C) securities, nor does the NYSE facility only handle NYSE-listed (Tape A) securities or anything like that.) The number of institutions reporting to each of the TRFs is large -- many more than the 11 public exchanges -- so the TRF data is not broken down further. (Also I think the whole point of the TRFs is to report in secret.) I don't know enough details to say why the NASDTRF has always handled more reporting volume than the other two facilities. Of course, since we can't see inside the TRF reporting anyway, it's sort of a moot point.\"",
"title": ""
},
{
"docid": "47d2401e8c9dcd835a24ea517a73bda6",
"text": "I've seen this tool. I'm just having a hard time finding where I can just get a list of all the companies. For example, you can get up to 100 results at a time, if I just search latest filings for 10-K. This isn't really an efficient way to go about what I want.",
"title": ""
},
{
"docid": "ff49b9a4ec21562562c1d00890c4883e",
"text": "Just look at the filing date of the 10Q and then Yahoo the closing price or Google it. I assume you are looking for market reactions to SEC filings? If you want to look at the closing stock price for the end of the period which the filing covers, it's like on the first page of the filing when the period (either quarterly or yearly) ends. This data is generally less useful, however, because it really is just another day in the market for the company. The actual release of the data to the public is more important.",
"title": ""
},
{
"docid": "cf60d6c3f98bdfe60fe02e3a4d9ce7e3",
"text": "\"Apologize - replied without actually looking at the financials. After reviewing -- Starbuck's financial statements use the line item \"\"Cost of sales including occupancy costs.\"\" This is very different than \"\"hiding\"\" rent in COGS, as they plainly describe what it represents. Anyone who wants to derive true cost of goods sold without occupancy costs can look in the footnotes of the financials to find the lease expense for the year and subtract it. This line item is used by multiple public companies (Whole Foods is one that comes to mind), and regardless of their true motives, they have convinced the SEC that they think it gives the consumer the most accurate view of their business operations. As with all financial statements, the footnotes play a crucial role in understanding how a business works. If you want to find opportunities for future value or an Achilles heel, look in the notes.\"",
"title": ""
},
{
"docid": "e7586dc4b0b2e7053a50e9deabdc4059",
"text": "I think you're looking for the public float: Public float or the unqualified term may also refer to the number of outstanding shares in the hands of public investors as opposed to company officers, directors, or controlling-interest investors. Assuming the insider held shares are not traded, these shares are the publicly traded ones. The float is calculated by subtracting restricted shares from outstanding shares. As mentioned, Treasury stock is probably the most narrow definition of restricted stock (not publicly traded), but shares held by corporate officers or majority investors are often included in the definition as well. In any case, the balance sheet is indeed a good place to start.",
"title": ""
},
{
"docid": "6ed31ce88106900d05930df8c45fe709",
"text": "SEC forms are required when declaring insider activity. An insider is defined by the SEC to be a person or entity which (i) beneficially owns 10% or more of the outstanding shares of the company, (ii) is an officer or director of the company, or (iii), in the case of insider trading, does so based on knowledge which is not otherwise publically available at the time. At any rate, the person or entity trading the stock is required to file certain forms. Form 3 is filed when a person first transitions into the status of an insider (by becoming an officer, director, or beneficial owner of a certain percentage of stock). Form 4 is filed when an existing insider trades stock under the company's symbol. Form 5 is filed when certain insider trades of small value are reported later than usual. *More information can be found at the SEC's website. Another possibility is that a large number of options or derivatives were exercised by an officer, director, or lending institution. In the cases of officers or directors, this would need to be declared with an SEC form 4. For an institution exercising warrants obtained as a result of a lending agreement, either form 3 or 4 would need to be filed. In addition to the above possibilities, username passing through pointed out a very likely scenario in his answer, as well.",
"title": ""
},
{
"docid": "158613481e53d89848c31269ff5ff721",
"text": "I don't think it makes sense to allow accounting numbers that you are not sure how to interpret as being a sell sign. If you know why the numbers are weird and you feel that the reason for it bodes ill about the future, and if you think there's a reason this has not been accounted for by the market, then you might think about selling. The stock's performance will depend on what happens in the future. Financials just document the past, and are subject to all kinds of lumpiness, seasonality, and manipulation. You might benefit from posting a link to where you got your financials. Whenever one computes something like a dividend payout ratio, one must select a time period over which to measure. If the company had a rough quarter in terms of earnings but chose not to reduce dividends because they don't expect the future to be rough, that would explain a crazy high dividend ratio. Or if they were changing their capital structure. Or one of many other potentially benign things. Accounting numbers summarize a ton of complex workings of the company and many ratios we look at could be defined in several different ways. I'm afraid that the answer to your question about how to interpret things is in the details, and we are not looking at the same details you are.",
"title": ""
},
{
"docid": "23061d98412c27df8c5b17ecfd36c5a8",
"text": "The balance sheet and income statements are located in the 10-K and 10-Q filings for all publicly traded companies. It will be Item 8.",
"title": ""
},
{
"docid": "915ee91396f3b08a0d4af728c8f3d5da",
"text": "\"According to the IRS, you must have written confirmation from your broker \"\"or other agent\"\" whenever you sell shares using a method other than FIFO: Specific share identification. If you adequately identify the shares you sold, you can use the adjusted basis of those particular shares to figure your gain or loss. You will adequately identify your mutual fund shares, even if you bought the shares in different lots at various prices and times, if you: Specify to your broker or other agent the particular shares to be sold or transferred at the time of the sale or transfer, and Receive confirmation in writing from your broker or other agent within a reasonable time of your specification of the particular shares sold or transferred. If you don't have a stockbroker, I'm not sure how you even got the shares. If you have an actual stock certificate, then you are selling very specific shares and the purchase date corresponds to the purchase date of those shares represented on the certificate.\"",
"title": ""
},
{
"docid": "909417d8d10021a49861245cd34381e3",
"text": "\"Not to detract from the other answers at all (which are each excellent and useful in their own right), but here's my interpretation of the ideas: Equity is the answer to the question \"\"Where is the value of the company coming from?\"\" This might include owner stakes, shareholder stock investments, or outside investments. In the current moment, it can also be defined as \"\"Equity = X + Current Income - Current Expenses\"\" (I'll come back to X). This fits into the standard accounting model of \"\"Assets - Liabilities = Value (Equity)\"\", where Assets includes not only bank accounts, but also warehouse inventory, raw materials, etc.; Liabilities are debts, loans, shortfalls in inventory, etc. Both are abstract categories, whereas Income and Expense are hard dollar amounts. At the end of the year when the books balance, they should all equal out. Equity up until this point has been an abstract concept, and it's not an account in the traditional (gnucash) sense. However, it's common practice for businesses to close the books once a year, and to consolidate outstanding balances. When this happens, Equity ceases to be abstract and becomes a hard value: \"\"How much is the company worth at this moment?\"\", which has a definite, numeric value. When the books are opened fresh for a new business year, the Current Income and Current Expense amounts are zeroed out. In this situation, in order for the big equation to equal out: Assets - Liabilities = X + Income - Expeneses the previous net value of the company must be accounted for. This is where X comes in, the starting (previous year's) equity. This allows the Assets and Liabilities to be non-zero, while the (current) Income and Expenses are both still zeroed out. The account which represents X in gnucash is called \"\"Equity\"\", and encompasses not only initial investments, but also the net increase & decreases from previous years. While the name would more accurately be called \"\"Starting Equity\"\", the only problem caused by the naming convention is the confusion of the concept Equity (X + Income - Expenses) with the account X, named \"\"Equity\"\".\"",
"title": ""
},
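The accounting identity in the answer above can be sanity-checked with a few lines of code. This is an illustrative Python sketch, not GnuCash's actual implementation, and all of the balances are made-up numbers.

```python
# Illustrative sketch of the identity Assets - Liabilities = Starting Equity + Income - Expenses
# (not GnuCash code; the figures are hypothetical).

books = {
    "assets": 12_000.0,          # bank accounts, inventory, etc.
    "liabilities": 3_000.0,      # loans, unpaid bills
    "starting_equity": 7_500.0,  # the "Equity" account: last year's net worth
    "income": 4_000.0,
    "expenses": 2_500.0,
}

def check_balance(b):
    left = b["assets"] - b["liabilities"]
    right = b["starting_equity"] + b["income"] - b["expenses"]
    assert abs(left - right) < 1e-9, "books do not balance"
    return left

def close_year(b):
    """Year-end close: roll current income/expenses into starting equity."""
    b["starting_equity"] += b["income"] - b["expenses"]
    b["income"] = 0.0
    b["expenses"] = 0.0

print("net worth:", check_balance(books))                  # 9000.0
close_year(books)
print("new starting equity:", books["starting_equity"])    # 9000.0
```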
{
"docid": "3d718680b0cd151f64d4cb4d777842e0",
"text": "\"Oh, I understand now -- we're having an absurd, meaningless conversation about an obscure theoretical point. When you can tell me how you can determine a \"\"minimum cash\"\" level from a public company's filings, we can continue the discussion. Otherwise, make a simplifying assumption and move on. I misunderstood -- I thought we were actually trying to understand the difference between enterprise value and equity value / understand the implication of an enterprise value multiple.\"",
"title": ""
},
{
"docid": "3291ee40c53d2a8029846397a034b05e",
"text": "The actual financial statements should always be referenced first before opening or closing a position. For US companies, they are freely available on EDGAR. Annual reports are called 10-Ks, and quarterly reports are called 10-Qs. YHOO and GOOG do a great job of posting financials that are quickly available, but money.msn has the best. These should be starting point, quick references. As you can see, they may all have the same strange accounting. Sometimes, it's difficult to find the information one seeks in the consolidated financial statements as in this case, so searching through the filing is necessary. The notes can be helpful, but Ctrl-F seems to do everything I need when I want something in a report. In AAPL's case, the Interest expense can be found in Note 3.",
"title": ""
},
{
"docid": "202224a0944b3a276e486131bda2c304",
"text": "It does raise the question of whether investment bank analysts are doing their job when advising clients on IPOs. Sadly, no one, and I literally mean not a single person, reads a registration statement in its entirety. That's why I find this criticism of the JOBS Act particularly stupid. The problem isn't that enough information isn't getting out, it's that too few investors and analysts actually do anything with it.",
"title": ""
},
{
"docid": "bf0540111a2051185227f72005547c32",
"text": "\"Generally if you are using FIFO (first in, first out) accounting, you will need to match the transactions based on the number of shares. In your example, at the beginning of day 6, you had two lots of shares, 100 @ 50 and 10 @ 52. On that day you sold 50 shares, and using FIFO, you sold 50 shares of the first lot. This leaves you with 50 @ 50 and 10 @ 52, and a taxable capital gain on the 50 shares you sold. Note that commissions incurred buying the shares increase your basis, and commissions incurred selling the shares decrease your proceeds. So if you spent $10 per trade, your basis on the 100 @ 50 lot was $5010, and the proceeds on your 50 @ 60 sale were $2990. In this example you sold half of the lot, so your basis for the sale was half of $5010 or $2505, so your capital gain is $2990 - 2505 = $485. The sales you describe are also \"\"wash sales\"\", in that you sold stock and bought back an equivalent stock within 30 days. Generally this is only relevant if one of the sales was at a loss but you will need to account for this in your code. You can look up the definition of wash sale, it starts to get complex. If you are writing code to handle this in any generic situation you will also have to handle stock splits, spin-offs, mergers, etc. which change the number of shares you own and their cost basis. I have implemented this myself and I have written about 25-30 custom routines, one for each kind of transaction that I've encountered. The structure of these deals is limited only by the imagination of investment bankers so I think it is impossible to write a single generic algorithm that handles them all, instead I have a framework that I update each quarter as new transactions occur.\"",
"title": ""
}
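The FIFO example in the answer above (100 shares @ $50 plus a $10 commission, 10 @ $52, then 50 sold @ $60 less a $10 commission) can be reproduced with a small sketch. This is illustrative Python only; it ignores wash sales, splits, spin-offs, and the other corporate actions the answer warns about.

```python
# Minimal FIFO cost-basis sketch reproducing the example above.

def fifo_sell(lots, qty, sale_price, sale_commission):
    """lots: list of [shares, total_cost_basis] in purchase order (commissions included)."""
    proceeds = qty * sale_price - sale_commission
    basis = 0.0
    remaining = qty
    while remaining > 0:
        shares, lot_basis = lots[0]
        take = min(shares, remaining)
        basis += lot_basis * take / shares          # pro-rate the lot's basis
        lots[0] = [shares - take, lot_basis * (shares - take) / shares]
        if lots[0][0] == 0:
            lots.pop(0)
        remaining -= take
    return proceeds - basis                         # capital gain (or loss)

lots = [[100, 100 * 50 + 10],   # $5,010 basis
        [10, 10 * 52 + 10]]     # $530 basis
gain = fifo_sell(lots, qty=50, sale_price=60, sale_commission=10)
print(round(gain, 2))           # 485.0 -> matches the $485 gain in the answer
```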
] | fiqa |
ab87483724591f55c6e344d69fd2ef66 | Get interest on $100K by spending only $2K using FOREX rollovers? | [
{
"docid": "febf4114d614ef8371b4a237f32ce7e9",
"text": "\"I'm smart enough to know that the answer to your questions is 'no'. There is no arbitrage scenario where you can trade currencies and be guaranteed a return. If there were, the thousands of PhD's and quants at hedge funds like DEShaw and Bridgewater would have already figured it out. You're basically trying to come up with a scenario that is risk free yet yields you better than market interest rates. Impossible. I'm not smart enough to know why, but my guess is that your statement \"\"I only need $2k margin\"\" is incorrect. You only need $2k as capital, but you are 'borrowing' on margin the other 98k and you'll need to pay interest on that borrowed amount, every day. You also run the risk of your investment turning sour and the trading firm requiring a higher margin.\"",
"title": ""
},
{
"docid": "cbef79be90e2e82d24e6214699fd271e",
"text": "No free lunch You cannot receive risk-free interest on more money than you actually put down. The construct you are proposing is called 'Carry Trade', and will yield you the interest-difference in exchange for assuming currency risk. Negative expectation In the long run one would expect the higher-yielding currency to devalue faster, at a rate that exactly negates the difference in interest. Net profit is therefore zero in the long run. Now factor in the premium that a (forex) broker charges, and now you may expect losses the size of which depends on the leverage chosen. If there was any way that this could reliably produce a profit even without friction (i.e. roll-over, transaction costs, spread), quants would have already arbitraged it away. Intransparancy Additionaly, in my experience true long-term roll-over costs in relation to interest are a lot harder to compute than, for example, the cost of a stock transaction. This makes the whole deal very intransparant. As to the idea of artificially constructing a USD/USD pair: I regret to tell you that such a construct is not possible. For further info, see this question on Carry Trade: Why does Currency Carry Trade work?",
"title": ""
},
{
"docid": "605802582d7668a70b363758d5881d8e",
"text": "I work at a FOREX broker, and can tell you that what you want to do is NOT possible. If someone is telling you it is, they're lying. You could (in theory) make money from the SWAP (the interest you speak of is called SWAP) if you go both short and long on the same currency, but there are various reasons why this never works. Furthermore, I don't know of any brokers that are paying positive SWAP (the interest you speak of is called SWAP) on any currency right now.",
"title": ""
},
{
"docid": "93ed9100864a8c4146441b8c7bc0dab5",
"text": "Now, is there any clever way to combine FOREX transactions so that you receive the US interest on $100K instead of the $2K you deposited as margin? Yes, absolutely. But think about it -- why would the interest rates be different? Imagine you're making two loans, one for 10,000 USD and one for 10,000 CHF, and you're going to charge a different interest rate on the two loans. Why would you do that? There is really only one reason -- you would charge more interest for the currency that you think is less likely to hold its value such that the expected value of the money you are repaid is the same. In other words, currencies pay a higher interest when their value is expected to go down and currencies pay a lower interest when their value is expected to go up. So yes, you could do this. But the profits you make in interest would have to equal the expected loss you would take in the devaluation of the currency. People will only offer you these interest rates if they think the loss will exceed the profit. Unless you know better than them, you will take a loss.",
"title": ""
}
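A quick worked example of the point above, that interest differentials are expected to be offset by currency moves. The rates and spot price below are hypothetical, chosen only to show the arithmetic of uncovered interest parity.

```python
# Illustrative uncovered-interest-parity arithmetic (hypothetical rates, not a forecast).
# If USD deposits pay 5% and CHF deposits pay 1%, parity implies the USD is expected
# to depreciate by roughly the 4% interest advantage, leaving no expected free lunch.

spot = 0.90          # CHF per USD today (assumed)
i_usd = 0.05         # 1-year USD interest rate (assumed)
i_chf = 0.01         # 1-year CHF interest rate (assumed)

# Expected spot in one year such that both deposits have the same expected CHF value:
expected_spot = spot * (1 + i_chf) / (1 + i_usd)
expected_depreciation = expected_spot / spot - 1

usd_deposit_in_chf = 1 * (1 + i_usd) * expected_spot   # 1 USD kept in USD, converted later
chf_deposit = spot * (1 + i_chf)                       # the same 1 USD converted now, held in CHF

print(round(expected_depreciation * 100, 2), "% expected USD move")  # about -3.81
print(round(usd_deposit_in_chf, 4), round(chf_deposit, 4))           # both 0.909
```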
] | [
{
"docid": "1611faea12bf19b2154ee123778d95d2",
"text": "\"HSBC, Hang Seng, and other HK banks had a series of special savings account offers when I lived in HK a few years ago. Some could be linked to the performance of your favorite stock or country's stock index. Interest rates were higher back then, around 6% one year. What they were effectively doing is taking the interest you would have earned and used it to place a bet on the stock or index in question. Technically, one way this can be done, for instance, is with call options and zero coupon bonds or notes. But there was nothing to strategize with once the account was set up, so the investor did not need to know how it worked behind the scenes... Looking at the deposit plus offering in particular, this one looks a little more dangerous than what I describe. See, now we are in an economy of low almost zero interest rates. So to boost the offered rate the bank is offering you an account where you guarantee the AUD/HKD rate for the bank in exchange for some extra interest. Effectively they sell AUD options (or want to cover their own AUD exposures) and you get some of that as extra interest. Problem is, if the AUD declines, then you lose money because the savings and interest will be converted to AUD at a contractual rate that you are agreeing to now when you take the deposit plus account. This risk of loss is also mentioned in the fine print. I wouldn't recommend this especially if the risks are not clear. If you read the fine print, you may determine you are better off with a multicurrency account, where you can change your HK$ into any currency you like and earn interest in that currency. None of these were \"\"leveraged\"\" forex accounts where you can bet on tiny fluctuations in currencies. Tiny being like 1% or 2% moves. Generally you should beware anything offering 50:1 or more leverage as a way to possibly lose all of your money quickly. Since you mentioned being a US citizen, you should learn about IRS form TD F 90-22.1 (which must be filed yearly if you have over $10,000 in foreign accounts) and google a little about the \"\"foreign account tax compliance act\"\", which shows a shift of the government towards more strict oversight of foreign accounts.\"",
"title": ""
},
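A rough sketch of the zero-coupon-bond-plus-call-options construction mentioned above. The principal and yield are hypothetical, and real structured deposits add fees and the issuer's credit risk.

```python
# How a "capital-protected" deposit can be assembled: a zero-coupon bond that grows
# back to the principal, with the leftover cash spent on call options.
# All numbers are hypothetical.

principal = 10_000.0     # amount deposited (assumed)
zero_yield = 0.06        # 1-year zero-coupon yield (assumed)

bond_cost = principal / (1 + zero_yield)   # matures at the full principal in a year
option_budget = principal - bond_cost      # buys calls on the chosen stock or index

print(round(bond_cost, 2))      # 9433.96 -> protects the principal
print(round(option_budget, 2))  # 566.04  -> upside if the index rallies, lost "interest" if not
```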
{
"docid": "0848988ee6bf5d902b7090dcbc46de00",
"text": "The location does matter in the case where you introduce currency risk; by leaving you US savings in USD, you're basically working on the assumption that the USD will not lose value against the EUR - if it does and you live in the EUR-zone, you've just misplaced some of your capital. Of course that also works the other way around if the USD appreciates against the EUR, you gained some money.",
"title": ""
},
{
"docid": "28f5fd1be3e440ee825ed5e611e92156",
"text": "\"My visa would put the goods on the current monthly balance which is no-interest, but the cash part becomes part of the immediate interest-bearing sum. There is no option for getting cash without paying immediate interest, except perhaps for buying something then immediately returning it, but most merchants will do a refund to the card instead of cash in hand. This is in New Zealand, other regions may have different rules. Also, if I use the \"\"cheque\"\" or \"\"savings\"\" options at the eftpos machine instead of the \"\"credit\"\" option, then I can have cash immediately, withdrawn from my account, with no interest charge. However the account has to have sufficient balance to do so.\"",
"title": ""
},
{
"docid": "d4617c15d1388f86ec15ea8a6de965f5",
"text": "An offset account is simply a savings account which is linked to a loan account. Instead of earning interest in the savings account and thus having to pay tax on the interest earned, it reduces the amount of interest you have to pay on the loan. Example of a 100% offset account: Loan Amount $100,000, Offset Balance $20,000; you pay interest on the loan based on an effective $80,000 loan balance. Example of a 50% offset account: Loan Account $100,000, Offset Balance $20,000; you pay interest on the loan based on an effective $90,000 loan balance. The benefit of an offset account is that you can put all your income into it and use it to pay all your expenses. The more the funds in the offset account build up the less interest you will pay on your loan. You are much better off having the offset account linked to the larger loan because once your funds in the offset increase over $50,000 you will not receive any further benefit if it is linked to the smaller loan. So by offsetting the larger loan you will end up saving the most money. Also, something extra to think about, if you are paying interest only your loan balance will not change over the interest only period and your interest payments will get smaller and smaller as your offset account grows. On the other hand, if you are paying principal and interest then your loan balance will reduce much faster as your offset account increases. This is because with principal and interest you have a minimum amount to pay each month (made up of a portion of principal and a portion of interest). As the offset account grows you will be paying less interest, so a larger portion of the principal is paid off each month.",
"title": ""
},
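The offset arithmetic above is easy to check; the 5% interest rate in this sketch is an assumption, since the original example only gives the balances.

```python
# Offset-account interest from the example above (5% rate assumed for illustration).

loan = 100_000.0
offset = 20_000.0
rate = 0.05

full_offset_balance = loan - offset          # 100% offset: pay interest on 80,000
half_offset_balance = loan - 0.5 * offset    # 50% offset: pay interest on 90,000

print(full_offset_balance * rate)   # 4000.0 per year
print(half_offset_balance * rate)   # 4500.0 per year
print(loan * rate)                  # 5000.0 with no offset at all
```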
{
"docid": "924c06ef4114ce9a9f421443152b2e88",
"text": "\"As previously answered, the solution is margin. It works like this: You deposit e.g. 1'000 USD at your trading company. They give you a margin of e.g. 1:100, so you are allowed to trade with 100'000 USD. Let's say you buy 5'000 pieces of a stock at $20 USD (fully using your 100'000 limit), and the price changes to $20.50 . Your profit is 5000* $0.50 = $2'500. Fast money? If you are lucky. Let's say before the price went up to 20.50, it had a slight dip down to $19.80. Your loss was 5000* $0.2 = 1'000$. Wait! You had just 1000 to begin with: You'll find an email saying \"\"margin call\"\" or \"\"termination notice\"\": Your shares have been sold at $19.80 and you are out of business. The broker willingly gives you this credit, since he can be sure he won't loose a cent. Of course you pay interest for the money you are trading with, but it's only for minutes. So to answer your question: You don't care when you have \"\"your money\"\" back, the trading company will always be there to give you more as long as you have deposit left. (I thought no one should get margin explained without the warning why it is a horrible idea to full use the ridiculous high margins some broker offer. 1:10 might or might not be fine, but 1:100 is harakiri.)\"",
"title": ""
},
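The leverage arithmetic in the answer above can be verified in a few lines. This simplified sketch assumes the broker closes the position exactly when losses equal the deposit; real brokers use maintenance-margin levels and act earlier.

```python
# The 1:100 leverage example above, as a quick sanity check.

deposit = 1_000.0
leverage = 100
buying_power = deposit * leverage          # 100,000
entry = 20.0
shares = buying_power / entry              # 5,000 shares

def pnl(price):
    return shares * (price - entry)

print(round(pnl(20.50), 2))                # +2500.0 profit if the price rises first
print(round(pnl(19.80), 2))                # -1000.0: the whole deposit is gone
wipeout_price = entry - deposit / shares
print(wipeout_price)                       # 19.8 -> a 1% dip ends the trade at 1:100
```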
{
"docid": "889b617c42eb36f14a26d3441f38a8f3",
"text": "Have you tried calling a Forex broker and asking them if you can take delivery on currency? Their spreads are likely to be much lower than banks/ATMs.",
"title": ""
},
{
"docid": "fff62a931e555cafd9c3710b6eda3f33",
"text": "\"What about the escudo balance in my checking account in Cabo Verde? Are the escudos that I held for months or years, before eventually deciding to change to dollars, considered an investment? Don't know. You tell us. Investment defined as an activity taken to produce income. Did you put the money in the checking account with a full expectation of profits to be made from that? Or you only decided that it is an investment in retrospective, after the result is known, because it provides you more tax benefit? To me it sounds like you have two operating currencies and you're converting between them. Doesn't sound like an investment. Generally, from my experience, bank accounts are not considered investments (even savings accounts aren't). Once you deposit into a CD or bond or money market - you get a cash-equivalent which can be treated as an investment. But that's my personal understanding, if there are large amounts involved, I'd suggest talking to a US-licensed CPA/EA specializing on expats in your area. Pub 54 is really a reference for only the most trivial of the questions an expat may have. It doesn't even begin to describe the complexity of the monstrosity that is called \"\"The US Tax Code for Expats and Foreigners\"\".\"",
"title": ""
},
{
"docid": "cd25cc79df75f8dd9273d36f27a005e1",
"text": "Technically, yes, you can do this. It's a form of arbitrage: you're taking advantage of a small price difference between two markets. But is it worth the hassle of keeping on top of the overdraft and making sure you don't incur any accidental penalties or fees? Interest rates are super low, and floating £1000 or £2000, you're only going to generate £10-20 per year in a basic savings account.",
"title": ""
},
{
"docid": "e673718faaf37ffb0a789565e6e80b43",
"text": "You would need to check with Bank as it varies from Bank to Bank. You can break the FD's. Generally you don't loose the interest you have earned for 1 years, however the rate of interest will be reduced. i.e. if the rate was 7% for 1 year FD and 8% for 2 years FD, when you break after a year you will get only 7%. Generally this can happen in few hours but definitely in 2 days. You can get a Loan against FD's. Generally the rate of interest is 2% higher than FD rate. There is also initial processing fee, etc. Check with the Bank, it may take few days to set things up.",
"title": ""
},
{
"docid": "ca428c4ae49ef766ae9176b7c2efa90a",
"text": "I won't make any assumptions about the source of the money. Typically however, this can be an emotional time and the most important thing to do is not act rashly. If this is an amount of money you have never seen before, getting advice from a fee only financial adviser would be my second step. The first step is to breathe and promise yourself you will NOT make any decisions about this money in the short term. Better to have $100K in the bank earning nearly zero interest than to spend it in the wrong way. If you have to receive the money before you can meet with an adviser, then just open a new savings account at your bank (or credit union) and put the money in there. It will be safe and sound. Visit http://www.napfa.org/ and interview at least three advisers. With their guidance, think about what your goals are. Do you want to invest and grow the money? Pay off debt? Own a home or new large purchase? These are personal decisions, but the adviser might help you think of goals you didn't imagine Create a plan and execute it.",
"title": ""
},
{
"docid": "7395386482e12327b4aac3ac117887ab",
"text": "You can use Norbet's Gambit to convert between USD and CAD either direction. I have never personally done this, but I am planning to convert some CAD to USD soon, so I can invest in USD index funds without paying the typical 2% conversion fee. You must be okay with waiting a few days for the trades to settle, and okay with the fact that the exchange rate will almost certainly change before you sell the shares in the opposite currency. The spread on DLR.TO is about 0.01% - 0.02%, and you also have brokerage commissions and fees. If you use a discount broker the commissions and fees can be quite low. EG. To transfer $5000 USD to CAD using Questrade, you would deposit the USD into a Questrade account and then purchase ~500 units of DLR.U.TO , since it is an ETF there is no commission on the purchase. Then you request that they journal the shares over to DLR.TO and you sell them in CAD (will have about a $5 fee in CAD, and lose about $1 on the spread) and withdraw. The whole thing will have cost you $6 CAD, in lieu of ~$100 you would pay if you did a straightforward conversion with a 2% exchange fee. The difference in fees scales up as the amount you transfer increases. Someone has posted the chat log from when they requested their shares be journaled from DLR.TO to DLR.U.TO here. It looks like it was quite straightforward. Of course there is a time-cost, and the nuisance of signing up for an maintaining an account with a broker if you don't have one already. You can do it on non discount-brokers, but it will only be worth it to do it with a larger amount of money, since the commissions are larger. Note: If you have enough room to hold the CAD amount in your TFSA and will still have that much room at the end of the calendar year, I recommend doing the exchange in a TFSA account. The taxes are minimal unless the exchange rate changes drastically while your trades are settling (from capital gains or losses while waiting a few days for the trades to settle), but they are annoying to calculate, if you do it often. Warning if you do it in a TFSA be sure not to over contribute. Every time you deposit counts as a contribution and your withdrawals don't count against the limit until the next calendar year.",
"title": ""
},
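The cost comparison in the answer above, as a quick calculation. The commission and spread figures are the rough ones quoted in the answer (a $5 sell commission and ~0.02% spread at that particular discount broker), not universal numbers.

```python
# Converting USD 5,000 to CAD: direct conversion at a 2% retail FX margin
# versus Norbert's Gambit via DLR.U.TO / DLR.TO, using the answer's rough figures.

amount = 5_000.0

direct_fx_fee = amount * 0.02              # ~$100 at a typical 2% conversion margin

buy_commission = 0.0                       # ETF purchase is commission-free at that broker
sell_commission = 5.0                      # ~$5 CAD to sell DLR.TO
spread_cost = amount * 0.0002              # ~0.02% spread -> about $1
gambit_cost = buy_commission + sell_commission + spread_cost

print(round(direct_fx_fee, 2))                # 100.0
print(round(gambit_cost, 2))                  # 6.0
print(round(direct_fx_fee - gambit_cost, 2))  # ~94 saved; the gap widens with larger amounts
```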
{
"docid": "128d222913be065a4e270541bff04ba4",
"text": "Depends on the countries and their rules about moving money across the border, but in this case that appears entirely reasonable. Of course it would be a gamble unless you can predict the future values of currency better than most folks; there is no guarantee that the exchange rate will move in any particular direction. I have no idea whether any tax is due on profit from currency arbitrage.",
"title": ""
},
{
"docid": "8ee0cf90186bff11bd3da57fd10154e0",
"text": "\"As is so often the case, there is an asterisk next to that 2.5% interest offer. It leads you to a footnote which says: Savings Interest Rate Offer of 2.5% is available between January 1, 2015 and March 31, 2015 on all net new deposits made between January 1, 2015 and March 31, 2015 to a maximum of $250,000.00 per Account registration. You only earn 2.5% interest on deposits made during those three months. Also, on the full offer info page, it says: During the Offer Period, the Bank will calculate Additional Interest on eligible net new deposits and: All interest payments are ineligible for the purposes of calculating Additional Interest and will not be calculated for the purposes of determining eligible daily balances. In other words, any interest paid into an Applicable Account, including Additional Interest, will not be treated as a new deposit for subsequently calculating Additional Interest payments. I couldn't totally parse out all the details of the offer from their legalese, but what it sounds like is you will earn 2.5% interest on money that you deposit into the account during those three months. Any interest you accrue during that time will not count as a deposit in this sense, and so will not earn 2.5% compounded returns. The \"\"During the Offer Period\"\" qualification also makes it sound like this extra interest will only be paid during the three months (presumably at a 2.5% annualized rate, but I can't see where it actually says this). So essentially you are getting a one-time bonus for making deposits during a specific three-month period. The account doesn't really earn 2.5% interest in the normal sense. The long-term interest rate will be what it normally is for their savings accounts, which this page says is 1.05%.\"",
"title": ""
},
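A back-of-the-envelope estimate of what the promotional rate described above is actually worth. This sketch assumes simple interest, a 90-day offer window, and that the 2.5% promo rate applies in place of (not on top of) the ordinary 1.05% rate — the offer's legalese leaves some of that ambiguous.

```python
# Rough value of a 2.5% (annualized) promo rate paid only for ~3 months on new deposits.

new_deposit = 10_000.0
promo_rate = 0.025        # annualized, paid only during the offer period (assumed)
base_rate = 0.0105        # the account's normal savings rate
promo_days = 90           # assumed offer window

promo_interest = new_deposit * promo_rate * promo_days / 365
base_interest = new_deposit * base_rate * promo_days / 365

print(round(promo_interest, 2))                  # ~61.64 over the three months
print(round(promo_interest - base_interest, 2))  # ~35.75 more than the ordinary rate pays
```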
{
"docid": "a0a837bb59550e224a7b7b583c1f7dc1",
"text": "You shouldn't be charged interest, unless possibly because your purchases involve a currency conversion. I've made normal purchases that happened to involve changes in currency. The prices were quoted in US$ to me. On the tail end, though, the currency change was treated as a cash advance, which accrues interest immediately.",
"title": ""
},
{
"docid": "97d2304c009c366add62833f7a2fd500",
"text": "You can check the website for the company that manages the fund. For example, take the iShares Nasdaq Biotechnology ETF (IBB). iShares publishes the complete list of the fund's holdings on their website. This information isn't always easy to find or available, but it's a place to start. For some index funds, you should just be able to look up the index the fund is trying to match. This won't be perfect (take Vanguard's S&P 500 ETF (VOO); the fund holds 503 stocks, while the S&P 500 index is comprised of exactly 500), but once again, it's a place to start. A few more points to keep in mind. Remember that many ETF's, including equity ETF's, will hold a small portion of their assets in cash or cash-equivalent instruments to assist with rebalancing. For index funds, this may not be reflected in the index itself, and it may not show up in the list of holdings. VOO is an example of this. However, that information is usually available in the fund's prospectus or the fund's site. Also, I doubt that many stock ETF's, at least index funds, change their asset allocations all that frequently. The amounts may change slightly, but depending on the size of their holdings in a given stock, it's unlikely that the fund's manager would drop it entirely.",
"title": ""
}
] | fiqa |
42e1f36525dfe18292b8aa9169f0be9e | How can I invest in gold without taking physical possession? | [
{
"docid": "fd76bf49f90e365dbefa44a87fbeae98",
"text": "You could buy shares of an Exchange-Traded Fund (ETF) based on the price of gold, like GLD, IAU, or SGOL. You can invest in this fund through almost any brokerage firm, e.g. Fidelity, Etrade, Scotttrade, TD Ameritrade, Charles Schwab, ShareBuilder, etc. Keep in mind that you'll still have to pay a commission and fees when purchasing an ETF, but it will almost certainly be less than paying the markup or storage fees of buying the physical commodity directly. An ETF trades exactly like a stock, on an exchange, with a ticker symbol as noted above. The commission will apply the same as any stock trade, and the price will reflect some fraction of an ounce of gold, for the GLD, it started as .1oz, but fees have been applied over the years, so it's a bit less. You could also invest in PHYS, which is a closed-end mutual fund that allows investors to trade their shares for 400-ounce gold bars. However, because the fund is closed-end, it may trade at a significant premium or discount compared to the actual price of gold for supply and demand reasons. Also, keep in mind that investing in gold will never be the same as depositing your money in the bank. In the United States, money stored in a bank is FDIC-insured up to $250,000, and there are several banks or financial institutions that deposit money in multiple banks to double or triple the effective insurance limit (Fidelity has an account like this, for example). If you invest in gold and the price plunges, you're left with the fair market value of that gold, not your original deposit. Yes, you're hoping the price of your gold investment will increase to at least match inflation, but you're hoping, i.e. speculating, which isn't the same as depositing your money in an insured bank account. If you want to speculate and invest in something with the hope of outpacing inflation, you're likely better off investing in a low-cost index fund of inflation-protected securities (or the S&P500, over the long term) rather than gold. Just to be clear, I'm using the laymen's definition of a speculator, which is someone who engages in risky financial transactions in an attempt to profit from short or medium term fluctuations This is similar to the definition used in some markets, e.g. futures, but in many cases, economists and places like the CFTC define speculators as anyone who doesn't have a position in the underlying security. For example, a farmer selling corn futures is a hedger, while the trading firm purchasing the contracts is a speculator. The trading firm doesn't necessarily have to be actively trading the contract in the short-run; they merely have no position in the underlying commodity.",
"title": ""
},
{
"docid": "f87e691cf0d2cbc4dbe43f3e6a856f8b",
"text": "\"In addition to the possibility of buying gold ETFs or tradable certificates, there are also firms specializing in providing \"\"bank accounts\"\" of sorts which are denominated in units of weight of precious metal. While these usually charge some fees, they do meet your criteria of being able to buy and sell precious metals without needing to store them yourself; also, these fees are likely lower than similar storage arranged by yourself. Depending on the specifics, they may also make buying small amounts practical (buying small amounts of physical precious metals usually comes with a large mark-up over the spot price, sometimes to the tune of a 50% or so immediate loss if you buy and then immediately sell). Do note that, as pointed out by John Bensin, buying gold gets you an amount of metal, the local currency value of which will vary over time, sometimes wildly, so it is not the same thing as depositing the original amount of money in a bank account. Since 2006, the price of an ounce (about 31.1 grams) of gold has gone from under $500 US to over $1800 US to under $1100 US. Few other investment classes are anywhere near this volatile. If you are interested in this type of service, you might want to check out BitGold (not the same thing at all as Bitcoin) or GoldMoney. (I am not affiliated with either.) Make sure to do your research thoroughly as these may or may not be covered by the same regulations as regular banks, particularly if you choose a company based outside of or a storage location outside of your own country.\"",
"title": ""
}
] | [
{
"docid": "08cec8c13d6cc51c6f85f6b481c17691",
"text": "Owning physical gold (assuming coins): Owning gold through a fund:",
"title": ""
},
{
"docid": "e99561df31a588a4c5bc1887c090010d",
"text": "\"Invest in gold. Maybe will not \"\"make\"\" money but at least preserve the value.\"",
"title": ""
},
{
"docid": "bad177efac3dfd6b41b35d802005ab10",
"text": "Without getting into whether you should invest in Gold or Not ... 1.Where do I go and make this purchase. I would like to get the best possible price. If you are talking about Physical Gold then Banks, Leading Jewelry store in your city. Other options are buying Gold Mutual Fund or ETF from leading fund houses. 2.How do I assure myself of quality. Is there some certificate of quality/purity? This is mostly on trust. Generally Banks and leading Jewelry stores will not sell of inferior purity. There are certain branded stores that give you certificate of authenticity 3.When I do choose to sell this commodity, when and where will I get the best cost? If you are talking about selling physical gold, Jewelry store is the only place. Banks do not buy back the gold they sold you. Jewelry stores will buy back any gold, however note there is a buy price and sell price. So if you buy 10 g and sell it back immediately you will not get the same price. If you have purchased Mutual Funds / ETF you can sell in the market.",
"title": ""
},
{
"docid": "5affdedc6246219e3093477fd999126e",
"text": "Reddit doesn't have a ton of resources to offer you as you learn about where to invest, you want to start reading up on actual investing sites. You might start with Motley Fool, StockTwits, Seeking Alpha, Marketwatch, etc. I agree with hipster's take, if all countries are going to keep printing money and expanding their debts and craziness, gold has a bright future. Land, petroleum, commodities, and precious metals have an intrinsic worth that will still be there regardless of what currencies are doing, versus bonds which are merely promises to pay, which will be paid off in devalued money, or stocks which are just promises of future earnings. Think about spreading your risk in a few different places, one chunk here, one chunk there. Some people in the US now are big on dividend paying stocks in lieu of bonds which only pay a percent, which is negative return after inflation. Some people buy 'royalty trust' units, which throw off income from oil leases as dividends. You might want to park a portion in a different currency, but dollar funds aren't going to pay interest and Switzerland plans to keep devaluing its currency as people keep bidding the price up. I don't know if you are allowed to buy CEF, a bullion-backed fund out of Canada in your country, but that's one way to own gold & silver. But with the instability out there, you might prefer a bit of the real thing stashed in a safe place. Or if you have a bit of family land, maybe just be sure you can pay the taxes to keep it; or pursue any other way to own 'real stuff' that will still be worth something after all hell breaks loose.",
"title": ""
},
{
"docid": "51f09d8025fb86f43c74dfdb82941039",
"text": "\"Two points: One, yes -- the price of gold has been going up. [gold ETF chart here](http://www.google.com/finance?chdnp=1&chdd=1&chds=1&chdv=1&chvs=maximized&chdeh=0&chfdeh=0&chdet=1349467200000&chddm=495788&chls=IntervalBasedLine&q=NYSEARCA:IAU&ntsp=0&ei=PQhvUMjiAZGQ0QG5pQE) Two, the US has confiscated gold in the past. They did it in the 1930s. Owning antique gold coins is stupid because you're paying for gold + the supply / demand imbalance forced upon that particular coin by the coin collector market. If you want to have exposure to gold in your portfolio, the cheapest way is through an ETF. If you want to own physical gold because a) it's shiny or b) you fear impending economic collapse -- you're probably better off with bullion from a reputable dealer. You can buy it in grams or ounces -- you can also buy it in coins. Physical gold will generally cost you a little more than the spot price (think 5% - 10%? -- not really sure) but it can vary wildly. You might even be able to buy it for under the spot price if you find somebody that isn't very bright willing to sell. Buyer beware though -- there are lots of shady folks in the \"\"we buy gold\"\" market.\"",
"title": ""
},
{
"docid": "8c68426680872d7e198afdc2edd7f1fd",
"text": "Best way would probably be to go buy gold or some other liquid item and then just sell it back for cash. Or buy items from stores and return them. Most stores that don't give store credit will give cash or put it on your CC.",
"title": ""
},
{
"docid": "8cc918d7d360e8385f3ff962b9230f3a",
"text": "\"The difficulty with investing in mining and gold company stocks is that they are subject to the same market forces as any other stocks, although they may whether those forces better in a crisis than other stocks do because they are related to gold, which has always been a \"\"flight to safety\"\" move for investors. Some investors buy physical gold, although you don't have to take actual delivery of the metal itself. You can leave it with the broker-dealer you buy it from, much the way you don't have your broker send you stock certificates. That way, if you leave the gold with the broker-dealer (someone reputable, of course, like APMEX or Monex) then you can sell it quickly if you choose, just like when you want to sell a stock. If you take delivery of a security (share certificate) or commodity (gold, oil, etc.) then before you can sell it, you have to return it to broker, which takes time. The decision has much to do with your investing objectives and willingness to absorb risk. The reason people choose mutual funds is because their money gets spread around a basket of stocks, so if one company in the fund takes a hit it doesn't wipe out their entire investment. If you buy gold, you run the risk (low, in my opinion) of seeing big losses if, for some reason, gold prices plummet. You're \"\"all in\"\" on one thing, which can be risky. It's a judgment call on your part, but that's my two cents' worth.\"",
"title": ""
},
{
"docid": "25a38b50c7fa018f6d9168ae1325fc2f",
"text": "\"Since you are going to be experiencing a liquidity crisis that even owning physical gold wouldn't solve, may I suggest bitcoins? You will still be liquid and people anywhere will be able to trade it. This is different from precious metals, whereas even if you \"\"invested\"\" in gold you would waste considerable resources on storage, security and actually making it divisible for trade. You would be illiquid. Do note that the bitcoin currency is currently more volatile than a Greek government bond.\"",
"title": ""
},
{
"docid": "bffeaf61787f6b4ab0868de12b79540f",
"text": "\"I got started by reading the following two books: You could probably get by with just the first of those two. I haven't been a big fan of the \"\"for dummies\"\" series in the past, but I found both of these were quite good, particularly for people who have little understanding of investing. I also rather like the site, Canadian Couch Potato. That has a wealth of information on passive investing using mutual funds and ETFs. It's a good next step after reading one or the other of the books above. In your specific case, you are investing for the fairly short term and your tolerance for risk seems to be quite low. Gold is a high-risk investment, and in my opinion is ill-suited to your investment goals. I'd say you are looking at a money market account (very low risk, low return) such as e.g. the TD Canadian Money Market fund (TDB164). You may also want to take a look at e.g. the TD Canadian Bond Index (TDB909) which is only slightly higher risk. However, for someone just starting out and without a whack of knowledge, I rather like pointing people at the ING Direct Streetwise Funds. They offer three options, balancing risk vs reward. You can fill in their online fund selector and it'll point you in the right direction. You can pay less by buying individual stock and bond funds through your bank (following e.g. one of the Canadian Couch Potato's model portfolios), but ING Direct makes things nice and simple, and is a good option for people who don't care to spend a lot of time on this. Note that I am not a financial adviser, and I have only a limited understanding of your needs. You may want to consult one, though you'll want to be careful when doing so to avoid just talking to a salesperson. Also, note that I am biased toward passive index investing. Other people may recommend that you invest in gold or real estate or specific stocks. I think that's a bad idea and believe I have the science to back this up, but I may be wrong.\"",
"title": ""
},
{
"docid": "58449f8023032c3e88340a3a6ff677d6",
"text": "Redditors! Buy gold and demand that the financial institution who sold it deliver it. When they try to buy enough gold to cover their short the price will explode, free money. Disclaimer: you all have to do it or it won't work",
"title": ""
},
{
"docid": "0f09a405c8242b6ac42a50f5bbd2bd20",
"text": "\"Getting \"\"physical stocks\"\" will in most cases only be for the \"\"fun of it\"\". Most stocks nowadays are registered electronically and thus the physical stock will be of no value - it will just be a certificate saying that you own X amount of shares in company X; but this information is at the same time registered electronically. Stocks are not like bearer bonds, the certificate itself contains no value and is registered to each individual/entity. Because the paper itself is worthless, stealing it will not affect your amount of stock with the company. This is true for most stocks - there may exist companies who live in the 70s and do not keep track of their stock electronically, but I suspect it will only be very few (and most likely very small and illiquid companies).\"",
"title": ""
},
{
"docid": "250e59e43c4663a659e26028f92aa583",
"text": "I would track it using a regular asset account. The same way I would track the value of a house, a car, or any other personal asset. ETA: If you want automatic tracking, you could set it up as a stock portfolio holding shares of the GLD ETF. One share of GLD represents 1/10 ounce of gold. So, if you have 5 ounces of gold, you would set that up in Quicken as 50 shares of GLD.",
"title": ""
},
{
"docid": "e26623e08553c09696cac38fbef44909",
"text": "gold is incredibly volatile, I tried spreadbetting on it. During the month of its highest gain, month beginning to month end, I was betting it would go up - and I still managed to lose money. It went down so much, that my stop loss margin would kick in. Don't do things with gold in the short term its a very small and liquid market. My advice with gold, actually buy some physical gold as insurance.",
"title": ""
},
{
"docid": "726992b37e38e2c0e01dabc5117201c3",
"text": "\"GLD, IAU, and SGOL are three different ETF's that you can invest in if you want to invest in gold without physically owning gold. Purchasing an ETF is just like purchasing a stock, so you're fine on that front. Another alternative is to buy shares of companies that mine gold. An example of a single company is Randgold Resources (GOLD), and an ETF of mining companies is GDX. There are also some more complex alternatives like Exchange traded notes and futures contracts, but I wouldn't classify those for the \"\"regular person.\"\" Hope it helps!\"",
"title": ""
},
{
"docid": "f942f83af50827f1778ff784b6e6f832",
"text": "You can also use ICS<GO> on Bloomberg and choose the right category (many subcategories, probably you'll start on home builders or something like that). If that doesn't work, press F1 twice and ask it to an analyst. I'm sure they have this info.",
"title": ""
}
] | fiqa |
74c8f356341aad2e2729a19aaa290cba | What to do with south african currency free fall | [
{
"docid": "c4928107daac55e5455a1f8a674e89ce",
"text": "Use other currencies, if available. I'm not familiar with the banking system in South Africa; if they haven't placed any currency freezes or restrictions, you might want to do this sooner than later. In full crises, like Russian and Ukraine, once the crisis worsened, they started limiting purchases of foreign currencies. PayPal might allow currency swaps (it implies that it does at the bottom of this page); if not, I know Uphold does. Short the currency Brokerage in the US allow us to short the US Dollar. If banks allow you to short the ZAR, you can always use that for protection. I looked at the interest rates in the ZAR to see how the central bank is offsetting this currency crisis - WOW - I'd be running, not walking toward the nearest exit. A USA analogy during the late 70s/early 80s would be Paul Volcker holding interest rates at 2.5%, thinking that would contain 10% inflation. Bitcoin Comes with significant risks itself, but if you use it as a temporary medium of exchange for swaps - like Uphold or with some bitcoin exchanges like BTC-e - you can get other currencies by converting to bitcoin then swapping for other assets. Bitcoin's strength is remitting and swapping; holding on to it is high risk. Commodities I think these are higher risk right now as part of the ZAR's problem is that it's heavily reliant on commodities. I looked at your stock market to see how well it's done, and I also see that it's done poorly too and I think the commodity bloodbath has something to do with that. If you know of any commodity that can stay stable during uncertainty, like food that doesn't expire, you can at least buy without worrying about costs rising in the future. I always joke that if hyperinflation happened in the United States, everyone would wish they lived in Utah.",
"title": ""
},
{
"docid": "1724c351ce737d25fe94caa86ed5cfe1",
"text": "Transfer your savings to a dollar-based CD. Or even better, buy some gold on them.",
"title": ""
}
] | [
{
"docid": "647740b4ae71f5a6f13b36593cb3f041",
"text": "The default of the country will affect the country obligations and what's tied to it. If you have treasury bonds, for example - they'll get hit. If you have cash currency - it will get hit. If you're invested in the stock market, however, it may plunge, but will recover, and in the long run you won't get hit. If you're invested in foreign countries (through foreign currency or foreign stocks that you hold), then the default of your local government may have less affect there, if at all. What you should not, in my humble opinion, be doing is digging holes in the ground or probably not exchange all your cash for gold (although it is considered a safe anchor in case of monetary crisis, so may be worth considering some diversifying your portfolio with some gold). Splitting between banks might not make any difference at all because the value won't change, unless you think that one of the banks will fail (then just close the account there). The bottom line is that the key is diversifying, and you don't have to be a seasoned investor for that. I'm sure there are mutual funds in Greece, just pick several different funds (from several different companies) that provide diversified investment, and put your money there.",
"title": ""
},
{
"docid": "4d0c682843b282a6198ecc012f163746",
"text": "This. Why not convert the 50k euro to dollars and AUD, and invest in a basket of companies that trade on American/Australian exchanges instead. You could hold a bit of gold, but I would definitely not put everything into gold.",
"title": ""
},
{
"docid": "b7228ac919920c2b403555de25be31a4",
"text": "If you are really worried your best bet is to move all your cash from Sterling into a foreign currency that you think will be resilient should Brexit occur. I would avoid the Euro! You could look at the US Dollar perhaps, make sure you are aware of the charges for moving the money over and back again, as you will at some stage probably want to get back into Sterling once it settles down, if it does indeed fall. Based on my experience on the stock markets (I am not a currency trader) I would expect the pound to fall fairly sharply on a vote for Brexit and the Euro to do the same. Both would probably rebound quite quickly too as even if there is a Brexit vote it doesn't mean the UK Government will honour the outcome or take the steps quickly. ** I AM NOT A FINANCIAL ADVISOR AND HAVE NO QUALIFICATIONS AS SUCH **",
"title": ""
},
{
"docid": "3f7751528f0ca251b5245b7b1ff8442f",
"text": "\"I'm assuming the central bank of Italy will initiate quantitative easing (print money) to pay for this. This should devalue the euro. This may be \"\"rescuing\"\" these banks but it's fucking over everyone who is saving/trading with the euro.\"",
"title": ""
},
{
"docid": "0afc4be53a7d5723c723f6f6974db822",
"text": "\"The biggest risk you have when a country defaults on its currency is a major devaluation of the currency. Since the EURO is a fiat currency, like almost all developed nations, its \"\"promise\"\" comes from the expectation that its union and system will endure. The EURO is a basket of countries and as such could probably handle bailing out countries or possibly letting some default on their sovereign debt without killing the EURO itself. A similar reality happens in the United States with some level of regularity with state and municipal debt being considered riskier than Federal debt (it isn't uncommon for cities to default). The biggest reason the EURO will probably lose a LOT of value initially is if any nation defaults there isn't a track record as to how the EU member body will respond. Will some countries attempt to break out of the EU? If the member countries fracture then the EURO collapses rendering any and all EURO notes useless. It is that political stability that underlies the value of the EURO. If you are seriously concerned about the risk of a falling EURO and its long term stability then you'd do best buying a hedge currency or devising a basket of hedge currencies to diversify risk. Many will recommend you buy Gold or other precious metals, but I think the idea is silly at best. It is not only hard to buy precious metals at a \"\"fair\"\" value it is even harder to sell them at a fair value. Whatever currency you hold needs to be able to be used in transactions with ease. Doesn't do you any good having $20K in gold coins and no one willing to buy them (as the seller at the store will usually want currency and not gold coins). If you want to go the easy route you can follow the same line of reasoning Central Banks do. Buy USD and hold it. It is probably the world's safest currency to hold over a long period of time. Current US policy is inflationary so that won't help you gain value, but that depends on how the EU responds to a sovereign debt crisis; if one matures.\"",
"title": ""
},
{
"docid": "cef4fa3efefe86f85f703ff4e020704f",
"text": "\"If there is a very sudden and large collapse in the exchange rate then because algorithmic trades will operate very fast it is possible to determine “x” immediately after the change in exchange rate. All you need to know is the order book. You also need to assume that the algorithmic bot operates faster than all other market participants so that the order book doesn’t change except for those trades executed by the bot. The temporarily cheaper price in the weakened currency market will rise and the temporarily dearer price in the strengthened currency market will fall until the prices are related by the new exchange rate. This price is determined by the condition that the total volume of buys in the cheaper market is equal to the total volume of sells in the dearer market. Suppose initially gold is worth $1200 on NYSE or £720 on LSE. Then suppose the exchange rate falls from r=0.6 £/$ to s=0.4 £/$. To illustrate the answer lets assume that before the currency collapse the order book for gold on the LSE and NYSE looks like: GOLD-NYSE Sell (100 @ $1310) Sell (100 @ $1300) <——— Sell (100 @ $1280) Sell (200 @ $1260) Sell (300 @ $1220) Sell (100 @ $1200) ————————— buy (100 @ $1190) buy (100 @ $1180) GOLD-LSE Sell (100 @ £750) Sell (100 @ £740) ————————— buy (200 @ £720) buy (200 @ £700) buy (100 @ £600) buy (100 @ £550) buy (100 @ £530) buy (100 @ £520) <——— buy (100 @ £500) From this hypothetical example, the automatic traders will buy up the NYSE gold and sell the LSE gold in equal volume until the price ratio \"\"s\"\" is attained. By summing up the sell volumes on the NYSE and the buy volumes on the LSE, we see that the conditions are met when the price is $1300 and £520. Note 800 units were bought and sold. So “x” depends on the available orders in the order book. Immediately after this, however, the price of the asset will be subject to the new changes of preference by the market participants. However, the price calculated above must be the initial price, since otherwise an arbitrage opportunity would exist.\"",
"title": ""
},
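The order-book arithmetic in the answer above can be reproduced directly. The books and the new exchange rate s = 0.4 £/$ are taken from the example; the simple equality check below works for these particular books, while a general matcher would walk both books trade by trade.

```python
# After the rate falls to s = 0.4 £/$, an arbitrage bot buys NYSE asks and sells LSE
# bids until the two gold prices are related by s.  Books are from the example above.

s = 0.4  # new exchange rate, £ per $

nyse_asks = [(1200, 100), (1220, 300), (1260, 200),
             (1280, 100), (1300, 100), (1310, 100)]   # (price $, volume)
lse_bids = [(720, 200), (700, 200), (600, 100), (550, 100),
            (530, 100), (520, 100), (500, 100)]       # (price £, volume)

def clearing_price(asks, bids, rate):
    for usd_price, _ in asks:                  # try each ask level as the final price
        gbp_price = usd_price * rate
        bought = sum(v for p, v in asks if p <= usd_price)
        sold = sum(v for p, v in bids if p >= gbp_price)
        if bought == sold:
            return usd_price, gbp_price, bought
    return None

print(clearing_price(nyse_asks, lse_bids, s))  # (1300, 520.0, 800) as in the answer
```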
{
"docid": "998e630126f66709e84a3de6ba91fdda",
"text": "If any Euro countries leave the Euro, they will have to impose capital flow restrictions - it's a given, to avoid a complete implosion of the entire system. The idea of retroactive controls is very interesting - this may be one of the first times in a currency collapse that such a system would be feasible (i.e., both the country being fled from and the countries being fled to are under common control). No doubt they would try such a thing if they thought they could get away with it.",
"title": ""
},
{
"docid": "041245ddb1f9ce5576e6d63afde087e8",
"text": "\"The danger to your savings depends on how much sovereign debt your bank is holding. If the government defaults then the bank - if it is holding a lot of sovereign debt - could be short funds and not able to meet its obligations. I believe default is the best option for the Euro long term but it will be painful in the short term. Yes, historically governments have shut down banks to prevent people from withdrawing their money in times of crisis. See Argentina circa 2001 or US during Great Depression. The government prevented people from withdrawing their money and people could do nothing while their money rapidly lost value. (See the emergency banking act where Title I, Section 4 authorizes the US president:\"\"To make it illegal for a bank to do business during a national emergency (per section 2) without the approval of the President.\"\" FDR declared a banking holiday four days before the act was approved by Congress. This documentary on the crisis in Argentina follows a woman as she tries to withdraw her savings from her bank but the government has prevented her from withdrawing her money.) If the printing press is chosen to avoid default then this will allow banks and governments to meet their obligations. This, however, comes at the cost of a seriously debased euro (i.e. higher prices). The euro could then soon become a hot potato as everyone tries to get rid of them before the ECB prints more. The US dollar could meet the same fate. What can you do to avert these risks? Yes, you could exchange into another currency. Unfortunately the printing presses of most of the major central banks today are in overdrive. This may preserve your savings temporarily. I would purchase some gold or silver coins and keep them in your possession. This isolates you from the banking system and gold and silver have value anywhere you go. The coins are also portable in case things really start to get interesting. Attempt to purchase the coins with cash so there is no record of the purchase. This may not be possible.\"",
"title": ""
},
{
"docid": "2a4101d422ea1202cbc43ffd2a8abbf0",
"text": "Are you going to South Africa or from? (Looking on your profile for this info.) If you're going to South Africa, you could do worse than to buy five or six one-ounce krugerrands. Maybe wait until next year to buy a few; you may get a slightly better deal. Not only is it gold, it's minted by that country, so it's easier to liquidate should you need to. Plus, they go for a smaller premium in the US than some other forms of gold. As for the rest of the $100k, I don't know ... either park it in CD ladders or put it in something that benefits if the economy gets worse. (Cheery, ain't I? ;) )",
"title": ""
},
{
"docid": "991a3c3f2d868d20ef41153c719b87fe",
"text": "Recessions are prolonged by less spending and wages being 'sticky' downward. My currency, the 'wallark', allows a company to pay its workers in it's own scrip instead of dollars which they can use to purchase its goods, thus reducing it's labor costs and allowing prices to fall faster. While scrip in the past purposely devalued to discourage hoarding, the wallark hold's it's purchasing power. The difference is, a worker can only use it to purchase their company's good *on the date the wallark was earned or before*. In other words, each good is labeled with a date it was put on display for sale, if a worker earns scrip on that same day, they can trade the scrip for that good, or any good that was on the shelves on that day it was earned *or before that date*. Any good that comes onto market after the date that particular wallark was earned cannot be purchased with that wallark(which is dated), and must be purchased either with dollars or with wallark that was earned on that good's date or after. This incentivizes spending without creating inflation, and allows costs to fall which helps businesses during rough economic times. Please feel free to read it, and comment on my site! Any feedback is welcome!",
"title": ""
},
{
"docid": "b599fa547d14e731b3f8685f44242823",
"text": "of course the value will be non zero however it would be very small, as all the countries would not leave at once... if its a piig holding the bag it would fall precipitously, if its a AAA (non france) it would go to 1.5 Its very path dependent on whom leaves when",
"title": ""
},
{
"docid": "6bd58cfcf59df1678bf6560942b4d86c",
"text": "No, there is nothing on the sidelines. Currency is an investment. There is no such thing as uninvested wealth. If you had a million in USD at the beginning of 2017, you would currently be out about sixty grand. There is no neutral way to store wealth.",
"title": ""
},
{
"docid": "f223389ac294be1c02dff830429e81dd",
"text": "First question: Any, probably all, of the above. Second question: The risk is that the currency will become worth less, or even worthless. Most will resort to the printing press (inflation) which will tank the currency's purchasing power. A different currency will have the same problem, but possibly less so than yours. Real estate is a good deal. So are eggs, if you were to ask a Weimar Germany farmer. People will always need food and shelter.",
"title": ""
},
{
"docid": "d2323f60dcf6807c1151a04b0999014a",
"text": "\"You can place the orders like you suggested. This would be useful in a market that is moving quickly where you want to be reasonably sure of execution but don't want the full exposure of a market order. This won't jump your spot in the queue though in the sense that you won't get ahead of other orders that are \"\"ready\"\" for execution just because you have crossed the spread aggressively.\"",
"title": ""
},
{
"docid": "e2cb477959dec39a9ffffc1413e15915",
"text": "The monetary supply isn't a fixed number like in the old days of the [gold standard](https://en.wikipedia.org/wiki/Gold_standard) is part of the answer. Also, the actual spending of that one thousand dollars -- where the money is spent and on what -- does make a significant difference on how the overall economy is effected: People spending it on food, transportation and housing isn't going to drive up the costs of a Porshe 911.",
"title": ""
}
] | fiqa |
9f64749b324b7f94b11268656b59c5ae | Pros/cons for buying gold vs. saving money in an interest-based account? | [
{
"docid": "799a9ee5e202bf0686256b32b8c4a361",
"text": "\"As Michael McGowan says, just because gold has gone up lots recently does not mean it will continue to go up by the same amount. This plot: shows that if your father had bought $20,000 in gold 30 years ago, then 10 years ago he would have slightly less than $20,000 to show for it. Compare that with the bubble in real estate in the US: Update: I was curious about JoeTaxpayer's question: how do US house prices track against US taxpayer's ability to borrow? To try to answer this, I used the house price data from here, the 30 year fixed mortgages here and the US salary information from here. To calculate the \"\"ability to borrow\"\" I took the US hourly salary information, multiplied by 2000/12 to get a monthly salary. I (completely arbitrarily) assumed that 25 per cent of the monthly salary would be used on mortgage payments. I then used Excel's \"\"PV\"\" (Present Value) function to calculate the present value of the thirty year fixed rate mortgage. The resulting graph is below. The correlation coefficient between the two plots is 0.93. There are so many caveats on what I've done in ~15 minutes, I don't want to list them... but it certainly \"\"gives one furiously to think\"\" !! Update 2: OK, so even just salary information correlates very well with the house price increases. And looking at the differences, we can see that perhaps there was a spike or bubble in house prices over and above what might be expected from salary-only or ability-to-borrow.\"",
"title": ""
},
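The "ability to borrow" series in the passage above is just the present value of a 30-year stream of monthly payments, the same thing Excel's PV function computes. A hedged sketch; the salary figure below is a placeholder, not data from the passage.

```python
# Present value of a fixed monthly payment over a 30-year mortgage,
# equivalent to Excel's PV(rate/12, years*12, -payment).
def affordable_mortgage(monthly_payment, annual_rate, years=30):
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

monthly_salary = 4000                    # hypothetical figure
payment = 0.25 * monthly_salary          # 25% of a month's pay, as in the passage
print(round(affordable_mortgage(payment, 0.05)))  # mortgage that payment supports at 5%
```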
{
"docid": "99c8e924a6429b9e56cd3a540c31c768",
"text": "\"There's too much here for one question. So no answer can possibly be comprehensive. I think little of gold for the long term. I go to MoneyChimp and see what inflation did from 1974 till now. $1 to $4.74. So $200 inflates to $950 or so. Gold bested that, but hardly stayed ahead in a real way. The stock market blew that number away. And buying gold anytime around the 1980 runup would still leave you behind inflation. As far as housing goes, I have a theory. Take median income, 25% of a month's pay each month. Input it as the payment at the going 30yr fixed rate mortgage. Income rises a bit faster than inflation over time, so that line is nicely curved slightly upward (give or take) but as interest rates vary, that same payment buys you far more or less mortgage. When you graph this, you find the bubble in User210's graph almost non-existent. At 12% (the rate in '85 or so) $1000/mo buys you $97K in mortgage, but at 5%, $186K. So over the 20 years from '85 to 2005, there's a gain created simply by the fact that money was cheaper. No mania, no bubble (not at the median, anyway) just the interest rate effect. Over the same period, inflation totaled 87%. So the same guy just keeping up with inflation in his pay could then afford a house that was 3.5X the price 20 years prior. I'm no rocket scientist, but I see few articles ever discussing housing from this angle. To close my post here, consider that homes have grown in size, 1.5%/yr on average. So the median new home quoted is actually 1/3 greater in size in 2005 than in '85. These factors all need to be normalized out of that crazy Schiller-type* graph. In the end, I believe the median home will always tightly correlate to the \"\"one week income as payment.\"\" *I refer here to the work of professor Robert Schiller partner of the Case-Schiller index of home prices which bears his name.\"",
"title": ""
},
{
"docid": "493570ee85e4ae71f109ba9f05e40ae9",
"text": "Just because gold performed that well in the past does not mean it will perform that well in the future. I'm not saying you should or should not buy gold, but the mere fact that it went up a lot recently is not sufficient reason to buy it. Also note that on the house, an investment that accrues continuous interest for 30 years at an annual rate of about 7.7% will multiply by a factor of 10 in 30 years. That rate is pretty high by today's standards, but it might have been more feasible in the past (I don't know historical interest rates very well). Yet again note that the fact that houses went up a lot over the last 30 years does not mean they will continue to do so.",
"title": ""
},
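The "factor of 10 in 30 years at about 7.7%" figure above follows from continuous compounding, e^(rt); here is a quick sanity check.

```python
import math

# Annual rate needed to grow 10x in 30 years with continuous compounding.
rate = math.log(10) / 30
print(rate)                  # ~0.0768, i.e. roughly 7.7% per year
print(math.exp(0.077 * 30))  # ~10.07, confirming the factor of 10
```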
{
"docid": "c3c3f7d8b8ea34d9e2946cdc47094ef5",
"text": "What you are seeing is the effects of inflation. As money becomes less valuable it takes more of it to buy physical things, be they commodities, shares in a company's stock, and peoples time (salaries). Just about the only thing that doesn't track inflation to some degree is cash itself or money in an account since that is itself what is being devalued. So the point of all this is, buying anything (a house, gold, stocks) that doesn't depreciate (a car) is something of a hedge against inflation. However, don't be tricked (as many are) into thinking that house just made you a tidy sum just because it went up in value so much over x years. Remember 1) All the other houses and things you'd spend the money on are a lot more expensive now too; and 2) You put a lot more money into a house than the mortgage payment (taxes, insurance, maintenance, etc.) I'm with the others though. Don't get caught up in the gold bubble. Doing so now is just speculation and has a lot of risk associated with it.",
"title": ""
}
] | [
{
"docid": "522126a55f542900e3ee89f63cfd3395",
"text": "\"Given the current low interest rates - let's assume 4% - this might be a viable option for a lot of people. Let's also assume that your actual interest rate after figuring in tax considerations ends up at around 3%. I think I am being pretty fair with the numbers. Now every dollar that you save each month based on the savings and invest with a higher net return of greater than 3% will in fact be \"\"free money\"\". You are basically betting on your ability to invest over the 3%. Even if using a conservative historical rate of return on the market you should net far better than 3%. This money would be significant after 10 years. Let's say you earn an average of 8% on your money over the 10 years. Well you would have an extra $77K by doing interest only if you were paying on average of $500 a month towards interest on a conventional loan. That is a pretty average house in the US. Who doesn't want $77K (more than you would have compared to just principal). So after 10 years you have the same amount in principal plus $77k given that you take all of the saved money and invest it at the constraints above. I would suggest that people take interest only if they are willing to diligently put away the money as they had a conventional loan. Another scenario would be a wealthier home owner (that may be able to pay off house at any time) to reap the tax breaks and cheap money to invest. Pros: Cons: Sidenote: If people ask how viable is this. Well I have done this for 8 years. I have earned an extra 110K. I have smaller than $500 I put away each month since my house is about 30% owned but have earned almost 14% on average over the last 8 years. My money gets put into an e-trade account automatically each month from there I funnel it into different funds (diversified by sector and region). I literally spend a few minutes a month on this and I truly act like the money isn't there. What is also nice is that the bank will account for about half of this as being a liquid asset when I have to renegotiate another loan.\"",
"title": ""
},
{
"docid": "701044a51a7f47011eb598f92c1ca560",
"text": "Gold's valuation is so stratospheric right now that I wonder if negative numbers (as in, you should short it) are acceptable in the short run. In the long run I'd say the answer is zero. The problem with gold is that its only major fundamental value is for making jewelry and the vast majority is just being hoarded in ways that can only be justified by the Greater Fool Theory. In the long run gold shouldn't return more than inflation because a pile of gold creates no new wealth like the capital that stocks are a claim on and doesn't allow others to create new wealth like money lent via bonds. It's also not an important and increasingly scarce resource for wealth creation in the global economy like oil and other more useful commodities are. I've halfway-thought about taking a short position in gold, though I haven't taken any position, short or long, in gold for the following reasons: Straight up short-selling of a gold ETF is too risky for me, given its potential for unlimited losses. Some other short strategy like an inverse ETF or put options is also risky, though less so, and ties up a lot of capital. While I strongly believe such an investment would be profitable, I think the things that will likely rise when the flight-to-safety is over and gold comes back to Earth (mainly stocks, especially in the more beaten-down sectors of the economy) will be equally profitable with less risk than taking one of these positions in gold.",
"title": ""
},
{
"docid": "8ac2209c513ee6c964e7277b426315ba",
"text": "Gold is a commodity. It has a tracked price and can be bought and sold as such. In its physical form it represents something real of signifigant value that can be traded for currency or barted. A single pound of gold is worth about 27000 dollars. It is very valuable and it is easily transported as opposed to a car which loses value while you transport it. There are other metals that also hold value (Platinum, Silver, Copper, etc) as well as other commodities. Platinum has a higher Value to weight ratio than gold but there is less of a global quantity and the demand is not as high. A gold mine is an investement where you hope to take out more in gold than it cost to get it out. Just like any other business. High gold prices simply lower your break even point. TIPS protects you from inflation but does not protect you from devaluation. It also only pays the inflation rate recoginized by the Treasury. There are experts who believe that the fed has understated inflation. If these are correct then TIPS is not protecting its investors from inflation as promised. You can also think of treasury bonds as an investment in your government. Your return will be effectively determined by how they run their business of governing. If you believe that the government is doing the right things to help promote the economy then investing in their bonds will help them to be able to continue to do so. And if consumers buy the bonds then the treasury does not have to buy any more of its own.",
"title": ""
},
{
"docid": "f27db9be9f670568435ea70473cb7ef7",
"text": "Well, people have been saying interest rates have to go up for years now and have been wrong so far. Also there is an opportunity cost in waiting to buy - if another five years passes with nothing happen, you earn 0% on checking accounts, but at least earn 1.65% per year or so on your 10y bond.",
"title": ""
},
{
"docid": "e99561df31a588a4c5bc1887c090010d",
"text": "\"Invest in gold. Maybe will not \"\"make\"\" money but at least preserve the value.\"",
"title": ""
},
{
"docid": "029604fb1bc4681115e58f3ce904a708",
"text": "Gold's value starts with the fact that its supply is steady and by nature it's durable. In other words, the amount of gold traded each year (The Supply and Demand) is small relative to the existing total stock. This acting as a bit of a throttle on its value, as does the high cost of mining. Mines will have yields that control whether it's profitable to run them. A mine may have a $600/oz production cost, in which case it's clear they should run full speed now with gold at $1200, but if it were below $650 or so, it may not be worth it. It also has a history that goes back millennia, it's valued because it always was. John Maynard Keynes referred to gold as an archaic relic and I tend to agree. You are right, the topic is controversial. For short periods, gold will provide a decent hedge, but no better than other financial instruments. We are now in an odd time, where the stock market is generally flat to where it was 10 years ago, and both cash or most commodities were a better choice. Look at sufficiently long periods of time, and gold fails. In my history, I graduated college in 1984, and in the summer of 82 played in the commodities market. Gold peaked at $850 or so. Now it's $1200. 50% over 30 years is hardly a storehouse of value now, is it? Yet, I recall Aug 25, 1987 when the Dow peaked at 2750. No, I didn't call the top. But I did talk to a friend advising that I ignore the short term, at 25 with little invested, I only concerned myself with long term plans. The Dow crashed from there, but even today just over 18,000 the return has averaged 7.07% plus dividends. A lengthy tangent, but important to understand. A gold fan will be able to produce his own observation, citing that some percent of one's holding in gold, adjusted to maintain a balanced allocation would create more positive returns than I claim. For a large enough portfolio that's otherwise well diversified, this may be true, just not something I choose to invest in. Last - if you wish to buy gold, avoid the hard metal. GLD trades as 1/10 oz of gold and has a tiny commission as it trades like a stock. The buy/sell on a 1oz gold piece will cost you 4-6%. That's no way to invest. Update - 29 years after that lunch in 1987, the Dow was at 18448, a return of 6.78% CAGR plus dividends. Another 6 years since this question was asked and Gold hasn't moved, $1175, and 6 years' worth of fees, 2.4% if you buy the GLD ETF. From the '82 high of $850 to now (34 years), the return has a CAGR of .96%/yr or .56% after fees. To be fair, I picked a relative high, that $850. But I did the same choosing the pre-crash 2750 high on the Dow.",
"title": ""
},
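The averaged returns quoted above (the Dow from 2,750 to 18,448 over 29 years; gold from $850 to $1,175 over 34 years) are compound annual growth rates. A small sketch using the passage's own figures.

```python
# Compound annual growth rate (CAGR) behind the figures quoted above.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(cagr(2750, 18448, 29))  # Dow: ~0.0678 -> about 6.8%/yr, plus dividends
print(cagr(850, 1175, 34))    # gold: ~0.0096 -> about 1%/yr, before fees
```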
{
"docid": "347d5c851c80fcd03aeb5473b2a53959",
"text": "\"IRAs have huge tax-advantages. You'll pay taxes when you liquidate gold and silver. While volatile, \"\"the stock market has never produced a loss during any rolling 15-year period (1926-2009)\"\" [PDF]. This is perhaps the most convincing article for retirement accounts over at I Will Teach You To Be Rich. An IRA is just a container for your money and you may invest the money however you like (cash, stocks, funds, etc). A typical investment is the purchase of stocks, bonds, and/or funds containing either or both. Stocks may pay dividends and bonds pay yields. Transactions of these things trigger capital gains (or losses). This happens if you sell or if the fund manager sells pieces of the fund to buy something in its place (i.e. transactions happen without your decision and high turnover can result in huge capital gains). In a taxable account you will pay taxes on dividends and capital gains. In an IRA you don't ever pay taxes on dividends and capital gains. Over the life of the IRA (30+ years) this can be a huge ton of savings. A traditional IRA is funded with pre-tax money and you only pay tax on the withdrawal. Therefore you get more money upfront to invest and more money compounds into greater amounts faster. A Roth IRA you fund with after-tax dollars, but your withdrawals are tax free. Traditional versus Roth comparison calculator. Here are a bunch more IRA and 401k calculators. Take a look at the IRA tax savings for various amounts compared to the same money in a taxable account. Compounding over time will make you rich and there's your reason for starting young. Increases in the value of gold and silver will never touch compounded gains. So tax savings are a huge reason to stash your money in an IRA. You trade liquidity (having to wait until age 59.5) for a heck of a lot more money. Though isn't it nice to be assured that you will have money when you retire? If you aren't going to earn it then, you'll have to earn it now. If you are going to earn it now, you may as well put it in a place that earns you even more. A traditional IRA has penalties for withdrawing before retirement age. With a Roth you can withdraw the principal at anytime without penalty as long as the account has been open 5 years. A traditional IRA requires you take out a certain amount once you reach retirement. A Roth doesn't, which means you can leave money in the account to grow even more. A Roth can be passed on to a spouse after death, and after the spouse's death onto another beneficiary. more on IRA Required Minimum Distributions.\"",
"title": ""
},
{
"docid": "5ec249d15cdf8b304ba16f6bff83fc77",
"text": "\"Nobody can give you a definitive answer. To those who suggest it's expensive at these prices, [I'd point to this chart](http://treo.typepad.com/.a/6a0120a6002285970c014e8c39f2c3970d-850wi) showing the price of gold versus the global money supply over the past decade or so. It's not conclusive, but it's evidence that gold tracks the money supply relatively well. There might be a bit of risk premium baked in that it would shed in a stable economy, but that premium is unknowable. It's also (imo) probably worth the protection it provides. In an inflationary scenario (Euro devaluation) gold will hold its buying power very well. It also fares well in a deflationary environment, just not quite as well as holding physical currency. Note that in such an environment, bank defaults are a big danger: that 50k might only be safe under your mattress (rather than in a fractionally reserved bank account). If you're buying gold, certificates aren't exactly a bad option, although there still exists the counterparty risk of the agent storing your gold, as well as political risk of the nation where it's being held. Buying physical bullion ameliorates these risks, but then you face the problem of protecting it. Safe deposit boxes, a home safe, or burying it in your backyard are all possible options. The merits of each, I'll leave as an exerice to the reader. Foreign currency might be a little bit better than the Euro, but as we've seen in the past year or so, the Swiss Franc has been devalued to match the Euro in the proverbial \"\"race to the bottom\"\". It's probably not much better than another fiat currency. I don't know anything about Norway. Edit: Depending on your time horizon, my personal opinion would be to put no less than 5-10% of your savings in a hard store of value (e.g. gold, silver, platinum). Depending on your risk appetite, you could probably stand to put a lot more into it, especially given the Eurozone turmoil. Of course, as with anything else, your mileage may vary, past performance does not guarantee future results, this is not investment advice, seek professional medical help if you experience an erection lasting longer than four hours.\"",
"title": ""
},
{
"docid": "edf4fba292caeb83937280fef7ca1934",
"text": "\"The general argument put forward by gold lovers isn't that you get the same gold per dollar (or dollars per ounce of gold), but that you get the same consumable product per ounce of gold. In other words the claim is that the inflation-adjusted price of gold is more-or-less constant. See zerohedge.com link for a chart of gold in 2010 GBP all the way from 1265. (\"\"In 2010 GBP\"\" means its an inflation adjusted chart.) As you can see there is plenty of fluctuation in there, but it just so happens that gold is worth about the same now as it was in 1265. See caseyresearch.com link for a series of anecdotes of the buying power of gold and silver going back some 3000 years. What this means to you: If you think the stock market is volatile and want to de-risk your holdings for the next 2 years, gold is just as risky If you want to invest some wealth such that it will be worth more (in real terms) when you take it out in 40 years time than today, the stock market has historically given better returns than gold If you want to put money aside, and it to not lose value, for a few hundred years, then gold might be a sensible place to store your wealth (as per comment from @Michael Kjörling) It might be possible to use gold as a partial hedge against the stock market, as the two supposedly have very low correlation\"",
"title": ""
},
{
"docid": "dd8e5ca4888ff871a3b76ce481bb3bd5",
"text": "\"First of all, bear in mind that there's no such thing as a risk-free investment. If you keep your money in the bank, you'll struggle to get a return that keeps up with inflation. The same is true for other \"\"safe\"\" investments like government bonds. Gold and silver are essentially completely speculative investments; over the years their price tends to vary quite wildly, so unless you really understand how those markets work you should steer well clear. They're certainly not low risk. Repeatedly buying a property to sell in a couple of years time is almost certainly a bad idea; you'll end up paying substantial transaction fees each time that would wipe out a lot of the possible profit, and of course there's always the risk that prices would go down not up. Buying a property to keep - and preferably live in - might be a decent option once you have a good deposit saved up. It's very hard to say where prices will go in future, on the one hand London prices are very high by historical standards, but on the other hand supply is likely to remain severely constrained for years to come. I tend to think of a house as something that I need one of for the rest of my life, and so in one sense not owning a house to live in is a gamble that house prices and rents won't go up substantially. If you own a house, you're insulated from changes in rent etc and even if prices crash at least you still have somewhere to live. However that argument only works really well if you expect to keep living in the same area under most circumstances - house prices might crash in your area but not elsewhere.\"",
"title": ""
},
{
"docid": "5e202bfb617d559d8a4363c6f6ce12c3",
"text": "\"I don't think there is a recession proof investment.Every investment is bound to their ups and downs. If you buy land, a change in law can change the whole situation it may become worthless, same applies for home as well. Gold - dependent on world economy. Stock - dependent on world economy Best way is to stay ever vigilant of world around you and keep shuffling from one investment to another balance out your portfolio. \"\"The most valuable commodity I know of is information.\"\" - Wall Street -movie\"",
"title": ""
},
{
"docid": "08a7e80ef513aa042d2107370f60bbf5",
"text": "\"I pretty much only use my checking. What's the downside? Checking accounts don't pay as much interest as savings account. Oh, but wait, interest rates have been zero for nearly 10 years. So there is very little benefit to keeping money in my savings account. In fact, I had two savings accounts, and Well Fargo closed one of them because I hadn't used it in years. Downsides of savings accounts: You are limited to 5 transfers per month into or out of them. No such limit with checking. Upsides of savings accounts: Well, maybe you will be less likely to spend the money. Why don't you just have your pay go into your checking and then just transfer \"\"extra money\"\" out of it, rather than the reverse? If you want to put money \"\"away\"\" so that you save it, assuming you're in the U.S.A., open a traditional IRA. Max deposit of $5500/year, and it reduces your taxable income. It's not a bad idea to have a separate account that you don't touch except for in an emergency. But, for me, the direction of flow is from work, to checking, to savings.\"",
"title": ""
},
{
"docid": "b117ffc4be3b40ef6e57f576be646797",
"text": "\"It may seem like you cannot live without this trip, but borrowing money is a bad habit to get into. Especially for things like vacations. Your best bet is to save up money and only ever pay cash for things that will decrease in value. Please note that while it is a bad idea to borrow money for things that decrease in value, the \"\"opposite\"\" is not necessarily true. That is, it is not necessarily a good idea to borrow money for things that will increase in value; or, will earn you income. False assumptions can cloud our judgement. The housing bubble and the stock market crash of 1929 are two examples, but there are many others.\"",
"title": ""
},
{
"docid": "ab6cc8d9826ecf75e8add750017c25d1",
"text": "\"Don't put all your eggs in one basket and don't assume that you know more than the market does. The probability of gold prices rising again in the near future is already \"\"priced in\"\" as it were. Unless you are privy to some reliable information that no one else knows (given that you are asking here, I'm guessing not), stay away. Invest in a globally diversified low cost portfolio of primarily stocks and bonds and don't try to predict the future. Also I would kill for a 4.5% interest rate on my savings. In the USA, 1% is on the high side of what you can get right now. What is inflation like over there?\"",
"title": ""
},
{
"docid": "51f09d8025fb86f43c74dfdb82941039",
"text": "\"Two points: One, yes -- the price of gold has been going up. [gold ETF chart here](http://www.google.com/finance?chdnp=1&chdd=1&chds=1&chdv=1&chvs=maximized&chdeh=0&chfdeh=0&chdet=1349467200000&chddm=495788&chls=IntervalBasedLine&q=NYSEARCA:IAU&ntsp=0&ei=PQhvUMjiAZGQ0QG5pQE) Two, the US has confiscated gold in the past. They did it in the 1930s. Owning antique gold coins is stupid because you're paying for gold + the supply / demand imbalance forced upon that particular coin by the coin collector market. If you want to have exposure to gold in your portfolio, the cheapest way is through an ETF. If you want to own physical gold because a) it's shiny or b) you fear impending economic collapse -- you're probably better off with bullion from a reputable dealer. You can buy it in grams or ounces -- you can also buy it in coins. Physical gold will generally cost you a little more than the spot price (think 5% - 10%? -- not really sure) but it can vary wildly. You might even be able to buy it for under the spot price if you find somebody that isn't very bright willing to sell. Buyer beware though -- there are lots of shady folks in the \"\"we buy gold\"\" market.\"",
"title": ""
}
] | fiqa |
e8bbcf91ae40d415a585146e1ca3f8db | How can I profit on the Chinese Real-Estate Bubble? | [
{
"docid": "7f827721412df38aabe25fe0136f47c0",
"text": "\"Perhaps buying some internationally exchanged stock of China real-estate companies? It's never too late to enter a bubble or profit from a bubble after it bursts. As a native Chinese, my observations suggest that the bubble may exist in a few of the most populated cities of China such as Beijing, Shanghai and Shenzhen, the price doesn't seem to be much higher than expected in cities further within the mainland, such as Xi'an and Chengdu. I myself is living in Xi'an. I did a post about the urban housing cost of Xi'an at the end of last year: http://www.xianhotels.info/urban-housing-cost-of-xian-china~15 It may give you a rough idea of the pricing level. The average of 5,500 CNY per square meter (condo) hasn't fluctuated much since the posting of the entry. But you need to pay about 1,000 to 3,000 higher to get something desirable. For location, just search \"\"Xi'an, China\"\" in Google Maps. =========== I actually have no idea how you, a foreigner can safely and easily profit from this. I'll just share what I know. It's really hard to financially enter China. To prevent oversea speculative funds from freely entering and leaving China, the Admin of Forex (safe.gov.cn) has laid down a range of rigid policies regarding currency exchange. By law, any native individual, such as me, is imposed of a maximum of $50,000 that can be converted from USD to CNY or the other way around per year AND a maximum of $10,000 per day. Larger chunks of exchange must get the written consent of the Admin of Forex or it will simply not be cleared by any of the banks in China, even HSBC that's not owned by China. However, you can circumvent this limit by using the social ID of your immediate relatives when submitting exchange requests. It takes extra time and effort but viable. However, things may change drastically should China be in a forex crisis or simply war. You may not be able to withdraw USD at all from the banks in China, even with a positive balance that's your own money. My whole income stream are USD which is wired monthly from US to Bank of China. I purchased a property in the middle of last year that's worth 275,000 CNY using the funds I exchanged from USD I had earned. It's a 43.7% down payment on a mortgage loan of 20 years: http://www.mlcalc.com/#mortgage-275000-43.7-20-4.284-0-0-0.52-7-2009-year (in CNY, not USD) The current household loan rate is 6.12% across the entire China. However, because this is my first property, it is discounted by 30% to 4.284% to encourage the first house purchase. There will be no more discounts of loan rate for the 2nd property and so forth to discourage speculative stocking that drives the price high. The apartment I bought in July of 2009 can easily be sold at 300,000 now. Some of the earlier buyers have enjoyed much more appreciation than I do. To give you a rough idea, a house bought in 2006 is now evaluated 100% more, one bought in 2008 now 50% more and one bought in the beginning of 2009 now 25% more.\"",
"title": ""
},
{
"docid": "0432509d0463cf57dfe90785b82f0d78",
"text": "Create, market and perform seminars advising others how to get rich from the Chinese Real-Estate Bubble. Much more likely to be profitable; and you can do it from the comfort of your own country, without currency conversions.",
"title": ""
}
] | [
{
"docid": "133154f62f8331a8df866bfc4aab2f0b",
"text": "\"The trade-off seems to be quite simple: \"\"How much are you going to get if you sell it\"\" against \"\"How much are you going to get if you rent it out\"\". Several people already hinted that the rental revenue may be optimistic, I don't have anything to add to this, but keep in mind that if someone pays 45k for your apartment, the net gains for you will likely be lower as well. Another consideration would be that the value of your apartment can change, if you expect it to rise steadily you may want to think twice before selling. Now, assuming you have calculated your numbers properly, and a near 0% opportunity cost: 45,000 right now 3,200 per year The given numbers imply a return on investment of 14 years, or 7.1%. Personal conclusion: I would be surprised if you can actually get a 3.2k expected net profit for an apartment that rents out at 6k per year, but if you are confident the reward seems to be quite nice.\"",
"title": ""
},
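The payback and yield figures in the passage above are a simple ratio of sale price to net annual rent; shown here with the passage's numbers.

```python
# Payback period and gross yield implied by the numbers in the passage above.
sale_price = 45_000
net_rent_per_year = 3_200

payback_years = sale_price / net_rent_per_year  # ~14.1 years
annual_yield = net_rent_per_year / sale_price   # ~7.1%
print(round(payback_years, 1), f"{annual_yield:.1%}")
```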
{
"docid": "2827cf778c230ac62baa016936a44c42",
"text": "Serious answer: If 7 banks owned the vast majority of houses for sale -- that is, on their balance sheet, at the peak of the housing bubble -- there would be. These 7 LCD companies produced the majority of LCDs globally. Real estate is far more decentralized, and in many times the bank merely provided financing for a third-party sale (from the builder, from any one of a hundred or so real estate companies, etc). (But I am assuming you probably weren't looking for a factual answer, anyway.)",
"title": ""
},
{
"docid": "5af78f8ae516b739e9b1687d9f881c08",
"text": "The right time to buy real estate is easy to spot. It's when it is difficult to get loans or when real estate agents selling homes are tripping over each other. It's the wrong time to buy when houses are sold within hours of the sign going up. The way to profit from equities over time is to dollar-cost average a diversified portfolio over time, while keeping cash reserves of 5-15% around. When major corrections strike, buy a little extra. You can make money at trading. But it requires that you exert a consistent effort and stay up to date on your investments and future prospects.",
"title": ""
},
{
"docid": "1d3076b1b2a9e936b239cfe2cddfc971",
"text": "It is worth noting first that Real Estate is by no means passive income. The amount of effort and cost involved (maintenance, legal, advertising, insurance, finding the properties, ect.) can be staggering and require a good amount of specialized knowledge to do well. The amount you would have to pay a management company to do the work for you especially with only a few properties can wipe out much of the income while you keep the risk. However, keshlam's answer still applies pretty well in this case but with a lot more variability. One million dollars worth of property should get you there on average less if you do much of the work yourself. However, real estate because it is so local and done in ~100k chunks is a lot more variable than passive stocks and bonds, for instance, as you can get really lucky or really unlucky with location, the local economy, natural disasters, tenants... Taking out loans to get you to the million worth of property faster but can add a lot more risk to the process. Including the risk you wouldn't have any money on retirement. Investing in Real Estate can be a faster way to retirement than some, but it is more risky than many and definitely not passive.",
"title": ""
},
{
"docid": "5bf3487c2e9cffeaedd48bd6196fafaa",
"text": "\"China's regulators, it seems, are on the attack. Guo Shuqing, chairman of the China Banking Regulatory Commission, announced recently that he'd resign if he wasn't able to discipline the banking system. Under his leadership, the CBRC is stepping up scrutiny of the role of trust companies and other financial institutions in helping China's banks circumvent lending restrictions. The People's Bank of China has also been on the offensive. It has recently raised the cost of liquidity, attacked riskier funding structures among smaller banks, and discontinued a program that effectively monetized one-fifth of last year's increase in lending. Are the regulators finally getting serious about reining in credit creation? The answer is an easy one: Yes if they're willing to allow economic growth to slow substantially, probably to 3 percent or less, and no if they aren't. inRead invented by Teads This is because there's a big difference between China's sustainable growth rate, based on rising demand driven by household consumption and productive investment, and its actual GDP growth rate, which is boosted by massive lending to fund investment projects that are driven by the need to generate economic activity and employment. Economists find it very difficult to formally acknowledge the difference between the two rates, and many don't even seem to recognize that it exists. Yet this only shows how confused economists are about gross domestic product more generally. The confusion arises because a country's GDP is not a measure of the value of goods and services it creates but rather a measure of economic activity. In a market economy, investment must create enough additional productive capacity to justify the expenditure. If it doesn't, it must be written down to its true economic value. This is why GDP is a reasonable proxy in a market economy for the value of goods and services produced. But in a command economy, investment can be driven by factors other than the need to increase productivity, such as boosting employment or local tax revenue. What's more, loss-making investments can be carried for decades before they're amortized, and insolvency can be ignored. This means that GDP growth can overstate value creation for decades. That's what has happened in China. In the first quarter of 2017, China added debt equal to more than 40 percentage points of GDP -- an amount that has been growing year after year. In 2011, the World Economic Forum predicted that China's debt would increase by a worrying $20 trillion by 2020. By 2016, it had already increased by $22 trillion, according to the most conservative estimates, and at current rates it will increase by as much as $50 trillion by 2020. These numbers probably understate the reality. If all this debt hasn't boosted China's GDP growth to substantially more than its potential growth rate, then what was the point? And why has it proven so difficult for the government to rein it in? It has promised to do so since 2012, yet credit growth has only accelerated, reaching some of the highest levels ever seen for a developing country. The answer is that credit creation had to accelerate to boost GDP growth above the growth rate of productive capacity. Much, if not most, of China's 6.5 percent GDP growth is simply an artificial boost in economic activity with no commensurate increase in the capacity to create goods and services. It must be fully reversed once credit stops growing. 
To make matters worse, if high debt levels generate financial distress costs for the economy -- as already seems to be happening -- the amount that must be reversed will substantially exceed the original boost. Once credit is under control, China will have lower but healthier GDP growth rates. If the economy rebalances, most Chinese might not even notice: It would require that household income -- which has grown much more slowly than GDP for nearly three decades -- now grow faster, so that the sharp slowdown in economic growth won't be matched by an equivalent slowdown in wage growth. Clear thinking from leading voices in business, economics, politics, foreign affairs, culture, and more. Share the View Enter your email Sign Up But to manage this rebalancing requires substantial transfers of wealth from local governments to ordinary households to boost consumption. This is why China hasn't been able to control credit growth in the past. The central government has had to fight off provincial \"\"vested interests,\"\" who oppose any substantial transfer of wealth. Without these transfers, slower GDP growth would mean higher unemployment. Whether regulators can succeed in reining in credit creation this time is ultimately a political question, and depends on the central government's ability to force through necessary reforms. Until then, as long as China has the debt capacity, GDP growth rates will remain high. But to see that as healthy is to miss the point completely.\"",
"title": ""
},
{
"docid": "cdc8ee4b63ae9ac426fd4dad8942a239",
"text": "Huh, well it's working for me. I've got 3 properties and am a little over 25% of my goal to never work again. How would you suggest one get rich? I assume you have a better plan than he does?",
"title": ""
},
{
"docid": "cd0b25899dfe8a0d7965310d6cfc769b",
"text": "Playing the markets is simple...always look for the sucker in the room and outsmart him. Of course if you can't tell who that sucker is it's probably you. If the strategy you described could make you rich, cnbc staff would all be billionaires. There are no shortcuts, do your research and decide on a strategy then stick to it in all weather or until you find a better one.",
"title": ""
},
{
"docid": "e0b589d58e89dc2487eaf6e429674240",
"text": "\"Americans are snapping, like crazy. And not only Americans, I know a lot of people from out of country are snapping as well, similarly to your Australian friend. The market is crazy hot. I'm not familiar with Cleveland, but I am familiar with Phoenix - the prices are up at least 20-30% from what they were a couple of years ago, and the trend is not changing. However, these are not something \"\"everyone\"\" can buy. It is very hard to get these properties financed. I found it impossible (as mentioned, I bought in Phoenix). That means you have to pay cash. Not everyone has tens or hundreds of thousands of dollars in cash available for a real estate investment. For many Americans, 30-60K needed to buy a property in these markets is an amount they cannot afford to invest, even if they have it at hand. Also, keep in mind that investing in rental property requires being able to support it - pay taxes and expenses even if it is not rented, pay to property managers, utility bills, gardeners and plumbers, insurance and property taxes - all these can amount to quite a lot. So its not just the initial investment. Many times \"\"advertised\"\" rents are not the actual rents paid. If he indeed has it rented at $900 - then its good. But if he was told \"\"hey, buy it and you'll be able to rent it out at $900\"\" - wouldn't count on that. I know many foreigners who fell in these traps. Do your market research and see what the costs are at these neighborhoods. Keep in mind, that these are distressed neighborhoods, with a lot of foreclosed houses and a lot of unemployment. It is likely that there are houses empty as people are moving out being out of job. It may be tough to find a renter, and the renters you find may not be able to pay the rent. But all that said - yes, those who can - are snapping.\"",
"title": ""
},
{
"docid": "1695261b4ee40cb33966686a30309dac",
"text": "Well, Taking a short position directly in real estate is impossible because it's not a fungible asset, so the only way to do it is to trade in its derivatives - Investment Fund Stock, indexes and commodities correlated to the real estate market (for example, materials related to construction). It's hard to find those because real estate funds usually don't issue securities and rely on investment made directly with them. Another factor should be that those who actually do have issued securities aren't usually popular enough for dealers and Market Makers to invest in it, who make it possible to take a short position in exchange for some spread. So what you can do is, you can go through all the existing real estate funds and find out if any of them has a broker that let's you short it, in other words which one of them has securities in the financial market you can buy or sell. One other option is looking for real estate/property derivatives, like this particular example. Personally, I would try to computationally find other securities that may in some way correlate with the real estate market, even if they look a bit far fetched to be related like commodities and stock from companies in construction and real estate management, etc. and trade those because these have in most of the cases more liquidity. Hope this answers your question!",
"title": ""
},
{
"docid": "99c930926902e10d8b135a90ddfbcc9a",
"text": "THANK YOU so much! That is exactly what I was looking for. Unfortunately I'm goign to be really busy for 7 days but I'd love to tear through some of this material and ask you some questions if you don't mind. What do you do for a living now? Still in real estate? Did you go toward the brokerage side or are you still consulting? What's the atmosphere/day-to-day like?",
"title": ""
},
{
"docid": "51efd4c92fe5580c043a1793767c9e62",
"text": "No, there is no linkage to the value of real estate and inflation. In most of the United States if you bought at the peak of the market in 2006, you still haven't recovered 7+ years later. Also real estate has a strong local component to the price. Pick the wrong location or the wrong type of real estate and the value of your real estate will be dropping while everybody else sees their values rising. Three properties I have owned near Washington DC, have had three different price patterns since the late 80's. Each had a different starting and ending point for the peak price rise. You can get lucky and make a lot, but there is no way to guarantee that prices will rise at all during the period you will own it.",
"title": ""
},
{
"docid": "001ad7f8030aa55b992aab75c2bd3b7d",
"text": "This is one way in which the scheme could work: You put your own property (home, car, piece of land) as a collateral and get a loan from a bank. You can also try to use the purchased property as security, but it may be difficult to get 100% loan-to-value. You use the money to buy a property that you expect will rise in value and/or provide rent income that is larger than the mortgage payment. Doing some renovations can help the value rise. You sell the property, pay back the loan and get the profits. If you are fast, you might be able to do this even before the first mortgage payment is due. So yeah, $0 of your own cash invested. But if the property doesn't rise in value, you may end up losing the collateral.",
"title": ""
},
{
"docid": "3fe13b33eb0c57418a1a75e14034bc8e",
"text": "I know there are a lot of papers on bubbles already, but I was always interested in how many were retail/individually driven vs institutionally driven bubbles - at least who plays a larger role. The American pension crisis is also another interesting topic that may be fun to write about. With everyone calling doomsday on them, maybe you can shed some light on some (if any) of the bs or touch on what options they may actually have to survive. Topics on investor behaviour is always a safe bet - potential returns lost due to home bias, investment behaviour of millennials, bla bla bla. The dangers of the rise of passive investments (ETFs) if any & if it actually generates more room to capture alpha since there may be greater inefficiencies. Also the impact of a stock's return relative to the amount of ETFs it is a constituent of So many things - pls advise what you end up deciding to choose and post the paper when the time comes!",
"title": ""
},
{
"docid": "1372eca98843f33d82d53e28b69a5f0b",
"text": "\"No, it can really not. Look at Detroit, which has lost a million residents over the past few decades. There is plenty of real estate which will not go for anything like it was sold. Other markets are very risky, like Florida, where speculators drive too much of the price for it to be stable. You have to be sure to buy on the downturn. A lot of price drops in real estate are masked because sellers just don't sell, so you don't really know how low the price is if you absolutely had to sell. In general, in most of America, anyway, you can expect Real Estate to keep up with inflation, but not do much better than that. It is the rental income or the leverage (if you buy with a mortgage) that makes most of the returns. In urban markets that are getting an influx of people and industry, however, Real Estate can indeed outpace inflation, but the number of markets that do this are rare. Also, if you look at it strictly as an investment (as opposed to the question of \"\"Is it worth it to own my own home?\"\") there are a lot of additional costs that you have to recoup, from property taxes to bills, rental headaches etc. It's an investment like any other, and should be approached with the same due diligence.\"",
"title": ""
},
{
"docid": "9183b4c1428a12698926e2e6ad9e4e91",
"text": "A possibility could be real estate brokerage firms such as Realogy or Prudential. Although a brokerage commission is linked to the sale prices it is more directly impacted by sales volume. If volume is maintained or goes up a real estate brokerage firm can actually profit rather handsomely in an up market or a down market. If sales volume does go up another option would be other service markets for real estate such as real estate information and marketing websites and sources i.e. http://www.trulia.com. Furthermore one can go and make a broad generalization such as since real estate no longer requires the same quantity of construction material other industries sensitive to the price of those commodities should technically have a lower cost of doing business. But be careful in the US much of the wealth an average american has is in their home. In this case this means that the economy as a whole takes a dive due to consumer uncertainty. In which case safe havens could benefit, may be things like Proctor & Gamble, gold, or treasuries. Side Note: You can always short builders or someone who loses if the housing market declines, this will make your investment higher as a result of the security going lower.",
"title": ""
}
] | fiqa |
53845aad0f59dfe3c9a84576cb633353 | Is real (physical) money traded during online trading? | [
{
"docid": "2bd492a29d94dd3739c66c7cf4cf0976",
"text": "\"With Forex trading - physical currency is not involved. You're playing with the live exchange rates, and it is not designed for purchasing/selling physical currency. Most Forex trading is based on leveraging, thus you're not only buying money that you're not going to physically receive - you're also paying with money that you do not physically have. The \"\"investment\"\" is in fact a speculation, and is akin to gambling, which, if I remember correctly, is strictly forbidden under the Islam rules. That said, the positions you have - are yours, and technically you can demand the physical currency to be delivered to you. No broker will allow online trading on these conditions, though, similarly to the stocks - almost no broker allows using physical certificates for stocks trading anymore.\"",
"title": ""
},
{
"docid": "c760adde250dd20b09e0e032b5bdd9d6",
"text": "When you buy a currency via FX market, really you are just exchanging one country's currency for another. So if it is permitted to hold one currency electronically, surely it must be permitted to hold a different country's currency electronically.",
"title": ""
},
{
"docid": "05314110242eda6406d27e495479cf4a",
"text": "I asked a followup question on the Islam site. The issue with Islam seems to be that exchanging money for other money is 'riba' (roughly speaking usury). There are different opinions, but it seems that in general exchanging money for 'something else' is fine, but exchanging money for other money is forbidden. The physicality of either the things or the money is not relevant (though again, opinions may differ). It's allowed to buy a piece of software for download, even though nothing physical is ever bought. Speculating on currency is therefore forbidden, and that's true whether or not a pile of banknotes gets moved around at any point. But that's my interpretation of what was said on the Islam site. I'm sure they would answer more detailed questions.",
"title": ""
},
{
"docid": "806eaff76ab4cbf0bb38f479e6c4fba8",
"text": "\"This is somewhat of a non-answer but I'm not sure you'll ever find a satisfying answer to this question, because the premises on which the question is based on are flawed. Money itself does not \"\"exist physically,\"\" at least not in the same sense that a product you buy does. It simply does not make sense to say that you \"\"physically own money.\"\" You can build a product out of atoms, but you cannot build a money out of atoms. If you could, then you could print your own money. Actually, you can try to print your own money, but nobody would knowingly accept it and thus is it functionally nonequivalent to real money. The paper has no intrinsic value. Its value is derived from the fact that other people perceive it as valuable and nowhere else. Ergo paper money is no different than electronic money. It is for this reason that, if I were you, I would be okay with online Forex trading.\"",
"title": ""
},
{
"docid": "5ac0ada5bce1e9894ddcad29f495cc03",
"text": "\"I think you need to define what you mean by \"\"buy currency online using some online forex trading platform\"\" ... In large Fx trades, real money [you mean actual electronic money, as there is not paper that travels these days]... The Fx market is quite wide with all kinds of trades. There are quite a few Fx transactions that are meant for delivery. You have to pay in the currency for full amount and you get the funds electronicall credited to you in other currency [ofcouse you have an account in the other currency or you have an obligation to pay]. This type of transaction is valid in Ismalic Banking. The practise of derivaties based on this or forward contracts on this is not allowed.\"",
"title": ""
},
{
"docid": "597e6d04eba8bbeb3344b750e7fe1092",
"text": "\"This is my two cents (pun intended). It was too long for a comment, so I tried to make it more of an answer. I am no expert with investments or Islam: Anything on a server exists 'physically'. It exists on a hard drive, tape drive, and/or a combination thereof. It is stored as data, which on a hard drive are small particles that are electrically charged, where each bit is represented by that electric charge. That data exists physically. It also depends on your definition of physically. This data is stored on a hard drive, which I deem physical, though is transferred via electric pulses often via fiber cable. Don't fall for marketing words like cloud. Data must be stored somewhere, and is often redundant and backed up. To me, money is just paper with an amount attached to it. It tells me nothing about its value in a market. A $1 bill was worth a lot more 3 decades ago (you could buy more goods because it had a higher value) than it is today. Money is simply an indication of the value of a good you traded at the time you traded. At a simplistic level, you could accomplish the same thing with a friend, saying \"\"If you buy lunch today, I'll buy lunch next time\"\". There was no exchange in money between me and you, but there was an exchange in the value of the lunch, if that makes sense. The same thing could have been accomplished by me and you exchanging half the lunch costs in physical money (or credit/debit card or check). Any type of investment can be considered gambling. Though you do get some sort of proof that the investment exists somewhere Investments may go up or down in value at any given time. Perhaps with enough research you can make educated investments, but that just makes it a smaller gamble. Nothing is guaranteed. Currency investment is akin to stock market investment, in that it may go up or down in value, in comparison to other currencies; though it doesn't make you an owner of the money's issuer, generally, it's similar. I find if you keep all your money in U.S. dollars without considering other nations, that's a sort of ignorant way of gambling, you're betting your money will lose value less slowly than if you had it elsewhere or in multiple places. Back on track to your question: [A]m I really buying that currency? You are trading a currency. You are giving one currency and exchanging it for another. I guess you could consider that buying, since you can consider trading currency for a piece of software as buying something. Or is the situation more like playing with the live rates? It depends on your perception of playing with the live rates. Investments to me are long-term commitments with reputable research attached to it that I intend to keep, through highs and lows, unless something triggers me to change my investment elsewhere. If by playing you mean risk, as described above, you will have a level of risk. If by playing you mean not taking it seriously, then do thorough research before investing and don't be trading every few seconds for minor returns, trying to make major returns out of minor returns (my opinion), or doing anything based on a whim. Was that money created out of thin air? I suggest you do more research before starting to trade currency into how markets and trading works. Simplistically, think of a market as a closed system with other markets, such as UK market, French market, etc. Each can interact with each other. The U.S. [or any market] has a set number of dollars in the pool. $100 for example's sake. 
Each $1 has a certain value associated with it. If for some reason, the country decides to create more paper that is green, says $1, and stamps presidents on them (money), and adds 15 $1 to the pool (making it $115), each one of these dollars' value goes down. This can also happen with goods. This, along with the trading of goods between markets, peoples' attachment of value to goods of the market, and peoples' perception of the market, is what fluctates currency trading, in simple terms. So essentially, no, money is not made out of thin air. Money is a medium for value though values are always changing and money is a static amount. You are attempting to trade values and own the medium that has the most value, if that makes sense. Values of goods are constantly changing. This is a learning process for me as well so I hope this helps answers your questions you seem to have. As stated above, I'm no expert; I'm actually quite new to this, so I probably missed a few things here and there.\"",
"title": ""
}
] | [
{
"docid": "fd07e3d575eb4ffa0cedff232d7267c4",
"text": "I trade futures. No FX or equities though. It is my only source of income, and has been for about 5 years now. Equities and FX, to me, seems like more of a gamble than Vegas. I don't know how people do it.",
"title": ""
},
{
"docid": "f7cf47e5739f6c4898bf5d089140baa9",
"text": "Yes, you will need to create an actual account. However, when all is done and you are about to log in, there will be an option on whether you want to log in as a live trader or a paper trader. Select the paper trading option and log in and get rich off fake money.",
"title": ""
},
{
"docid": "035b9a9409d4429f9b7bc72a7a323270",
"text": "I don't understand what your contention with Katsuyama is exactly. Are you saying that Lewis' Flash Boys was a work of fiction? Electronic trading is more efficient, cheaper, democratic, etc than paper orders, for sure. But many HFT firms have built billion dollar enterprises by scalping ordinary investors, fractions of pennies at a time, without adding anything of value. The former truth does not invalidate the latter one.",
"title": ""
},
{
"docid": "11c607b0ff6dc8f0ff3d816435c528ad",
"text": "Stock trades are always between real buyers and real sellers. In thinly-traded small stocks, for example, you may not always be able to find a buyer when you want to sell. For most public companies, there is enough volume that individual investors can just about always fill their market orders.",
"title": ""
},
{
"docid": "da970b33c88bfcf180ba2e428bd05130",
"text": "\"There are gold index funds. I'm not sure what you mean by \"\"real gold\"\". If you mean you want to buy physical gold, you don't need to. The gold index funds will track the price of gold and will keep you from filling your basement up with gold bars. Gold index funds will buy gold and then issue shares for the gold they hold. You can then buy and sell these just like you would buy and sell any share. GLD and IAU are the ticker symbols of some of these funds. I think it is also worth pointing out that historically gold has a been a poor investment.\"",
"title": ""
},
{
"docid": "7272c31978e10ac0038691e7e9e1f605",
"text": "\"The only \"\"authoritative document\"\" issued by the IRS to date relating to Cryptocurrencies is Notice 2014-21. It has this to say as the first Q&A: Q-1: How is virtual currency treated for federal tax purposes? A-1: For federal tax purposes, virtual currency is treated as property. General tax principles applicable to property transactions apply to transactions using virtual currency. That is to say, it should be treated as property like any other asset. Basis reporting the same as any other property would apply, as described in IRS documentation like Publication 550, Investment Income and Expenses and Publication 551, Basis of Assets. You should be able to use the same basis tracking method as you would use for any other capital asset like stocks or bonds. Per Publication 550 \"\"How To Figure Gain or Loss\"\", You figure gain or loss on a sale or trade of property by comparing the amount you realize with the adjusted basis of the property. Gain. If the amount you realize from a sale or trade is more than the adjusted basis of the property you transfer, the difference is a gain. Loss. If the adjusted basis of the property you transfer is more than the amount you realize, the difference is a loss. That is, the assumption with property is that you would be using specific identification. There are specific rules for mutual funds to allow for using average cost or defaulting to FIFO, but for general \"\"property\"\", including individual stocks and bonds, there is just Specific Identification or FIFO (and FIFO is just making an assumption about what you're choosing to sell first in the absence of any further information). You don't need to track exactly \"\"which Bitcoin\"\" was sold in terms of exactly how the transactions are on the Bitcoin ledger, it's just that you bought x bitcoins on date d, and when you sell a lot of up to x bitcoins you specify in your own records that the sale was of those specific bitcoins that you bought on date d and report it on your tax forms accordingly and keep track of how much of that lot is remaining. It works just like with stocks, where once you buy a share of XYZ Corp on one date and two shares on another date, you don't need to track the movement of stock certificates and ensure that you sell that exact certificate, you just identify which purchase lot is being sold at the time of sale.\"",
"title": ""
},
{
"docid": "6f178facbd7508300d25c48cbe0b2462",
"text": "If the company has a direct reinvestment plan or DRIP that they operate in house or contract out to a financial company to administer, yes. There can still be transaction fees, and none of these I know of offer real time trading. Your trade price will typically be defined in the plan as the opening or closing price on the trade date. Sometimes these plans offer odd lot sales at a recent running average price which could provide a hundred dollar or so arbitrage opportunity.",
"title": ""
},
{
"docid": "aabfd56468dbb691763a8e11b65b0c44",
"text": "I mean there is always the possibility that occurs, but I think it is extremely unlikely. The network effect is extremely important in technology and even more so with money given liquidity. Bitcoin is the protocol (TCP/IP). Many of the blockchains that people are referencing are just another form of a database and they are not permisonless. It is the equivalent of the Internet versus an intranet.",
"title": ""
},
{
"docid": "8a4f34e63f42deecfde00368d6d715fa",
"text": "\"Paying in physical cash is almost never a good idea for large purchases (unless you like being audited and/or having lots of attention from law enforcement). All purchases over $10k in cash need to have special forms filled out for the IRS and Financial Crimes Enforcement Network, so this is not a tinfoil hat conspiracy theory... if you do that on the regular, you ARE being investigated. When people say \"\"cash\"\" for normal purchases, they generally mean a wire transfer (e.g. buying a house \"\"cash\"\" = wire transfer between banks). In this case, purchasing a company \"\"cash\"\" is contrasted with purchasing with stock. So instead of getting $13.7B of AMZN, Amazon takes out a loan (or pulls their money from cash on on hand) and transfers it to the financial institution handling the sale. Everyone who owns WFM stock gets paid out in cash, as opposed to receiving some number of shares of AMZN.\"",
"title": ""
},
{
"docid": "fd737c6b883bb1c242a47956f42d0b68",
"text": "There is normally a policy at the organisation that would restrict trades or allow trades under certain conditions. This would be in accordance with the current regulations as well as Institutions own ethical standards. Typical I have seen is that Technology roles are to extent not considered sensitive, ie the employees in this job function normally do not access sensitive data [unless your role is analyst or production support]. An employee in exempt roles are allowed to trade in securities directly with other broker or invest in broad based Mutual Funds or engage a portfolio management services from a reputed organisation. It is irrelevant that your company only deals with amounts > 1 Million, infact if you were to know what stock the one million is going into, you may buy it slightly earlier and when the company places the large order, the stock typically moves upwards slightly, enough for you to make some good money. That is Not allowed. But its best you get hold of a document that would layout the do' and don't in your organisation. All such organisation are mandated to have a written policy in this regard.",
"title": ""
},
{
"docid": "3bbf44e2a1efae5b8520705cc16e4ebd",
"text": "In short, thanks to the answers and comments posted so far. No actual money is magically disappeared when the stock price goes down but the value is lost. The value changes of a stock is similar to the value changes of a house. The following is the long answer I came up with based on the previous answers and comments alone with my own understandings. Any experts who find any of the following is 200% out of place and wrong, feel free to edit it or make comments. Everything below only applies if the following are true: The stock price is only decreasing since the IPO because the company has been spending the money but not making profits after the IPO. The devaluation of the stock is not the result of any bad news related to the company but a direct translation of the money the company has lost by spending on whatever the company is doing. The actual money don’t just disappear into the thin air when the stock price goes down. All the money involved in trading this stock has already distributed to the sellers of this stock before the price went down. There is no actual money that is literally disappeared, it was shifted from one hand to another, but again this already happened before the price went down. For example, I bought some stocks for $100, then the price went down to $80. The $100 has already shifted from my hand to the seller before the price went down. I got the stock with less value, but the actual money $100 did not just go down to $80, it’s in the hand of the seller who sold the stock to me. Now if I sell the stock to the same seller who sold the stock to me, then I lost $20, where did the $20 go? it went to the seller who sold the stock to me and then bought it back at a lower price. The seller ended up with the same amount of the stocks and the $20 from me. Did the seller made $20? Yes, but did the seller’s total assets increased? No, it’s still $100, $80 from the stocks, and $20 in cash. Did anyone made an extra $20? No. Although I did lost $20, but the total cash involved is still there, I have the $80 , the seller who sold the stock to me and then bought it back has the $20. The total cash value is still $100. Directly, I did lost $20 to the guy who sold me the stock when the stock has higher value and then bought it back at a lower price. But that guy did not increased his total assets by $20. The value of the stock is decreased, the total money $100 did not disappear, it ended up from one person holding it to 2 people holding it. I lost $20 and nobody gained $20, how is that possible? Assume the company of the stock never made any profit since it’s IPO, the company just keeps spending the money, to really track down where the $20 I lost is going, it is the company has indirectly spent that money. So who got that $20 I lost? It could be the company spent $20 for a birthday cake, the $20 went to the cake maker. The company never did anything to make that $20 back, so that $20 is lost. Again, assume the stock price only goes down after its IPO, then buying this stock is similar to the buying a sport car example from JoeTaxpayer (in one of the answers), and buying an apple example from BrenBarn(in one of the comments from JoeTaxpayer’s answer). Go back to the question, does the money disappears into the thin air when the value of the stock goes down? No, the money did not disappear, it switched hands. It went from the buyer of the stock to the company, and the company has spent that money. 
Then what happens when the stock price goes down because bad news about the company? I believe the actual money still did not just disappear. If the bad news turn out to be true that the company had indeed lost this much money, the money did not disappear, it’s been spent/lost by the company. If the bad news turn out to be false, the stock price will eventually go up again, the money is still in the hand of the company. As a summary, the money itself did not disappear no matter what happens, it just went from one wallet to another wallet in many different ways through the things people created that has a value.",
"title": ""
},
{
"docid": "3f37a3f8beafc892a7749251e9d7a113",
"text": "\"Many online brokers have a \"\"virtual\"\" or \"\"paper\"\" trading feature to them. You can make trades in near-real time with a fake account balance and it will treat it as though you were making the trade at that time. No need to manage the math yourself - plus, you can even do more complicated trades (One-Cancels-Other/One-Triggers-Other).\"",
"title": ""
},
{
"docid": "ad834980c8330d15845645b9551a35af",
"text": "Sure they can (most publicly traded banks at least) - and they do it a lot. Many banks have a proprietary trading desk, or Prop desk, where traders are buying and selling shares of publicly traded companies on behalf of the bank, with the bank's own money. This is as opposed to regular trading desks where the banks trade on behalf of their customers.",
"title": ""
},
{
"docid": "be541e7f5fb3c8fe472400d71d964114",
"text": "I can make withdrawals immediately, 2x per month. I wasn't told the money would be tied up. And also, although it doesn't count for much, I'd like to trade under a formal prop firm just so I could put it on my resume, people wouldn't put much weight on performance in a retail account. Atleast, I don't believe so, I could be wrong",
"title": ""
},
{
"docid": "2df91218cdd7577567e93eb9bf227e59",
"text": "Obviously, you should not buy stock when the option is to pay down your debt. However, your question is different. Should you sell to reduce debt. That really depends on your personal situation. If you were planning to sell the stock anyway, go ahead and reduce your loans. Check out how the stock is doing and what the perspectives are. If the stock looks like it's going down, sell... Do you have savings? Unless you do, I should advise to sell the stock at any rate. If you do have savings, are they earning you more (in percentage) than your loans? If they are, keep them...",
"title": ""
}
] | fiqa |
52bf372f3351af3c09ccd800fb50233c | Adaptive Optical Self-Interference Cancellation Using a Semiconductor Optical Amplifier | [
{
"docid": "f84c399ff746a8721640e115fd20745e",
"text": "Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond.",
"title": ""
}
] | [
{
"docid": "ab582b371223c25f43e25058883e92fa",
"text": "Internet of Things is an emerging technology having the ability to change the way we live. In IoT vision, each and every 'thing' has the ability of talking to each other that brings the idea of Internet of Everything in reality. Numerous IoT services can make our daily life easier, smarter, and even safer. Using IoT in designing some special services can make a lifesaver system. In this paper, we have presented an IoT enabled approach that can provide emergency communication and location tracking services in a remote car that meets an unfortunate accident or any other emergency situation. Immediately after an accident or an emergency, the system either starts automatically or may be triggered manually. Depending upon type of emergency (police and security, fire and rescue, medical, or civil) it initiates communication and shares critical information e.g. location information, a set of relevant images taken from prefixed angles etc. with appropriate server / authority. Provision of interactive real-time multimedia communication, real-time location tracking etc. have also been integrated to the proposed system to monitor the exact condition in real-time basis. The system prototype has been designed with Raspberry Pi 3 Model B and UMTS-HSDPA communication protocol.",
"title": ""
},
{
"docid": "90414004f8681198328fb48431a34573",
"text": "Process models play important role in computer aided process engineering. Although the structure of these models are a priori known, model parameters should be estimated based on experiments. The accuracy of the estimated parameters largely depends on the information content of the experimental data presented to the parameter identification algorithm. Optimal experiment design (OED) can maximize the confidence on the model parameters. The paper proposes a new additive sequential evolutionary experiment design approach to maximize the amount of information content of experiments. The main idea is to use the identified models to design new experiments to gradually improve the model accuracy while keeping the collected information from previous experiments. This scheme requires an effective optimization algorithm, hence the main contribution of the paper is the incorporation of Evolutionary Strategy (ES) into a new iterative scheme of optimal experiment design (AS-OED). This paper illustrates the applicability of AS-OED for the design of feeding profile for a fed-batch biochemical reactor.",
"title": ""
},
{
"docid": "45d496fe8762fa52bbf6430eda2b7cfd",
"text": "This paper presents deployment algorithms for multiple mobile robots with line-of-sight sensing and communication capabilities in a simple nonconvex polygonal environment. The objective of the proposed algorithms is to achieve full visibility of the environment. We solve the problem by constructing a novel data structure called the vertex-induced tree and designing schemes to deploy over the nodes of this tree by means of distributed algorithms. The agents are assumed to have access to a local memory and their operation is partially asynchronous",
"title": ""
},
{
"docid": "c64b13db5a4c35861b06ec53c5c73946",
"text": "In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.",
"title": ""
},
{
"docid": "4b0b2c7168fa04543d77bee46af14b0a",
"text": "Individuals face privacy risks when providing personal location data to potentially untrusted location based services (LBSs). We develop and demonstrate CacheCloak, a system that enables realtime anonymization of location data. In CacheCloak, a trusted anonymizing server generates mobility predictions from historical data and submits intersecting predicted paths simultaneously to the LBS. Each new predicted path is made to intersect with other users' paths, ensuring that no individual user's path can be reliably tracked over time. Mobile users retrieve cached query responses for successive new locations from the trusted server, triggering new prediction only when no cached response is available for their current locations. A simulated hostile LBS with detailed mobility pattern data attempts to track users of CacheCloak, generating a quantitative measure of location privacy over time. GPS data from a GIS-based traffic simulation in an urban environment shows that CacheCloak can achieve realtime location privacy without loss of location accuracy or availability.",
"title": ""
},
{
"docid": "7d23d8d233a3fc7ff75edf361acbe642",
"text": "The diagnosis and treatment of chronic patellar instability caused by trochlear dysplasia can be challenging. A dysplastic trochlea leads to biomechanical and kinematic changes that often require surgical correction when symptomatic. In the past, trochlear dysplasia was classified using the 4-part Dejour classification system. More recently, new classification systems have been proposed. Future studies are needed to investigate long-term outcomes after trochleoplasty.",
"title": ""
},
{
"docid": "04b0f8be4eaa99aa6ee5cc6b7c6ad662",
"text": "Systems designed with measurement and attestation in mind are often layered, with the lower layers measuring the layers above them. Attestations of such systems, which we call layered attestations, must bundle together the results of a diverse set of application-specific measurements of various parts of the system. Some methods of layered at-testation are more trustworthy than others, so it is important for system designers to understand the trust consequences of different system configurations. This paper presents a formal framework for reasoning about layered attestations, and provides generic reusable principles for achieving trustworthy results.",
"title": ""
},
{
"docid": "8b51bcd5d36d9e15419d09b5fc8995b5",
"text": "In this technical report, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS) from a standpoint of system observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and divergence. We support our claim with an analytical study of the Observability Gramian, along with its right nullspace, which constitutes the basis of the unobservable directions of the system. We develop an Observability-Constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. Our analysis, along with the proposed method for reducing inconsistency, are extensively validated with simulation trials and real-world experimentation.",
"title": ""
},
{
"docid": "265d69d874481270c26eb371ca05ac51",
"text": "A compact dual-band dual-polarized antenna is proposed in this paper. The two pair dipoles with strong end coupling are used for the lower frequency band, and cross-placed patch dipoles are used for the upper frequency band. The ends of the dipoles for lower frequency band are bent to increase the coupling between adjacent dipoles, which can benefit the compactness and bandwidth of the antenna. Breaches are introduced at the ends of the dipoles of the upper band, which also benefit the compactness and matching of the antenna. An antenna prototype was fabricated and measured. The measured results show that the antenna can cover from 790 MHz to 960 MHz (19.4%) for lower band and from 1710 MHz to 2170 MHz (23.7%) for upper band with VSWR < 1.5. It is expected to be a good candidate design for base station antennas.",
"title": ""
},
{
"docid": "0a035cbf258996b1c1ae6662c8e2cc69",
"text": "This paper analyzes the authentication and key agreement protocol adopted by Universal Mobile Telecommunication System (UMTS), an emerging standard for third-generation (3G) wireless communications. The protocol, known as 3GPP AKA, is based on the security framework in GSM and provides significant enhancement to address and correct real and perceived weaknesses in GSM and other wireless communication systems. In this paper, we first show that the 3GPP AKA protocol is vulnerable to a variant of the so-called false base station attack. The vulnerability allows an adversary to redirect user traffic from one network to another. It also allows an adversary to use authentication vectors corrupted from one network to impersonate all other networks. Moreover, we demonstrate that the use of synchronization between a mobile station and its home network incurs considerable difficulty for the normal operation of 3GPP AKA. To address such security problems in the current 3GPP AKA, we then present a new authentication and key agreement protocol which defeats redirection attack and drastically lowers the impact of network corruption. The protocol, called AP-AKA, also eliminates the need of synchronization between a mobile station and its home network. AP-AKA specifies a sequence of six flows. Dependent on the execution environment, entities in the protocol have the flexibility of adaptively selecting flows for execution, which helps to optimize the efficiency of AP-AKA both in the home network and in foreign networks.",
"title": ""
},
{
"docid": "287b284529dc5d5ac183700917a42755",
"text": "Reconfiguration, by exchanging the functional links between the elements of the system, represents one of the most important measures which can improve the operational performance of a distribution system. The authors propose an original method, aiming at achieving such optimization through the reconfiguration of distribution systems taking into account various criteria in a flexible and robust approach. The novelty of the method consists in: the criteria for optimization are evaluated on active power distribution systems (containing distributed generators connected directly to the main distribution system and microgrids operated in grid-connected mode); the original formulation (Pareto optimality) of the optimization problem and an original genetic algorithm (based on NSGA-II) to solve the problem in a non-prohibitive execution time. The comparative tests performed on test systems have demonstrated the accuracy and promptness of the proposed algorithm. OPEN ACCESS Energies 2013, 6 1440",
"title": ""
},
{
"docid": "a31358ffda425f8e3f7fd15646d04417",
"text": "We elaborate the design and simulation of a planar antenna that is suitable for CubeSat picosatellites. The antenna operates at 436 MHz and its main features are miniature size and the built-in capability to produce circular polarization. The miniaturization procedure is given in detail, and the electrical performance of this small antenna is documented. Two main miniaturization techniques have been applied, i.e. dielectric loading and distortion of the current path. We have added an extra degree of freedom to the latter. The radiator is integrated with the chassis of the picosatellite and, at the same time, operates at the lower end of the UHF spectrum. In terms of electrical size, the structure presented herein is one of the smallest antennas that have been proposed for small satellites. Despite its small electrical size, the antenna maintains acceptable efficiency and gain performance in the band of interest.",
"title": ""
},
{
"docid": "fb2b4ebce6a31accb3b5407f24ad64ba",
"text": "The number of multi-robot systems deployed in field applications has risen dramatically over the years. Nevertheless, supervising and operating multiple robots at once is a difficult task for a single operator to execute. In this paper we propose a novel approach for utilizing advising automated agents when assisting an operator to better manage a team of multiple robots in complex environments. We introduce the Myopic Advice Optimization (MYAO) Problem and exemplify its implementation using an agent for the Search And Rescue (SAR) task. Our intelligent advising agent was evaluated through extensive field trials, with 44 non-expert human operators and 10 low-cost mobile robots, in simulation and physical deployment, and showed a significant improvement in both team performance and the operator’s satisfaction.",
"title": ""
},
{
"docid": "19ab044ed5154b4051cae54387767c9b",
"text": "An approach is presented for minimizing power consumption for digital systems implemented in CMOS which involves optimization at all levels of the design. This optimization includes the technology used to implement the digital circuits, the circuit style and topology, the architecture for implementing the circuits and at the highest level the algorithms that are being implemented. The most important technology consideration is the threshold voltage and its control which allows the reduction of supply voltage without signijcant impact on logic speed. Even further supply reductions can be made by the use of an architecture-based voltage scaling strategy, which uses parallelism and pipelining, to tradeoff silicon area and power reduction. Since energy is only consumed when capacitance is being switched, power can be reduced by minimizing this capacitance through operation reduction, choice of number representation, exploitation of signal correlations, resynchronization to minimize glitching, logic design, circuit design, and physical design. The low-power techniques that are presented have been applied to the design of a chipset for a portable multimedia terminal that supports pen input, speech I/O and fullmotion video. The entire chipset that perjorms protocol conversion, synchronization, error correction, packetization, buffering, video decompression and D/A conversion operates from a 1.1 V supply and consumes less than 5 mW.",
"title": ""
},
{
"docid": "42cf4bd800000aed5e0599cba52ba317",
"text": "There is a significant amount of controversy related to the optimal amount of dietary carbohydrate. This review summarizes the health-related positives and negatives associated with carbohydrate restriction. On the positive side, there is substantive evidence that for many individuals, low-carbohydrate, high-protein diets can effectively promote weight loss. Low-carbohydrate diets (LCDs) also can lead to favorable changes in blood lipids (i.e., decreased triacylglycerols, increased high-density lipoprotein cholesterol) and decrease the severity of hypertension. These positives should be balanced by consideration of the likelihood that LCDs often lead to decreased intakes of phytochemicals (which could increase predisposition to cardiovascular disease and cancer) and nondigestible carbohydrates (which could increase risk for disorders of the lower gastrointestinal tract). Diets restricted in carbohydrates also are likely to lead to decreased glycogen stores, which could compromise an individual's ability to maintain high levels of physical activity. LCDs that are high in saturated fat appear to raise low-density lipoprotein cholesterol and may exacerbate endothelial dysfunction. However, for the significant percentage of the population with insulin resistance or those classified as having metabolic syndrome or prediabetes, there is much experimental support for consumption of a moderately restricted carbohydrate diet (i.e., one providing approximately 26%-44 % of calories from carbohydrate) that emphasizes high-quality carbohydrate sources. This type of dietary pattern would likely lead to favorable changes in the aforementioned cardiovascular disease risk factors, while minimizing the potential negatives associated with consumption of the more restrictive LCDs.",
"title": ""
},
{
"docid": "f4617250b5654a673219d779952db35f",
"text": "Convolutional neural network (CNN) models have achieved tremendous success in many visual detection and recognition tasks. Unfortunately, visual tracking, a fundamental computer vision problem, is not handled well using the existing CNN models, because most object trackers implemented with CNN do not effectively leverage temporal and contextual information among consecutive frames. Recurrent neural network (RNN) models, on the other hand, are often used to process text and voice data due to their ability to learn intrinsic representations of sequential and temporal data. Here, we propose a novel neural network tracking model that is capable of integrating information over time and tracking a selected target in video. It comprises three components: a CNN extracting best tracking features in each video frame, an RNN constructing video memory state, and a reinforcement learning (RL) agent making target location decisions. The tracking problem is formulated as a decision-making process, and our model can be trained with RL algorithms to learn good tracking policies that pay attention to continuous, inter-frame correlation and maximize tracking performance in the long run. We compare our model with an existing neural-network based tracking method and show that the proposed tracking approach works well in various scenarios by performing rigorous validation experiments on artificial video sequences with ground truth. To the best of our knowledge, our tracker is the first neural-network tracker that combines convolutional and recurrent networks with RL algorithms.",
"title": ""
},
{
"docid": "ad9dcb0d49afccecf336621a782bca09",
"text": "Online reviews are an important source for consumers to evaluate products/services on the Internet (e.g. Amazon, Yelp, etc.). However, more and more fraudulent reviewers write fake reviews to mislead users. To maximize their impact and share effort, many spam attacks are organized as campaigns, by a group of spammers. In this paper, we propose a new two-step method to discover spammer groups and their targeted products. First, we introduce NFS (Network Footprint Score), a new measure that quantifies the likelihood of products being spam campaign targets. Second, we carefully devise GroupStrainer to cluster spammers on a 2-hop subgraph induced by top ranking products. Our approach has four key advantages: (i) unsupervised detection; both steps require no labeled data, (ii) adversarial robustness; we quantify statistical distortions in the review network, of which spammers have only a partial view, and avoid any side information that spammers can easily evade, (iii) sensemaking; the output facilitates the exploration of the nested hierarchy (i.e., organization) among the spammers, and finally (iv) scalability; both steps have complexity linear in network size, moreover, GroupStrainer operates on a carefully induced subnetwork. We demonstrate the efficiency and effectiveness of our approach on both synthetic and real-world datasets from two different domains with millions of products and reviewers. Moreover, we discover interesting strategies that spammers employ through case studies of our detected groups.",
"title": ""
},
{
"docid": "809508abfa4d7dfad1c43f09c8aa137e",
"text": "In this article, an original design of a range illuminator based on a stepped-septum polarizer and a dual-mode horn is discussed. However, the designed polarizer can also be used with other waveguide antennas, such as pyramidal horns and corrugated horns. A simple procedure for determining the initial septum geometry for the subsequent optimization is described, and based on this, a two-step septum polarizer is developed. A carefully designed dual-mode horn antenna with a rectangular aperture is then utilized to increase the illuminator's gain and suppress sidelobe levels. Results of full-wave electromagnetic simulations are presented and compared to experimentally measured data with good agreement.",
"title": ""
},
{
"docid": "62c3c062bbea8151543e0491190cf02d",
"text": "In this article, we present a survey of recent advances in passive human behaviour recognition in indoor areas using the channel state information (CSI) of commercial WiFi systems. Movement of human body causes a change in the wireless signal reflections, which results in variations in the CSI. By analyzing the data streams of CSIs for different activities and comparing them against stored models, human behaviour can be recognized. This is done by extracting features from CSI data streams and using machine learning techniques to build models and classifiers. The techniques from the literature that are presented herein have great performances, however, instead of the machine learning techniques employed in these works, we propose to use deep learning techniques such as long-short term memory (LSTM) recurrent neural network (RNN), and show the improved performance. We also discuss about different challenges such as environment change, frame rate selection, and multi-user scenario, and suggest possible directions for future work.",
"title": ""
}
] | scidocsrr |
66f65d037d045dcdfd9347297b45ef8e | Application of knowledge-based approaches in software architecture: A systematic mapping study | [
{
"docid": "ca6b556eb4de9a8f66aefd5505c20f3d",
"text": "Knowledge is a broad and abstract notion that has defined epistemological debate in western philosophy since the classical Greek era. In the past Richard Watson was the accepting senior editor for this paper. MISQ Review articles survey, conceptualize, and synthesize prior MIS research and set directions for future research. For more details see http://www.misq.org/misreview/announce.html few years, however, there has been a growing interest in treating knowledge as a significant organizational resource. Consistent with the interest in organizational knowledge and knowledge management (KM), IS researchers have begun promoting a class of information systems, referred to as knowledge management systems (KMS). The objective of KMS is to support creation, transfer, and application of knowledge in organizations. Knowledge and knowledge management are complex and multi-faceted concepts. Thus, effective development and implementation of KMS requires a foundation in several rich",
"title": ""
}
] | [
{
"docid": "44b71e1429f731cc2d91f919182f95a4",
"text": "Power management of multi-core processors is extremely important because it allows power/energy savings when all cores are not used. OS directed power management according to ACPI (Advanced Power and Configurations Interface) specifications is the common approach that industry has adopted for this purpose. While operating systems are capable of such power management, heuristics for effectively managing the power are still evolving. The granularity at which the cores are slowed down/turned off should be designed considering the phase behavior of the workloads. Using 3-D, video creation, office and e-learning applications from the SYSmark benchmark suite, we study the challenges in power management of a multi-core processor such as the AMD Quad-Core Opteron\" and Phenom\". We unveil effects of the idle core frequency on the performance and power of the active cores. We adjust the idle core frequency to have the least detrimental effect on the active core performance. We present optimized hardware and operating system configurations that reduce average active power by 30% while reducing performance by an average of less than 3%. We also present complete system measurements and power breakdown between the various systems components using the SYSmark and SPEC CPU workloads. It is observed that the processor core and the disk consume the most power, with core having the highest variability.",
"title": ""
},
{
"docid": "073e3296fc2976f0db2f18a06b0cb816",
"text": "Nowadays spoofing detection is one of the priority research areas in the field of automatic speaker verification. The success of Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015 confirmed the impressive perspective in detection of unforeseen spoofing trials based on speech synthesis and voice conversion techniques. However, there is a small number of researches addressed to replay spoofing attacks which are more likely to be used by non-professional impersonators. This paper describes the Speech Technology Center (STC) anti-spoofing system submitted for ASVspoof 2017 which is focused on replay attacks detection. Here we investigate the efficiency of a deep learning approach for solution of the mentioned-above task. Experimental results obtained on the Challenge corpora demonstrate that the selected approach outperforms current state-of-the-art baseline systems in terms of spoofing detection quality. Our primary system produced an EER of 6.73% on the evaluation part of the corpora which is 72% relative improvement over the ASVspoof 2017 baseline system.",
"title": ""
},
{
"docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75",
"text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.",
"title": ""
},
{
"docid": "4f84d3a504cf7b004a414346bb19fa94",
"text": "Abstract—The electric power supplied by a photovoltaic power generation systems depends on the solar irradiation and temperature. The PV system can supply the maximum power to the load at a particular operating point which is generally called as maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, a Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by tracking continuously the maximum power point. The proposed MPPT controller is designed for 10kW solar PV system installed at Cape Institute of Technology. This paper presents the fuzzy logic based MPPT algorithm. However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system and the results are obtained for each membership functions in Matlab/Simulink environment. Simulation results are decided that which membership function is more suitable for this system.",
"title": ""
},
{
"docid": "2bbbd2d1accca21cdb614a0324aa1a0d",
"text": "We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and temporal stereo - relating images from the same camera, taken at different points in time. We show that our method outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment, and runs in real-time on a CPU.",
"title": ""
},
{
"docid": "86bdb6616629da9c2574dc722b003ccf",
"text": "This paper considers the problem of extending Training an Agent Manually via Evaluative Reinforcement (TAMER) in continuous state and action spaces. Investigative research using the TAMER framework enables a non-technical human to train an agent through a natural form of human feedback (negative or positive). The advantages of TAMER have been shown on tasks of training agents by only human feedback or combining human feedback with environment rewards. However, these methods are originally designed for discrete state-action, or continuous state-discrete action problems. This paper proposes an extension of TAMER to allow both continuous states and actions, called ACTAMER. The new framework utilizes any general function approximation of a human trainer’s feedback signal. Moreover, a combined capability of ACTAMER and reinforcement learning is also investigated and evaluated. The combination of human feedback and reinforcement learning is studied in both settings: sequential and simultaneous. Our experimental results demonstrate the proposed method successfully allowing a human to train an agent in two continuous state-action domains: Mountain Car and Cart-pole (balancing).",
"title": ""
},
{
"docid": "35b64e16a8a86ddbee49177f75a662fd",
"text": "Large scale, multidisciplinary, engineering designs are always difficult due to the complexity and dimensionality of these problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is by constructing computationally cheap(er) approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data driven, surrogate-based optimization algorithm that uses a trust region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages—SURFPACK and SHEPPACK that provide a collection of approximation algorithms to build the surrogates and three different DOE techniques—full factorial (FF), Latin hypercube sampling, and central composite design—are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using the SAO framework based on statistical sampling is the generation of the required database. As the number of design variables grows, the computational cost of generating the required database grows rapidly. A data driven approach is proposed to tackle this situation, where the trick is to run the expensive simulation if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation runs during the optimization process.",
"title": ""
},
{
"docid": "a61c1e5c1eafd5efd8ee7021613cf90d",
"text": "A millimeter-wave (mmW) bandpass filter (BPF) using substrate integrated waveguide (SIW) is proposed in this work. A BPF with three resonators is formed by etching slots on the top metal plane of the single SIW cavity. The filter is investigated with the theory of electric coupling mechanism. The design procedure and design curves of the coupling coefficient (K) and quality factor (Q) are given and discussed here. The extracted K and Q are used to determine the filter circuit dimensions. In order to prove the validity, a SIW BPF operating at 140 GHz is fabricated in a single circuit layer using low temperature co-fired ceramic (LTCC) technology. The measured insertion loss is 1.913 dB at 140 GHz with a fractional bandwidth of 13.03%. The measured results are in good agreement with simulated results in such high frequency.",
"title": ""
},
{
"docid": "a58cbbff744568ae7abd2873d04d48e9",
"text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.",
"title": ""
},
{
"docid": "758eb7a0429ee116f7de7d53e19b3e02",
"text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.",
"title": ""
},
{
"docid": "fdfcf2f910884bf899623d2711386db2",
"text": "A number of vehicles may be controlled and supervised by traffic security and its management. The License Plate Recognition is broadly employed in traffic management to recognize a vehicle whose owner has despoiled traffic laws or to find stolen vehicles. Vehicle License Plate Detection and Recognition is a key technique in most of the traffic related applications such as searching of stolen vehicles, road traffic monitoring, airport gate monitoring, speed monitoring and automatic parking lots access control. It is simply the ability of automatically extract and recognition of the vehicle license number plate's character from a captured image. Number Plate Recognition method suffered from problem of feature selection process. The current method of number plate recognition system only focus on local, global and Neural Network process of Feature Extraction and process for detection. The Optimized Feature Selection process improves the detection ratio of number plate recognition. In this paper, it is proposed a new methodology for `License Plate Recognition' based on wavelet transform function. This proposed methodology compare with Correlation based method for detection of number plate. Empirical result shows that better performance in comparison of correlation based technique for number plate recognition. Here, it is modified the Matching Technique for numberplate recognition by using Multi-Class RBF Neural Network Optimization.",
"title": ""
},
{
"docid": "fde0b02f0dbf01cd6a20b02a44cdc6cf",
"text": "This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with an high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. We also show comparisons to real objects under the same illumination.",
"title": ""
},
{
"docid": "1ad92c6656e89a40b0a376f8c1693760",
"text": "This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot’s ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people’s daily lives.",
"title": ""
},
{
"docid": "5c76caebe05acd7d09e6cace0cac9fe1",
"text": "A program that detects people in images has a multitude of potential applications, including tracking for biomedical applications or surveillance, activity recognition for person-device interfaces (device control, video games), organizing personal picture collections, and much more. However, detecting people is difficult, as the appearance of a person can vary enormously because of changes in viewpoint or lighting, clothing style, body pose, individual traits, occlusion, and more. It then makes sense that the first people detectors were really detectors of pedestrians, that is, people walking at a measured pace on a sidewalk, and viewed from a fixed camera. Pedestrians are nearly always upright, their arms are mostly held along the body, and proper camera placement relative to pedestrian traffic can virtually ensure a view from the front or from behind (Figure 1). These factors reduce variation of appearance, although clothing, illumination, background, occlusions, and somewhat limited variations of pose still present very significant challenges.",
"title": ""
},
{
"docid": "b4ae619b0b9cc966622feb2dceda0f2e",
"text": "A novel pressure sensing circuit for non-invasive RF/microwave blood glucose sensors is presented in this paper. RF sensors are of interest to researchers for measuring blood glucose levels non-invasively. For the measurements, the finger is a popular site that has a good amount of blood supply. When a finger is placed on top of the RF sensor, the electromagnetic fields radiating from the sensor interact with the blood in the finger and the resulting sensor response depends on the permittivity of the blood. The varying glucose level in the blood results in a permittivity change causing a shift in the sensor's response. Therefore, by observing the sensor's frequency response it may be possible to predict the blood glucose level. However, there are two crucial points in taking and subsequently predicting the blood glucose level. These points are; the position of the finger on the sensor and the pressure applied onto the sensor. A variation in the glucose level causes a very small frequency shift. However, finger positioning and applying inconsistent pressure have more pronounced effect on the sensor response. For this reason, it may not be possible to take a correct reading if these effects are not considered carefully. Two novel pressure sensing circuits are proposed and presented in this paper to accurately monitor the pressure applied.",
"title": ""
},
{
"docid": "855f67a94e8425846584e5c82355fa91",
"text": "This paper is the product of a workshop held in Amsterdam during the Software Technology and Practice Conference (STEP 2003). The purpose of the paper is to propose Bloom's taxonomy levels for the Guide to the Software Engineering Body of Knowledge (SWEBOK) topics for three software engineer profiles: a new graduate, a graduate with four years of experience, and an experienced member of a software engineering process group. Bloom's taxonomy levels are proposed for topics of four knowledge areas of the SWEBOK Guide: software maintenance, software engineering management, software engineering process, and software quality. By proposing Bloom's taxonomy in this way, the paper aims to illustrate how such profiles could be used as a tool in defining job descriptions, software engineering role descriptions within a software engineering process definition, professional development paths, and training programs.",
"title": ""
},
{
"docid": "a7c0bdbf05ce5d8da20a80dcc3bfaec0",
"text": "Neurosurgery is a medical specialty that relies heavily on imaging. The use of computed tomography and magnetic resonance images during preoperative planning and intraoperative surgical navigation is vital to the success of the surgery and positive patient outcome. Augmented reality application in neurosurgery has the potential to revolutionize and change the way neurosurgeons plan and perform surgical procedures in the future. Augmented reality technology is currently commercially available for neurosurgery for simulation and training. However, the use of augmented reality in the clinical setting is still in its infancy. Researchers are now testing augmented reality system prototypes to determine and address the barriers and limitations of the technology before it can be widely accepted and used in the clinical setting.",
"title": ""
},
{
"docid": "3a2729b235884bddc05dbdcb6a1c8fc9",
"text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.",
"title": ""
},
{
"docid": "28c0ce094c4117157a27f272dbb94b91",
"text": "This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome eventgenerating dynamic vision sensor pixels and 5-transistor active pixels sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling or global shutter RGBW coded VGA resolution frames and asynchronous monochrome QVGA resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18um CMOS image sensor technology. An RGBW 2×2-pixel unit measures 20um × 20um. The chip die measures 8mm × 6.2mm.",
"title": ""
},
{
"docid": "d50d3997572847200f12d69f61224760",
"text": "The main function of a network layer is to route packets from the source machine to the destination machine. Algorithms that are used for route selection and data structure are the main parts for the network layer. In this paper we examine the network performance when using three routing protocols, RIP, OSPF and EIGRP. Video, HTTP and Voice application where configured for network transfer. We also examine the behaviour when using link failure/recovery controller between network nodes. The simulation results are analyzed, with a comparison between these protocols on the effectiveness and performance in network implemented.",
"title": ""
}
] | scidocsrr |
bb78bd4ac5f73bbd64ad505fea91284a | Ensuring the Quality of the Findings of Qualitative Research : Looking at Trustworthiness Criteria | [
{
"docid": "db5ff75a7966ec6c1503764d7e510108",
"text": "Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.",
"title": ""
}
] | [
{
"docid": "c197fcf3042099003f3ed682f7b7f19c",
"text": "Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology and physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight on the behavior of entities, communities and the flow of information among them. In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.",
"title": ""
},
{
"docid": "8733daeee2dd85345ce115cb1366f4b2",
"text": "We propose an interactive model, RuleViz, for visualizing the entire process of knowledge discovery and data mining. The model consists of ve components according to the main ingredients of the knowledge discovery process: original data visualization, visual data reduction, visual data preprocess, visual rule discovery, and rule visualization. The RuleViz model for visualizing the process of knowledge discovery is introduced and each component is discussed. Two aspects are emphasized, human-machine interaction and process visualization. The interaction helps the KDD system navigate through the enormous search spaces and recognize the intentions of the user, and the visualization of the KDD process helps users gain better insight into the multidimensional data, understand the intermediate results, and interpret the discovered patterns. According to the RuleViz model, we implement an interactive system, CViz, which exploits \\parallel coordinates\" technique to visualize the process of rule induction. The original data is visualized on the parallel coordinates, and can be interactively reduced both horizontally and vertically. Three approaches for discretizing numerical attributes are provided in the visual data preprocessing. CViz learns classi cation rules on the basis of a rule induction algorithm and presents the result as the algorithm proceeds. The discovered rules are nally visualized on the parallel coordinates with each rule being displayed as a directed \\polygon\", and the rule accuracy and quality are used to render the \\polygons\" and control the choice of rules to be displayed to avoid clutter. The CViz system has been experimented with the UCI data sets and synthesis data sets, and the results demonstrate that the RuleViz model and the implemented visualization system are useful and helpful for understanding the process of knowledge discovery and interpreting the nal results.",
"title": ""
},
{
"docid": "97f058025a32926262b3c141b9b63da1",
"text": "In this paper, a Stacked Sparse Autoencoder (SSAE) based framework is presented for nuclei classification on breast cancer histopathology. SSAE works very well in learning useful high-level feature for better representation of input raw data. To show the effectiveness of proposed framework, SSAE+Softmax is compared with conventional Softmax classifier, PCA+Softmax, and single layer Sparse Autoencoder (SAE)+Softmax in classifying the nuclei and non-nuclei patches extracted from breast cancer histopathology. The SSAE+Softmax for nuclei patch classification yields an accuracy of 83.7%, F1 score of 82%, and AUC of 0.8992, which outperform Softmax classifier, PCA+Softmax, and SAE+Softmax.",
"title": ""
},
{
"docid": "033ee0637607fec8ae1b5834efe355dc",
"text": "We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcementlearning tasks straightforwardly.",
"title": ""
},
{
"docid": "b4123a29a617b7a531d637f8c988b9a4",
"text": "We present a method for contouring an implicit function using a grid topologically dual to structured grids such as octrees. By aligning the vertices of the dual grid with the features of the implicit function, we are able to reproduce thin features of the extracted surface without excessive subdivision required by methods such as marching cubes or dual contouring. Dual marching cubes produces a crack-free, adaptive polygonalization of the surface that reproduces sharp features. Our approach maintains the advantage of using structured grids for operations such as CSG while being able to conform to the relevant features of the implicit function yielding much sparser polygonalizations than has been possible using structured grids.",
"title": ""
},
{
"docid": "7d78ca30853ed8a84bbb56fe82e3b9ba",
"text": "Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21% relative over a baseline multi-stream audio-visual GMM/HMM system.",
"title": ""
},
{
"docid": "a8fd046fb4652814c852113684a152aa",
"text": "Policy gradients methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence. We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region). This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation. We show that in the small steps limit with respect to the Wasserstein distance W2, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result. This means that policies undergo diffusion and advection, concentrating near actions with high reward. This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.",
"title": ""
},
{
"docid": "3433b283726a7e95ba5cb2a3c97cd195",
"text": "Black silicon (BSi) represents a very active research area in renewable energy materials. The rise of BSi as a focus of study for its fundamental properties and potentially lucrative practical applications is shown by several recent results ranging from solar cells and light-emitting devices to antibacterial coatings and gas-sensors. In this paper, the common BSi fabrication techniques are first reviewed, including electrochemical HF etching, stain etching, metal-assisted chemical etching, reactive ion etching, laser irradiation and the molten salt Fray-Farthing-Chen-Cambridge (FFC-Cambridge) process. The utilization of BSi as an anti-reflection coating in solar cells is then critically examined and appraised, based upon strategies towards higher efficiency renewable solar energy modules. Methods of incorporating BSi in advanced solar cell architectures and the production of ultra-thin and flexible BSi wafers are also surveyed. Particular attention is given to routes leading to passivated BSi surfaces, which are essential for improving the electrical properties of any devices incorporating BSi, with a special focus on atomic layer deposition of Al2O3. Finally, three potential research directions worth exploring for practical solar cell applications are highlighted, namely, encapsulation effects, the development of micro-nano dual-scale BSi, and the incorporation of BSi into thin solar cells. It is intended that this paper will serve as a useful introduction to this novel material and its properties, and provide a general overview of recent progress in research currently being undertaken for renewable energy applications.",
"title": ""
},
{
"docid": "f355ed837561186cff4e7492470d6ae7",
"text": "Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete time and continuous time formulations are discussed. An brief overview of generalizations of the fundamental hierarchical time series model concludes the article. Much of the Bayesian viewpoint can be argued (as by Jeereys and Jaynes, for examples) as direct application of the theory of probability. In this article the suggested approach for the construction of Bayesian time series models relies on probability theory to provide decompositions of complex joint probability distributions. Speciically, I refer to the familiar factorization of a joint density into an appropriate product of conditionals. Let x and y represent two random variables. I will not diierentiate between random variables and their realizations. Also, I will use an increasingly popular generic notation for probability densities: x] represents the density of x, xjy] is the conditional density of x given y, and x; y] denotes the joint density of x and y. In this notation we can write \\Bayes's Theorem\" as yjx] = xjy]]y]=x]: (1) y",
"title": ""
},
{
"docid": "e6088779901bd4bfaf37a3a1784c3854",
"text": "There has been recently a great progress in the field of automatically generated knowledge bases and corresponding disambiguation systems that are capable of mapping text mentions onto canonical entities. Efforts like the before mentioned have enabled researchers and analysts from various disciplines to semantically “understand” contents. However, most of the approaches have been specifically designed for the English language and in particular support for Arabic is still in its infancy. Since the amount of Arabic Web contents (e.g. in social media) has been increasing dramatically over the last years, we see a great potential for endeavors that support an entity-level analytics of these data. To this end, we have developed a framework called AIDArabic that extends the existing AIDA system by additional components that allow the disambiguation of Arabic texts based on an automatically generated knowledge base distilled from Wikipedia. Even further, we overcome the still existing sparsity of the Arabic Wikipedia by exploiting the interwiki links between Arabic and English contents in Wikipedia, thus, enriching the entity catalog as well as disambiguation context.",
"title": ""
},
{
"docid": "6cf4994b5ed0e17885f229856b7cd58d",
"text": "Recently Neural Architecture Search (NAS) has aroused great interest in both academia and industry, however it remains challenging because of its huge and non-continuous search space. Instead of applying evolutionary algorithm or reinforcement learning as previous works, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method. In DSO-NAS, we provide a novel model pruning view to NAS problem. In specific, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, we impose sparse regularizations to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it. Our method enjoys both advantages of differentiability and efficiency, therefore can be directly applied to large datasets like ImageNet. Particularly, On CIFAR-10 dataset, DSO-NAS achieves an average test error 2.84%, while on the ImageNet dataset DSO-NAS achieves 25.4% test error under 600M FLOPs with 8 GPUs in 18 hours.",
"title": ""
},
{
"docid": "b9daaabfc245958b9dee7d4910e80431",
"text": "Strawberry fruits are highly valued for their taste and nutritional value. However, results describing the bioaccessibility and intestinal absorption of phenolic compounds from strawberries are still scarce. In our study, a combined in vitro digestion/Caco-2 absorption model was used to mimic physiological conditions in the gastrointestinal track and identify compounds transported across intestinal epithelium. In the course of digestion, the loss of anthocyanins was noted whilst pelargonidin-3-glucoside remained the most abundant compound, amounting to nearly 12 mg per 100 g of digested strawberries. Digestion increased the amount of ellagic acid available by nearly 50%, probably due to decomposition of ellagitannins. Only trace amounts of pelargonidin-3-glucoside were found to be absorbed in the intestine model. Dihydrocoumaric acid sulphate and p-coumaric acid were identified as metabolites formed in enterocytes and released at the serosal side of the model.",
"title": ""
},
{
"docid": "5b759f2d581a8940127b5e45019039d7",
"text": "The structure of the domain name is highly relevant for providing insights into the management, organization and operation of a given enterprise. Security assessment and network penetration testing are using information sourced from the DNS service in order to map the network, perform reconnaissance tasks, identify services and target individual hosts. Tracking the domain names used by popular Botnets is another major application that needs to undercover their underlying DNS structure. Current approaches for this purpose are limited to simplistic brute force scanning or reverse DNS, but these are unreliable. Brute force attacks depend of a huge list of known words and thus, will not work against unknown names, while reverse DNS is not always setup or properly configured. In this paper, we address the issue of fast and efficient generation of DNS names and describe practical experiences against real world large scale DNS names. Our approach is based on techniques derived from natural language modeling and leverage Markov Chain Models in order to build the first DNS scanner (SDBF) that is leveraging both, training and advanced language modeling approaches.",
"title": ""
},
{
"docid": "7562d0fefa669d481b55e059085cd7de",
"text": "Security and privacy are huge challenges in Internet of Things (IoT) environments, but unfortunately, the harmonization of the IoT-related standards and protocols is hardly and slowly widespread. In this paper, we propose a new framework for access control in IoT based on the blockchain technology. Our first contribution consists in providing a reference model for our proposed framework within the Objectives, Models, Architecture and Mechanism specification in IoT. In addition, we introduce FairAccess as a fully decentralized pseudonymous and privacy preserving authorization management framework that enables users to own and control their data. To implement our model, we use and adapt the blockchain into a decentralized access control manager. Unlike financial bitcoin transactions, FairAccess introduces new types of transactions that are used to grant, get, delegate, and revoke access. As a proof of concept, we establish an initial implementation with a Raspberry PI device and local blockchain. Finally, we discuss some limitations and propose further opportunities. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "d68cd0d594f8db4a0decdbdf3656ece1",
"text": "In this paper we describe PRISM, a tool being developed at the University of Birmingham for the analysis of probabilistic systems. PRISM supports three probabilistic models: discrete-time Markov chains, continuous-time Markov chains and Markov decision processes. Analysis is performed through model checking such systems against specifications written in the probabilistic temporal logics PCTL and CSL. The tool features three model checking engines: one symbolic, using BDDs (binary decision diagrams) and MTBDDs (multi-terminal BDDs); one based on sparse matrices; and one which combines both symbolic and sparse matrix methods. PRISM has been successfully used to analyse probabilistic termination, performance, dependability and quality of service properties for a range of systems, including randomized distributed algorithms [2], polling systems [22], workstation clusters [18] and wireless cell communication [17].",
"title": ""
},
{
"docid": "d9599c4140819670a661bd4955680bb7",
"text": "The paper assesses the demand for rural electricity services and contrasts it with the technology options available for rural electrification. Decentralised Distributed Generation can be economically viable as reflected by case studies reported in literature and analysed in our field study. Project success is driven by economically viable technology choice; however it is largely contingent on organisational leadership and appropriate institutional structures. While individual leadership can compensate for deployment barriers, we argue that a large scale roll out of rural electrification requires an alignment of economic incentives and institutional structures to implement, operate and maintain the scheme. This is demonstrated with the help of seven case studies of projects across north India. 1 Introduction We explore the contribution that decentralised and renewable energy technologies can make to rural electricity supply in India. We take a case study approach, looking at seven sites across northern India where renewable energy technologies have been established to provide electrification for rural communities. We supplement our case studies with stakeholder interviews and household surveys, estimating levels of demand for electricity services from willingness and ability to pay. We also assess the overall viability of Distributed Decentralised Generation (DDG) projects by investigating the costs of implementation as well as institutional and organisational barriers to their operation and replication. Renewable energy technologies represent some of the most promising options available for distributed and decentralised electrification. Demand for reliable electricity services is significant. It represents a key driver behind economic development and raising basic standards of living. This is especially applicable to rural India home to 70% of the nation's population and over 25% of the world's poor. Access to reliable and affordable electricity can help support income-generating activity and allow utilisation of modern appliances and agricultural equipment whilst replacing inefficient and polluting kerosene lighting. Presently only around 55% of households are electrified (MOSPI 2006) leaving over 20 million households without power. The supply of electricity across India currently lacks both quality and quantity with an extensive shortfall in supply, a poor record for outages, high levels of transmission and distribution (T&D) losses and an overall need for extended and improved infrastructure (GoI 2006). The Indian Government recently outlined an ambitious plan for 100% village level electrification by the end of 2007 and total household electrification by 2012. To achieve this, a major programme of grid extension and strengthening of the rural electricity infrastructure has been initiated under …",
"title": ""
},
{
"docid": "ba74ebfc0e164b1e6d08c1ac63e49538",
"text": "This chapter develops a unified framework for the study of how network interactions can function as a mechanism for propagation and amplification of microeconomic shocks. The framework nests various classes of games over networks, models of macroeconomic risk originating from microeconomic shocks, and models of financial interactions. Under the assumption that shocks are small, the authors provide a fairly complete characterization of the structure of equilibrium, clarifying the role of network interactions in translating microeconomic shocks into macroeconomic outcomes. This characterization provides a ranking of different networks in terms of their aggregate performance. It also sheds light on several seemingly contradictory results in the prior literature on the role of network linkages in fostering systemic risk.",
"title": ""
},
{
"docid": "31ed2186bcd711ac4a5675275cd458eb",
"text": "Location-aware wireless sensor networks will enable a new class of applications, and accurate range estimation is critical for this task. Low-cost location determination capability is studied almost entirely using radio frequency received signal strength (RSS) measurements, resulting in poor accuracy. More accurate systems use wide bandwidths and/or complex time-synchronized infrastructure. Low-cost, accurate ranging has proven difficult because small timing errors result in large range errors. This paper addresses estimation of the distance between wireless nodes using a two-way ranging technique that approaches the Cramér-Rao Bound on ranging accuracy in white noise and achieves 1-3 m accuracy in real-world ranging and localization experiments. This work provides an alternative to inaccurate RSS and complex, wide-bandwidth methods. Measured results using a prototype wireless system confirm performance in the real world.",
"title": ""
},
{
"docid": "457efc3b22084fd7221637bd574ff075",
"text": "Group-based trajectory models are used to investigate population differences in the developmental courses of behaviors or outcomes . This article demonstrates a new Stata command, traj, for fitting to longitudinal data finite (discrete) mixture models designed to identify clusters of individuals following similar progressions of some behavior or outcome over age or time. Censored normal, Poisson, zero-inflated Poisson, and Bernoulli distributions are supported. Applications to psychometric scale data, count data, and a dichotomous prevalence measure are illustrated. Introduction A developmental trajectory measures the course of an outcome over age or time. The study of developmental trajectories is a central theme of developmental and abnormal psychology and psychiatry, of life course studies in sociology and criminology, of physical and biological outcomes in medicine and gerontology. A wide variety of statistical methods are used to study these phenomena. This article demonstrates a Stata plugin for estimating group-based trajectory models. The Stata program we demonstrate adapts a well-established SAS-based procedure for estimating group-based trajectory model (Jones, Nagin, and Roeder, 2001; Jones and Nagin, 2007) to the Stata platform. Group-based trajectory modeling is a specialized form of finite mixture modeling. The method is designed identify groups of individuals following similarly developmental trajectories. For a recent review of applications of group-based trajectory modeling see Nagin and Odgers (2010) and for an extended discussion of the method, including technical details, see Nagin (2005). A Brief Overview of Group-Based Trajectory Modeling Using finite mixtures of suitably defined probability distributions, the group-based approach for modeling developmental trajectories is intended to provide a flexible and easily applied method for identifying distinctive clusters of individual trajectories within the population and for profiling the characteristics of individuals within the clusters. Thus, whereas the hierarchical and latent curve methodologies model population variability in growth with multivariate continuous distribution functions, the group-based approach utilizes a multinomial modeling strategy. Technically, the group-based trajectory model is an example of a finite mixture model. Maximum likelihood is used for the estimation of the model parameters. The maximization is performed using a general quasi-Newton procedure (Dennis, Gay, and Welsch 1981; Dennis and Mei 1979). The fundamental concept of interest is the distribution of outcomes conditional on age (or time); that is, the distribution of outcome trajectories denoted by ), | ( i i Age Y P where the random vector Yi represents individual i’s longitudinal sequence of behavioral outcomes and the vector Agei represents individual i’s age when each of those measurements is recorded. The group-based trajectory model assumes that the population distribution of trajectories arises from a finite mixture of unknown order J. The likelihood for each individual i, conditional on the number of groups J, may be written as 1 Trajectories can also be defined by time (e.g., time from treatment). 1 ( | ) ( | , ; ) (1), J j j i i i i j P Y Age P Y Age j where is the probability of membership in group j, and the conditional distribution of Yi given membership in j is indexed by the unknown parameter vector which among other things determines the shape of the group-specific trajectory. 
The trajectory is modeled with up to a 5 order polynomial function of age (or time). For given j, conditional independence is assumed for the sequential realizations of the elements of Yi , yit, over the T periods of measurement. Thus, we may write T i t j it it j i i j age y p j Age Y P ), 2 ( ) ; , | ( ) ; , | ( where p(.) is the distribution of yit conditional on membership in group j and the age of individual i at time t. 2 The software provides three alternative specifications of p(.): the censored normal distribution also known as the Tobit model, the zero-inflated Poisson distribution, and the binary logit distribution. The censored normal distribution is designed for the analysis of repeatedly measured, (approximately) continuous scales which may be censored by either a scale minimum or maximum or both (e.g., longitudinal data on a scale of depression symptoms). A special case is a scale or other outcome variable with no minimum or maximum. The zero-inflated Poisson distribution is designed for the analysis of longitudinal count data (e.g., arrests by age) and binary logit distribution for the analysis of longitudinal data on a dichotomous outcome variable (e.g., whether hospitalized in year t or not). The model also provides capacity for analyzing the effect of time-stable covariate effects on probability of group membership and the effect of time dependent covariates on the trajectory itself. Let i x denote a vector of time stable covariates thought to be associated with probability of trajectory group membership. Effects of time-stable covariates are modeled with a generalized logit function where without loss of generality :",
"title": ""
}
] | scidocsrr |
8715a064a0406b1de5635d2da80a4508 | A content account of creative analogies in biologically inspired design | [
{
"docid": "ad5ff550d8e326166bb50a7b6bded485",
"text": "Inspiration is useful for exploration and discovery of new solution spaces. Systems in natural and artificial worlds and their functionality are seen as rich sources of inspiration for idea generation. However, unlike in the artificial domain where existing systems are often used for inspiration, those from the natural domain are rarely used in a systematic way for this purpose. Analogy is long regarded as a powerful means for inspiring novel idea generation. One aim of the work reported here is to initiate similar work in the area of systematic biomimetics for product development, so that inspiration from both natural and artificial worlds can be used systematically to help develop novel, analogical ideas for solving design problems. A generic model for representing causality of natural and artificial systems has been developed, and used to structure information in a database of systems from both the domains. These are implemented in a piece of software for automated analogical search of relevant ideas from the databases to solve a given problem. Preliminary experiments at validating the software indicate substantial potential for the approach.",
"title": ""
}
] | [
{
"docid": "35ce8c11fa7dd22ef0daf9d0bd624978",
"text": "Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, which propagate through pipeline systems impacting the performance of downstream applications. The detection of OOV regions in the output of a LVCSR system is typically addressed as a binary classification task, where each region is independently classified using local information. In this paper, we show that jointly predicting OOV regions, and including contextual information from each region, leads to substantial improvement in OOV detection. Compared to the state-of-the-art, we reduce the missed OOV rate from 42.6% to 28.4% at 10% false alarm rate.",
"title": ""
},
{
"docid": "91dcf0f281724bd6a5cc8c6479f5d632",
"text": "In this paper, a cable-driven planar parallel haptic interface is presented. First, the velocity equations are derived and the forces in the cables are obtained by the principle of virtual work. Then, an analysis of the wrench-closure workspace is performed and a geometric arrangement of the cables is proposed. Control issues are then discussed and a control scheme is presented. The calibration of the attachment points is also discussed. Finally, the prototype is described and experimental results are provided.",
"title": ""
},
{
"docid": "5637bed8be75d7e79a2c2adb95d4c28e",
"text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). 
One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.",
"title": ""
},
{
"docid": "7dadeadea2d281b981dcb72506f19366",
"text": "Spacecrafts, which are used for stereoscopic mapping, imaging and telecommunication applications, require fine attitude and stabilization control which has an important role in high precision pointing and accurate stabilization. The conventional techniques for attitude and stabilization control are thrusters, reaction wheels, control moment gyroscopes (CMG) and magnetic torquers. Since reaction wheel can generate relatively smaller torques, they provide very fine stabilization and attitude control. Although conventional PID framework solves many stabilization problems, it is reported that many PID feedback loops are poorly tuned. In this paper, a model reference adaptive LQR control for reaction wheel stabilization problem is implemented. The tracking performance and disturbance rejection capability of proposed controller is found to give smooth motion after abnormal disruptions.",
"title": ""
},
{
"docid": "20ef5a8b6835bedd44d571952b46ca90",
"text": "This paper proposes an XYZ-flexure parallel mechanism (FPM) with large displacement and decoupled kinematics structure. The large-displacement FPM has large motion range more than 1 mm. Moreover, the decoupled XYZ-stage has small cross-axis error and small parasitic rotation. In this study, the typical prismatic joints are investigated, and a new large-displacement prismatic joint using notch hinges is designed. The conceptual design of the FPM is proposed by assembling these modular prismatic joints, and then the optimal design of the FPM is conducted. The analytical models of linear stiffness and dynamics are derived using pseudo-rigid-body (PRB) method. Finally, the numerical simulation using ANSYS is conducted for modal analysis to verify the analytical dynamics equation. Experiments are conducted to verify the proposed design for linear stiffness, cross-axis error and parasitic rotation",
"title": ""
},
{
"docid": "dec3f821a1f9fc8102450a4add31952b",
"text": "Homicide by hanging is an extremely rare incident [1]. Very few cases have been reported in which a person is rendered senseless and then hanged to simulate suicidal death; though there are a lot of cases in wherein a homicide victim has been hung later. We report a case of homicidal hanging of a young Sikh individual found hanging in a well. It became evident from the results of forensic autopsy that the victim had first been given alcohol mixed with pesticides and then hanged by his turban from a well. The rare combination of lynching (homicidal hanging) and use of organo-phosporous pesticide poisoning as a means of homicide are discussed in this paper.",
"title": ""
},
{
"docid": "260dfcd7679ac125204bc50c4a6e2658",
"text": "We present Chronos, a system that enables a single WiFi access point to localize clients to within tens of centimeters. Such a system can bring indoor positioning to homes and small businesses which typically have a single access point. The key enabler underlying Chronos is a novel algorithm that can compute sub-nanosecond time-of-flight using commodity WiFi cards. By multiplying the time-offlight with the speed of light, a MIMO access point computes the distance between each of its antennas and the client, hence localizing it. Our implementation on commodity WiFi cards demonstrates that Chronos’s accuracy is comparable to state-of-the-art localization systems, which use four or five access points.",
"title": ""
},
{
"docid": "fa7ec2419ffc22b1ee43694b5f4e21b9",
"text": "We consider the problem of finding outliers in large multivariate databases. Outlier detection can be applied during the data cleansing process of data mining to identify problems with the data itself, and to fraud detection where groups of outliers are often of particular interest. We use replicator neural networks (RNNs) to provide a measure of the outlyingness of data records. The performance of the RNNs is assessed using a ranked score measure. The effectiveness of the RNNs for outlier detection is demonstrated on two publicly available databases.",
"title": ""
},
{
"docid": "4e35e75d5fc074b1e02f5dded5964c19",
"text": "This paper presents a new bidirectional wireless power transfer (WPT) topology using current fed half bridge converter. Generally, WPT topology with current fed converter uses parallel LC resonant tank network in the transmitter side to compensate the reactive power. However, in medium power application this topology suffers a major drawback that the voltage stress in the inverter switches are considerably high due to high reactive power consumed by the loosely coupled coil. In the proposed topology this is mitigated by adding a suitably designed capacitor in series with the transmitter coil. Both during grid to vehicle and vehicle to grid operations the power flow is controlled through variable switching frequency to achieve extended ZVS of the inverter switches. Detail analysis and converter design procedure is presented for both grid to vehicle and vehicle to grid operations. A 1.2kW lab-prototype is developed and experimental results are presented to verify the analysis.",
"title": ""
},
{
"docid": "e8af6607d171f43f0e1410a5850f10e8",
"text": "Postpartum depression (PPD) is a serious mental health problem. It is prevalent, and offspring are at risk for disturbances in development. Major risk factors include past depression, stressful life events, poor marital relationship, and social support. Public health efforts to detect PPD have been increasing. Standard treatments (e.g., Interpersonal Psychotherapy) and more tailored treatments have been found effective for PPD. Prevention efforts have been less consistently successful. Future research should include studies of epidemiological risk factors and prevalence, interventions aimed at the parenting of PPD mothers, specific diathesis for a subset of PPD, effectiveness trials of psychological interventions, and prevention interventions aimed at addressing mental health issues in pregnant women.",
"title": ""
},
{
"docid": "cccecb08c92f8bcec4a359373a20afcb",
"text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.",
"title": ""
},
{
"docid": "33aa9af9a5f3d3f0b8bf21dca3b13d2f",
"text": "Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.",
"title": ""
},
{
"docid": "2c87f9ef35795c89de6b60e1ceff18c8",
"text": "The paper presents a fusion-tracker and pedestrian classifier for color and thermal cameras. The tracker builds a background model as a multi-modal distribution of colors and temperatures. It is constructed as a particle filter that makes a number of informed reversible transformations to sample the model probability space in order to maximize posterior probability of the scene model. Observation likelihoods of moving objects account their 3D locations with respect to the camera and occlusions by other tracked objects as well as static obstacles. After capturing the coordinates and dimensions of moving objects we apply a pedestrian classifier based on periodic gait analysis. To separate humans from other moving objects, such as cars, we detect, in human gait, a symmetrical double helical pattern, that can then be analyzed using the Frieze Group theory. The results of tracking on color and thermal sequences demonstrate that our algorithm is robust to illumination noise and performs well in the outdoor environments.",
"title": ""
},
{
"docid": "2b1adb51eafbcd50675513bc67e42140",
"text": "This text reviews the generic aspects of the central nervous system evolutionary development, emphasizing the developmental features of the brain structures related with behavior and with the cognitive functions that finally characterized the human being. Over the limbic structures that with the advent of mammals were developed on the top of the primitive nervous system of their ancestrals, the ultimate cortical development with neurons arranged in layers constituted the structural base for an enhanced sensory discrimination, for more complex motor activities, and for the development of cognitive and intellectual functions that finally characterized the human being. The knowledge of the central nervous system phylogeny allow us particularly to infer possible correlations between the brain structures that were developed along phylogeny and the behavior of their related beings. In this direction, without discussing its conceptual aspects, this review ends with a discussion about the central nervous system evolutionary development and the emergence of consciousness, in the light of its most recent contributions.",
"title": ""
},
{
"docid": "7cf90874c70202653a47fa165a1a87f7",
"text": "This work proposes a new trust management system (TMS) for the Internet of Things (IoT). The wide majority of these systems are today bound to the assessment of trustworthiness with respect to a single function. As such, they cannot use past experiences related to other functions. Even those that support multiple functions hide this heterogeneity by regrouping all past experiences into a single metric. These restrictions are detrimental to the adaptation of TMSs to today’s emerging M2M and IoT architectures, which are characterized with heterogeneity in nodes, capabilities and services. To overcome these limitations, we design a context-aware and multi-service trust management system fitting the new requirements of the IoT. Simulation results show the good performance of the proposed system and especially highlight its ability to deter a class of common attacks designed to target trust management systems. a 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "77379e9c6c0781cd44f4e3208d9e5ca4",
"text": "Recent developments in sensing and communication technologies have led to an explosion in the use of mobile devices such as smartphones and tablets. With the increase in the use of mobile devices, one has to constantly worry about the security and privacy as the loss of a mobile device could compromise personal information of the user. To deal with this problem, continuous authentication (also known as active authentication) systems have been proposed in which users are continuously monitored after the initial access to the mobile device. In this paper, we provide an overview of different continuous authentication methods on mobile devices. We discuss the merits and drawbacks of available approaches and identify promising avenues of research in this rapidly evolving field.",
"title": ""
},
{
"docid": "3810c6b33a895730bc57fdc658d3f72e",
"text": "Comics have been shown to be able to tell a story by guiding the viewers gaze patterns through a sequence of images. However, not much research has been done on how comic techniques affect these patterns. We focused this study to investigate the effect that the structure of a comics panels have on the viewers reading patterns, specifically with the time spent reading the comic and the number of times the viewer fixates on a point. We use two versions of a short comic as a stimulus, one version with four long panels and another with sixteen smaller panels. We collected data using the GazePoint eye tracker, focusing on viewing time and number of fixations, and we collected subjective information about the viewers preferences using a questionnaire. We found that no significant effect between panel structure and viewing time or number of fixations, but those viewers slightly tended to prefer the format of four long panels.",
"title": ""
},
{
"docid": "7010278254ee0fadb7b59cb05169578a",
"text": "INTRODUCTION\nLumbar disc herniation (LDH) is a common condition in adults and can impose a heavy burden on both the individual and society. It is defined as displacement of disc components beyond the intervertebral disc space. Various conservative treatments have been recommended for the treatment of LDH and physical therapy plays a major role in the management of patients. Therapeutic exercise is effective for relieving pain and improving function in individuals with symptomatic LDH. The aim of this systematic review is to evaluate the effectiveness of motor control exercise (MCE) for symptomatic LDH.\n\n\nMETHODS AND ANALYSIS\nWe will include all clinical trial studies with a concurrent control group which evaluated the effect of MCEs in patients with symptomatic LDH. We will search PubMed, SCOPUS, PEDro, SPORTDiscus, CINAHL, CENTRAL and EMBASE with no restriction of language. Primary outcomes of this systematic review are pain intensity and functional disability and secondary outcomes are functional tests, muscle thickness, quality of life, return to work, muscle endurance and adverse events. Study selection and data extraction will be performed by two independent reviewers. The assessment of risk of bias will be implemented using the PEDro scale. Publication bias will be assessed by funnel plots, Begg's and Egger's tests. Heterogeneity will be evaluated using the I2 statistic and the χ2 test. In addition, subgroup analyses will be conducted for population and the secondary outcomes. All meta-analyses will be performed using Stata V.12 software.\n\n\nETHICS AND DISSEMINATION\nNo ethical concerns are predicted. The systematic review findings will be published in a peer-reviewed journal and will also be presented at national/international academic and clinical conferences.\n\n\nTRIAL REGISTRATION NUMBER\nCRD42016038166.",
"title": ""
},
{
"docid": "6934b06f35dc7855a8410329b099ca2f",
"text": "Privacy protection in publishing transaction data is an important problem. A key feature of transaction data is the extreme sparsity, which renders any single technique ineffective in anonymizing such data. Among recent works, some incur high information loss, some result in data hard to interpret, and some suffer from performance drawbacks. This paper proposes to integrate generalization and suppression to reduce information loss. However, the integration is non-trivial. We propose novel techniques to address the efficiency and scalability challenges. Extensive experiments on real world databases show that this approach outperforms the state-of-the-art methods, including global generalization, local generalization, and total suppression. In addition, transaction data anonymized by this approach can be analyzed by standard data mining tools, a property that local generalization fails to provide.",
"title": ""
},
{
"docid": "19339fa01942ad3bf33270aa1f6ceae2",
"text": "This study investigated query formulations by users with {\\it Cognitive Search Intents} (CSIs), which are users' needs for the cognitive characteristics of documents to be retrieved, {\\em e.g. comprehensibility, subjectivity, and concreteness. Our four main contributions are summarized as follows (i) we proposed an example-based method of specifying search intents to observe query formulations by users without biasing them by presenting a verbalized task description;(ii) we conducted a questionnaire-based user study and found that about half our subjects did not input any keywords representing CSIs, even though they were conscious of CSIs;(iii) our user study also revealed that over 50\\% of subjects occasionally had experiences with searches with CSIs while our evaluations demonstrated that the performance of a current Web search engine was much lower when we not only considered users' topical search intents but also CSIs; and (iv) we demonstrated that a machine-learning-based query expansion could improve the performances for some types of CSIs.Our findings suggest users over-adapt to current Web search engines,and create opportunities to estimate CSIs with non-verbal user input.",
"title": ""
}
] | scidocsrr |
03b8497dfb86e54bc80bbbc1730be3b6 | Modeling Cyber-Physical Systems with Semantic Agents | [
{
"docid": "73dd590da37ffec2d698142bee2e23fb",
"text": "Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of interacting autonomous agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to do research. Some have gone so far as to contend that ABMS is a new way of doing science. Computational advances make possible a growing number of agent-based applications across many fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling the growth and decline of ancient civilizations to modeling the complexities of the human immune system, and many more. This tutorial describes the foundations of ABMS, identifies ABMS toolkits and development methods illustrated through a supply chain example, and provides thoughts on the appropriate contexts for ABMS versus conventional modeling techniques.",
"title": ""
}
] | [
{
"docid": "4487f3713062ef734ceab5c7f9ccc6e3",
"text": "In the analysis of machine learning models, it is often convenient to assume that the parameters are IID. This assumption is not satisfied when the parameters are updated through training processes such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in MLPs are exchangeable. This yields the result that in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights changes from zero.",
"title": ""
},
{
"docid": "b6fff873c084e9a44d870ffafadbc9e7",
"text": "A wide variety of smartphone applications today rely on third-party advertising services, which provide libraries that are linked into the hosting application. This situation is undesirable for both the application author and the advertiser. Advertising libraries require their own permissions, resulting in additional permission requests to users. Likewise, a malicious application could simulate the behavior of the advertising library, forging the user’s interaction and stealing money from the advertiser. This paper describes AdSplit, where we extended Android to allow an application and its advertising to run as separate processes, under separate user-ids, eliminating the need for applications to request permissions on behalf of their advertising libraries, and providing services to validate the legitimacy of clicks, locally and remotely. AdSplit automatically recompiles apps to extract their ad services, and we measure minimal runtime overhead. AdSplit also supports a system resource that allows advertisements to display their content in an embedded HTML widget, without requiring any native code.",
"title": ""
},
{
"docid": "47ee1b71ed10b64110b84e5eecf2857c",
"text": "Measurements for future outdoor cellular systems at 28 GHz and 38 GHz were conducted in urban microcellular environments in New York City and Austin, Texas, respectively. Measurements in both line-of-sight and non-line-of-sight scenarios used multiple combinations of steerable transmit and receive antennas (e.g. 24.5 dBi horn antennas with 10.9° half power beamwidths at 28 GHz, 25 dBi horn antennas with 7.8° half power beamwidths at 38 GHz, and 13.3 dBi horn antennas with 24.7° half power beamwidths at 38 GHz) at different transmit antenna heights. Based on the measured data, we present path loss models suitable for the development of fifth generation (5G) standards that show the distance dependency of received power. In this paper, path loss is expressed in easy-to-use formulas as the sum of a distant dependent path loss factor, a floating intercept, and a shadowing factor that minimizes the mean square error fit to the empirical data. The new models are compared with previous models that were limited to using a close-in free space reference distance. Here, we illustrate the differences of the two modeling approaches, and show that a floating intercept model reduces the shadow factors by several dB and offers smaller path loss exponents while simultaneously providing a better fit to the empirical data. The upshot of these new path loss models is that coverage is actually better than first suggested by work in [1], [7] and [8].",
"title": ""
},
{
"docid": "9e451fe70d74511d2cc5a58b667da526",
"text": "Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "b42b17131236abc1ee3066905025aa8c",
"text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole. This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable, that by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, that engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order or 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar sail like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant. Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. 
Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terrawatt power sources. It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.",
"title": ""
},
{
"docid": "6e4d846272030b160b30d56a60eb2cad",
"text": "MapReduce and Spark are two very popular open source cluster computing frameworks for large scale data analytics. These frameworks hide the complexity of task parallelism and fault-tolerance, by exposing a simple programming API to users. In this paper, we evaluate the major architectural components in MapReduce and Spark frameworks including: shuffle, execution model, and caching, by using a set of important analytic workloads. To conduct a detailed analysis, we developed two profiling tools: (1) We correlate the task execution plan with the resource utilization for both MapReduce and Spark, and visually present this correlation; (2) We provide a break-down of the task execution time for in-depth analysis. Through detailed experiments, we quantify the performance differences between MapReduce and Spark. Furthermore, we attribute these performance differences to different components which are architected differently in the two frameworks. We further expose the source of these performance differences by using a set of micro-benchmark experiments. Overall, our experiments show that Spark is about 2.5x, 5x, and 5x faster than MapReduce, for Word Count, k-means, and PageRank, respectively. The main causes of these speedups are the efficiency of the hash-based aggregation component for combine, as well as reduced CPU and disk overheads due to RDD caching in Spark. An exception to this is the Sort workload, for which MapReduce is 2x faster than Spark. We show that MapReduce’s execution model is more efficient for shuffling data than Spark, thus making Sort run faster on MapReduce.",
"title": ""
},
{
"docid": "6fb1f05713db4e771d9c610fa9c9925d",
"text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.",
"title": ""
},
{
"docid": "68f0f63fcfa29d3867fa7d2dea6807cc",
"text": "We propose a machine learning framework to capture the dynamics of highfrequency limit order books in financial equity markets and automate real-time prediction of metrics such as mid-price movement and price spread crossing. By characterizing each entry in a limit order book with a vector of attributes such as price and volume at different levels, the proposed framework builds a learning model for each metric with the help of multi-class support vector machines (SVMs). Experiments with real data establish that features selected by the proposed framework are effective for short term price movement forecasts.",
"title": ""
},
{
"docid": "35c299197861d0a57763bbc392e90bb2",
"text": "Imperfect-information games, where players have private information, pose a unique challenge in artificial intelligence. In recent years, Heads-Up NoLimit Texas Hold’em poker, a popular version of poker, has emerged as the primary benchmark for evaluating game-solving algorithms for imperfectinformation games. We demonstrate a winning agent from the 2016 Annual Computer Poker Competition, Baby Tartanian8.",
"title": ""
},
{
"docid": "61953c398f2bcd4fd0ff4662689293a0",
"text": "Today's smartphones and mobile devices typically embed advanced motion sensors. Due to their increasing market penetration, there is a potential for the development of distributed sensing platforms. In particular, over the last few years there has been an increasing interest in monitoring vehicles and driving data, aiming to identify risky driving maneuvers and to improve driver efficiency. Such a driver profiling system can be useful in fleet management, insurance premium adjustment, fuel consumption optimization or CO2 emission reduction. In this paper, we analyze how smartphone sensors can be used to identify driving maneuvers and propose SenseFleet, a driver profile platform that is able to detect risky driving events independently from the mobile device and vehicle. A fuzzy system is used to compute a score for the different drivers using real-time context information like route topology or weather conditions. To validate our platform, we present an evaluation study considering multiple drivers along a predefined path. The results show that our platform is able to accurately detect risky driving events and provide a representative score for each individual driver.",
"title": ""
},
{
"docid": "ff4cfe56f31e21a8f69164790eb39634",
"text": "Active individuals often perform exercises in the heat following heat stress exposure (HSE) regardless of the time-of-day and its variation in body temperature. However, there is no information concerning the diurnal effects of a rise in body temperature after HSE on subsequent exercise performance in a hot environnment. This study therefore investigated the diurnal effects of prior HSE on both sprint and endurance exercise capacity in the heat. Eight male volunteers completed four trials which included sprint and endurance cycling tests at 30 °C and 50% relative humidity. At first, volunteers completed a 30-min pre-exercise routine (30-PR): a seated rest in a temperate environment in AM (AmR) or PM (PmR) (Rest trials); and a warm water immersion at 40 °C to induce a 1 °C increase in core temperature in AM (AmW) or PM (PmW) (HSE trials). Volunteers subsequently commenced exercise at 0800 h in AmR/AmW and at 1700 h in PmR/PmW. The sprint test determined a 10-sec maximal sprint power at 5 kp. Then, the endurance test was conducted to measure time to exhaustion at 60% peak oxygen uptake. Maximal sprint power was similar between trials (p = 0.787). Time to exhaustion in AmW (mean±SD; 15 ± 8 min) was less than AmR (38 ± 16 min; p < 0.01) and PmR (43 ± 24 min; p < 0.01) but similar with PmW (24 ± 9 min). Core temperature was higher from post 30-PR to 6 min into the endurance test in AmW and PmW than AmR and PmR (p < 0.05) and at post 30-PR and the start of the endurance test in PmR than AmR (p < 0.05). The rate of rise in core temperature during the endurance test was greater in AmR than AmW and PmW (p < 0.05). Mean skin temperature was higher from post 30-PR to 6 min into the endurance test in HSE trials than Rest trials (p < 0.05). Mean body temperature was higher from post 30-PR to 6 min into the endurance test in AmW and PmW than AmR and PmR (p < 0.05) and the start to 6 min into the endurance test in PmR than AmR (p < 0.05). Convective, radiant, dry and evaporative heat losses were greater on HSE trials than on Rest trials (p < 0.001). Heart rate and cutaneous vascular conductance were higher at post 30-PR in HSE trials than Rest trials (p < 0.05). Thermal sensation was higher from post 30-PR to the start of the endurance test in AmW and PmW than AmR and PmR (p < 0.05). Perceived exertion from the start to 6 min into the endurance test was higher in HSE trials than Rest trials (p < 0.05). This study demonstrates that an approximately 1 °C increase in core temperature by prior HSE has the diurnal effects on endurance exercise capacity but not on sprint exercise capacity in the heat. Moreover, prior HSE reduces endurance exercise capacity in AM, but not in PM. This reduction is associated with a large difference in pre-exercise core temperature between AM trials which is caused by a relatively lower body temperature in the morning due to the time-of-day variation and contributes to lengthening the attainment of high core temperature during exercise in AmR.",
"title": ""
},
{
"docid": "27136e888c3ebfef4ea7105d68a13ffd",
"text": "The huge amount of (potentially) available spectrum makes millimeter wave (mmWave) a promising candidate for fifth generation cellular networks. Unfortunately, differences in the propagation environment as a function of frequency make it hard to make comparisons between systems operating at mmWave and microwave frequencies. This paper presents a simple channel model for evaluating system level performance in mmWave cellular networks. The model uses insights from measurement results that show mmWave is sensitive to blockages revealing very different path loss characteristics between line-of-sight (LOS) and non-line-of-sight (NLOS) links. The conventional path loss model with a single log-distance path loss function and a shadowing term is replaced with a stochastic path loss model with a distance-dependent LOS probability and two different path loss functions to account for LOS and NLOS links. The proposed model is used to compare microwave and mmWave networks in simulations. It is observed that mmWave networks can provide comparable coverage probability with a dense deployment, leading to much higher data rates thanks to the large bandwidth available in the mmWave spectrum.",
"title": ""
},
{
"docid": "1d7d96d37584398359f9b85bc7741578",
"text": "BACKGROUND\nTwo types of soft tissue filler that are in common use are those formulated primarily with calcium hydroxylapatite (CaHA) and those with cross-linked hyaluronic acid (cross-linked HA).\n\n\nOBJECTIVE\nTo provide physicians with a scientific rationale for determining which soft tissue fillers are most appropriate for volume replacement.\n\n\nMATERIALS\nSix cross-linked HA soft tissue fillers (Restylane and Perlane from Medicis, Scottsdale, AZ; Restylane SubQ from Q-Med, Uppsala, Sweden; and Juvéderm Ultra, Juvéderm Ultra Plus, and Juvéderm Voluma from Allergan, Pringy, France) and a soft tissue filler consisting of CaHA microspheres in a carrier gel containing carboxymethyl cellulose (Radiesse, BioForm Medical, Inc., San Mateo, CA). METHODS The viscosity and elasticity of each filler gel were quantified according to deformation oscillation measurements conducted using a Thermo Haake RS600 Rheometer (Newington, NH) using a plate and plate geometry with a 1.2-mm gap. All measurements were performed using a 35-mm titanium sensor at 30°C. Oscillation measurements were taken at 5 pascal tau (τ) over a frequency range of 0.1 to 10 Hz (interpolated at 0.7 Hz). Researchers chose the 0.7-Hz frequency because it elicited the most reproducible results and was considered physiologically relevant for stresses that are common to the skin. RESULTS The rheological measurements in this study support the concept that soft tissue fillers that are currently used can be divided into three groups. CONCLUSION Rheological evaluation enables the clinician to objectively classify soft tissue fillers, to select specific filler products based on scientific principles, and to reliably predict how these products will perform--lifting, supporting, and sculpting--after they are appropriately injected.",
"title": ""
},
{
"docid": "31756ac6aaa46df16337dbc270831809",
"text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "46980b89e76bc39bf125f63ed9781628",
"text": "In this paper, a design of miniaturized 3-way Bagley polygon power divider (BPD) is presented. The design is based on using non-uniform transmission lines (NTLs) in each arm of the divider instead of the conventional uniform ones. For verification purposes, a 3-way BPD is designed, simulated, fabricated, and measured. Besides suppressing the fundamental frequency's odd harmonics, a size reduction of almost 30% is achieved.",
"title": ""
},
{
"docid": "6784e31e2ec313698a622a7e78288f68",
"text": "Web-based technology is often the technology of choice for distance education given the ease of use of the tools to browse the resources on the Web, the relative affordability of accessing the ubiquitous Web, and the simplicity of deploying and maintaining resources on the WorldWide Web. Many sophisticated web-based learning environments have been developed and are in use around the world. The same technology is being used for electronic commerce and has become extremely popular. However, while there are clever tools developed to understand on-line customer’s behaviours in order to increase sales and profit, there is very little done to automatically discover access patterns to understand learners’ behaviour on web-based distance learning. Educators, using on-line learning environments and tools, have very little support to evaluate learners’ activities and discriminate between different learners’ on-line behaviours. In this paper, we discuss some data mining and machine learning techniques that could be used to enhance web-based learning environments for the educator to better evaluate the leaning process, as well as for the learners to help them in their learning endeavour.",
"title": ""
},
{
"docid": "3b9b49f8c2773497f8e05bff4a594207",
"text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD–a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD) respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2%[email protected] on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.",
"title": ""
},
{
"docid": "fe3a2ef6ffc3e667f73b19f01c14d15a",
"text": "The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.",
"title": ""
}
] | scidocsrr |
856b35eca381031c01d0434bcd9ec421 | Lean UX: the next generation of user-centered agile development? | [
{
"docid": "d50cdc6a7a939716196489f3e18c6222",
"text": "ì Personasî is an interaction design technique with considerable potential for software product development. In three years of use, our colleagues and we have extended Alan Cooperís technique to make Personas a powerful complement to other usability methods. After describing and illustrating our approach, we outline the psychological theory that explains why Personas are more engaging than design based primarily on scenarios. As Cooper and others have observed, Personas can engage team members very effectively. They also provide a conduit for conveying a broad range of qualitative and quantitative data, and focus attention on aspects of design and use that other methods do not.",
"title": ""
},
{
"docid": "382ac4d3ba3024d0c760cff1eef505c3",
"text": "We seek to close the gap between software engineering (SE) and human-computer interaction (HCI) by indicating interdisciplinary interfaces throughout the different phases of SE and HCI lifecycles. As agile representatives of SE, Extreme Programming (XP) and Agile Modeling (AM) contribute helpful principles and practices for a common engineering approach. We present a cross-discipline user interface design lifecycle that integrates SE and HCI under the umbrella of agile development. Melting IT budgets, pressure of time and the demand to build better software in less time must be supported by traveling as light as possible. We did, therefore, choose not just to mediate both disciplines. Following our surveys, a rather radical approach best fits the demands of engineering organizations.",
"title": ""
}
] | [
{
"docid": "a60a60a345fed5e16df157ebf2951c3f",
"text": "A dielectric fibre with a refractive index higher than its surrounding region is a form of dielectric waveguide which represents a possible medium for the guided transmission of energy at optical frequencies. The particular type of dielectric-fibre waveguide discussed is one with a circular cross-section. The choice of the mode of propagation for a fibre waveguide used for communication purposes is governed by consideration of loss characteristics and information capacity. Dielectric loss, bending loss and radiation loss are discussed, and mode stability, dispersion and power handling are examined with respect to information capacity. Physicalrealisation aspects are also discussed. Experimental investigations at both optical and microwave wavelengths are included. List of principle symbols Jn = nth-order Bessel function of the first kind Kn = nth-order modified Bessel function of the second kind 271 271 B — —, phase coefficient of the waveguide Xg }'n = first derivative of Jn K ,̂ = first derivative of Kn hi = radial wavenumber or decay coefficient €,= relative permittivity k0 = free-space propagation coefficient a = radius of the fibre y = longitudinal propagation coefficient k = Boltzman's constant T = absolute temperature, K j5 c = isothermal compressibility X = wavelength n = refractive index Hj, = uth-order Hankel function of the ith type H'v = derivation of Hu v = azimuthal propagation coefficient = i^ — jv2 L = modulation period Subscript n is an integer and subscript m refers to the mth root of L = 0",
"title": ""
},
{
"docid": "ae23145d649c6df81a34babdfc142b31",
"text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "49d42dbbe33a2b0a16d7ec586654a128",
"text": "The goal of the present study is to explore the application of deep convolutional network features to emotion recognition. Results indicate that they perform similarly to recently published models at a best recognition rate of 94.4%, and do so with a single still image rather than a video stream. An implementation of an affective feedback game is also described, where a classifier using these features tracks the facial expressions of a player in real-time. Keywords—emotion recognition, convolutional network, affective computing",
"title": ""
},
{
"docid": "c1735e08317b4c2bfe3622cab7b557e6",
"text": "Intensive repetitive therapy shows promise to improve motor function and quality of life for stroke patients. Intense therapies provided by individualized interaction between the patient and rehabilitation specialist to overcome upper extremity impairment are beneficial, however, they are expensive and difficult to evaluate quantitatively and objectively. The development of a pneumatic muscle (PM) driven therapeutic device, the RUPERT/spl trade/ has the potential of providing a low cost and safe take-home method of supplementing therapy in addition to in the clinic treatment. The device can also provide real-time, objective assessment of functional improvement from the therapy.",
"title": ""
},
{
"docid": "7518c3029ec09d6d2b3f6785047a1fc9",
"text": "In this paper, we describe a novel deep convolutional neural networks (CNN) based approach called contextual deep CNN that can jointly exploit spatial and spectral features for hyperspectral image classification. The contextual deep CNN first concurrently applies multiple 3-dimensional local convolutional filters with different sizes jointly exploiting spatial and spectral features of a hyperspectral image. The initial spatial and spectral feature maps obtained from applying the variable size convolutional filters are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through fully convolutional layers that eventually predict the corresponding label of each pixel vector. The proposed approach is tested on two benchmark datasets: the Indian Pines dataset and the Pavia University scene dataset. Performance comparison shows enhanced classification performance of the proposed approach over the current state of the art on both datasets.",
"title": ""
},
{
"docid": "350334f676d5590cda9d6f430af6e80d",
"text": "Benferhat, S, Dubois D and Prade, H, 1992. \"Representing default rules in possibilistic logic\" In: Proc. of the 3rd Inter. Conf. on Principles of knowledge Representation and Reasoning (KR'92), 673-684, Cambridge, MA, October 26-29. De Finetti, B, 1936. \"La logique de la probabilite\" Actes du Congres Inter, de Philosophic Scientifique, Paris. (Hermann et Cie Editions, 1936, IV1-IV9). Driankov, D, Hellendoorn, H and Reinfrank, M, 1995. An Introduction to Fuzzy Control, Springer-Verlag. Dubois, D and Prade, H, 1988. \"An introduction to possibilistic and fuzzy logics\" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 287-315, Academic Press. Dubois, D and Prade, H, 1994. \"Can we enforce full compositionality in uncertainty calculi?\" In: Proc. 12th US National Conf. On Artificial Intelligence (AAAI94), 149-154, Seattle, WA. Elkan, C, 1994. \"The paradoxical success of fuzzy logic\" IEEE Expert August, 3-8. Lehmann, D and Magidor. M, 1992. \"What does a conditional knowledge base entail?\" Artificial Intelligence 55 (1) 1-60. Maung, 1,1995. \"Two characterizations of a minimum-information principle in possibilistic reasoning\" Int. J. of Approximate Reasoning 12 133-156. Pearl, J, 1990. \"System Z: A natural ordering of defaults with tractable applications to default reasoning\" Proc. of the 2nd Conf. on Theoretical Aspects of Reasoning about Knowledge (TARK'90) 121-135, San Francisco, CA, Morgan Karfman. Shoham, Y, 1988. Reasoning about Change MIT Press. Smets, P, 1988. \"Belief functions\" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 253-286, Academic Press. Smets, P, 1990a. \"The combination of evidence in the transferable belief model\" IEEE Trans, on Pattern Anal. Mach. Intell. 12 447-458. Smets, P, 1990b. \"Constructing the pignistic probability function in a context of uncertainty\" Un certainty in Artificial Intelligence 5 (M Henrion et al., editors), 29-40, North-Holland. Smets, P, 1995. \"Quantifying beliefs by belief functions: An axiomatic justification\" In: Procoj the 13th Inter. Joint Conf. on Artificial Intelligence (IJACT93), 598-603, Chambey, France, August 28-September 3. Smets, P and Kennes, R, 1994. \"The transferable belief model\" Artificial Intelligence 66 191-234.",
"title": ""
},
{
"docid": "c5c5d56d2db769996d8164a0d0a5e00a",
"text": "This paper presents the development of a polymer-based tendon-driven wearable robotic hand, Exo-Glove Poly. Unlike the previously developed Exo-Glove, a fabric-based tendon-driven wearable robotic hand, Exo-Glove Poly was developed using silicone to allow for sanitization between users in multiple-user environments such as hospitals. Exo-Glove Poly was developed to use two motors, one for the thumb and the other for the index/middle finger, and an under-actuation mechanism to grasp various objects. In order to realize Exo-Glove Poly, design features and fabrication processes were developed to permit adjustment to different hand sizes, to protect users from injury, to enable ventilation, and to embed Teflon tubes for the wire paths. The mechanical properties of Exo-Glove Poly were verified with a healthy subject through a wrap grasp experiment using a mat-type pressure sensor and an under-actuation performance experiment with a specialized test set-up. Finally, performance of the Exo-Glove Poly for grasping various shapes of object was verified, including objects needing under-actuation.",
"title": ""
},
{
"docid": "d2c36f67971c22595bc483ebb7345404",
"text": "Resistive-switching random access memory (RRAM) devices utilizing a crossbar architecture represent a promising alternative for Flash replacement in high-density data storage applications. However, RRAM crossbar arrays require the adoption of diodelike select devices with high on-off -current ratio and with sufficient endurance. To avoid the use of select devices, one should develop passive arrays where the nonlinear characteristic of the RRAM device itself provides self-selection during read and write. This paper discusses the complementary switching (CS) in hafnium oxide RRAM, where the logic bit can be encoded in two high-resistance levels, thus being immune from leakage currents and related sneak-through effects in the crossbar array. The CS physical mechanism is described through simulation results by an ion-migration model for bipolar switching. Results from pulsed-regime characterization are shown, demonstrating that CS can be operated at least in the 10-ns time scale. The minimization of the reset current is finally discussed.",
"title": ""
},
{
"docid": "b89099e9b01a83368a1ebdb2f4394eba",
"text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.",
"title": ""
},
{
"docid": "a753be5a5f81ae77bfcb997a2748d723",
"text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.",
"title": ""
},
{
"docid": "cf32bac4be646211d09d1b4107b3f58a",
"text": "The single-feature-based background model often fails in complex scenes, since a pixel is better described by several features, which highlight different characteristics of it. Therefore, the multi-feature-based background model has drawn much attention recently. In this paper, we propose a novel multi-feature-based background model, named stability of adaptive feature (SoAF) model, which utilizes the stabilities of different features in a pixel to adaptively weigh the contributions of these features for foreground detection. We do this mainly due to the fact that the features of pixels in the background are often more stable. In SoAF, a pixel is described by several features and each of these features is depicted by a unimodal model that offers an initial label of the target pixel. Then, we measure the stability of each feature by its histogram statistics over a time sequence and use them as weights to assemble the aforementioned unimodal models to yield the final label. The experiments on some standard benchmarks, which contain the complex scenes, demonstrate that the proposed approach achieves promising performance in comparison with some state-of-the-art approaches.",
"title": ""
},
{
"docid": "f794d4a807a4d69727989254c557d2d1",
"text": "The purpose of this study was to describe the operative procedures and clinical outcomes of a new three-column internal fixation system with anatomical locking plates on the tibial plateau to treat complex three-column fractures of the tibial plateau. From June 2011 to May 2015, 14 patients with complex three-column fractures of the tibial plateau were treated with reduction and internal fixation through an anterolateral approach combined with a posteromedial approach. The patients were randomly divided into two groups: a control group which included seven cases using common locking plates, and an experimental group which included seven cases with a new three-column internal fixation system with anatomical locking plates. The mean operation time of the control group was 280.7 ± 53.7 minutes, which was 215.0 ± 49.1 minutes in the experimental group. The mean intra-operative blood loss of the control group was 692.8 ± 183.5 ml, which was 471.4 ± 138.0 ml in the experimental group. The difference was statistically significant between the two groups above. The differences were not statistically significant between the following mean numbers of the two groups: Rasmussen score immediately after operation; active extension–flexion degrees of knee joint at three and 12 months post-operatively; tibial plateau varus angle (TPA) and posterior slope angle (PA) immediately after operation, at three and at 12 months post-operatively; HSS (The Hospital for Special Surgery) knee-rating score at 12 months post-operatively. All fractures healed. A three-column internal fixation system with anatomical locking plates on tibial plateau is an effective and safe tool to treat complex three-column fractures of the tibial plateau and it is more convenient than the common plate.",
"title": ""
},
{
"docid": "8216a6da70affe452ec3c5998e3c77ba",
"text": "In this paper, the performance of a rectangular microstrip patch antenna fed by microstrip line is designed to operate for ultra-wide band applications. It consists of a rectangular patch with U-shaped slot on one side of the substrate and a finite ground plane on the other side. The U-shaped slot and the finite ground plane are used to achieve an excellent impedance matching to increase the bandwidth. The proposed antenna is designed and optimized based on extensive 3D EM simulation studies. The proposed antenna is designed to operate over a frequency range from 3.6 to 15 GHz.",
"title": ""
},
{
"docid": "6ac9ddefaeaddad00fb3d85b94b07f74",
"text": "Cognitive architectures are theories of cognition that try to capture the essential representations and mechanisms that underlie cognition. Research in cognitive architectures has gradually moved from a focus on the functional capabilities of architectures to the ability to model the details of human behavior, and, more recently, brain activity. Although there are many different architectures, they share many identical or similar mechanisms, permitting possible future convergence. In judging the quality of a particular cognitive model, it is pertinent to not just judge its fit to the experimental data but also its simplicity and ability to make predictions.",
"title": ""
},
{
"docid": "d0d5081b93f48972c92b3c5a7e69350e",
"text": "Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike. This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts, and provide commentary to aid readers in reaching the correct interpretation. We introduce the task of automated lyric annotation (ALA). Like text simplification, a goal of ALA is to rephrase the original text in a more easily understandable manner. However, in ALA the system must often include additional information to clarify niche terminology and abstract concepts. To stimulate research on this task, we release a large collection of crowdsourced annotations for song lyrics. We analyze the performance of translation and retrieval models on this task, measuring performance with both automated and human evaluation. We find that each model captures a unique type of information important to the task.",
"title": ""
},
{
"docid": "f41c9b1bcc36ed842f15d7570ff67f92",
"text": "Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity.",
"title": ""
},
{
"docid": "8375f143ff6b42e36e615a78a362304b",
"text": "The Ball and Beam system is a popular technique for the study of control systems. The system has highly non-linear characteristics and is an excellent tool to represent an unstable system. The control of such a system presents a challenging task. The ball and beam mirrors the real time unstable complex systems such as flight control, on a small laboratory level and provides for developing control algorithms which can be implemented at a higher scale. The objective of this paper is to design and implement cascade PD control of the ball and beam system in LabVIEW using data acquisition board and DAQmx and use the designed control circuit to verify results in real time.",
"title": ""
},
{
"docid": "b96b422be2b358d92347659d96a68da7",
"text": "The bipedal spring-loaded inverted pendulum (SLIP) model captures characteristic properties of human locomotion, and it is therefore often used to study human-like walking. The extended variable spring-loaded inverted pendulum (V-SLIP) model provides a control input for gait stabilization and shows robust and energy-efficient walking patterns. This work presents a control strategy that maps the conceptual V-SLIP model on a realistic model of a bipedal robot. This walker implements the variable leg compliance by means of variable stiffness actuators in the knees. The proposed controller consists of multiple levels, each level controlling the robot at a different level of abstraction. This allows the controller to control a simple dynamic structure at the top level and control the specific degrees of freedom of the robot at a lower level. The proposed controller is validated by both numeric simulations and preliminary experimental tests.",
"title": ""
},
{
"docid": "8b2f4d597b1aa5a9579fa3e37f6acc65",
"text": "This work presents a 910MHz/2.4GHz dual-band dipole antenna for Power Harvesting and/or Sensor Network applications whose main advantage lies on its easily tunable bands. Tunability is achieved via the low and high frequency dipole separation Wgap. This separation is used to increase or decrease the S11 magnitude of the required bands. Such tunability can be used to harvest energy in environments where the electric field strength of one carrier band is dominant over the other one, or in the case when both carriers have similar electric field strength. If the environment is crowed by 820MHz-1.02GHz carries Wgap is adjusted to 1mm in order to harvest/sense only the selected band; if the environment is full of 2.24GHz - 2.52 GHz carriers Wgap is set to 7mm. When Wgap is selected to 4mm both bands can be harvested/sensed. The proposed antenna works for UHF-RFID, GSM-840MHz, 3G-UMTS, Wi-Fi and Bluetooth standards. Simulations are carried out in Advanced Design System (ADS) Momentum using commercial FR4 printed circuit board specification.",
"title": ""
}
] | scidocsrr |
1d52c50130f737e30eae4b14fe3ffe0a | Pricing in Network Effect Markets | [
{
"docid": "1e18be7d7e121aa899c96cbcf5ea906b",
"text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1",
"title": ""
}
] | [
{
"docid": "dd14f9eb9a9e0e4e0d24527cf80d04f4",
"text": "The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boost about, it is mostly confined to the exploration of formal text patterns like online reviews, news articles etc. Exploration of the challenges offered by informal and crisp microblogging have taken roots but there is scope for a large way ahead. The proposed work aims at developing a hybrid model for sentiment classification that explores the tweet specific features and uses domain independent and domain specific lexicons to offer a domain oriented approach and hence analyze and extract the consumer sentiment towards popular smart phone brands over the past few years. The experiments have proved that the results improve by around 2 points on an average over the unigram baseline.",
"title": ""
},
{
"docid": "6f45bc16969ed9deb5da46ff8529bb8a",
"text": "In the future, mobile systems will increasingly feature more advanced organic light-emitting diode (OLED) displays. The power consumption of these displays is highly dependent on the image content. However, existing OLED power-saving techniques either change the visual experience of users or degrade the visual quality of images in exchange for a reduction in the power consumption. Some techniques attempt to enhance the image quality by employing a compound objective function. In this article, we present a win-win scheme that always enhances the image quality while simultaneously reducing the power consumption. We define metrics to assess the benefits and cost for potential image enhancement and power reduction. We then introduce algorithms that ensure the transformation of images into their quality-enhanced power-saving versions. Next, the win-win scheme is extended to process videos at a justifiable computational cost. All the proposed algorithms are shown to possess the win-win property without assuming accurate OLED power models. Finally, the proposed scheme is realized through a practical camera application and a video camcorder on mobile devices. The results of experiments conducted on a commercial tablet with a popular image database and on a smartphone with real-world videos are very encouraging and provide valuable insights for future research and practices.",
"title": ""
},
{
"docid": "d34c96bb2399e4bd3f19825eef98d6dd",
"text": "This paper proposes logic programs as a specification for robot control. These provide a formal specification of what an agent should do depending on what it senses, and its previous sensory inputs and actions. We show how to axiomatise reactive agents, events as an interface between continuous and discrete time, and persistence, as well as axiomatising integration and differentiation over time (in terms of the limit of sums and differences). This specification need not be evaluated as a Prolog program; we use can the fact that it will be evaluated in time to get a more efficient agent. We give a detailed example of a nonholonomic maze travelling robot, where we use the same language to model both the agent and the environment. One of the main motivations for this work is that there is a clean interface between the logic programs here and the model of uncertainty embedded in probabilistic Horn abduction. This is one step towards building a decisiontheoretic planning system where the output of the planner is a plan suitable for actually controlling a robot.",
"title": ""
},
{
"docid": "e578bafcfef89e66cd77f6ee41c1fd1e",
"text": "Quadruped robot is expected to serve in complex conditions such as mountain road, grassland, etc., therefore we desire a walking pattern generation that can guarantee both the speed and the stability of the quadruped robot. In order to solve this problem, this paper focuses on the stability for the tort pattern and proposes trot pattern generation for quadruped robot on the basis of ZMP stability margin. The foot trajectory is first designed based on the work space limitation. Then the ZMP and stability margin is computed to achieve the optimal trajectory of the midpoint of the hip joint of the robot. The angles of each joint are finally obtained through the inverse kinematics calculation. Finally, the effectiveness of the proposed method is demonstrated by the results from the simulation and the experiment on the quadruped robot in BIT.",
"title": ""
},
{
"docid": "be91ec9b4f017818f32af09cafbb2a9a",
"text": "Brainard et al. 2 INTRODUCTION Object recognition is difficult because there is no simple relation between an object's properties and the retinal image. Where the object is located, how it is oriented, and how it is illuminated also affect the image. Moreover, the relation is under-determined: multiple physical configurations can give rise to the same retinal image. In the case of object color, the spectral power distribution of the light reflected from an object depends not only on the object's intrinsic surface reflectance but also factors extrinsic to the object, such as the illumination. The relation between intrinsic reflectance, extrinsic illumination, and the color signal reflected to the eye is shown schematically in Figure 1. The light incident on a surface is characterized by its spectral power distribution E(λ). A small surface element reflects a fraction of the incident illuminant to the eye. The surface reflectance function S(λ) specifies this fraction as a function of wavelength. The spectrum of the light reaching the eye is called the color signal and is given by C(λ) = E(λ)S(λ). Information about C(λ) is encoded by three classes of cone photoreceptors, the L-, M-, and Scones. The top two patches rendered in Plate 1 illustrate the large effect that a typical change in natural illumination (see Wyszecki and Stiles, 1982) can have on the color signal. This effect might lead us to expect that the color appearance of objects should vary radically, depending as much on the current conditions of illumination as on the object's surface reflectance. Yet the very fact that we can sensibly refer to objects as having a color indicates otherwise. Somehow our visual system stabilizes the color appearance of objects against changes in illumination, a perceptual effect that is referred to as color constancy. Because the illumination is the most salient object-extrinsic factor that affects the color signal, it is natural that emphasis has been placed on understanding how changing the illumination affects object color appearance. In a typical color constancy experiment, the independent variable is the illumination and the dependent variable is a measure of color appearance experiments employ different stimulus configurations and psychophysical tasks, but taken as a whole they support the view that human vision exhibits a reasonable degree of color constancy. Recall that the top two patches of Plate 1 illustrate the limiting case where a single surface reflectance is seen under multiple illuminations. Although this …",
"title": ""
},
{
"docid": "14a8adf666b115ff4a72ff600432ff07",
"text": "In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable, and that any error or discrepancy represents a wrong that must be punished. Radiology involves decision-making under conditions of uncertainty, and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the “answer” is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. “chest pain”, “abdominal pain”), and the reporting radiologist must strive to interpret what may be the concerns of the referring doctor. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray reporting stating “No evidence of leprosy”. When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly-perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank).",
"title": ""
},
{
"docid": "28d19824a598ae20039f2ed5d8885234",
"text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.",
"title": ""
},
{
"docid": "597311f3187b504d91f7c788144f6b30",
"text": "Objective: Body Integrity Identity Disorder (BIID) describes a phenomenon in which physically healthy people feel the constant desire for an impairment of their body. M. First [4] suggested to classify BIID as an identity disorder. The other main disorder in this respect is Gender Dysphoria. In this paper these phenomena are compared. Method: A questionnaire survey with transsexuals (number of subjects, N=19) and BIID sufferers (N=24) measuring similarities and differences. Age and educational level of the subjects are predominantly matched. Results: No differences were found between BIID and Gender Dysphoria with respect to body image and body perception (U-test: p-value=.757), age of onset (p=.841), the imitation of the desired identity (p=.699 and p=.938), the etiology (p=.299) and intensity of desire (p=.989 and p=.224) as well as in relation to a high level of suffering and impaired quality of life (p=.066). Conclusion: There are many similarities between BIID and Gender Dysphoria, but the sample was too small to make general statements. The results, however, indicate that BIID can actually be classified as an identity disorder.",
"title": ""
},
{
"docid": "714c06da1a728663afd8dbb1cd2d472d",
"text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.",
"title": ""
},
{
"docid": "f4ebbcebefbcc1ba8b6f8e5bf6096645",
"text": "With advances in wireless communication technology, more and more people depend heavily on portable mobile devices for businesses, entertainments and social interactions. Although such portable mobile devices can offer various promising applications, their computing resources remain limited due to their portable size. This however can be overcome by remotely executing computation-intensive tasks on clusters of near by computers known as cloudlets. As increasing numbers of people access the Internet via mobile devices, it is reasonable to envision in the near future that cloudlet services will be available for the public through easily accessible public wireless metropolitan area networks (WMANs). However, the outdated notion of treating cloudlets as isolated data-centers-in-a-box must be discarded as there are clear benefits to connecting multiple cloudlets together to form a network. In this paper we investigate how to balance the workload between multiple cloudlets in a network to optimize mobile application performance. We first introduce a system model to capture the response times of offloaded tasks, and formulate a novel optimization problem, that is to find an optimal redirection of tasks between cloudlets such that the maximum of the average response times of tasks at cloudlets is minimized. We then propose a fast, scalable algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. The experimental results demonstrate the significant potential of the proposed algorithm in reducing the response times of tasks.",
"title": ""
},
{
"docid": "1c8e47f700926cf0b6ab6ed7446a6e7a",
"text": "Named Entity Recognition (NER) is a key task in biomedical text mining. Accurate NER systems require task-specific, manually-annotated datasets, which are expensive to develop and thus limited in size. Since such datasets contain related but different information, an interesting question is whether it might be possible to use them together to improve NER performance. To investigate this, we develop supervised, multi-task, convolutional neural network models and apply them to a large number of varied existing biomedical named entity datasets. Additionally, we investigated the effect of dataset size on performance in both single- and multi-task settings. We present a single-task model for NER, a Multi-output multi-task model and a Dependent multi-task model. We apply the three models to 15 biomedical datasets containing multiple named entities including Anatomy, Chemical, Disease, Gene/Protein and Species. Each dataset represent a task. The results from the single-task model and the multi-task models are then compared for evidence of benefits from Multi-task Learning. With the Multi-output multi-task model we observed an average F-score improvement of 0.8% when compared to the single-task model from an average baseline of 78.4%. Although there was a significant drop in performance on one dataset, performance improves significantly for five datasets by up to 6.3%. For the Dependent multi-task model we observed an average improvement of 0.4% when compared to the single-task model. There were no significant drops in performance on any dataset, and performance improves significantly for six datasets by up to 1.1%. The dataset size experiments found that as dataset size decreased, the multi-output model’s performance increased compared to the single-task model’s. Using 50, 25 and 10% of the training data resulted in an average drop of approximately 3.4, 8 and 16.7% respectively for the single-task model but approximately 0.2, 3.0 and 9.8% for the multi-task model. Our results show that, on average, the multi-task models produced better NER results than the single-task models trained on a single NER dataset. We also found that Multi-task Learning is beneficial for small datasets. Across the various settings the improvements are significant, demonstrating the benefit of Multi-task Learning for this task.",
"title": ""
},
{
"docid": "b238ceff7cf19621a420494ac311b2dd",
"text": "In this paper, we discuss the extension and integration of the statistical concept of Kernel Density Estimation (KDE) in a scatterplot-like visualization for dynamic data at interactive rates. We present a line kernel for representing streaming data, we discuss how the concept of KDE can be adapted to enable a continuous representation of the distribution of a dependent variable of a 2D domain. We propose to automatically adapt the kernel bandwith of KDE to the viewport settings, in an interactive visualization environment that allows zooming and panning. We also present a GPU-based realization of KDE that leads to interactive frame rates, even for comparably large datasets. Finally, we demonstrate the usefulness of our approach in the context of three application scenarios - one studying streaming ship traffic data, another one from the oil & gas domain, where process data from the operation of an oil rig is streaming in to an on-shore operational center, and a third one studying commercial air traffic in the US spanning 1987 to 2008.",
"title": ""
},
{
"docid": "4c30af9dd05b773ce881a312bcad9cb9",
"text": "This review summarized various chemical recycling methods for PVC, such as pyrolysis, catalytic dechlorination and hydrothermal treatment, with a view to solving the problem of energy crisis and the impact of environmental degradation of PVC. Emphasis was paid on the recent progress on the pyrolysis of PVC, including co-pyrolysis of PVC with biomass/coal and other plastics, catalytic dechlorination of raw PVC or Cl-containing oil and hydrothermal treatment using subcritical and supercritical water. Understanding the advantage and disadvantage of these treatment methods can be beneficial for treating PVC properly. The dehydrochlorination of PVC mainly happed at low temperature of 250-320°C. The process of PVC dehydrochlorination can catalyze and accelerate the biomass pyrolysis. The intermediates from dehydrochlorination stage of PVC can increase char yield of co-pyrolysis of PVC with PP/PE/PS. For the catalytic degradation and dechlorination of PVC, metal oxides catalysts mainly acted as adsorbents for the evolved HCl or as inhibitors of HCl formation depending on their basicity, while zeolites and noble metal catalysts can produce lighter oil, depending the total number of acid sites and the number of accessible acidic sites. For hydrothermal treatment, PVC decomposed through three stages. In the first region (T<250°C), PVC went through dehydrochlorination to form polyene; in the second region (250°C<T<350°C), polyene decomposed to low-molecular weight compounds; in the third region (350°C<T), polyene further decomposed into a large amount of low-molecular weight compounds.",
"title": ""
},
{
"docid": "e6245f210bfbcf47795604b45cb927ad",
"text": "The grid-connected AC module is an alternative solution in photovoltaic (PV) generation systems. It combines a PV panel and a micro-inverter connected to grid. The use of a high step-up converter is essential for the grid-connected micro-inverter because the input voltage is about 15 V to 40 V for a single PV panel. The proposed converter employs a Zeta converter and a coupled inductor, without the extreme duty ratios and high turns ratios generally needed for the coupled inductor to achieve high step-up voltage conversion; the leakage-inductor energy of the coupled inductor is efficiently recycled to the load. These features improve the energy-conversion efficiency. The operating principles and steady-state analyses of continuous and boundary conduction modes, as well as the voltage and current stresses of the active components, are discussed in detail. A 25 V input voltage, 200 V output voltage, and 250 W output power prototype circuit of the proposed converter is implemented to verify the feasibility; the maximum efficiency is up to 97.3%, and full-load efficiency is 94.8%.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "f37d32a668751198ed8acde8ab3bdc12",
"text": "INTRODUCTION\nAlthough the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool -ADHD concomitant difficulties scale (ADHD-CDS)- to assess the presence of some of the most important comorbidities that usually appear associated with ADHD such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.\n\n\nMETHODS\nThe total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.\n\n\nRESULTS\nFactor analysis showed a 13-item single factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of.979 (95%, CI = [0.969, 0.990]).\n\n\nDISCUSSION\nThe ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD.",
"title": ""
},
{
"docid": "20e19999be17bce4ba3ae6d94400ba3c",
"text": "Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today's multi-core architecture. In this paper, we study the parallelizability of skip lists for the parallel and concurrent environment, and present PSL, a Parallel in-memory Skip List that lends itself naturally to the multi-core environment, particularly with non-uniform memory access. For each query, PSL traverses the index in a Breadth-First-Search (BFS) to find the list node with the matching key, and exploits SIMD processing to speed up this process. Furthermore, PSL distributes incoming queries among multiple execution threads disjointly and uniformly to eliminate the use of latches and achieve a high parallelizability. The experimental results show that PSL is comparable to a readonly index, FAST, in terms of read performance, and outperforms ART and Masstree respectively by up to 30% and 5x for a variety of workloads.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "91e9f4d67c89aea99299966492648300",
"text": "In safety critical domains, system test cases are often derived from functional requirements in natural language (NL) and traceability between requirements and their corresponding test cases is usually mandatory. The definition of test cases is therefore time-consuming and error prone, especially so given the quickly rising complexity of embedded systems in many critical domains. Though considerable research has been devoted to automatic generation of system test cases from NL requirements, most of the proposed approaches re- quire significant manual intervention or additional, complex behavioral modelling. This significantly hinders their applicability in practice. In this paper, we propose Use Case Modelling for System Tests Generation (UMTG), an approach that automatically generates executable system test cases from use case spec- ifications and a domain model, the latter including a class diagram and constraints. Our rationale and motivation are that, in many environments, including that of our industry partner in the reported case study, both use case specifica- tions and domain modelling are common and accepted prac- tice, whereas behavioural modelling is considered a difficult and expensive exercise if it is to be complete and precise. In order to extract behavioral information from use cases and enable test automation, UMTG employs Natural Language Processing (NLP), a restricted form of use case specifica- tions, and constraint solving.",
"title": ""
}
] | scidocsrr |
5a05b75aec6ae542f8a13389f558bb09 | Developing the Models of Service Quality Gaps : A Critical Discussion | [
{
"docid": "5971934855f9d4dde2a7fc91e757606c",
"text": "The use of total quality management (TQM), which creates a system of management procedures that focuses on customer satisfaction and transforms the corporate culture so as to guarantee continual improvement, is discussed. The team approach essential to its implementation is described. Two case studies of applying TQM at AT&T are presented.<<ETX>>",
"title": ""
},
{
"docid": "67379c945de57c3662a2cd96cd67c15b",
"text": "Introduction This paper’s purpose is to illustrate the relationship of profitability to intermediate, customer-related outcomes that managers can influence directly. It is predominantly a general management discussion, consistent with the Nordic School’s view that services are highly interdisciplinary, requiring a “service management” approach (see Grönroos, 1984, 1991). Its findings support the theory that customer satisfaction is related to customer loyalty, which in turn is related to profitability (Heskett et al., 1994, and discussed in Storbacka et al., 1994). While this theory has been advocated for service firms as a class, this paper presents an empirical analysis of one retail bank, limiting the findings’ generalizability. The service profit chain (Heskett et al., 1994) hypothesizes that:",
"title": ""
}
] | [
{
"docid": "c04e3a28b6f3f527edae534101232701",
"text": "An intelligent interface for an information retrieval system has the aims of controlling an underlying information retrieval system di rectly interacting with the user and allowing him to retrieve relevant information without the support of a human intermediary Developing intelligent interfaces for information retrieval is a di cult activity and no well established models of the functions that such systems should possess are available Despite of this di culty many intelligent in terfaces for information retrieval have been implemented in the past years This paper surveys these systems with two aims to stand as a useful entry point for the existing literature and to sketch an ana lysis of the functionalities that an intelligent interface for information retrieval has to possess",
"title": ""
},
{
"docid": "f17f2e754149474ea879711dc5bcd087",
"text": "In grasping, shape adaptation between hand and object has a major influence on grasp success. In this paper, we present an approach to grasping unknown objects that explicitly considers the effect of shape adaptability to simplify perception. Shape adaptation also occurs between the hand and the environment, for example, when fingers slide across the surface of the table to pick up a small object. Our approach to grasping also considers environmental shape adaptability to select grasps with high probability of success. We validate the proposed shape-adaptability-aware grasping approach in 880 real-world grasping trials with 30 objects. Our experiments show that the explicit consideration of shape adaptability of the hand leads to robust grasping of unknown objects. Simple perception suffices to achieve this robust grasping behavior.",
"title": ""
},
{
"docid": "d59cc1c197099db86aba4d9f79cb6267",
"text": "The rapid growth of the Internet as an environment for information exchange and the lack of enforceable standards regarding the information it contains has lead to numerous information qual ity problems. A major issue is the inability of Search Engine technology to wade through the vast expanse of questionable content and return \"quality\" results to a user's query. This paper attempts to address some of the issues involved in determining what quality is, as it pertains to information retrieval on the Internet. The IQIP model is presented as an approach to managing the choice and implementation of quality related algorithms of an Internet crawling Search Engine.",
"title": ""
},
{
"docid": "b314fcc883f4182c45d0c4eb5511df0b",
"text": "The absorption of the light by sea water and light scattering by small particles of underwater environment has become an obstacle of underwater vision researches with camera. It gives impact to the limitation of visibility distances camera in the sea water. The research of 3D reconstruction requires image matching technique to find out the keypoints of image pairs. SIFT is one of the image matching technique where the quality of image matching depends on the quality of the image. This research proposed HSV conversion image with auto level color correction to increase the number of SIFT image matching. The experimental results show the number of image matching is increase until 4 %.",
"title": ""
},
{
"docid": "a5e3a061af7386db9d7c116409da205e",
"text": "This paper describes an online learning based method to detect flames in video by processing the data generated by an ordinary camera monitoring a scene. Our fire detection method consists of weak classifiers based on temporal and spatial modeling of flames. Markov models representing the flame and flame colored ordinary moving objects are used to distinguish temporal flame flicker process from motion of flame colored moving objects. Boundary of flames are represented in wavelet domain and high frequency nature of the boundaries of fire regions is also used as a clue to model the flame flicker spatially. Results from temporal and spatial weak classifiers based on flame flicker and irregularity of the flame region boundaries are updated online to reach a final decision. False alarms due to ordinary and periodic motion of flame colored moving objects are greatly reduced when compared to the existing video based fire detection systems.",
"title": ""
},
{
"docid": "9bacc1ef43fd8c05dde814a18f59e467",
"text": "The processes that affect removal and retention of nitrogen during wastewater treatment in constructed wetlands (CWs) are manifold and include NH(3) volatilization, nitrification, denitrification, nitrogen fixation, plant and microbial uptake, mineralization (ammonification), nitrate reduction to ammonium (nitrate-ammonification), anaerobic ammonia oxidation (ANAMMOX), fragmentation, sorption, desorption, burial, and leaching. However, only few processes ultimately remove total nitrogen from the wastewater while most processes just convert nitrogen to its various forms. Removal of total nitrogen in studied types of constructed wetlands varied between 40 and 55% with removed load ranging between 250 and 630 g N m(-2) yr(-1) depending on CWs type and inflow loading. However, the processes responsible for the removal differ in magnitude among systems. Single-stage constructed wetlands cannot achieve high removal of total nitrogen due to their inability to provide both aerobic and anaerobic conditions at the same time. Vertical flow constructed wetlands remove successfully ammonia-N but very limited denitrification takes place in these systems. On the other hand, horizontal-flow constructed wetlands provide good conditions for denitrification but the ability of these system to nitrify ammonia is very limited. Therefore, various types of constructed wetlands may be combined with each other in order to exploit the specific advantages of the individual systems. The soil phosphorus cycle is fundamentally different from the N cycle. There are no valency changes during biotic assimilation of inorganic P or during decomposition of organic P by microorganisms. Phosphorus transformations during wastewater treatment in CWs include adsorption, desorption, precipitation, dissolution, plant and microbial uptake, fragmentation, leaching, mineralization, sedimentation (peat accretion) and burial. The major phosphorus removal processes are sorption, precipitation, plant uptake (with subsequent harvest) and peat/soil accretion. However, the first three processes are saturable and soil accretion occurs only in FWS CWs. Removal of phosphorus in all types of constructed wetlands is low unless special substrates with high sorption capacity are used. Removal of total phosphorus varied between 40 and 60% in all types of constructed wetlands with removed load ranging between 45 and 75 g N m(-2) yr(-1) depending on CWs type and inflow loading. Removal of both nitrogen and phosphorus via harvesting of aboveground biomass of emergent vegetation is low but it could be substantial for lightly loaded systems (cca 100-200 g N m(-2) yr(-1) and 10-20 g P m(-2) yr(-1)). Systems with free-floating plants may achieve higher removal of nitrogen via harvesting due to multiple harvesting schedule.",
"title": ""
},
{
"docid": "fa7d7672301fdb3cdf3a6f7624165df1",
"text": "We present a 2-mm diameter, 35-μm-thick disk resonator gyro (DRG) fabricated in <;111> silicon with integrated 0.35-μm CMOS analog front-end circuits. The device is fabricated in the commercial InvenSense Fabrication MEMSCMOS integrated platform, which incorporates a wafer-level vacuum seal, yielding a quality factor (Q) of 2800 at the DRGs 78-kHz resonant frequency. After performing electrostatic tuning to enable mode-matched operation, this DRG achieves a 55 μV/°/s sensitivity. Resonator vibration in the sense and drive axes is sensed using capacitive transduction, and amplified using a lownoise, on-chip integrated circuit. This allows the DRG to achieve Brownian noise-limited performance. The angle random walk is measured to be 0.008°/s/√(Hz) and the bias instability is 20°/h.",
"title": ""
},
{
"docid": "4a2cf5de0787b9b5a486c411fd6455e6",
"text": "To improve a product you will need most likely developers, managers and user feedback. Besides the basic software qualities other important properties are usability and user experience for developing a good product. Usability is well known and can be tested with e.g. a usability test or an expert review. In contrast user experience describes the whole impact a product has on the end-user. The timeline goes from before, while and after the use of a product. We present a tool that allows to evaluate the user experience of a product with little effort. We show in addition how this tool can be used for a continuous user experience assessment.",
"title": ""
},
{
"docid": "af4055df4a60a241f43d453f34189d86",
"text": "We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "a19787b0ddf3a6458b0eced0e971eb20",
"text": "BACKGROUND\nThe psychological autopsy method offers the most direct technique currently available for examining the relationship between particular antecedents and suicide. This systematic review aimed to examine the results of studies of suicide that used a psychological autopsy method.\n\n\nMETHOD\nA computer aided search of MEDLINE, BIDS ISI and PSYCHLIT, supplemented by reports known to the reviewers and reports identified from the reference lists of other retrieved reports. Two investigators systematically and independently examined all reports. Median proportions were determined and population attributable fractions were calculated, where possible, in cases of suicide and controls.\n\n\nRESULTS\nOne hundred and fifty-four reports were identified, of which 76 met the criteria for inclusion; 54 were case series and 22 were case-control studies. The median proportion of cases with mental disorder was 91% (95 % CI 81-98%) in the case series. In the case-control studies the figure was 90% (88-95%) in the cases and 27% (14-48%) in the controls. Co-morbid mental disorder and substance abuse also preceded suicide in more cases (38%, 19-57%) than controls (6%, 0-13%). The population attributable fraction for mental disorder ranged from 47-74% in the seven studies in which it could be calculated. The effects of particular disorders and sociological variables have been insufficiently studied to draw clear conclusions.\n\n\nCONCLUSIONS\nThe results indicated that mental disorder was the most strongly associated variable of those that have been studied. Further studies should focus on specific disorders and psychosocial factors. Suicide prevention strategies may be most effective if focused on the treatment of mental disorders.",
"title": ""
},
{
"docid": "77362cc72d7a09dbbb0f067c11fe8087",
"text": "The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.",
"title": ""
},
{
"docid": "eb17d97b32db0682d10dfef2ab0c2902",
"text": "Previous studies have suggested that solar-induced chlorophyll fluorescence (SIF) is correlated with Gross Primary Production (GPP). However, it remains unclear to what extent this relationship is due to absorbed photosynthetically active radiation (APAR) and/or light use efficiency (LUE). Here we present the first time series of near-surface measurement of canopy-scale SIF at 760 nm in temperate deciduous forests. SIF correlated with GPP estimated with eddy covariance at diurnal and seasonal scales (r = 0.82 and 0.73, respectively), as well as with APAR diurnally and seasonally (r = 0.90 and 0.80, respectively). SIF/APAR is significantly positively correlated with LUE and is higher during cloudy days than sunny days. Weekly tower-based SIF agreed with SIF from the Global Ozone Monitoring Experiment-2 (r = 0.82). Our results provide ground-based evidence that SIF is directly related to both APAR and LUE and thus GPP, and confirm that satellite SIF can be used as a proxy for GPP.",
"title": ""
},
{
"docid": "877bc8fb07b60f61bcd3b98e925a7aa0",
"text": "Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68 % in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights and speed signs by using image-level labels only, as well as smooth car-following, resulting in a significant reduction of traffic accidents in simulation.",
"title": ""
},
{
"docid": "a74aef75f5b1d5bc44da2f6d2c9284cf",
"text": "In this paper, we define irregular bipolar fuzzy graphs and its various classifications. Size of regular bipolar fuzzy graphs is derived. The relation between highly and neighbourly irregular bipolar fuzzy graphs are established. Some basic theorems related to the stated graphs have also been presented.",
"title": ""
},
{
"docid": "1852d9b0fab03cfc3abe5e0448198299",
"text": "Efficient exploration in high-dimensional environments remains a key challenge in reinforcement learning (RL). Deep reinforcement learning methods have demonstrated the ability to learn with highly general policy classes for complex tasks with high-dimensional inputs, such as raw images. However, many of the most effective exploration techniques rely on tabular representations, or on the ability to construct a generative model over states and actions. Both are exceptionally difficult when these inputs are complex and high dimensional. On the other hand, it is comparatively easy to build discriminative models on top of complex states such as images using standard deep neural networks. This paper introduces a novel approach, EX, which approximates state visitation densities by training an ensemble of discriminators, and assigns reward bonuses to rarely visited states. We demonstrate that EX achieves comparable performance to the state-of-the-art methods on lowdimensional tasks, and its effectiveness scales into high-dimensional state spaces such as visual domains without hand-designing features or density models.",
"title": ""
},
{
"docid": "f1d69b033490ed8c4eec7b476e9b7c08",
"text": "Performance-based measures of emotional intelligence (EI) are more likely than measures based on self-report to assess EI as a construct distinct from personality. A multivariate investigation was conducted with the performance-based, Multi-Factor Emotional Intelligence Scale (MEIS; J. D. Mayer, D. Caruso, & P. Salovey, 1999). Participants (N = 704) also completed the Trait Self-Description Inventory (TSDI, a measure of the Big Five personality factors; Christal, 1994; R. D. Roberts et al.), and the Armed Services Vocational Aptitude Battery (ASVAB, a measure of intelligence). Results were equivocal. Although the MEIS showed convergent validity (correlating moderately with the ASVAB) and divergent validity (correlating minimally with the TSDI), different scoring protocols (i.e., expert and consensus) yielded contradictory findings. Analyses of factor structure and subscale reliability identified further measurement problems. Overall, it is questionable whether the MEIS operationalizes EI as a reliable and valid construct.",
"title": ""
},
{
"docid": "1f62f4d5b84de96583e17fdc0f4828be",
"text": "This study examined age differences in perceptions of online communities held by people who were not yet participating in these relatively new social spaces. Using the Technology Acceptance Model (TAM), we investigated the factors that affect future intention to participate in online communities. Our results supported the proposition that perceived usefulness positively affects behavioral intention, yet it was determined that perceived ease of use was not a significant predictor of perceived usefulness. The study also discovered negative relationships between age and Internet self-efficacy and the perceived quality of online community websites. However, the moderating role of age was not found. The findings suggest that the relationships among perceived ease of use, perceived usefulness, and intention to participate in online communities do not change with age. Theoretical and practical implications and limitations were discussed. ! 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d5e16b5d2c0e10e2ee626bf2c35cca13",
"text": "The co-occurrence of diseases can inform the underlying network biology of shared and multifunctional genes and pathways. In addition, comorbidities help to elucidate the effects of external exposures, such as diet, lifestyle and patient care. With worldwide health transaction data now often being collected electronically, disease co-occurrences are starting to be quantitatively characterized. Linking network dynamics to the real-life, non-ideal patient in whom diseases co-occur and interact provides a valuable basis for generating hypotheses on molecular disease mechanisms, and provides knowledge that can facilitate drug repurposing and the development of targeted therapeutic strategies.",
"title": ""
},
{
"docid": "34af5ac483483fa59eda7804918bdb1c",
"text": "Automatic spelling and grammatical correction systems are one of the most widely used tools within natural language applications. In this thesis, we assume the task of error correction as a type of monolingual machine translation where the source sentence is potentially erroneous and the target sentence should be the corrected form of the input. Our main focus in this project is building neural network models for the task of error correction. In particular, we investigate sequence-to-sequence and attention-based models which have recently shown a higher performance than the state-of-the-art of many language processing problems. We demonstrate that neural machine translation models can be successfully applied to the task of error correction. While the experiments of this research are performed on an Arabic corpus, our methods in this thesis can be easily applied to any language. Keywords— natural language error correction, recurrent neural networks, encoderdecoder models, attention mechanism",
"title": ""
}
] | scidocsrr |
18b187af6666031609b07017bfa0c654 | Customer relationship management classification using data mining techniques | [
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "4bf5fd6fdb2cb82fa13abdb13653f3ac",
"text": "Customer relationship management (CRM) has once again gained prominence amongst academics and practitioners. However, there is a tremendous amount of confusion regarding its domain and meaning. In this paper, the authors explore the conceptual foundations of CRM by examining the literature on relationship marketing and other disciplines that contribute to the knowledge of CRM. A CRM process framework is proposed that builds on other relationship development process models. CRM implementation challenges as well as CRM's potential to become a distinct discipline of marketing are also discussed in this paper. JEL Classification Codes: M31.",
"title": ""
}
] | [
{
"docid": "c1d5df0e2058e3f191a8227fca51a2fb",
"text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.",
"title": ""
},
{
"docid": "36cf5e6ffec29f0eede4f369104d00d3",
"text": "This meta-analysis is a study of the experimental literature of technology use in postsecondary education from 1990 up to 2010 exclusive of studies of online or distance education previously reviewed by Bernard et al. (2004). It reports the overall weighted average effects of technology use on achievement and attitude outcomes and explores moderator variables in an attempt to explain how technology treatments lead to positive or negative effects. Out of an initial pool of 11,957 study abstracts, 1105 were chosen for analysis, yielding 879 achievement and 181 attitude effect sizes after pre-experimental designs and studies with obvious methodological confounds were removed. The random effects weighted average effect size for achievement was gþ 1⁄4 0.27, k 1⁄4 879, p < .05, and for attitude outcomes it was gþ 1⁄4 0.20, k 1⁄4 181, p < .05. The collection of achievement outcomes was divided into two sub-collections, according to the amount of technology integration in the control condition. These were no technology in the control condition (k 1⁄4 479) and some technology in the control condition (k 1⁄4 400). Random effects multiple meta-regression analysis was run on each sub-collection revealing three significant predictors (subject matter, degree of difference in technology use between the treatment and the control and pedagogical uses of technology). The set of predictors for each sub-collection was both significant and homogeneous. Differences were found among the levels of all three moderators, but particularly in favor of cognitive support applications. There were no significant predictors for attitude outcomes. Crown Copyright 2013 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e28613435d7dbd944a997f2d1fa67598",
"text": "Emerging multimedia content including images and texts are always jointly utilized to describe the same semantics. As a result, crossmedia retrieval becomes increasingly important, which is able to retrieve the results of the same semantics with the query but with different media types. In this paper, we propose a novel heterogeneous similarity measure with nearest neighbors (HSNN). Unlike traditional similarity measures which are limited in homogeneous feature space, HSNN could compute the similarity between media objects with different media types. The heterogeneous similarity is obtained by computing the probability for two media objects belonging to the same semantic category. The probability is achieved by analyzing the homogeneous nearest neighbors of each media object. HSNN is flexible so that any traditional similarity measure could be incorporated, which is further regarded as the weak ranker. An effective ranking model is learned from multiple weak rankers through AdaRank for cross-media retrieval. Experiments on the wikipedia dataset show the effectiveness of the proposed approach, compared with stateof-the-art methods. The cross-media retrieval also shows to outperform image retrieval systems on a unimedia retrieval task.",
"title": ""
},
{
"docid": "b617762a18685137a1e18838b2e46f11",
"text": "Information Extraction (IE) is a technology for localizing and classifying pieces of relevant information in unstructured natural language texts and detecting relevant relations among them. This thesis deals with one of the central tasks of IE, i.e., relation extraction. The goal is to provide a general framework that automatically learns mappings between linguistic analyses and target semantic relations, with minimal human intervention. Furthermore, this framework is supposed to support the adaptation to new application domains and new relations with various complexities. The central result is a new approach to relation extraction which is based on a minimally supervised method for automatically learning extraction grammars from a large collection of parsed texts, initialized by some instances of the target relation, called semantic seed. Due to the semantic seed approach, the framework can accommodate new relation types and domains with minimal effort. It supports relations of different arity as well as their projections. Furthermore, this framework is general enough to employ any linguistic analysis tools that provide the required type and depth of analysis. The adaptability and the scalability of the framework is facilitated by the DARE rule representation model which is recursive and compositional. In comparison to other IE rule representation models, e.g., Stevenson and Greenwood (2006), the DARE rule representation model is expressive enough to achieve good coverage of linguistic constructions for finding mentions of the target relation. The powerful DARE rules are constructed via a bottom-up and compositional rule discovery strategy, driven by the semantic seed. The control of the quality of newly acquired knowledge during the bootstrapping process is realized through a ranking and filtering strategy, taking two aspects into account: the domain relevance and the trustworthiness of the origin. A spe-",
"title": ""
},
{
"docid": "6fe39cbe3811ac92527ba60620b39170",
"text": "Providing accurate information about human's state, activity is one of the most important elements in Ubiquitous Computing. Various applications can be enabled if one's state, activity can be recognized. Due to the low deployment cost, non-intrusive sensing nature, Wi-Fi based activity recognition has become a promising, emerging research area. In this paper, we survey the state-of-the-art of the area from four aspects ranging from historical overview, theories, models, key techniques to applications. In addition to the summary about the principles, achievements of existing work, we also highlight some open issues, research directions in this emerging area.",
"title": ""
},
{
"docid": "7111c220a28d7a6fab32d9ecc914c5aa",
"text": "Songbirds are one of the best-studied examples of vocal learners. Learning of both human speech and birdsong depends on hearing. Once learned, adult song in many species remains unchanging, suggesting a reduced influence of sensory experience. Recent studies have revealed, however, that adult song is not always stable, extending our understanding of the mechanisms involved in song maintenance, and their similarity to those active during song learning. Here we review some of the processes that contribute to song learning and production, with an emphasis on the role of auditory feedback. We then consider some of the possible neural substrates involved in these processes, particularly basal ganglia circuitry. Although a thorough treatment of human speech is beyond the scope of this article, we point out similarities between speech and song learning, and ways in which studies of these disparate behaviours complement each other in developing an understanding of general principles that contribute to learning and maintenance of vocal behaviour.",
"title": ""
},
{
"docid": "f44bfa0a366fb50a571e6df9f4c3f91d",
"text": "BACKGROUND\nIn silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process.\n\n\nRESULTS\ncamb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles using techniques from the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2).\n\n\nCONCLUSIONS\nOverall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application.Graphical abstractFrom compounds and data to models: a complete model building workflow in one package.",
"title": ""
},
{
"docid": "2bf1766eccd14d2da3581018ff621f09",
"text": "We propose a novel segmentation approach for introducing shape priors in the geometric active contour framework. Following the work of Leventon, we propose to revisit the use of linear principal component analysis (PCA) to introduce prior knowledge about shapes in a more robust manner. Our contribution in this paper is twofold. First, we demonstrate that building a space of familiar shapes by applying PCA on binary images (instead of signed distance functions) enables one to constrain the contour evolution in a way that is more faithful to the elements of a training set. Secondly, we present a novel region-based segmentation framework, able to separate regions of different intensities in an image. Shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description allows for the simultaneous encoding of multiple types of shapes and leads to promising segmentation results. In particular, our shape-driven segmentation technique offers a convincing level of robustness with respect to noise, clutter, partial occlusions, and blurring.",
"title": ""
},
{
"docid": "6c71078281d0ff7e4829624af5124bfb",
"text": "The modeling of artificial, human-level creativity is becoming more and more achievable. In recent years, neural networks have been successfully applied to different tasks such as image and music generation, demonstrating their great potential in realizing computational creativity. The fuzzy definition of creativity combined with varying goals of the evaluated generative systems, however, makes subjective evaluation seem to be the only viable methodology of choice. We review the evaluation of generative music systems and discuss the inherent challenges of their evaluation. Although subjective evaluation should always be the ultimate choice for the evaluation of creative results, researchers unfamiliar with rigorous subjective experiment design and without the necessary resources for the execution of a large-scale experiment face challenges in terms of reliability, validity, and replicability of the results. In numerous studies, this leads to the report of insignificant and possibly irrelevant results and the lack of comparability with similar and previous generative systems. Therefore, we propose a set of simple musically informed objective metrics enabling an objective and reproducible way of evaluating and comparing the output of music generative systems. We demonstrate the usefulness of the proposed metrics with several experiments on real-world data.",
"title": ""
},
{
"docid": "973426438175226bb46c39cc0a390d97",
"text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.",
"title": ""
},
{
"docid": "9e31cedf404c989d15a2f06c5800f207",
"text": "For automatic driving, vehicles must be able to recognize their environment and take control of the vehicle. The vehicle must perceive relevant objects, which includes other traffic participants as well as infrastructure information, assess the situation and generate appropriate actions. This work is a first step of integrating previous works on environment perception and situation analysis toward automatic driving strategies. We present a method for automatic cruise control of vehicles in urban environments. The longitudinal velocity is influenced by the speed limit, the curvature of the lane, the state of the next traffic light and the most relevant target on the current lane. The necessary acceleration is computed in respect to the information which is estimated by an instrumented vehicle.",
"title": ""
},
{
"docid": "4d52865efa6c359d68125c7013647c86",
"text": "In recent years, we have witnessed an unprecedented proliferation of large document collections. This development has spawned the need for appropriate analytical means. In particular, to seize the thematic composition of large document collections, researchers increasingly draw on quantitative topic models. Among their most prominent representatives is the Latent Dirichlet Allocation (LDA). Yet, these models have significant drawbacks, e.g. the generated topics lack context and thus meaningfulness. Prior research has rarely addressed this limitation through the lens of mixed-methods research. We position our paper towards this gap by proposing a structured mixedmethods approach to the meaningful analysis of large document collections. Particularly, we draw on qualitative coding and quantitative hierarchical clustering to validate and enhance topic models through re-contextualization. To illustrate the proposed approach, we conduct a case study of the thematic composition of the AIS Senior Scholars' Basket of Journals.",
"title": ""
},
{
"docid": "a62aae5ac55e884d6e1e3ef0282657cc",
"text": "Nowadays, the remote Home Automation turns out to be more and more significant and appealing. It improves the value of our lives by automating various electrical appliances or instruments. This paper describes GSM (Global System Messaging) based secured device control system using App Inventor for Android mobile phones. App Inventor is a latest visual programming platform for developing mobile applications for Android-based smart phones. The Android Mobile Phone Platform becomes more and more popular among software developers, because of its powerful capabilities and open architecture. It is a fantastic platform for the real world interface control, as it offers an ample of resources and already incorporates a lot of sensors. No need to write programming codes to develop apps in the App Inventor, instead it provides visual design interface as the way the apps looks and use blocks of interlocking components to control the app’s behaviour. The App Inventor aims to make programming enjoyable and accessible to",
"title": ""
},
{
"docid": "506d3e23383de6d3a37471798770ed70",
"text": "One of the most controversial issues in uncertainty modelling and information sciences is the relationship between probability theory and fuzzy sets. This paper is meant to survey the literature pertaining to this debate, and to try to overcome misunderstandings and to supply access to many basic references that have addressed the \"probability versus fuzzy set\" challenge. This problem has not a single facet, as will be claimed here. Moreover it seems that a lot of controversies might have been avoided if protagonists had been patient enough to build a common language and to share their scientific backgrounds. The main points made here are as follows. i) Fuzzy set theory is a consistent body of mathematical tools. ii) Although fuzzy sets and probability measures are distinct, several bridges relating them have been proposed that should reconcile opposite points of view ; especially possibility theory stands at the cross-roads between fuzzy sets and probability theory. iii) Mathematical objects that behave like fuzzy sets exist in probability theory. It does not mean that fuzziness is reducible to randomness. Indeed iv) there are ways of approaching fuzzy sets and possibility theory that owe nothing to probability theory. Interpretations of probability theory are multiple especially frequentist versus subjectivist views (Fine [31]) ; several interpretations of fuzzy sets also exist. Some interpretations of fuzzy sets are in agreement with probability calculus and some are not. The paper is structured as follows : first we address some classical misunderstandings between fuzzy sets and probabilities. They must be solved before any discussion can take place. Then we consider probabilistic interpretations of membership functions, that may help in membership function assessment. We also point out nonprobabilistic interpretations of fuzzy sets. The next section examines the literature on possibility-probability transformations and tries to clarify some lurking controversies on that topic. In conclusion, we briefly mention several subfields of fuzzy set research where fuzzy sets and probability are conjointly used.",
"title": ""
},
{
"docid": "eec7a9a6859e641c3cc0ade73583ef5c",
"text": "We propose an Apache Spark-based scale-up server architecture using Docker container-based partitioning method to improve performance scalability. The performance scalability problem of Apache Spark-based scale-up servers is due to garbage collection(GC) and remote memory access overheads when the servers are equipped with significant number of cores and Non-Uniform Memory Access(NUMA). The proposed method minimizes the problems using Docker container-based architecture effectively partitioning the original scale-up server into small logical servers. Our evaluation study based on benchmark programs revealed that the partitioning method showed performance improvement by ranging from 1.1x through 1.7x on a 120 core scale-up system. Our proof-of-concept scale-up server architecture provides the basis towards complete and practical design of partitioning-based scale-up servers showing performance scalability.",
"title": ""
},
{
"docid": "ce463006a11477c653c15eb53f673837",
"text": "This paper presents a meaning-based statistical math word problem (MWP) solver with understanding, reasoning and explanation. It comprises a web user interface and pipelined modules for analysing the text, transforming both body and question parts into their logic forms, and then performing inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating the extracted math quantity with its associated syntactic and semantic information (which specifies the physical meaning of that quantity). Those role-tags are then used to identify the desired operands and filter out irrelevant quantities (so that the answer can be obtained precisely). Since the physical meaning of each quantity is explicitly represented with those role-tags and used in the inference process, the proposed approach could explain how the answer is obtained in a human comprehensible way.",
"title": ""
},
{
"docid": "325003e43d73d68a851a8c3fa6681f94",
"text": "This tutorial is aimed at introducing some basic ideas of stochastic programming. The intended audience of the tutorial is optimization practitioners and researchers who wish to acquaint themselves with the fundamental issues that arise when modeling optimization problems as stochastic programs. The emphasis of the paper is on motivation and intuition rather than technical completeness (although we could not avoid giving some technical details). Since it is not intended to be a historical overview of the subject, relevant references are given in the “Notes” section at the end of the paper, rather than in the text. Stochastic programming is an approach for modeling optimization problems that involve uncertainty. Whereas deterministic optimization problems are formulated with known parameters, real world problems almost invariably include parameters which are unknown at the time a decision should be made. When the parameters are uncertain, but assumed to lie in some given set of possible values, one might seek a solution that is feasible for all possible parameter choices and optimizes a given objective function. Such an approach might make sense for example when designing a least-weight bridge with steel having a tensile strength that is known only to within some tolerance. Stochastic programming models are similar in style but try to take advantage of the fact that probability distributions governing the data are known or can be estimated. Often these models apply to settings in which decisions are made repeatedly in essentially the same circumstances, and the objective is to come up with a decision that will perform well on average. An example would be designing truck routes for daily milk delivery to customers with random demand. Here probability distributions (e.g., of demand) could be estimated from data that have been collected over time. The goal is to find some policy that is feasible for all (or almost all) the possible parameter realizations and optimizes the expectation of some function of the decisions and the random variables.",
"title": ""
},
{
"docid": "a6bc752bd6a4fc070fa01a5322fb30a1",
"text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classi cation algorithms and sub-pixel area estimation models. An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classi cation techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.",
"title": ""
},
{
"docid": "391fb9de39cb2d0635f2329362db846e",
"text": "In recent years, there has been an explosion of interest in mining time series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.",
"title": ""
},
{
"docid": "e78c1fed6f3c09642a8c2c592545bea0",
"text": "We present a general framework and algorithmic approach for incremental approximation algorithms. The framework handles cardinality constrained minimization problems, such as the k-median and k-MST problems. Given some notion of ordering on solutions of different cardinalities k, we give solutions for all values of k such that the solutions respect the ordering and such that for any k, our solution is close in value to the value of an optimal solution of cardinality k. For instance, for the k-median problem, the notion of ordering is set inclusion and our incremental algorithm produces solutions such that any k and k', k < k', our solution of size k is a subset of our solution of size k'. We show that our framework applies to this incremental version of the k-median problem (introduced by Mettu and Plaxton [30]), and incremental versions of the k-MST problem, k-vertex cover problem, k-set cover problem, as well as the uncapacitated facility location problem (which is not cardinality-constrained). For these problems we either get new incremental algorithms, or improvements over what was previously known. We also show that the framework applies to hierarchical clustering problems. In particular, we give an improved algorithm for a hierarchical version of the k-median problem introduced by Plaxton [31].",
"title": ""
}
] | scidocsrr |
e2a6440dfb55b8643d8baa4aa813ce33 | Online extremism and the communities that sustain it: Detecting the ISIS supporting community on Twitter | [
{
"docid": "261daa58ee9553a5c35693329073b53a",
"text": "In the last decade, the field of international relations has undergone a revolution in conflict studies. Where earlier approaches attempted to identify the attributes of individuals, states, and systems that produced conflict, the “rationalist approach to war” now explains violence as the product of private information with incentives to misrepresent, problems of credible commitment, and issue indivisibilities. In this new approach, war is understood as a bargaining failure that leaves both sides worse off than had they been able to negotiate an efficient solution. This rationalist framework has proven remarkably general—being applied to civil wars, ethnic conflicts, and interstate wars—and fruitful in understanding not only the causes of war but also war termination and conflict management. Interstate war is no longer seen as sui generis, but as a particular form within a single, integrated theory of conflict. This rationalist approach to war may at first appear to be mute in the face of the terrorist attacks of September 11, 2001. Civilian targets were attacked “out of the blue.” The terrorists did not issue prior demands. A theory premised on bargaining, therefore, would seem ill-suited to explaining such violence. Yet, as I hope to show, extremist terrorism can be rational and strategic. A rationalist approach also yields insights into the nature and strategy of terrorism and offers some general guidelines that targets should consider in response, including the importance of a multilateral coalition as a means of committing the target to a moderate military strategy. Analytically, and more centrally for this essay, extremist terrorism reveals a silence at the heart of the current rationalist approach to war even as it suggests a potentially fruitful way of extending the basic model. In extant models, the distribution of capabilities and, thus, the range of acceptable bargains are exogenous,",
"title": ""
}
] | [
{
"docid": "6a4638a12c87b470a93e0d373a242868",
"text": "Unfortunately, few of today’s classrooms focus on helping students develop as creative thinkers. Even students who perform well in school are often unprepared for the challenges that they encounter after graduation, in their work lives as well as their personal lives. Many students learn to solve specific types of problems, but they are unable to adapt and improvise in response to the unexpected situations that inevitably arise in today’s fast-changing world.",
"title": ""
},
{
"docid": "5dc898dc6c9dd35994170cf134de3be6",
"text": "This paper investigates a new approach in straw row position and orientation reconstruction in an open field, based on image segmentation with Fully Convolutional Networks (FCN). The model architecture consists of an encoder (for feature extraction) and decoder (produces segmentation map from encoded features) modules and similar to [1] except for two fully connected layers. The heatmaps produced by the FCN are used to determine orientations and spatial arrangments of the straw rows relatively to harvester via transforming the bird's eye view and Fast Hough Transform (FHT). This leads to real-time harvester trajectory optimization over treated area of the field by correction conditions calculation through the row’s directions family.",
"title": ""
},
{
"docid": "4d9ad24707702e70747143ad477ed831",
"text": "The paper presents a high-speed (500 f/s) large-format 1 K/spl times/1 K 8 bit 3.3 V CMOS active pixel sensor (APS) with 1024 ADCs integrated on chip. The sensor achieves an extremely high output data rate of over 500 Mbytes per second and a low power dissipation of 350 mW at the 66 MHz master clock rate. Principal architecture and circuit solutions allowing such a high throughput are discussed along with preliminary results of the chip characterization.",
"title": ""
},
{
"docid": "cfb08af0088de56519960beb9ee56607",
"text": "Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this “one task, one model” approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature.",
"title": ""
},
{
"docid": "3fb6e2a0f91f4cbb1ed514e422a57ca0",
"text": "Recent years have seen an increased interest in and availability of parallel corpora. Large corpora from international organizations (e.g. European Union, United Nations, European Patent Office), or from multilingual Internet sites (e.g. OpenSubtitles) are now easily available and are used for statistical machine translation but also for online search by different user groups. This paper gives an overview of different usages and different types of search systems. In the past, parallel corpus search systems were based on sentence-aligned corpora. We argue that automatic word alignment allows for major innovations in searching parallel corpora. Some online query systems already employ word alignment for sorting translation variants, but none supports the full query functionality that has been developed for parallel treebanks. We propose to develop such a system for efficiently searching large parallel corpora with a powerful query language.",
"title": ""
},
{
"docid": "a1c917d7a685154060ddd67d631ea061",
"text": "In this paper, for finding the place of plate, a real time and fast method is expressed. In our suggested method, the image is taken to HSV color space; then, it is broken into blocks in a stable size. In frequent process, each block, in special pattern is probed. With the appearance of pattern, its neighboring blocks according to geometry of plate as a candidate are considered and increase blocks, are omitted. This operation is done for all of the uncontrolled blocks of images. First, all of the probable candidates are exploited; then, the place of plate is obtained among exploited candidates as density and geometry rate. In probing every block, only its lip pixel is studied which consists 23.44% of block area. From the features of suggestive method, we can mention the lack of use of expensive operation in image process and its low dynamic that it increases image process speed. This method is examined on the group of picture in background, distance and point of view. The rate of exploited plate reached at 99.33% and character recognition rate achieved 97%.",
"title": ""
},
{
"docid": "e6c0aa517c857ed217fc96aad58d7158",
"text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.",
"title": ""
},
{
"docid": "9d97803a016e24fc9a742d45adf1cc3a",
"text": "Biochemical compositional analysis of microbial biomass is a useful tool that can provide insight into the behaviour of an organism and its adaptational response to changes in its environment. To some extent, it reflects the physiological and metabolic status of the organism. Conventional methods to estimate biochemical composition often employ different sample pretreatment strategies and analytical steps for analysing each major component, such as total proteins, carbohydrates, and lipids, making it labour-, time- and sample-intensive. Such analyses when carried out individually can also result in uncertainties of estimates as different pre-treatment or extraction conditions are employed for each of the component estimations and these are not necessarily standardised for the organism, resulting in observations that are not easy to compare within the experimental set-up or between laboratories. We recently reported a method to estimate total lipids in microalgae (Chen, Vaidyanathan, Anal. Chim. Acta, 724, 67-72). Here, we propose a unified method for the simultaneous estimation of the principal biological components, proteins, carbohydrates, lipids, chlorophyll and carotenoids, in a single microalgae culture sample that incorporates the earlier published lipid assay. The proposed methodology adopts an alternative strategy for pigment assay that has a high sensitivity. The unified assay is shown to conserve sample (by 79%), time (67%), chemicals (34%) and energy (58%) when compared to the corresponding assay for each component, carried out individually on different samples. The method can also be applied to other microorganisms, especially those with recalcitrant cell walls.",
"title": ""
},
{
"docid": "dc26775493cad4149e639bcae6fa6a8c",
"text": "Fast expansion of natural language functionality of intelligent virtual agents is critical for achieving engaging and informative interactions. However, developing accurate models for new natural language domains is a time and data intensive process. We propose efficient deep neural network architectures that maximally re-use available resources through transfer learning. Our methods are applied for expanding the understanding capabilities of a popular commercial agent and are evaluated on hundreds of new domains, designed by internal or external developers. We demonstrate that our proposed methods significantly increase accuracy in low resource settings and enable rapid development of accurate models with less data.",
"title": ""
},
{
"docid": "1a9670cc170343073fba2a5820619120",
"text": "Occlusions present a great challenge for pedestrian detection in practical applications. In this paper, we propose a novel approach to simultaneous pedestrian detection and occlusion estimation by regressing two bounding boxes to localize the full body as well as the visible part of a pedestrian respectively. For this purpose, we learn a deep convolutional neural network (CNN) consisting of two branches, one for full body estimation and the other for visible part estimation. The two branches are treated differently during training such that they are learned to produce complementary outputs which can be further fused to improve detection performance. The full body estimation branch is trained to regress full body regions for positive pedestrian proposals, while the visible part estimation branch is trained to regress visible part regions for both positive and negative pedestrian proposals. The visible part region of a negative pedestrian proposal is forced to shrink to its center. In addition, we introduce a new criterion for selecting positive training examples, which contributes largely to heavily occluded pedestrian detection. We validate the effectiveness of the proposed bi-box regression approach on the Caltech and CityPersons datasets. Experimental results show that our approach achieves promising performance for detecting both non-occluded and occluded pedestrians, especially heavily occluded ones.",
"title": ""
},
{
"docid": "4cb942fd2549525412b1a49590d4dfbd",
"text": "This paper proposes a new adaptive patient-cooperative control strategy for improving the effectiveness and safety of robot-assisted ankle rehabilitation. This control strategy has been developed and implemented on a compliant ankle rehabilitation robot (CARR). The CARR is actuated by four Festo Fluidic muscles located to the calf in parallel, has three rotational degrees of freedom. The control scheme consists of a position controller implemented in joint space and a high-level admittance controller in task space. The admittance controller adaptively modifies the predefined trajectory based on real-time ankle measurement, which enhances the training safety of the robot. Experiments were carried out using different modes to validate the proposed control strategy on the CARR. Three training modes include: 1) a passive mode using a joint-space position controller, 2) a patient–robot cooperative mode using a fixed-parameter admittance controller, and 3) a cooperative mode using a variable-parameter admittance controller. Results demonstrate satisfactory trajectory tracking accuracy, even when externally disturbed, with a maximum normalized root mean square deviation less than 5.4%. These experimental findings suggest the potential of this new patient-cooperative control strategy as a safe and engaging control solution for rehabilitation robots.",
"title": ""
},
{
"docid": "e23d1c9fb7cd7aac7fcfe156ff9a9d35",
"text": "This is the second in a series of papers that describes the use of the Internet on a distance-taught undergraduate Computer Science course (Thomas et al., 1998). This paper examines students’ experience of a large-scale trial in which students were taught using electronic communication exclusively. The paper compares the constitution and experiences of a group of Internet students to those of conventional distance learning students on the same course. Learning styles, background questionnaires, and learning outcomes were used in the comparison of the two groups. The study reveals comparable learning outcomes with no discrimination in grade as the result of using different communication media. The student experience is reported, highlighting the main gains and issues of using the Internet as a communication medium in distance education. This paper also shows that using the Internet in this context can provide students with a worthwhile experience. Introduction There is a danger assuming that replacing traditional teaching techniques with new technologies can cause a significant improvement (Dede, 1996; Moore, 1996). There are many examples where attempts have been made to use electronic communication to cope with increasing student numbers (Daniel, 1998) (and proportionately diminishing resources) or to improve learning outcomes (Bischoff et al., 1996; Scardamalia and Bereiter, 1992; Moskal et al., 1997). However, it is vital to discover whether the pressure to increase student numbers overshadows the need to provide students with a meaningful educational experience, and whether course appraisal techniques disguise the quality of the courses that are presented. The Open University (OU) in the UK has an eye on both the future and the past: the future to embrace new technologies with which to enrich its distance teaching programmes, the past to ensure maintenance of standards and quality. Our aims focus on providing valuable and repeatable learning experiences. In our view, improvements in student performance should not come at the expense of the student experience. As a distance education university we are interested in the effects of new technology on the student who is remote from both teacher and fellow students. The Internet could be a life-line for students in remote areas: it is a means for combating their isolation, extending their knowledge, and gaining proficiency in its use (Franks, 1996). It gives students a communications technology that cheaply and quickly connects them to the rest of the world, giving them ready access to information. The issue for educators is how to harness effectively the benefits of the Internet in order to provide students with a fulfilling educational experience (Bates, 1991). The work reported here focuses on the effect of the Internet on student experiences to determine what real gains there might be, if any, in replacing traditional teaching processes with new methods that exploit the Internet. Background The distance approach to education requires an understanding of the issues facing part-time students, including: • dealing with distance: ie, overcoming isolation; • dealing with asynchronous learning: ie, handling delays when help or feedback is not available as soon as required; • managing part-time study: ie, coping with job, family or other commitments as well as studying. Helping students deal with these issues by providing an appropriate support network is reflected in the student’s experiences. 
The reputation of the OU stands firmly on a high quality “supported” distance learning process (Baker et al., 1996) which has nurtured the “good experience” reported by many of its students over the lifetime of the university. The OU is keen to ensure this experience is not undermined by the use of new technologies. When we first began investigating the use of the Internet in one of our popular undergraduate Computing courses, experienced teaching staff expressed concern about the effect it would have on students. Not surprisingly, they were unconvinced by the argument that the Internet might improve student performance as they had seen technology fads come and go. Their perspective focused on the student experience, ie, the intellectual self-development and self-awareness en route, which they regard as the most valuable aspect of an OU student’s life. Therefore, our investigation of the effects of introducing Internet-based teaching had two main aims: • to examine the experiences of the Internet students and compare them to those of the students on the conventional course; • to identify means of improving the service to students by use of appropriate technology. 30 British Journal of Educational Technology Vol 31 No 1 2000 © British Educational Communications and Technology Agency, 2000. Thus, both the Internet and conventional students studied the same course with the same materials; they attempted the same assignments and they sat the same examination. The difference in treatment between the groups was solely the communications medium. The Internet trial The Internet trial was conducted with the introductory course, Fundamentals of Computing (M205). This course used Pascal as its exemplar programming language and taught data structures, file processing and programming. It attracted students with a range of abilities and backgrounds, from complete novices taking the course as their first taste of university education, to those with considerable experience both of Computing and of distance education. The course was typical of Open University courses; study materials including printed texts, audio and video tapes, CD-ROMs and floppy discs, were mailed to students. Students were required to submit assignments for grading and feedback, and to take a final examination. During the term, students could attend a small number of local tutorials, telephone or write to their personal tutor for advice, and form self-help groups with other students. Thus, students had opportunities to communicate with their tutor, either on a one-to-one basis or in a group situation, and with other students. In our trial, the Internet was used for communication in every aspect of the course’s presentation. Internet students communicated with their tutors and fellow students via electronic mail and HyperNews (a simple electronic news system used for conferencing). In practice, the students used email for one-to-one asynchronous communication, and conferencing for communication with either their tutorial group or their peer group. Tutor-marked written assignments (known as “TMAs”) are the core of the Open University’s teaching system, providing a mechanism for individual feedback and instruction, as well as assessment. Traditionally, TMAs are paper documents exchanged by post: passing from student to tutor, then to the central Assignment Handling Office (AHO), and then back to the student. Despite the excellent postal service in the UK, this can be a cumbersome and slow procedure. 
In the Internet trial, assignments were processed electronically: students submitted word-processed documents, either by email attachment or secure Web form, to a central database. Tutors down-loaded assignments from the database and, with the aid of a specially designed marking tool, graded and commented on student scripts on-screen. Marked scripts were returned to the central database via an automated handler where the results were recorded. The script was then routed electronically back to the student. Details of the electronic submission and marking system can be found in Thomas et al. (1998). The study groups The students elected to enrol for either the conventional course or the Internet version. In a typical year, the conventional course attracts about 3,500 students; of this, we Distance education via the Internet 31 © British Educational Communications and Technology Agency, 2000. were restricted to about 300 students for the Internet version. The target groups were as follows: • Internet: all students who enrolled on the Internet presentation (300); • Conventional: students enrolled on the conventional course, including students whose tutors also had Internet students (150) and students of selected tutors with only conventional students (50). The composition of the conventional target group allowed us to consider tutor differences as well as to make conventional–Internet comparisons for given tutors. The study Given that the Internet students were self-selected (a “fact of life”, since the OU philosophy prevents researchers from imposing special conditions on students), we were keen to establish how divergent they were from conventional students in terms of the factors we would have been likely to have used to make selections in a controlled study. The data sources for this analysis included: • background questionnaires: used to establish students’ previous computing experience and prior knowledge, helping to assess group constitution; • learning style questionnaires: used to assess whether any student who displayed a preferred learning style fared better in one medium or the other, and to compare the learning style profiles of the groups overall; • final grades including both continuous assessment and final examination; used to compare the two groups’ learning outcomes. The background and learning style questionnaires were sent to students in the target populations at the beginning of the course. Conventional students received these materials by post and Internet students by electronic mail. Background questionnaire The background questionnaire was designed to reveal individual characteristics and, in compilation, to indicate group constitution. It was assumed that it would be possible to assess through analysis whether groups were comparable and, if necessary, to compensate for group differences. It is a self-assessment questionnaire which asks students for their opinions, rather than a psychological index of their",
"title": ""
},
{
"docid": "8498a3240ae68bcd2b34e2b09cc1d7e2",
"text": "The impact of capping agents and environmental conditions (pH, ionic strength, and background electrolytes) on surface charge and aggregation potential of silver nanoparticles (AgNPs) suspensions were investigated. Capping agents are chemicals used in the synthesis of nanoparticles to prevent aggregation. The AgNPs examined in the study were as follows: (a) uncoated AgNPs (H(2)-AgNPs), (b) electrostatically stabilized (citrate and NaBH(4)-AgNPs), (c) sterically stabilized (polyvinylpyrrolidone (PVP)-AgNPs), and (d) electrosterically stabilized (branched polyethyleneimine (BPEI)-AgNPs)). The uncoated (H(2)-AgNPs), the citrate, and NaBH(4)-coated AgNPs aggregated at higher ionic strengths (100 mM NaNO(3)) and/or acidic pH (3.0). For these three nanomaterials, chloride (Cl(-), 10 mM), as a background electrolyte, resulted in a minimal change in the hydrodynamic diameter even at low pH (3.0). This was limited by the presence of residual silver ions, which resulted in the formation of stable negatively charged AgCl colloids. Furthermore, the presence of Ca(2+) (10 mM) resulted in aggregation of the three previously identified AgNPs regardless of the pH. As for PVP coated AgNPs, the ionic strength, pH and electrolyte type had no impact on the aggregation of the sterically stabilized AgNPs. The surface charge and aggregation of the BPEI coated AgNPs varied according to the solution pH.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "d8fab661721e70a64fac930343203d20",
"text": "Studies of a range of higher cognitive functions consistently activate a region of anterior cingulate cortex (ACC), typically posterior to the genu and superior to the corpus collosum. In particular, this ACC region appears to be active in task situations where there is a need to override a prepotent response tendency, when responding is underdetermined, and when errors are made. We have hypothesized that the function of this ACC region is to monitor for the presence of crosstalk or competition between incompatible responses. In prior work, we provided initial support for this hypothesis, demonstrating ACC activity in the same region both during error trials and during correct trials in task conditions designed to elicit greater response competition. In the present study, we extend our testing of this hypothesis to task situations involving underdetermined responding. Specifically, 14 healthy control subjects performed a verb-generation task during event-related functional magnetic resonance imaging (fMRI), with the on-line acquisition of overt verbal responses. The results demonstrated that the ACC, and only the ACC, was more active in a series of task conditions that elicited competition among alternative responses. These conditions included a greater ACC response to: (1) Nouns categorized as low vs. high constraint (i.e., during a norming study, multiple verbs were produced with equal frequency vs. a single verb that produced much more frequently than any other); (2) the production of verbs that were weak associates, rather than, strong associates of particular nouns; and (3) the production of verbs that were weak associates for nouns categorized as high constraint. We discuss the implication of these results for understanding the role that the ACC plays in human cognition.",
"title": ""
},
{
"docid": "9c98b5467d454ca46116b479f63c2404",
"text": "A learning style describes the attitudes and behaviors, which determine an individual’s preferred way of learning. Learning styles are particularly important in educational settings since they may help students and tutors become more self-aware of their strengths and weaknesses as learners. The traditional way to identify learning styles is using a test or questionnaire. Despite being reliable, these instruments present some problems that hinder the learning style identification. Some of these problems include students’ lack of motivation to fill out a questionnaire and lack of self-awareness of their learning preferences. Thus, over the last years, several approaches have been proposed for automatically detecting learning styles, which aim to solve these problems. In this work, we review and analyze current trends in the field of automatic detection of learning styles. We present the results of our analysis and discuss some limitations, implications and research gaps that can be helpful to researchers working in the field of learning styles.",
"title": ""
},
{
"docid": "6a2aeddd0ed502712647d1c53216d28f",
"text": "High voltage pulse power supply using Marx generator and solid-state switches is proposed in this study. The Marx generator is composed of 12 stages and each stage is made of IGBT stack, two diode stacks, and capacitor. To charge the capacitors of each stage in parallel, inductive charging method is used and this method results in high efficiency and high repetition rates. It can generate the pulse voltage with the following parameters: voltage: up to 120 kV, rising time: sub /spl mu/S, pulse width: up to 10 /spl mu/S, pulse repetition rate: 1000 pps. The proposed pulsed power generator uses IGBT stack with a simple driver and has modular design. So this system structure gives compactness and easiness to implement total system. Some experimental results are included to verify the system performances in this paper.",
"title": ""
},
{
"docid": "3f9f01e3b3f5ab541cbe78fb210cf744",
"text": "The reliable and effective localization system is the basis of Automatic Guided Vehicle (AGV) to complete given tasks automatically in warehouse environment. However, there are no obvious features that can be used for localization of AGV to be extracted in warehouse environment and it dose make it difficult to realize the localization of AGV. So in this paper, we concentrate on the problem of optimal landmarks placement in warehouse so as to improve the reliability of localization. Firstly, we take the practical warehouse environment into consideration and transform the problem of landmarks placement into an optimization problem which aims at maximizing the difference degree between each basic unit of localization. Then Genetic Algorithm (GA) is used to solve the optimization problem. Then we match the observed landmarks with the already known ones stored in the map and the Triangulation method is used to estimate the position of AGV after the matching has been done. Finally, experiments in a real warehouse environment validate the effectiveness and reliability of our method.",
"title": ""
},
{
"docid": "d973047c3143043bb25d4a53c6b092ec",
"text": "Persian License Plate Detection and Recognition System is an image-processing technique used to identify a vehicle by its license plate. In fact this system is one kind of automatic inspection of transport, traffic and security systems and is of considerable interest because of its potential applications to areas such as automatic toll collection, traffic law enforcement and security control of restricted areas. License plate location is an important stage in vehicle license plate recognition for automated transport system. This paper presents a real time and robust method of license plate detection and recognition from cluttered images based on the morphology and template matching. In this system main stage is the isolation of the license plate from the digital image of the car obtained by a digital camera under different circumstances such as illumination, slop, distance, and angle. The algorithm starts with preprocessing and signal conditioning. Next license plate is localized using morphological operators. Then a template matching scheme will be used to recognize the digits and characters within the plate. This system implemented with help of Isfahan Control Traffic organization and the performance was 98.2% of correct plates identification and localization and 92% of correct recognized characters. The results regarding the complexity of the problem and diversity of the test cases show the high accuracy and robustness of the proposed method. The method could also be applicable for other applications in the transport information systems, where automatic recognition of registration plates, shields, signs, and so on is often necessary. This paper presents a morphology-based method.",
"title": ""
},
{
"docid": "d7582552589626891258f52b0d750915",
"text": "Social Live Stream Services (SLSS) exploit a new level of social interaction. One of the main challenges in these services is how to detect and prevent deviant behaviors that violate community guidelines. In this work, we focus on adult content production and consumption in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis. We use a pre-trained deep learning model to identify broadcasters of adult content. Our results indicate that moderation systems in place are highly ineffective in suspending the accounts of such users. We create two large datasets by crawling the social graphs of these platforms, which we analyze to identify characterizing traits of adult content producers and consumers, and discover interesting patterns of relationships among them, evident in both networks.",
"title": ""
}
] | scidocsrr |
157aca1d20382bf85f4a8e34ef0d4104 | Multimodal Popularity Prediction of Brand-related Social Media Posts | [
{
"docid": "1a6a7f515aa19b3525989f2cc4aa514f",
"text": "Hundreds of thousands of photographs are uploaded to the internet every minute through various social networking and photo sharing platforms. While some images get millions of views, others are completely ignored. Even from the same users, different photographs receive different number of views. This begs the question: What makes a photograph popular? Can we predict the number of views a photograph will receive even before it is uploaded? These are some of the questions we address in this work. We investigate two key components of an image that affect its popularity, namely the image content and social context. Using a dataset of about 2.3 million images from Flickr, we demonstrate that we can reliably predict the normalized view count of images with a rank correlation of 0.81 using both image content and social cues. In this paper, we show the importance of image cues such as color, gradients, deep learning features and the set of objects present, as well as the importance of various social cues such as number of friends or number of photos uploaded that lead to high or low popularity of images.",
"title": ""
},
{
"docid": "d83f34978bd6dd72131c36f8adb34850",
"text": "Images in social networks share different destinies: some are going to become popular while others are going to be completely unnoticed. In this paper we propose to use visual sentiment features together with three novel context features to predict a concise popularity score of social images. Experiments on large scale datasets show the benefits of proposed features on the performance of image popularity prediction. Exploiting state-of-the-art sentiment features, we report a qualitative analysis of which sentiments seem to be related to good or poor popularity. To the best of our knowledge, this is the first work understanding specific visual sentiments that positively or negatively influence the eventual popularity of images.",
"title": ""
},
{
"docid": "72553ef6330b68e37f83db08cc9016e2",
"text": "Social network services have become a viable source of information for users. In Twitter, information deemed important by the community propagates through retweets. Studying the characteristics of such popular messages is important for a number of tasks, such as breaking news detection, personalized message recommendation, viral marketing and others. This paper investigates the problem of predicting the popularity of messages as measured by the number of future retweets and sheds some light on what kinds of factors influence information propagation in Twitter. We formulate the task into a classification problem and study two of its variants by investigating a wide spectrum of features based on the content of the messages, temporal information, metadata of messages and users, as well as structural properties of the users' social graph on a large scale dataset. We show that our method can successfully predict messages which will attract thousands of retweets with good performance.",
"title": ""
},
{
"docid": "0eb16a7a6422905008c54e19c9833abc",
"text": "Instagram is a growing social media platform that provides a means of self-expression and communication through creative visuals. Businesses are responding to this trend by using it as a cost-effective marketing tool. This paper examined the accounts of the leading food brands on Instagram: McDonald’s, Taco Bell, Shredz, Ben & Jerry’s, and Oreo. Photos were classified according to 11 elements: product, person and product, people and product, humor and product, world events, recipes, campaign with no products, user-generated, regram from a celebrity, and video. They were further analyzed along five dimensions of personality: sincerity, excitement, competence, sophistication, and ruggedness. Results presented common themes revealing that brands are using Instagram to promote their products and, more significantly, to colorfully express their personalities.",
"title": ""
}
] | [
{
"docid": "bf5f08174c55ed69e454a87ff7fbe6e2",
"text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "347509d68f6efd4da747a7a3e704a9a2",
"text": "Stack Overflow is widely regarded as the most popular Community driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow which are not related to programming topics, are marked as `closed' by experienced users and community moderators. A question can be `closed' for five reasons -- duplicate, off-topic, subjective, not a real question and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data which contains 3.4 Million questions. We first analyze and characterize the complete set of 0.1 Million `closed' questions. Next, we use a machine learning framework and build a predictive model to identify a `closed' question at the time of question creation.\n One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with the users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated to the number of newly registered users. In addition, we also see a decrease in community participation to mark a `closed' question which has led to an increase in moderation job time. We also find that questions closed with the Duplicate and Off Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on - user profile, community process, textual style and question content. We use a state-of-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions. To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.",
"title": ""
},
{
"docid": "2a8aa90a9e45f58486cb712fe1271842",
"text": "Existing relation classification methods that rely on distant supervision assume that a bag of sentences mentioning an entity pair are all describing a relation for the entity pair. Such methods, performing classification at the bag level, cannot identify the mapping between a relation and a sentence, and largely suffers from the noisy labeling problem. In this paper, we propose a novel model for relation classification at the sentence level from noisy data. The model has two modules: an instance selector and a relation classifier. The instance selector chooses high-quality sentences with reinforcement learning and feeds the selected sentences into the relation classifier, and the relation classifier makes sentencelevel prediction and provides rewards to the instance selector. The two modules are trained jointly to optimize the instance selection and relation classification processes. Experiment results show that our model can deal with the noise of data effectively and obtains better performance for relation classification at the sentence level.",
"title": ""
},
{
"docid": "180dd2107c6a39e466b3d343fa70174f",
"text": "This paper presents simulation and hardware implementation of incremental conductance (IncCond) maximum power point tracking (MPPT) used in solar array power systems with direct control method. The main difference of the proposed system to existing MPPT systems includes elimination of the proportional-integral control loop and investigation of the effect of simplifying the control circuit. Contributions are made in several aspects of the whole system, including converter design, system simulation, controller programming, and experimental setup. The resultant system is capable of tracking MPPs accurately and rapidly without steady-state oscillation, and also, its dynamic performance is satisfactory. The IncCond algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. MATLAB and Simulink were employed for simulation studies, and Code Composer Studio v3.1 was used to program a TMS320F2812 digital signal processor. The proposed system was developed and tested successfully on a photovoltaic solar panel in the laboratory. Experimental results indicate the feasibility and improved functionality of the system.",
"title": ""
},
{
"docid": "d8472e56a4ffe5d6b0cb0c902186d00b",
"text": "In C. S. Peirce, as well as in the work of many biosemioticians, the semiotic object is sometimes described as a physical “object” with material properties and sometimes described as an “ideal object” or mental representation. I argue that to the extent that we can avoid these types of characterizations we will have a more scientific definition of sign use and will be able to better integrate the various fields that interact with biosemiotics. In an effort to end Cartesian dualism in semiotics, which has been the main obstacle to a scientific biosemiotics, I present an argument that the “semiotic object” is always ultimately the objective of self-affirmation (of habits, physical or mental) and/or self-preservation. Therefore, I propose a new model for the sign triad: response-sign-objective. With this new model it is clear, as I will show, that self-mistaking (not self-negation as others have proposed) makes learning, creativity and purposeful action possible via signs. I define an “interpretation” as a response to something as if it were a sign, but whose semiotic objective does not, in fact, exist. If the response-as-interpretation turns out to be beneficial for the system after all, there is biopoiesis. When the response is not “interpretive,” but self-confirming in the usual way, there is biosemiosis. While the conditions conducive to fruitful misinterpretation (e.g., accidental similarity of non-signs to signs and/or contiguity of non-signs to self-sustaining processes) might be artificially enhanced, according to this theory, the outcomes would be, by nature, more or less uncontrollable and unpredictable. Nevertheless, biosemiotics could be instrumental in the manipulation and/or artificial creation of purposeful systems insofar as it can describe a formula for the conditions under which new objectives and novel purposeful behavior may emerge, however unpredictably.",
"title": ""
},
{
"docid": "d6e1250fe1044db0a973d5e26fff1a51",
"text": "Microblogs have become one of the most popular platforms for news sharing. However, due to its openness and lack of supervision, rumors could also be easily posted and propagated on social networks, which could cause huge panic and threat during its propagation. In this paper, we detect rumors by leveraging hierarchical representations at different levels and the social contexts. Specifically, we propose a novel hierarchical neural network combined with social information (HSA-BLSTM). We first build a hierarchical bidirectional long short-term memory model for representation learning. Then, the social contexts are incorporated into the network via attention mechanism, such that important semantic information is introduced to the framework for more robust rumor detection. Experimental results on two real world datasets demonstrate that the proposed method outperforms several state-of-the-arts in both rumor detection and early detection scenarios.",
"title": ""
},
{
"docid": "1014a33211c9ca3448fa02cf734a5775",
"text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous a parameter controls the rate of sparsi cation from no sparsi cation to total sparsi cation. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsi cation result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and nd that for datasets with large numbers of features, substantial sparsity is discoverable.",
"title": ""
},
{
"docid": "3d10793b2e4e63e7d639ff1e4cdf04b6",
"text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.",
"title": ""
},
{
"docid": "27d4e0ec4ef3b39b0b76225969dc521a",
"text": "Various definitions have been defined for the word “interoperability”; in general however, “interoperability” is a property through which the systems and organizations can communicate and cooperate. In e-government domain to establish “interoperability” among the organizations, using a suitable “interoperability” framework is inevitable. Numerous frameworks have been proposed in e-government interoperability domain. Selecting a suitable interoperability framework as a reference from among the existing interoperability frameworks in e-government domain is a major challenge of great importance. This paper aims at investigating the present frameworks of interoperability in e-government domain and identifying a reference interoperability framework for this domain. The results of this study show that all the existing interoperability frameworks in the e government domain have some drawbacks. Defining a comprehensive framework of interoperability in e-government domain is therefore inevitable. Keywords—Interoperability, Framework, E-Government.",
"title": ""
},
{
"docid": "a2fcc3734115b76ca562dc190ebc5349",
"text": "Image inpainting is concerned with the completion of missing data in an image. When the area to inpaint is relatively large, this problem becomes challenging. In these cases, traditional methods based on patch models and image propagation are limited, since they fail to consider a global perspective of the problem. In this letter, we employ a recently proposed dictionary learning framework, coined Trainlets, to design large adaptable atoms from a corpus of various datasets of face images by leveraging the online sparse dictionary learning algorithm. We, therefore, formulate the inpainting task as an inverse problem with a sparse-promoting prior based on the learned global model. Our results show the effectiveness of our scheme, obtaining much more plausible results than competitive methods.",
"title": ""
},
{
"docid": "921d9dc34f32522200ddcd606d22b6b4",
"text": "The covariancematrix adaptation evolution strategy (CMA-ES) is one of themost powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMAES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.",
"title": ""
},
{
"docid": "4bc74a746ef958a50bb8c542aa25860f",
"text": "A new approach to super resolution line spectrum estimation in both temporal and spatial domain using a coprime pair of samplers is proposed. Two uniform samplers with sample spacings MT and NT are used where M and N are coprime and T has the dimension of space or time. By considering the difference set of this pair of sample spacings (which arise naturally in computation of second order moments), sample locations which are O(MN) consecutive multiples of T can be generated using only O(M + N) physical samples. In order to efficiently use these O(MN) virtual samples for super resolution spectral estimation, a novel algorithm based on the idea of spatial smoothing is proposed, which can be used for estimating frequencies of sinusoids buried in noise as well as for estimating Directions-of-Arrival (DOA) of impinging signals on a sensor array. This technique allows us to construct a suitable positive semidefinite matrix on which subspace based algorithms like MUSIC can be applied to detect O(MN) spectral lines using only O(M + N) physical samples.",
"title": ""
},
{
"docid": "3797ca0ca77e51b2e77a1f46665edeb8",
"text": "This paper proposes a new method for the Karmed dueling bandit problem, a variation on the regular K-armed bandit problem that offers only relative feedback about pairs of arms. Our approach extends the Upper Confidence Bound algorithm to the relative setting by using estimates of the pairwise probabilities to select a promising arm and applying Upper Confidence Bound with the winner as a benchmark. We prove a sharp finite-time regret bound of order O(K log T ) on a very general class of dueling bandit problems that matches a lower bound proven in (Yue et al., 2012). In addition, our empirical results using real data from an information retrieval application show that it greatly outperforms the state of the art.",
"title": ""
},
{
"docid": "50ebb1feb21be692aaddb6ca74170c49",
"text": "We show that a character-level encoderdecoder framework can be successfully applied to question answering with a structured knowledge base. We use our model for singlerelation question answering and demonstrate the effectiveness of our approach on the SimpleQuestions dataset (Bordes et al., 2015), where we improve state-of-the-art accuracy from 63.9% to 70.9%, without use of ensembles. Importantly, our character-level model has 16x fewer parameters than an equivalent word-level model, can be learned with significantly less data compared to previous work, which relies on data augmentation, and is robust to new entities in testing. 1",
"title": ""
},
{
"docid": "d4a0b5558045245a55efbf9b71a84bc3",
"text": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.",
"title": ""
},
{
"docid": "936c4fb60d37cce15ed22227d766908f",
"text": "English. The SENTIment POLarity Classification Task 2016 (SENTIPOLC), is a rerun of the shared task on sentiment classification at the message level on Italian tweets proposed for the first time in 2014 for the Evalita evaluation campaign. It includes three subtasks: subjectivity classification, polarity classification, and irony detection. In 2016 SENTIPOLC has been again the most participated EVALITA task with a total of 57 submitted runs from 13 different teams. We present the datasets – which includes an enriched annotation scheme for dealing with the impact on polarity of a figurative use of language – the evaluation methodology, and discuss results and participating systems. Italiano. Descriviamo modalità e risultati della seconda edizione della campagna di valutazione di sistemi di sentiment analysis (SENTIment POLarity Classification Task), proposta nel contesto di “EVALITA 2016: Evaluation of NLP and Speech Tools for Italian”. In SENTIPOLC è stata valutata la capacità dei sistemi di riconoscere diversi aspetti del sentiment espresso nei messaggi Twitter in lingua italiana, con un’articolazione in tre sottotask: subjectivity classification, polarity classification e irony detection. La campagna ha suscitato nuovamente grande interesse, con un totale di 57 run inviati da 13 gruppi di partecipanti.",
"title": ""
},
{
"docid": "2950e3c1347c4adeeb2582046cbea4b8",
"text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.",
"title": ""
},
{
"docid": "7704e1154d480c167eff13c0e3fe4411",
"text": "An autonomous dual wheel self balancing robot is developed that is capable of balancing its position around predetermined position. Initially the system was nonlinear and unstable. It is observed that the system becomes stable after redesigning the physical structure of the system using PID controller and analyzing its dynamic behavior using mathematical modeling. The position of self balancing robot is controlled by PID controller. Simulation results using PROTEOUS, MATLAB, and VM lab are observed and verified vital responses of different components. Balancing is claimed and shown the verification for this nonlinear and unstable system. Some fluctuations in forward or backward around its mean position is observed, afterwards it acquires its balanced position in reasonable settling time. The research is applicable in gardening, hospitals, shopping malls and defense systems etc.",
"title": ""
},
{
"docid": "7816f9fc22866f2c4f12313715076a20",
"text": "Image-to-image translation has been made much progress with embracing Generative Adversarial Networks (GANs). However, it’s still very challenging for translation tasks that require high quality, especially at high-resolution and photorealism. In this paper, we present Discriminative Region Proposal Adversarial Networks (DRPAN) for highquality image-to-image translation. We decompose the procedure of imageto-image translation task into three iterated steps, first is to generate an image with global structure but some local artifacts (via GAN), second is using our DRPnet to propose the most fake region from the generated image, and third is to implement “image inpainting” on the most fake region for more realistic result through a reviser, so that the system (DRPAN) can be gradually optimized to synthesize images with more attention on the most artifact local part. Experiments on a variety of image-to-image translation tasks and datasets validate that our method outperforms state-of-the-arts for producing high-quality translation results in terms of both human perceptual studies and automatic quantitative measures.",
"title": ""
}
] | scidocsrr |
01080490d8845e603208753303c2cc7c | The transformation of product development process into lean environment using set-based concurrent engineering: A case study from an aerospace industry | [
{
"docid": "61768befa972c8e9f46524a59c44fabb",
"text": "This paper presents a newly defined set-based concurrent engineering process, which the authors believe addresses some of the key challenges faced by engineering enterprises in the 21 century. The main principles of Set-Based Concurrent Engineering (SBCE) have been identified via an extensive literature review. Based on these principles the SBCE baseline model was developed. The baseline model defines the stages and activities which represent the product development process to be employed in the LeanPPD (lean product and process development) project. The LeanPPD project is addressing the needs of European manufacturing companies for a new model that extends beyond lean manufacturing, and incorporates lean thinking in the product design development process.",
"title": ""
}
] | [
{
"docid": "32d0a26f21a25fe1e783b1edcfbcf673",
"text": "Histologic grading has been used as a guide for clinical management in follicular lymphoma (FL). Proliferation index (PI) of FL generally correlates with tumor grade; however, in cases of discordance, it is not clear whether histologic grade or PI correlates with clinical aggressiveness. To objectively evaluate these cases, we determined PI by Ki-67 immunostaining in 142 cases of FL (48 grade 1, 71 grade 2, and 23 grade 3). A total of 24 cases FL with low histologic grade but high PI (LG-HPI) were identified, a frequency of 18%. On histologic examination, LG-HPI FL often exhibited blastoid features. Patients with LG-HPI FL had inferior disease-specific survival but a higher 5-year disease-free rate than low-grade FL with concordantly low PI (LG-LPI). However, transformation to diffuse large B-cell lymphoma was uncommon in LG-HPI cases (1 of 19; 5%) as compared with LG-LPI cases (27 of 74; 36%). In conclusion, LG-HPI FL appears to be a subgroup of FL with clinical behavior more akin to grade 3 FL. We propose that these LG-HPI FL cases should be classified separately from cases of low histologic grade FL with concordantly low PI.",
"title": ""
},
{
"docid": "8b1734f040031e22c50b6b2a573ff58a",
"text": "Is it permissible to harm one to save many? Classic moral dilemmas are often defined by the conflict between a putatively rational response to maximize aggregate welfare (i.e., the utilitarian judgment) and an emotional aversion to harm (i.e., the non-utilitarian judgment). Here, we address two questions. First, what specific aspect of emotional responding is relevant for these judgments? Second, is this aspect of emotional responding selectively reduced in utilitarians or enhanced in non-utilitarians? The results reveal a key relationship between moral judgment and empathic concern in particular (i.e., feelings of warmth and compassion in response to someone in distress). Utilitarian participants showed significantly reduced empathic concern on an independent empathy measure. These findings therefore reveal diminished empathic concern in utilitarian moral judges.",
"title": ""
},
{
"docid": "8c7b6d0ecb1b1a4a612f44e8de802574",
"text": "Recently, the Fisher vector representation of local features has attracted much attention because of its effectiveness in both image classification and image retrieval. Another trend in the area of image retrieval is the use of binary feature such as ORB, FREAK, and BRISK. Considering the significant performance improvement in terms of accuracy in both image classification and retrieval by the Fisher vector of continuous feature descriptors, if the Fisher vector were also to be applied to binary features, we would receive the same benefits in binary feature based image retrieval and classification. In this paper, we derive the closed-form approximation of the Fisher vector of binary features which are modeled by the Bernoulli mixture model. In experiments, it is shown that the Fisher vector representation improves the accuracy of image retrieval by 25% compared with a bag of binary words approach.",
"title": ""
},
{
"docid": "b0eb2048209c7ceeb3c67c2b24693745",
"text": "Modeling an ontology is a hard and time-consuming task. Although methodologies are useful for ontologists to create good ontologies, they do not help with the task of evaluating the quality of the ontology to be reused. For these reasons, it is imperative to evaluate the quality of the ontology after constructing it or before reusing it. Few studies usually present only a set of criteria and questions, but no guidelines to evaluate the ontology. The effort to evaluate an ontology is very high as there is a huge dependence on the evaluator’s expertise to understand the criteria and questions in depth. Moreover, the evaluation is still very subjective. This study presents a novel methodology for ontology evaluation, taking into account three fundamental principles: i) it is based on the Goal, Question, Metric approach for empirical evaluation; ii) the goals of the methodologies are based on the roles of knowledge representations combined with specific evaluation criteria; iii) each ontology is evaluated according to the type of ontology. The methodology was empirically evaluated using different ontologists and ontologies of the same domain. The main contributions of this study are: i) defining a step-by-step approach to evaluate the quality of an ontology; ii) proposing an evaluation based on the roles of knowledge representations; iii) the explicit difference of the evaluation according to the type of the ontology iii) a questionnaire to evaluate the ontologies; iv) a statistical model that automatically calculates the quality of the ontologies.",
"title": ""
},
{
"docid": "4b6a4f9d91bc76c541f4879a1a684a3f",
"text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.",
"title": ""
},
{
"docid": "b0c4b345063e729d67396dce77e677a6",
"text": "Work done on the implementation of a fuzzy logic controller in a single intersection of two one-way streets is presented. The model of the intersection is described and validated, and the use of the theory of fuzzy sets in constructing a controller based on linguistic control instructions is introduced. The results obtained from the implementation of the fuzzy logic controller are tabulated against those corresponding to a conventional effective vehicle-actuated controller. With the performance criterion being the average delay of vehicles, it is shown that the use of a fuzzy logic controller results in a better performance.",
"title": ""
},
{
"docid": "ad6672657fc07ed922f1e2c0212b30bc",
"text": "As a generalization of the ordinary wavelet transform, the fractional wavelet transform (FRWT) is a very promising tool for signal analysis and processing. Many of its fundamental properties are already known; however, little attention has been paid to its sampling theory. In this paper, we first introduce the concept of multiresolution analysis associated with the FRWT, and then propose a sampling theorem for signals in FRWT-based multiresolution subspaces. The necessary and sufficient condition for the sampling theorem is derived. Moreover, sampling errors due to truncation and aliasing are discussed. The validity of the theoretical derivations is demonstrated via simulations.",
"title": ""
},
{
"docid": "892e70d9666267bc1faf3911c8e60264",
"text": "Interactive spoken dialogue provides many new challenges for natural language understanding systems. One of the most critical challenges is simply determining the speaker’s intended utterances: both segmenting a speaker’s turn into utterances and determining the intended words in each utterance. Even assuming perfect word recognition, the latter problem is complicated by the occurrence of speech repairs, which occur where speakers go back and change (or repeat) something they just said. The words that are replaced or repeated are no longer part of the intended utterance, and so need to be identified. Segmenting turns and resolving repairs are strongly intertwined with a third task: identifying discourse markers. Because of the interactions, and interactions with POS tagging and speech recognition, we need to address these tasks together and early on in the processing stream. This paper presents a statistical language model in which we redefine the speech recognition problem so that it includes the identification of POS tags, discourse markers, speech repairs and intonational phrases. By solving these simultaneously, we obtain better results on each task than addressing them separately. Our model is able to identify 72% of turn-internal intonational boundaries with a precision of 71%, 97% of discourse markers with 96% precision, and detect and correct 66% of repairs with 74% precision.",
"title": ""
},
{
"docid": "1adc476c1e322d7cc7a0c93e726a8e2c",
"text": "A wireless body area network is a radio-frequency- based wireless networking technology that interconnects tiny nodes with sensor or actuator capabilities in, on, or around a human body. In a civilian networking environment, WBANs provide ubiquitous networking functionalities for applications varying from healthcare to safeguarding of uniformed personnel. This article surveys pioneer WBAN research projects and enabling technologies. It explores application scenarios, sensor/actuator devices, radio systems, and interconnection of WBANs to provide perspective on the trade-offs between data rate, power consumption, and network coverage. Finally, a number of open research issues are discussed.",
"title": ""
},
{
"docid": "fe55db2d04fdba4f4655e39520f135bd",
"text": "The application of virtual reality in e-commerce has enormous potential for transforming online shopping into a real-world equivalent. However, the growing research interest focuses on virtual reality technology adoption for the development of e-commerce environments without addressing social and behavioral facets of online shopping such as trust. At the same time, trust is a critical success factor for e-commerce and remains an open issue as to how it can be accomplished within an online store. This paper shows that the use of virtual reality for online shopping environments offers an advanced customer experience compared to conventional web stores and enables the formation of customer trust. The paper presents a prototype virtual shopping mall environment, designed on principles derived by an empirically tested model for building trust in e-commerce. The environment is evaluated with an empirical study providing evidence and explaining that a virtual reality shopping environment would be preferred by customers over a conventional web store and would facilitate the assessment of the e-vendor’s trustworthiness.",
"title": ""
},
{
"docid": "3e6151d32fc5c2be720aab5cc467eecb",
"text": "The weighted linear combination (WLC) technique is a decision rule for deriving composite maps using GIS. It is one of the most often used decision models in GIS. The method, however, is frequently applied without full understanding of the assumptions underling this approach. In many case studies, the WLC model has been applied incorrectly and with dubious results because analysts (decision makers) have ignored or been unaware of the assumptions. This paper provides a critical overview of the current practice with respect to GIS/WLC and suggests the best practice approach.",
"title": ""
},
{
"docid": "5a7e97c755e29a9a3c82fc3450f9a929",
"text": "Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that enables secure execution of a program in an isolated environment, called an enclave. SGX hardware protects the running enclave against malicious software, including the operating system, hypervisor, and even low-level firmware. This strong security property allows trustworthy execution of programs in hostile environments, such as a public cloud, without trusting anyone (e.g., a cloud provider) between the enclave and the SGX hardware. However, recent studies have demonstrated that enclave programs are vulnerable to accurate controlled-channel attacks conducted by a malicious OS. Since enclaves rely on the underlying OS, curious and potentially malicious OSs can observe a sequence of accessed addresses by intentionally triggering page faults. In this paper, we propose T-SGX, a complete mitigation solution to the controlled-channel attack in terms of compatibility, performance, and ease of use. T-SGX relies on a commodity component of the Intel processor (since Haswell), called Transactional Synchronization Extensions (TSX), which implements a restricted form of hardware transactional memory. As TSX is implemented as an extension (i.e., snooping the cache protocol), any unusual event, such as an exception or interrupt, that should be handled in its core component, results in an abort of the ongoing transaction. One interesting property is that the TSX abort suppresses the notification of errors to the underlying OS. This means that the OS cannot know whether a page fault has occurred during the transaction. T-SGX, by utilizing this property of TSX, can carefully isolate the effect of attempts to tap running enclaves, thereby completely eradicating the known controlledchannel attack. We have implemented T-SGX as a compiler-level scheme to automatically transform a normal enclave program into a secured enclave program without requiring manual source code modification or annotation. We not only evaluate the security properties of T-SGX, but also demonstrate that it could be applied to all the previously demonstrated attack targets, such as libjpeg, Hunspell, and FreeType. To evaluate the performance of T-SGX, we ported 10 benchmark programs of nbench to the SGX environment. Our evaluation results look promising. T-SGX is † The two lead authors contributed equally to this work. ⋆ The author did part of this work during an intership at Microsoft Research. an order of magnitude faster than the state-of-the-art mitigation schemes. On our benchmarks, T-SGX incurs on average 50% performance overhead and less than 30% storage overhead.",
"title": ""
},
{
"docid": "3081cab6599394a1cc062e1f2e00decf",
"text": "This paper describes the 3Book, a 3D interactive visualization of a codex book as a component for digital library and information-intensive applications. The 3Book is able to represent books of almost unlimited length, allows users to read large format books, and has features to enhance reading and sensemaking.",
"title": ""
},
{
"docid": "33c113db245fb36c3ce8304be9909be6",
"text": "Bring Your Own Device (BYOD) is growing in popularity. In fact, this inevitable and unstoppable trend poses new security risks and challenges to control and manage corporate networks and data. BYOD may be infected by viruses, spyware or malware that gain access to sensitive data. This unwanted access led to the disclosure of information, modify access policy, disruption of service, loss of productivity, financial issues, and legal implications. This paper provides a review of existing literature concerning the access control and management issues, with a focus on recent trends in the use of BYOD. This article provides an overview of existing research articles which involve access control and management issues, which constitute of the recent rise of usage of BYOD devices. This review explores a broad area concerning information security research, ranging from management to technical solution of access control in BYOD. The main aim for this is to investigate the most recent trends touching on the access control issues in BYOD concerning information security and also to analyze the essential and comprehensive requirements needed to develop an access control framework in the future. Keywords— Bring Your Own Device, BYOD, access control, policy, security.",
"title": ""
},
{
"docid": "21025b37c5c172399c63148f1bfa49ab",
"text": "Buffer overflows belong to the most common class of attacks on today’s Internet. Although stack-based variants are still by far more frequent and well-understood, heap-based overflows have recently gained more attention. Several real-world exploits have been published that corrupt heap management information and allow arbitrary code execution with the privileges of the victim process. This paper presents a technique that protects the heap management information and allows for run-time detection of heap-based overflows. We discuss the structure of these attacks and our proposed detection scheme that has been implemented as a patch to the GNU Lib C. We report the results of our experiments, which demonstrate the detection effectiveness and performance impact of our approach. In addition, we discuss different mechanisms to deploy the memory protection.",
"title": ""
},
{
"docid": "b51f3871cf5354c23e5ffd18881fe951",
"text": "As the Internet grows in importance, concerns about online privacy have arisen. We describe the development and validation of three short Internet-administered scales measuring privacy related attitudes ('Privacy Concern') and behaviors ('General Caution' and 'Technical Protection'). Internet Privacy Scales 1 In Press: Journal of the American Society for Information Science and Technology UNCORRECTED proofs. This is a preprint of an article accepted for publication in Journal of the American Society for Information Science and Technology copyright 2006 Wiley Periodicals, Inc. Running Head: INTERNET PRIVACY SCALES Development of measures of online privacy concern and protection for use on the",
"title": ""
},
{
"docid": "05f77aceabb886ea54af07e1bfeb1686",
"text": "The associations between time spent in sleep, sedentary behaviors (SB) and physical activity with health are usually studied without taking into account that time is finite during the day, so time spent in each of these behaviors are codependent. Therefore, little is known about the combined effect of time spent in sleep, SB and physical activity, that together constitute a composite whole, on obesity and cardio-metabolic health markers. Cross-sectional analysis of NHANES 2005-6 cycle on N = 1937 adults, was undertaken using a compositional analysis paradigm, which accounts for this intrinsic codependence. Time spent in SB, light intensity (LIPA) and moderate to vigorous activity (MVPA) was determined from accelerometry and combined with self-reported sleep time to obtain the 24 hour time budget composition. The distribution of time spent in sleep, SB, LIPA and MVPA is significantly associated with BMI, waist circumference, triglycerides, plasma glucose, plasma insulin (all p<0.001), and systolic (p<0.001) and diastolic blood pressure (p<0.003), but not HDL or LDL. Within the composition, the strongest positive effect is found for the proportion of time spent in MVPA. Strikingly, the effects of MVPA replacing another behavior and of MVPA being displaced by another behavior are asymmetric. For example, re-allocating 10 minutes of SB to MVPA was associated with a lower waist circumference by 0.001% but if 10 minutes of MVPA is displaced by SB this was associated with a 0.84% higher waist circumference. The proportion of time spent in LIPA and SB were detrimentally associated with obesity and cardiovascular disease markers, but the association with SB was stronger. For diabetes risk markers, replacing SB with LIPA was associated with more favorable outcomes. Time spent in MVPA is an important target for intervention and preventing transfer of time from LIPA to SB might lessen the negative effects of physical inactivity.",
"title": ""
},
{
"docid": "7471dc4c3020d479457dfbbdac924501",
"text": "Objective:Communicating with families is a core skill for neonatal clinicians, yet formal communication training rarely occurs. This study examined the impact of an intensive interprofessional communication training for neonatology fellows and nurse practitioners.Study Design:Evidence-based, interactive training for common communication challenges in neonatology incorporated didactic sessions, role-plays and reflective exercises. Participants completed surveys before, after, and one month following the training.Result:Five neonatology fellows and eight nurse practitioners participated (n=13). Before the training, participants overall felt somewhat prepared (2.6 on 5 point Likert-type scale) to engage in core communication challenges; afterwards, participants overall felt very well prepared (4.5 on Likert-type scale) (P<0.05). One month later, participants reported frequently practicing the taught skills and felt quite willing to engage in difficult conversations.Conclusion:An intensive communication training program increased neonatology clinicians’ self-perceived competence to face communication challenges which commonly occur, but for which training is rarely provided.",
"title": ""
},
{
"docid": "a2253bf241f7e5f60e889258e4c0f40c",
"text": "BACKGROUND-Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE-This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD-The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS-Seven distinct evaluation strategies were identified, wherein the most common one, “Pre-Post Comparison,” was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent), and Schedule (18 percent). Looking at measurement perspectives, “Project” represents the majority with 66 percent. CONCLUSION-The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that “Pre-Post Comparison” was identified as the most common evaluation strategy, and the inaccurate descriptions of the evaluation context. Measurements to assess the short and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.",
"title": ""
},
{
"docid": "c699ede2caeb5953decc55d8e42c2741",
"text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.",
"title": ""
}
] | scidocsrr |
38aa4a2252a57fe8fe8569d4e884f89b | Massive Non-Orthogonal Multiple Access for Cellular IoT: Potentials and Limitations | [
{
"docid": "cf43e30eab17189715b085a6e438ea7d",
"text": "This paper presents our investigation of non-orthogonal multiple access (NOMA) as a novel and promising power-domain user multiplexing scheme for future radio access. Based on information theory, we can expect that NOMA with a successive interference canceller (SIC) applied to the receiver side will offer a better tradeoff between system efficiency and user fairness than orthogonal multiple access (OMA), which is widely used in 3.9 and 4G mobile communication systems. This improvement becomes especially significant when the channel conditions among the non-orthogonally multiplexed users are significantly different. Thus, NOMA can be expected to efficiently exploit the near-far effect experienced in cellular environments. In this paper, we describe the basic principle of NOMA in both the downlink and uplink and then present our proposed NOMA scheme for the scenario where the base station is equipped with multiple antennas. Simulation results show the potential system-level throughput gains of NOMA relative to OMA. key words: cellular system, non-orthogonal multiple access, superposition coding, successive interference cancellation",
"title": ""
}
] | [
{
"docid": "ae593e6c1ea6e01093d8226ef219320f",
"text": "Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera & trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object's “true” 3D point trajectories. In this paper we draw upon a well known result centering around the Reduced Isometry Property (RIP) condition for sparse signal reconstruction. RIP allow us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an 21 inspired objective for trajectory reconstruction that is able to “adaptively” select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to current state of the art in trajectory basis NRSfM.",
"title": ""
},
{
"docid": "eeac967209e931538e0b7a035c876446",
"text": "INTRODUCTION\nThis is the first of seven articles from a preterm birth and stillbirth report. Presented here is an overview of the burden, an assessment of the quality of current estimates, review of trends, and recommendations to improve data.\n\n\nPRETERM BIRTH\nFew countries have reliable national preterm birth prevalence data. Globally, an estimated 13 million babies are born before 37 completed weeks of gestation annually. Rates are generally highest in low- and middle-income countries, and increasing in some middle- and high-income countries, particularly the Americas. Preterm birth is the leading direct cause of neonatal death (27%); more than one million preterm newborns die annually. Preterm birth is also the dominant risk factor for neonatal mortality, particularly for deaths due to infections. Long-term impairment is an increasing issue.\n\n\nSTILLBIRTH\nStillbirths are currently not included in Millennium Development Goal tracking and remain invisible in global policies. For international comparisons, stillbirths include late fetal deaths weighing more than 1000g or occurring after 28 weeks gestation. Only about 2% of all stillbirths are counted through vital registration and global estimates are based on household surveys or modelling. Two global estimation exercises reached a similar estimate of around three million annually; 99% occur in low- and middle-income countries. One million stillbirths occur during birth. Global stillbirth cause-of-death estimates are impeded by multiple, complex classification systems.\n\n\nRECOMMENDATIONS TO IMPROVE DATA\n(1) increase the capture and quality of pregnancy outcome data through household surveys, the main data source for countries with 75% of the global burden; (2) increase compliance with standard definitions of gestational age and stillbirth in routine data collection systems; (3) strengthen existing data collection mechanisms--especially vital registration and facility data--by instituting a standard death certificate for stillbirth and neonatal death linked to revised International Classification of Diseases coding; (4) validate a simple, standardized classification system for stillbirth cause-of-death; and (5) improve systems and tools to capture acute morbidity and long-term impairment outcomes following preterm birth.\n\n\nCONCLUSION\nLack of adequate data hampers visibility, effective policies, and research. Immediate opportunities exist to improve data tracking and reduce the burden of preterm birth and stillbirth.",
"title": ""
},
{
"docid": "800dc3e6a3f58d2af1ed7cd526074d54",
"text": "The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at the same time remove any redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on (1, 2)-norm, which promotes competition for features between different weights, thus enforcing them to fit to disjoint sets of features. We further combine the exclusive sparsity with the group sparsity based on (2, 1)-norm, to promote both sharing and competition for features in training of a deep neural network. We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.",
"title": ""
},
{
"docid": "35e33ddfa05149dea9b0aef4983c8cc1",
"text": "We propose a fast approximation method of a softmax function with a very large vocabulary using singular value decomposition (SVD). SVD-softmax targets fast and accurate probability estimation of the topmost probable words during inference of neural network language models. The proposed method transforms the weight matrix used in the calculation of the output vector by using SVD. The approximate probability of each word can be estimated with only a small part of the weight matrix by using a few large singular values and the corresponding elements for most of the words. We applied the technique to language modeling and neural machine translation and present a guideline for good approximation. The algorithm requires only approximately 20% of arithmetic operations for an 800K vocabulary case and shows more than a three-fold speedup on a GPU.",
"title": ""
},
{
"docid": "6f94fd155f3689ab1a6b242243b13e09",
"text": "Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. The new paradigm will change the health care model in the future. A doctor will perform the DNA sequence matching instead of the regular clinical laboratory tests to diagnose and medicate the diseases. Additionally, with the help of the affordable personal genomics services such as 23andMe, personalized medicine will be applied to a great population. Cloud computing will be the perfect computing model as the volume of the DNA data and the computation over it are often immense. However, due to the sensitivity, the DNA data should be encrypted before being outsourced into the cloud. In this paper, we start from a practical system model of the personalize medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Comparing with the existing solutions, our scheme protects the DNA data privacy as well as the search pattern to provide a better privacy guarantee. We have proved that our scheme is secure under the well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike the existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using the real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real world problems is practical, and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem but it applies to general privacy preserving pattern matching problems which is widely used in real world.",
"title": ""
},
{
"docid": "1790c02ba32f15048da0f6f4d783aeda",
"text": "In this paper, resource allocation for energy efficient communication in orthogonal frequency division multiple access (OFDMA) downlink networks with large numbers of base station (BS) antennas is studied. Assuming perfect channel state information at the transmitter (CSIT), the resource allocation algorithm design is modeled as a non-convex optimization problem for maximizing the energy efficiency of data transmission (bit/Joule delivered to the users), where the circuit power consumption and a minimum required data rate are taken into consideration. Subsequently, by exploiting the properties of fractional programming, an efficient iterative resource allocation algorithm is proposed to solve the problem. In particular, the power allocation, subcarrier allocation, and antenna allocation policies for each iteration are derived. Simulation results illustrate that the proposed iterative resource allocation algorithm converges in a small number of iterations and unveil the trade-off between energy efficiency and the number of antennas.",
"title": ""
},
{
"docid": "6e9810c78c6923f720b6b088138db904",
"text": "The integration of microgrids that depend on the renewable distributed energy resources with the current power systems is a critical issue in the smart grid. In this paper, we propose a non-cooperative game-theoretic framework to study the strategic behavior of distributed microgrids that generate renewable energies and characterize the power generation solutions by using the Nash equilibrium concept. Our framework not only incorporates economic factors but also takes into account the stability and efficiency of the microgrids, including the power flow constraints and voltage angle regulations. We develop two decentralized update schemes for microgrids and show their convergence to a unique Nash equilibrium. Also, we propose a novel fully distributed PMU-enabled algorithm which only needs the information of voltage angle at the bus. To show the resiliency of the distributed algorithm, we introduce two failure models of the smart grid. Case studies based on the IEEE 14-bus system are used to corroborate the effectiveness and resiliency of the proposed algorithms.",
"title": ""
},
{
"docid": "df09cf0e7c323b6deda69d64f3af507a",
"text": "We propose a new multistage procedure for a real-time brain-machine/computer interface (BCI). The developed system allows a BCI user to navigate a small car (or any other object) on the computer screen in real time, in any of the four directions, and to stop it if necessary. Extensive experiments with five young healthy subjects confirmed the high performance of the proposed online BCI system. The modular structure, high speed, and the optimal frequency band characteristics of the BCI platform are features which allow an extension to a substantially higher number of commands in the near future.",
"title": ""
},
{
"docid": "6a72468ebba00563adc8a5f5d24d0ea6",
"text": "Denoising algorithms are well developed for grayscale and color images, but not as well for color filter array (CFA) data. Consequently, the common color imaging pipeline demosaics CFA data before denoising. In this paper we explore the noise-related properties of the imaging pipeline that demosaics CFA data before denoising. We then propose and explore a way to transform CFA data to a form that is amenable to existing grayscale and color denoising schemes. Since CFA data are a third as many as demosaicked data, we can expect to reduce processing time and power requirements to about a third of current requirements.",
"title": ""
},
{
"docid": "07cd406cead1a086f61f363269de1aac",
"text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.",
"title": ""
},
{
"docid": "b9b267cc96e2cb8b31ac63a278757dec",
"text": "Evolutionary considerations suggest aging is caused not by active gene programming but by evolved limitations in somatic maintenance, resulting in a build-up of damage. Ecological factors such as hazard rates and food availability influence the trade-offs between investing in growth, reproduction, and somatic survival, explaining why species evolved different life spans and why aging rate can sometimes be altered, for example, by dietary restriction. To understand the cell and molecular basis of aging is to unravel the multiplicity of mechanisms causing damage to accumulate and the complex array of systems working to keep damage at bay.",
"title": ""
},
{
"docid": "bf04d5a87fbac1157261fac7652b9177",
"text": "We consider the partitioning of a society into coalitions in purely hedonic settings; i.e., where each player's payo is completely determined by the identity of other members of her coalition. We rst discuss how hedonic and non-hedonic settings di er and some su cient conditions for the existence of core stable coalition partitions in hedonic settings. We then focus on a weaker stability condition: individual stability, where no player can bene t from moving to another coalition while not hurting the members of that new coalition. We show that if coalitions can be ordered according to some characteristic over which players have single-peaked preferences, or where players have symmetric and additively separable preferences, then there exists an individually stable coalition partition. Examples show that without these conditions, individually stable coalition partitions may not exist. We also discuss some other stability concepts, and the incompatibility of stability with other normative properties.",
"title": ""
},
{
"docid": "b8e8404c061350aeba92f6ed1ecea1f1",
"text": "We consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues. Realized demand is observed over time, but the underlying functional relationship between price and mean demand rate that governs these observations (otherwise known as the demand function or demand curve) is not known. We consider two instances of this problem: (i) a setting where the demand function is assumed to belong to a known parametric family with unknown parameter values; and (ii) a setting where the demand function is assumed to belong to a broad class of functions that need not admit any parametric representation. In each case we develop policies that learn the demand function “on the fly,” and optimize prices based on that. The performance of these algorithms is measured in terms of the regret: the revenue loss relative to the maximal revenues that can be extracted when the demand function is known prior to the start of the selling season. We derive lower bounds on the regret that hold for any admissible pricing policy, and then show that our proposed algorithms achieve a regret that is “close” to this lower bound. The magnitude of the regret can be interpreted as the economic value of prior knowledge on the demand function, manifested as the revenue loss due to model uncertainty.",
"title": ""
},
{
"docid": "7e0c6afa66f21d1469ca6d889d69a3f5",
"text": "In this paper, we propose and validate a novel design for a double-gate tunnel field-effect transistor (DG tunnel FET), for which the simulations show significant improvements compared with single-gate devices using a gate dielectric. For the first time, DG tunnel FET devices, which are using a high-gate dielectric, are explored using realistic design parameters, showing an on-current as high as 0.23 mA for a gate voltage of 1.8 V, an off-current of less than 1 fA (neglecting gate leakage), an improved average subthreshold swing of 57 mV/dec, and a minimum point slope of 11 mV/dec. The 2D nature of tunnel FET current flow is studied, demonstrating that the current is not confined to a channel at the gate-dielectric surface. When varying temperature, tunnel FETs with a high-kappa gate dielectric have a smaller threshold voltage shift than those using SiO2, while the subthreshold slope for fixed values of Vg remains nearly unchanged, in contrast with the traditional MOSFET. Moreover, an Ion/Ioff ratio of more than 2 times 1011 is shown for simulated devices with a gate length (over the intrinsic region) of 50 nm, which indicates that the tunnel FET is a promising candidate to achieve better-than-ITRS low-standby-power switch performance.",
"title": ""
},
{
"docid": "a669bebcbb6406549b78f365cf352008",
"text": "Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events on the most popular of the digital currencies--BitCoin--have risen crucial questions about behavior of its exchange rates and they offer a field to study dynamics of the market which consists practically only of speculative traders with no fundamentalists as there is no fundamental value to the currency. In the paper, we connect two phenomena of the latest years--digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia--and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value.",
"title": ""
},
{
"docid": "548be1a1c55ad27e47dba3fb1f20e404",
"text": "The proportional odds (PO) assumption for ordinal regression analysis is often violated because it is strongly affected by sample size and the number of covariate patterns. To address this issue, the partial proportional odds (PPO) model and the generalized ordinal logit model were developed. However, these models are not typically used in research. One likely reason for this is the restriction of current statistical software packages: SPSS cannot perform the generalized ordinal logit model analysis and SAS requires data restructuring. This article illustrates the use of generalized ordinal logistic regression models to predict mathematics proficiency levels using Stata and compares the results from fitting PO models and generalized ordinal logistic regression models.",
"title": ""
},
{
"docid": "68689ad05be3bf004120141f0534fd2b",
"text": "A group of 156 first year medical students completed measures of emotional intelligence (EI) and physician empathy, and a scale assessing their feelings about a communications skills course component. Females scored significantly higher than males on EI. Exam performance in the autumn term on a course component (Health and Society) covering general issues in medicine was positively and significantly related to EI score but there was no association between EI and exam performance later in the year. High EI students reported more positive feelings about the communication skills exercise. Females scored higher than males on the Health and Society component in autumn, spring and summer exams. Structural equation modelling showed direct effects of gender and EI on autumn term exam performance, but no direct effects other than previous exam performance on spring and summer term performance. EI also partially mediated the effect of gender on autumn term exam performance. These findings provide limited evidence for a link between EI and academic performance for this student group. More extensive work on associations between EI, academic success and adjustment throughout medical training would clearly be of interest. 2005 Elsevier Ltd. All rights reserved. 0191-8869/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2005.04.014 q Ethical approval from the College of Medicine and Veterinary Medicine was sought and received for this investigation. Student information was gathered and used in accordance with the Data Protection Act. * Corresponding author. Tel.: +44 131 65",
"title": ""
},
{
"docid": "ada6153aeeddcc385de538062f2f7e4c",
"text": "As analysts attempt to make sense of a collection of documents, such as intelligence analysis reports, they need to “connect the dots” between pieces of information that may initially seem unrelated. We conducted a user study to analyze the cognitive process by which users connect pairs of documents and how they spatialize connections. Users created conceptual stories that connected the dots using a range of organizational strategies and spatial representations. Insights from our study can drive the design of data mining algorithms and visual analytic tools to support analysts' complex cognitive processes.",
"title": ""
},
{
"docid": "2e87c4fbb42424f3beb07e685c856487",
"text": "Conventional wisdom ties the origin and early evolution of the genus Homo to environmental changes that occurred near the end of the Pliocene. The basic idea is that changing habitats led to new diets emphasizing savanna resources, such as herd mammals or underground storage organs. Fossil teeth provide the most direct evidence available for evaluating this theory. In this paper, we present a comprehensive study of dental microwear in Plio-Pleistocene Homo from Africa. We examined all available cheek teeth from Ethiopia, Kenya, Tanzania, Malawi, and South Africa and found 18 that preserved antemortem microwear. Microwear features were measured and compared for these specimens and a baseline series of five extant primate species (Cebus apella, Gorilla gorilla, Lophocebus albigena, Pan troglodytes, and Papio ursinus) and two protohistoric human foraging groups (Aleut and Arikara) with documented differences in diet and subsistence strategies. Results confirmed that dental microwear reflects diet, such that hard-object specialists tend to have more large microwear pits, whereas tough food eaters usually have more striations and smaller microwear features. Early Homo specimens clustered with baseline groups that do not prefer fracture resistant foods. Still, Homo erectus and individuals from Swartkrans Member 1 had more small pits than Homo habilis and specimens from Sterkfontein Member 5C. These results suggest that none of the early Homo groups specialized on very hard or tough foods, but that H. erectus and Swartkrans Member 1 individuals ate, at least occasionally, more brittle or tough items than other fossil hominins studied.",
"title": ""
},
{
"docid": "931b8f97d86902f984338285e62c8ef8",
"text": "One of the goals of Artificial intelligence (AI) is the realization of natural dialogue between humans and machines. in recent years, the dialogue systems, also known as interactive conversational systems are the fastest growing area in AI. Many companies have used the dialogue systems technology to establish various kinds of Virtual Personal Assistants(VPAs) based on their applications and areas, such as Microsoft's Cortana, Apple's Siri, Amazon Alexa, Google Assistant, and Facebook's M. However, in this proposal, we have used the multi-modal dialogue systems which process two or more combined user input modes, such as speech, image, video, touch, manual gestures, gaze, and head and body movement in order to design the Next-Generation of VPAs model. The new model of VPAs will be used to increase the interaction between humans and the machines by using different technologies, such as gesture recognition, image/video recognition, speech recognition, the vast dialogue and conversational knowledge base, and the general knowledge base. Moreover, the new VPAs system can be used in other different areas of applications, including education assistance, medical assistance, robotics and vehicles, disabilities systems, home automation, and security access control.",
"title": ""
}
] | scidocsrr |
18b752f9e223fba936ca48722db2d9ec | Visual search and reading tasks using ClearType and regular displays: two experiments | [
{
"docid": "eb0eec2fe000511a37e6487ff51ddb68",
"text": "We report on a laboratory study that compares reading from paper to reading on-line. Critical differences have to do with the major advantages paper offers in supporting annotation while reading, quick navigation, and flexibility of spatial layout. These, in turn, allow readers to deepen their understanding of the text, extract a sense of its structure, create a plan for writing, cross-refer to other documents, and interleave reading and writing. We discuss the design implications of these findings for the development of better reading technologies.",
"title": ""
}
] | [
{
"docid": "8eb51537b051bbf78d87a0cd48e9d90c",
"text": "One of the important techniques of Data mining is Classification. Many real world problems in various fields such as business, science, industry and medicine can be solved by using classification approach. Neural Networks have emerged as an important tool for classification. The advantages of Neural Networks helps for efficient classification of given data. In this study a Heart diseases dataset is analyzed using Neural Network approach. To increase the efficiency of the classification process parallel approach is also adopted in the training phase.",
"title": ""
},
{
"docid": "4ccea211a4b3b01361a4205990491764",
"text": "published by the press syndicate of the university of cambridge Vygotsky's educational theory in cultural context / edited by Alex Kozulin. .. [et al.]. p. cm. – (Learning in doing) Includes bibliographical references and index.",
"title": ""
},
{
"docid": "936cebe86936c6aa49758636554a4dc7",
"text": "A new kind of distributed power divider/combiner circuit for use in octave bandwidth (or more) microstrip power transistor amplifier is presented. The design, characteristics and advantages are discussed. Experimental results on a 4-way divider are presented and compared with theory.",
"title": ""
},
{
"docid": "23d6e2407335a076526df89355b9c7fe",
"text": "In view of the load balancing problem in VM resources scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on genetic algorithm. According to historical data and current state of the system and through genetic algorithm, this strategy computes ahead the influence it will have on the system after the deployment of the needed VM resources and then chooses the least-affective solution, through which it achieves the best load balancing and reduces or avoids dynamic migration. At the same time, this paper brings in variation rate to describe the load variation of system virtual machines, and it also introduces average load distance to measure the overall load balancing effect of the algorithm. The experiment shows that this strategy has fairly good global astringency and efficiency, and the algorithm of this paper is, to a great extent, able to solve the problems of load imbalance and high migration cost after system VM being scheduled. What is more, the average load distance does not grow with the increase of VM load variation rate, and the system scheduling algorithm has quite good resource utility.",
"title": ""
},
{
"docid": "76f1935fcf5d30cd61d5452a892c4afb",
"text": "This paper examines the adoption and implementation of the Information Technology Infrastructure Library (ITIL). Specifically, interviews with a CIO, as well as literature from the ITIL Official site and from the practitioner’s journals are consulted in order to determine whether the best practices contained in the ITIL framework may improve the management of information technology (IT) services, as well as assist in promoting the alignment of Business and the IT Function within the organization. A conceptual model is proposed which proposes a two-way relationship between IT and the provision of IT Services, with ITIL positioned as an intervening variable.",
"title": ""
},
{
"docid": "65d60131b1ceba50399ceffa52de7e8a",
"text": "Cox, Matthew L. Miller, and Jeffrey A. Bloom. San Diego, CA: Academic Press, 2002, 576 pp. $69.96 (hardbound). A key ingredient to copyright protection, digital watermarking provides a solution to the illegal copying of material. It also has broader uses in recording and electronic transaction tracking. This book explains “the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.” [book notes] The authors are extensively experienced in digital watermarking technologies. Cox recently joined the NEC Research Institute after a five-year stint at AT&T Bell Labs. Miller’s interest began at AT&T Bell Labs in 1979. He also is employed at NEC. Bloom is a researcher in digital watermarking at the Sarnoff Corporation. His acquaintance with the field began at Signafy, Inc. and continued through his employment at NEC Research Institute. The book features the following: Review of the underlying principles of watermarking relevant for image, video, and audio; Discussion of a wide variety of applications, theoretical principles, detection and embedding concepts, and key properties; Examination of copyright protection and other applications; Presentation of a series of detailed examples that illustrate watermarking concepts and practices; Appendix, in print and on the Web, containing the source code for the examples; Comprehensive glossary of terms. “The authors provide a comprehensive overview of digital watermarking, rife with detailed examples and grounded within strong theoretical framework. Digital Watermarking will serve as a valuable introduction as well as a useful reference for those engaged in the field.”—Walter Bender, Director, M.I.T. Media Lab",
"title": ""
},
{
"docid": "b8274589a145a94e19329b2640a08c17",
"text": "Since 2004, many nations have started issuing “e-passports” containing an RFID tag that, when powered, broadcast information. It is claimed that these passports are more secure and that our data will be protected from any possible unauthorised attempts to read it. In this paper we show that there is a flaw in one of the passport’s protocols that makes it possible to trace the movements of a particular passport, without having to break the passport’s cryptographic key. All an attacker has to do is to record one session between the passport and a legitimate reader, then by replaying a particular message, the attacker can distinguish that passport from any other. We have implemented our attack and tested it successfully against passports issued by a range of nations.",
"title": ""
},
{
"docid": "6e36103ba9f21103252141ad4a53b4ac",
"text": "In this paper, we describe the binary classification of sentences into idiomatic and non-idiomatic. Our idiom detection algorithm is based on linear discriminant analysis (LDA). To obtain a discriminant subspace, we train our model on a small number of randomly selected idiomatic and non-idiomatic sentences. We then project both the training and the test data on the chosen subspace and use the three nearest neighbor (3NN) classifier to obtain accuracy. The proposed approach is more general than the previous algorithms for idiom detection — neither does it rely on target idiom types, lexicons, or large manually annotated corpora, nor does it limit the search space by a particular linguistic con-",
"title": ""
},
{
"docid": "9fd0049d079919282082a119763f2740",
"text": "The rapid development of Internet has given birth to a new business model: Cloud Computing. This new paradigm has experienced a fantastic rise in recent years. Because of its infancy, it remains a model to be developed. In particular, it must offer the same features of services than traditional systems. The cloud computing is large distributed systems that employ distributed resources to deliver a service to end users by implementing several technologies. Hence providing acceptable response time for end users, presents a major challenge for cloud computing. All components must cooperate to meet this challenge, in particular through load balancing algorithms. This will enhance the availability and will gain the end user confidence. In this paper we try to give an overview of load balancing in the cloud computing by exposing the most important research challenges.",
"title": ""
},
{
"docid": "5063a63d425b5ceebbadfbab14a0a75d",
"text": "Two studies investigated young infants' use of the word-learning principle Mutual Exclusivity. In Experiment 1, a linear relationship between age and performance was discovered. Seventeen-month-old infants successfully used Mutual Exclusivity to map novel labels to novel objects in a preferential looking paradigm. That is, when presented a familiar and a novel object (e.g. car and phototube) and asked to \"look at the dax\", 17-month-olds increased looking to the novel object (i.e. phototube) above baseline preference. On these trials, 16-month-olds were at chance. And, 14-month-olds systematically increased looking to the familiar object (i.e. car) in response to hearing the novel label \"dax\". Experiment 2 established that this increase in looking to the car was due solely to hearing the novel label \"dax\". Several possible interpretations of the surprising form of failure at 14 months are discussed.",
"title": ""
},
{
"docid": "3e3514d3a163c1982529327e81a88f84",
"text": "With the growth of recipe sharing services, online cooking recipes associated with ingredients and cooking procedures are available. Many recipe sharing sites have devoted to the development of recipe recommendation mechanism. While most food related research has been on recipe recommendation, little effort has been done on analyzing the correlation between recipe cuisines and ingredients. In this paper, we aim to investigate the underlying cuisine-ingredient connections by exploiting the classification techniques, including associative classification and support vector machine. Our study conducted on food.com data provides insights about which cuisines are the most similar and what are the essential ingredients for a cuisine, with an application to automatic cuisine labeling for recipes.",
"title": ""
},
{
"docid": "1baaa67ff7b4d00d6f03ae908cf1ca71",
"text": "Function approximation has been found in many applications. The radial basis function (RBF) network is one approach which has shown a great promise in this sort of problems because of its faster learning capacity. A traditional RBF network takes Gaussian functions as its basis functions and adopts the least-squares criterion as the objective function, However, it still suffers from two major problems. First, it is difficult to use Gaussian functions to approximate constant values. If a function has nearly constant values in some intervals, the RBF network will be found inefficient in approximating these values. Second, when the training patterns incur a large error, the network will interpolate these training patterns incorrectly. In order to cope with these problems, an RBF network is proposed in this paper which is based on sequences of sigmoidal functions and a robust objective function. The former replaces the Gaussian functions as the basis function of the network so that constant-valued functions can be approximated accurately by an RBF network, while the latter is used to restrain the influence of large errors. Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximation to underlying functions; (2) faster learning speed; (3) better size of network; (4) high robustness to outliers.",
"title": ""
},
{
"docid": "4768001167cefad7b277e3b77de648bb",
"text": "MicroRNAs (miRNAs) regulate gene expression at the posttranscriptional level and are therefore important cellular components. As is true for protein-coding genes, the transcription of miRNAs is regulated by transcription factors (TFs), an important class of gene regulators that act at the transcriptional level. The correct regulation of miRNAs by TFs is critical, and increasing evidence indicates that aberrant regulation of miRNAs by TFs can cause phenotypic variations and diseases. Therefore, a TF-miRNA regulation database would be helpful for understanding the mechanisms by which TFs regulate miRNAs and understanding their contribution to diseases. In this study, we manually surveyed approximately 5000 reports in the literature and identified 243 TF-miRNA regulatory relationships, which were supported experimentally from 86 publications. We used these data to build a TF-miRNA regulatory database (TransmiR, http://cmbi.bjmu.edu.cn/transmir), which contains 82 TFs and 100 miRNAs with 243 regulatory pairs between TFs and miRNAs. In addition, we included references to the published literature (PubMed ID) information about the organism in which the relationship was found, whether the TFs and miRNAs are involved with tumors, miRNA function annotation and miRNA-associated disease annotation. TransmiR provides a user-friendly interface by which interested parties can easily retrieve TF-miRNA regulatory pairs by searching for either a miRNA or a TF.",
"title": ""
},
{
"docid": "8ab4f34c736742a153477f919dfb4d8f",
"text": "In this paper, we model the trajectory of sea vessels and provide a service that predicts in near-real time the position of any given vessel in 4’, 10’, 20’ and 40’ time intervals. We explore the necessary tradeoffs between accuracy, performance and resource utilization are explored given the large volume and update rates of input data. We start with building models based on well-established machine learning algorithms using static datasets and multi-scan training approaches and identify the best candidate to be used in implementing a single-pass predictive approach, under real-time constraints. The results are measured in terms of accuracy and performance and are compared against the baseline kinematic equations. Results show that it is possible to efficiently model the trajectory of multiple vessels using a single model, which is trained and evaluated using an adequately large, static dataset, thus achieving a significant gain in terms of resource usage while not compromising accuracy.",
"title": ""
},
{
"docid": "a330c7ec22ab644404bbb558158e69e7",
"text": "With the advance in both hardware and software technologies, automated data generation and storage has become faster than ever. Such data is referred to as data streams. Streaming data is ubiquitous today and it is often a challenging task to store, analyze and visualize such rapid large volumes of data. Most conventional data mining techniques have to be adapted to run in a streaming environment, because of the underlying resource constraints in terms of memory and running time. Furthermore, the data stream may often show concept drift, because of which adaptation of conventional algorithms becomes more challenging. One such important conventional data mining problem is that of classification. In the classification problem, we attempt to model the class variable on the basis of one or more feature variables. While this problem has been extensively studied from a conventional mining perspective, it is a much more challenging problem in the data stream domain. In this chapter, we will re-visit the problem of classification from the data stream perspective. The techniques for this problem need to be thoroughly re-designed to address the issue of resource constraints and concept drift. This chapter reviews the state-of-the-art techniques in the literature along with their corresponding advantages and disadvantages.",
"title": ""
},
{
"docid": "5da45b946151bc72930cb8eebbe9d3f8",
"text": "Dr. Manfred Bischoff Institute of Innovation Management of EADS, Zeppelin University, Am Seemoser Horn 20, D-88045 Friedrichshafen, Germany. [email protected] Institute of Technology Management, University of St. Gallen, Dufourstrasse 40a, CH-9000 St. Gallen, Switzerland. [email protected] Center for Open Innovation, Haas School of Business, Faculty Wing, F402, University of California, Berkeley, Berkeley, CA 94720-1930, USA. [email protected]",
"title": ""
},
{
"docid": "1a9d595aaff44165fd486b97025ca36d",
"text": "1389-1286/$ see front matter 2008 Elsevier B.V doi:10.1016/j.comnet.2008.09.022 * Corresponding author. Tel.: +1 413 545 4465. E-mail address: [email protected] (M. Zink). 1 http://www.usatoday.com/tech/news/2006-07 x.htm http://en.wikipedia.org/wiki/YouTube. User-Generated Content has become very popular since new web services such as YouTube allow for the distribution of user-produced media content. YouTube-like services are different from existing traditional VoD services in that the service provider has only limited control over the creation of new content. We analyze how content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. Based on these measurements, we analyzed the duration and the data rate of streaming sessions, the popularity of videos, and access patterns for video clips from the clients in the campus network. The analysis of the traffic shows that trace statistics are relatively stable over short-term periods while long-term trends can be observed. We demonstrate how synthetic traces can be generated from the measured traces and show how these synthetic traces can be used as inputs to trace-driven simulations. We also analyze the benefits of alternative distribution infrastructures to improve the performance of a YouTube-like VoD service. The results of these simulations show that P2P-based distribution and proxy caching can reduce network traffic significantly and allow for faster access to video clips. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6b221edbde15defb80ecfb03340b012d",
"text": "Abstract We use well-established methods of knot theory to study the topological structure of the set of periodic orbits of the Lü attractor. We show that, for a specific set of parameters, the Lü attractor is topologically different from the classical Lorenz attractor, whose dynamics is formed by a double cover of the simple horseshoe. This argues against the ‘similarity’ between the Lü and Lorenz attractors, claimed, for these parameter values, by some authors on the basis of non-topological observations. However, we show that the Lü system belongs to the Lorenz-like family, since by changing the values of the parameters, the behaviour of the system follows the behaviour of all members of this family. An attractor of the Lü kind with higher order symmetry is constructed and some remarks on the Chen attractor are also presented.",
"title": ""
},
{
"docid": "3c81e6ff0e7b2eb509cea08904bdeaf3",
"text": "A novel ultra wideband (UWB) bandpass filter with double notch-bands is presented in this paper. Multilayer schematic is adopted to achieve compact size. Stepped impedance resonators (SIRs), which can also suppress harmonic response, are designed on top and second layers, respectively, and broadside coupling technique is used to achieve tight couplings for a wide passband. Folded SIRs that can provide desired notch-bands are designed on the third layer and coupled underneath the second layer SIRs. The designed prototype is fabricated using multilayer liquid crystal polymer (LCP) technology. Good agreement between simulated and measured response is observed. The fabricated filter has dual notch-bands with center frequencies of 6.4/8.0 GHz with 3 dB bandwidths of 9.5%/13.4% and high rejection levels up to 26.4 dB and 43.7 dB at 6.4/8.0 GHz are observed, respectively. It also has low-insertion losses and flat group delay in passbands, and excellent stopband rejection level higher than 30.0 dB from 11.4 GHz to 18.0 GHz.",
"title": ""
},
{
"docid": "5be572ea448bfe40654956112cecd4e1",
"text": "BACKGROUND\nBeta blockers reduce mortality in patients who have chronic heart failure, systolic dysfunction, and are on background treatment with diuretics and angiotensin-converting enzyme inhibitors. We aimed to compare the effects of carvedilol and metoprolol on clinical outcome.\n\n\nMETHODS\nIn a multicentre, double-blind, and randomised parallel group trial, we assigned 1511 patients with chronic heart failure to treatment with carvedilol (target dose 25 mg twice daily) and 1518 to metoprolol (metoprolol tartrate, target dose 50 mg twice daily). Patients were required to have chronic heart failure (NYHA II-IV), previous admission for a cardiovascular reason, an ejection fraction of less than 0.35, and to have been treated optimally with diuretics and angiotensin-converting enzyme inhibitors unless not tolerated. The primary endpoints were all-cause mortality and the composite endpoint of all-cause mortality or all-cause admission. Analysis was done by intention to treat.\n\n\nFINDINGS\nThe mean study duration was 58 months (SD 6). The mean ejection fraction was 0.26 (0.07) and the mean age 62 years (11). The all-cause mortality was 34% (512 of 1511) for carvedilol and 40% (600 of 1518) for metoprolol (hazard ratio 0.83 [95% CI 0.74-0.93], p=0.0017). The reduction of all-cause mortality was consistent across predefined subgroups. The composite endpoint of mortality or all-cause admission occurred in 1116 (74%) of 1511 on carvedilol and in 1160 (76%) of 1518 on metoprolol (0.94 [0.86-1.02], p=0.122). Incidence of side-effects and drug withdrawals did not differ by much between the two study groups.\n\n\nINTERPRETATION\nOur results suggest that carvedilol extends survival compared with metoprolol.",
"title": ""
}
] | scidocsrr |
c8629718e67cccbf5a4b71079a0fed55 | An IoT environmental data collection system for fungal detection in crop fields | [
{
"docid": "8c61854c397f8c56c4258c53d6d58894",
"text": "Given the rapid development of plant genomic technologies, a lack of access to plant phenotyping capabilities limits our ability to dissect the genetics of quantitative traits. Effective, high-throughput phenotyping platforms have recently been developed to solve this problem. In high-throughput phenotyping platforms, a variety of imaging methodologies are being used to collect data for quantitative studies of complex traits related to the growth, yield and adaptation to biotic or abiotic stress (disease, insects, drought and salinity). These imaging techniques include visible imaging (machine vision), imaging spectroscopy (multispectral and hyperspectral remote sensing), thermal infrared imaging, fluorescence imaging, 3D imaging and tomographic imaging (MRT, PET and CT). This paper presents a brief review on these imaging techniques and their applications in plant phenotyping. The features used to apply these imaging techniques to plant phenotyping are described and discussed in this review.",
"title": ""
},
{
"docid": "597e00855111c6ccb891c96e28f23585",
"text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.",
"title": ""
}
] | [
{
"docid": "85480263c05578c19b38360dbf843910",
"text": "Monolithic operating system designs undermine the security of computing systems by allowing single exploits anywhere in the kernel to enjoy full supervisor privilege. The nested kernel operating system architecture addresses this problem by \"nesting\" a small isolated kernel within a traditional monolithic kernel. The \"nested kernel\" interposes on all updates to virtual memory translations to assert protections on physical memory, thus significantly reducing the trusted computing base for memory access control enforcement. We incorporated the nested kernel architecture into FreeBSD on x86-64 hardware while allowing the entire operating system, including untrusted components, to operate at the highest hardware privilege level by write-protecting MMU translations and de-privileging the untrusted part of the kernel. Our implementation inherently enforces kernel code integrity while still allowing dynamically loaded kernel modules, thus defending against code injection attacks. We also demonstrate that the nested kernel architecture allows kernel developers to isolate memory in ways not possible in monolithic kernels by introducing write-mediation and write-logging services to protect critical system data structures. Performance of the nested kernel prototype shows modest overheads: <1% average for Apache and 2.7% for kernel compile. Overall, our results and experience show that the nested kernel design can be retrofitted to existing monolithic kernels, providing important security benefits.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "318daea2ef9b0d7afe2cb08edcfe6025",
"text": "Stock market prediction has become an attractive investigation topic due to its important role in economy and beneficial offers. There is an imminent need to uncover the stock market future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with small error ratio and improve the accuracy of prediction. This prediction model is based on sentiment analysis of financial news and historical stock market prices. This model provides better accuracy results than all previous studies by considering multiple types of news related to market and company with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.",
"title": ""
},
{
"docid": "fe6f81141e58bf5cf13bec80e033e197",
"text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.",
"title": ""
},
{
"docid": "b9b194410824bd769b708baef7953aaf",
"text": "Road and lane detection play an important role in autonomous driving and commercial driver-assistance systems. Vision-based road detection is an essential step towards autonomous driving, yet a challenging task due to illumination and complexity of the visual scenery. Urban scenes may present additional challenges such as intersections, multi-lane scenarios, or clutter due to heavy traffic. This paper presents an integrative approach to ego-lane detection that aims to be as simple as possible to enable real-time computation while being able to adapt to a variety of urban and rural traffic scenarios. The approach at hand combines and extends a road segmentation method in an illumination-invariant color image, lane markings detection using a ridge operator, and road geometry estimation using RANdom SAmple Consensus (RANSAC). Employing the segmented road region as a prior for lane markings extraction significantly improves the execution time and success rate of the RANSAC algorithm, and makes the detection of weakly pronounced ridge structures computationally tractable, thus enabling ego-lane detection even in the absence of lane markings. Segmentation performance is shown to increase when moving from a color-based to a histogram correlation-based model. The power and robustness of this algorithm has been demonstrated in a car simulation system as well as in the challenging KITTI data base of real-world urban traffic scenarios.",
"title": ""
},
{
"docid": "b5af84f96015be76875f620d0c24e646",
"text": "The worldwide burden of cancer (malignant tumor) is a major health problem, with more than 8 million new cases and 5 million deaths per year. Cancer is the second leading cause of death. With growing techniques the survival rate has increased and so it becomes important to contribute even the smallest help in this field favoring the survival rate. Tumor is a mass of tissue formed as the result of abnormal, excessive, uncoordinated, autonomous and purposeless proliferation of cells.",
"title": ""
},
{
"docid": "96fa50abd2a4fcff47af85f07b4e9d5d",
"text": "Complex biological systems and cellular networks may underlie most genotype to phenotype relationships. Here, we review basic concepts in network biology, discussing different types of interactome networks and the insights that can come from analyzing them. We elaborate on why interactome networks are important to consider in biology, how they can be mapped and integrated with each other, what global properties are starting to emerge from interactome network models, and how these properties may relate to human disease.",
"title": ""
},
{
"docid": "3057285113f5cdd4308f7dcbc028fcad",
"text": "PURPOSE\nTo evaluate structural alterations of iris and pupil diameters (PDs) in patients using systemic α-1-adrenergic receptor antagonists (α-1ARAs), which are associated with intraoperative floppy iris syndrome (IFIS).\n\n\nMETHODS\nEighty-eight eyes of 49 male were evaluated prospectively. Patients were assigned to 2 different groups. Study group included 23 patients taking any systemic α-1ARAs treatment, and control group included 26 patients not taking any systemic α-1ARAs treatment. All patients underwent anterior segment optical coherence tomography to evaluate iris thickness at the dilator muscle region (DMR) and at the sphincter muscle region (SMR). The PD was measured using a computerized infrared pupillometer under scotopic and photopic illumination.\n\n\nRESULTS\nThe study group included 46 eyes of 23 patients and the control group included 42 eyes of 26 patients. Most treated patients were on tamsulosin (16/23). Mean age was similar in the study and control groups (61.9±7.1 vs. 60.3±8, 2 years, nonsignificant). DMR (506.5±89.4 vs. 503.6±83.5 μm), SMR (507.8±78.1 vs. 522.1±96.4 μm) and the DMR/SMR ratio (1.0±0.15 vs. 0.99±0.23 μm) was similar in the study and control groups and these differences were nonsignificant. Scotopic PDs were also similar in both groups (3.99±1.11 vs. 3.74±1.35, nonsignificant). A significantly reduced photopic PD (2.89±0.55 vs. 3.62±0.64, P<0.001) and an increased scotopic/photopic PD (1.42±0.44 vs. 1.02±0.30, P<0.001) were found in the study group.\n\n\nCONCLUSIONS\nEvaluating PD alterations might be more useful than evaluating iris structural alterations in predicting IFIS. There is still a need for a reliable method that will determine the possibility of IFIS.",
"title": ""
},
{
"docid": "2e58ccf42547abeaa39f9d811b159feb",
"text": "Civitas is the first electronic voting system that is coercion-resistant, universally and voter verifiable, and suitable for remote voting. This paper describes the design and implementation of Civitas. Assurance is established in the design through security proofs, and in the implementation through information-flow security analysis. Experimental results give a quantitative evaluation of the tradeoffs between time, cost, and security.",
"title": ""
},
{
"docid": "fdeaaa484227c1e3c0dbb02677cd68a6",
"text": "A new image-based approach for fast and robust vehicle tracking from a moving platform is presented. Position, orientation, and full motion state, including velocity, acceleration, and yaw rate of a detected vehicle, are estimated from a tracked rigid 3-D point cloud. This point cloud represents a 3-D object model and is computed by analyzing image sequences in both space and time, i.e., by fusion of stereo vision and tracked image features. Starting from an automated initial vehicle hypothesis, tracking is performed by means of an extended Kalman filter. The filter combines the knowledge about the movement of the rigid point cloud's points in the world with the dynamic model of a vehicle. Radar information is used to improve the image-based object detection at far distances. The proposed system is applied to predict the driving path of other traffic participants and currently runs at 25 Hz (640 times 480 images) on our demonstrator vehicle.",
"title": ""
},
{
"docid": "dc2f4cbd2c18e4f893750a0a1a40002b",
"text": "A microstrip half-grid array antenna (HGA) based on low temperature co-fired ceramic (LTCC) technology is presented in this paper. The antenna is designed for the 77-81 GHz radar frequency band and uses a high permittivity material (εr = 7.3). The traditional single-grid array antenna (SGA) uses two radiating elements in the H-plane. For applications using digital beam forming, the focusing of an SGA in the scanning plane (H-plane) limits the field of view (FoV) of the radar system and the width of the SGA enlarges the minimal spacing between the adjacent channels. To overcome this, an array antenna using only half of the grid as radiating element was designed. As feeding network, a laminated waveguide with a vertically arranged power divider was adopted. For comparison, both an SGA and an HGA were fabricated. The measured results show: using an HGA, an HPBW increment in the H-plane can be achieved and their beam patterns in the E-plane remain similar. This compact LTCC antenna is suitable for radar application with a large FoV requirement.",
"title": ""
},
{
"docid": "70b0353efb11a25630ace7faba4a588b",
"text": "We develop an abstract theory of justifications suitable for describing the semantics of a range of logics in knowledge representation, computational and mathematical logic. A theory or program in one of these logics induces a semantical structure called a justification frame. Such a justification frame defines a class of justifications each of which embodies a potential reason why its facts are true. By defining various evaluation functions for these justifications, a range of different semantics are obtained. By allowing nesting of justification frames, various language constructs can be integrated in a seamless way. The theory provides elegant and compact formalisations of existing and new semantics in logics of various areas, showing unexpected commonalities and interrelations, and creating opportunities for new expressive knowledge representation formalisms.",
"title": ""
},
{
"docid": "f479586f0a6fba660950a8d002e7e595",
"text": "ii I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public. Abstract An important element in retailing is the use of impulse purchases; generally small items that are bought by consumers on the spur of the moment. By some estimates, impulse purchases make up approximately 50 percent of all spending by consumers. While impulse purchases have been studied in the brick-and-mortar retail environment, they have not been researched in the online retail environment. With e-commerce growing rapidly and approaching $20 billion per year in the Canadian and US markets, this is an important unexplored area. Using real purchasing behaviour from visitors to the Reunion website of Huntsville High School in Ontario Canada, I explored factors that influence the likelihood of an impulse purchase in an online retail environment. Consistent with diminishing sensitivity (mental accounting and the psychophysics of pricing), the results indicate that the likelihood of a consumer purchasing the impulse item increases with the total amount spent on other items. The results also show that presenting the offer in a popup is a more effective location and presentation mode than embedding the offer into the checkout page and increases the likelihood of the consumer making an impulse purchase. In addition, the results confirm that providing a reason to purchase by linking a $1 donation for a charity to the impulse item increases the frequency of the impulse purchase. iv Acknowledgements",
"title": ""
},
{
"docid": "cd6e9587aa41f95768d6c146df82c50f",
"text": "This paper deals with genetic algorithm implementation in Python. Genetic algorithm is a probabilistic search algorithm based on the mechanics of natural selection and natural genetics. In genetic algorithms, a solution is represented by a list or a string. List or string processing in Python is more productive than in C/C++/Java. Genetic algorithms implementation in Python is quick and easy. In this paper, we introduce genetic algorithm implementation methods in Python. And we discuss various tools for speeding up Python programs.",
"title": ""
},
{
"docid": "30fe64da6dc0d75d0be37ac1a92e8c24",
"text": "—Perhaps the most important application of accurate personal identification is securing limited access systems from malicious attacks. Among all the presently employed biometric techniques, fingerprint identification systems have received the most attention due to the long history of fingerprints and their extensive use in forensics. This paper deals with the issue of selection of an optimal algorithm for fingerprint matching in order to design a system that matches required specifications in performance and accuracy. Two competing algorithms were compared against a common database using MATLAB simulations.",
"title": ""
},
{
"docid": "a01965406575363328f4dae4241a05b7",
"text": "IT governance is one of these concepts that suddenly emerged and became an important issue in the information technology area. Some organisations started with the implementation of IT governance in order to achieve a better alignment between business and IT. This paper interprets important existing theories, models and practices in the IT governance domain and derives research questions from it. Next, multiple research strategies are triangulated in order to understand how organisations are implementing IT governance in practice and to analyse the relationship between these implementations and business/IT alignment. Major finding is that organisations with more mature IT governance practices likely obtain a higher degree of business/IT alignment maturity.",
"title": ""
},
{
"docid": "ec625a278b7ae5b0aea787814fdd425f",
"text": "IoT with its ability to make objects be sensed and connected is inevitable in the smart campus market. Even though the smart campus market has not taken off yet, there is an enormous research that is going on now all over the world to explore such technology. Several factors are driving investigators to study smart campus including: deliver high quality services, protect the environment, and save cost. In this paper, not only we explore the research conducted in this area, but we also investigate challenges and provide possible research opportunities regarding smart campus.",
"title": ""
},
{
"docid": "31e558e1d306e204bfa64121749b75fc",
"text": "Experimental results in psychology have shown the important role of manipulation in guiding infant development. This has inspired work in developmental robotics as well. In this case, however, the benefits of this approach have been limited by the intrinsic difficulties of the task. Controlling the interaction between the robot and the environment in a meaningful and safe way is hard especially when little prior knowledge is available. We push the idea that haptic feedback can enhance the way robots interact with unmodeled environments. We approach grasping and manipulation as tasks driven mainly by tactile and force feedback. We implemented a grasping behavior on a robotic platform with sensitive tactile sensors and compliant actuators; the behavior allows the robot to grasp objects placed on a table. Finally, we demonstrate that the haptic feedback originated by the interaction with the objects carries implicit information about their shape and can be useful for learning.",
"title": ""
},
{
"docid": "f05718832e9e8611b4cd45b68d0f80e3",
"text": "Conflict occurs frequently in any workplace; health care is not an exception. The negative consequences include dysfunctional team work, decreased patient satisfaction, and increased employee turnover. Research demonstrates that training in conflict resolution skills can result in improved teamwork, productivity, and patient and employee satisfaction. Strategies to address a disruptive physician, a particularly difficult conflict situation in healthcare, are addressed.",
"title": ""
},
{
"docid": "eeee6fceaec33b4b1ef5aed9f8b0dcf5",
"text": "This paper presents a novel orthomode transducer (OMT) with the dimension of WR-10 waveguide. The internal structure of the OMT is in the shape of Y so we named it a Y-junction OMT, it contain one square waveguide port with the dimension 2.54mm × 2.54mm and two WR-10 rectangular waveguide ports with the dimension of 1.27mm × 2.54mm. The operating frequency band of OMT is 70-95GHz (more than 30% bandwidth) with simulated insertion loss <;-0.3dB and cross polarization better than -40dB throughout the band for both TE10 and TE01 modes.",
"title": ""
}
] | scidocsrr |
d3889f249c96ad7e734031ae8ddd16f5 | Factors mediating disclosure in social network sites | [
{
"docid": "7eed84f959268599e1b724b0752f6aa5",
"text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.",
"title": ""
}
] | [
{
"docid": "a6b4ee8a6da7ba240b7365cf1a70669d",
"text": "Received: 2013-04-15 Accepted: 2013-05-13 Accepted after one revision by Prof. Dr. Sinz. Published online: 2013-06-14 This article is also available in German in print and via http://www. wirtschaftsinformatik.de: Blohm I, Leimeister JM (2013) Gamification. Gestaltung IT-basierter Zusatzdienstleistungen zur Motivationsunterstützung und Verhaltensänderung. WIRTSCHAFTSINFORMATIK. doi: 10.1007/s11576-013-0368-0.",
"title": ""
},
{
"docid": "752e6d6f34ffc638e9a0d984a62db184",
"text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.",
"title": ""
},
{
"docid": "beec3b6b4e5ecaa05d6436426a6d93b7",
"text": "This paper introduces a 6LoWPAN simulation model for OMNeT++. Providing a 6LoWPAN model is an important step to advance OMNeT++-based Internet of Things simulations. We integrated Contiki’s 6LoWPAN implementation into OMNeT++ in order to avoid problems of non-standard compliant, non-interoperable, or highly abstracted and thus unreliable simulation models. The paper covers the model’s structure as well as its integration and the generic interaction between OMNeT++ / INET and Contiki.",
"title": ""
},
{
"docid": "41d546266db9b3e9ec5071e4926abb8d",
"text": "Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction. Under the assumption that the rays refract only twice when traveling through the object, we present the first approach to simultaneously reconstructing the 3D positions and normals of the object's surface at both refraction locations. Our acquisition setup requires only two cameras and one monitor, which serves as the light source. After acquiring the ray-ray correspondences between each camera and the monitor, we solve an optimization function which enforces a new position-normal consistency constraint. That is, the 3D positions of surface points shall agree with the normals required to refract the rays under Snell's law. Experimental results using both synthetic and real data demonstrate the robustness and accuracy of the proposed approach.",
"title": ""
},
{
"docid": "cf41591ea323c2dd2aa4f594c61315d9",
"text": "Natural language descriptions of videos provide a potentially rich and vast source of supervision. However, the highly-varied nature of language presents a major barrier to its effective use. What is needed are models that can reason over uncertainty over both videos and text. In this paper, we tackle the core task of person naming: assigning names of people in the cast to human tracks in TV videos. Screenplay scripts accompanying the video provide some crude supervision about who’s in the video. However, even the basic problem of knowing who is mentioned in the script is often difficult, since language often refers to people using pronouns (e.g., “he”) and nominals (e.g., “man”) rather than actual names (e.g., “Susan”). Resolving the identity of these mentions is the task of coreference resolution, which is an active area of research in natural language processing. We develop a joint model for person naming and coreference resolution, and in the process, infer a latent alignment between tracks and mentions. We evaluate our model on both vision and NLP tasks on a new dataset of 19 TV episodes. On both tasks, we significantly outperform the independent baselines.",
"title": ""
},
{
"docid": "13cdf06acdcf3f6e0c7085662cb99315",
"text": "Terrestrial ecosystems play a significant role in the global carbon cycle and offset a large fraction of anthropogenic CO2 emissions. The terrestrial carbon sink is increasing, yet the mechanisms responsible for its enhancement, and implications for the growth rate of atmospheric CO2, remain unclear. Here using global carbon budget estimates, ground, atmospheric and satellite observations, and multiple global vegetation models, we report a recent pause in the growth rate of atmospheric CO2, and a decline in the fraction of anthropogenic emissions that remain in the atmosphere, despite increasing anthropogenic emissions. We attribute the observed decline to increases in the terrestrial sink during the past decade, associated with the effects of rising atmospheric CO2 on vegetation and the slowdown in the rate of warming on global respiration. The pause in the atmospheric CO2 growth rate provides further evidence of the roles of CO2 fertilization and warming-induced respiration, and highlights the need to protect both existing carbon stocks and regions, where the sink is growing rapidly.",
"title": ""
},
{
"docid": "b1ffdb1e3f069b78458a2b464293d97a",
"text": "We consider the detection of activities from non-cooperating individuals with features obtained on the radio frequency channel. Since environmental changes impact the transmission channel between devices, the detection of this alteration can be used to classify environmental situations. We identify relevant features to detect activities of non-actively transmitting subjects. In particular, we distinguish with high accuracy an empty environment or a walking, lying, crawling or standing person, in case-studies of an active, device-free activity recognition system with software defined radios. We distinguish between two cases in which the transmitter is either under the control of the system or ambient. For activity detection the application of one-stage and two-stage classifiers is considered. Apart from the discrimination of the above activities, we can show that a detected activity can also be localized simultaneously within an area of less than 1 meter radius.",
"title": ""
},
{
"docid": "22241857a42ffcad817356900f52df66",
"text": "Most of the intensive care units (ICU) are equipped with commercial pulse oximeters for monitoring arterial blood oxygen saturation (SpO2) and pulse rate (PR). Photoplethysmographic (PPG) data recorded from pulse oximeters usually corrupted by motion artifacts (MA), resulting in unreliable and inaccurate estimated measures of SpO2. In this paper, a simple and efficient MA reduction method based on Ensemble Empirical Mode Decomposition (E2MD) is proposed for the estimation of SpO2 from processed PPGs. Performance analysis of the proposed E2MD is evaluated by computing the statistical and quality measures indicating the signal reconstruction like SNR and NRMSE. Intentionally created MAs (Horizontal MA, Vertical MA and Bending MA) in the recorded PPGs are effectively reduced by the proposed one and proved to be the best suitable method for reliable and accurate SpO2 estimation from the processed PPGs.",
"title": ""
},
{
"docid": "2702eb18e03af90e4061badd87bae7f7",
"text": "Two linear time (and hence asymptotically optimal) algorithms for computing the Euclidean distance transform of a two-dimensional binary image are presented. The algorithms are based on the construction and regular sampling of the Voronoi diagram whose sites consist of the unit (feature) pixels in the image. The rst algorithm, which is of primarily theoretical interest, constructs the complete Voronoi diagram. The second, more practical, algorithm constructs the Voronoi diagram where it intersects the horizontal lines passing through the image pixel centres. Extensions to higher dimensional images and to other distance functions are also discussed.",
"title": ""
},
{
"docid": "897962874a43ee19e3f50f431d4c449e",
"text": "According to Dennett, the same system may be described using a ‘physical’ (mechanical) explanatory stance, or using an ‘intentional’ (beliefand goalbased) explanatory stance. Humans tend to find the physical stance more helpful for certain systems, such as planets orbiting a star, and the intentional stance for others, such as living animals. We define a formal counterpart of physical and intentional stances within computational theory: a description of a system as either a device, or an agent, with the key difference being that ‘devices’ are directly described in terms of an input-output mapping, while ‘agents’ are described in terms of the function they optimise. Bayes’ rule can then be applied to calculate the subjective probability of a system being a device or an agent, based only on its behaviour. We illustrate this using the trajectories of an object in a toy grid-world domain.",
"title": ""
},
{
"docid": "36e99c1f3be629e3d556e5dc48243e0a",
"text": "Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.",
"title": ""
},
{
"docid": "83238b7ede9cc85090e44028e79375af",
"text": "Purpose – This paper aims to represent a capability model for industrial robot as they pertain to assembly tasks. Design/methodology/approach – The architecture of a real kit building application is provided to demonstrate how robot capabilities can be used to fully automate the planning of assembly tasks. Discussion on the planning infrastructure is done with the Planning Domain Definition Language (PDDL) for heterogeneous multi robot systems. Findings – The paper describes PDDL domain and problem files that are used by a planner to generate a plan for kitting. Discussion on the plan shows that the best robot is selected to carry out assembly actions. Originality/value – The author presents a robot capability model that is intended to be used for helping manufacturers to characterize the different capabilities their robots contribute to help the end user to select the appropriate robots for the appropriate tasks, selecting backup robots during robot’s failures to limit the deterioration of the system’s productivity and the products’ quality and limiting robots’ failures and increasing productivity by providing a tool to manufacturers that outputs a process plan that assigns the best robot to each task needed to accomplish the assembly.",
"title": ""
},
{
"docid": "88f6a0f18d32d9cf6da82ff730b22298",
"text": "In this letter, we propose an energy efficient power control scheme for resource sharing between cellular and device-to-device (D2D) users in cellular network assisted D2D communication. We take into account the circuit power consumption of the device-to-device user (DU) and aim at maximizing the DU's energy efficiency while guaranteeing the required throughputs of both the DU and the cellular user. Specifically, we define three different regions for the circuit power consumption of the DU and derive the optimal power control scheme for each region. Moreover, a distributed algorithm is proposed for implementation of the optimal power control scheme.",
"title": ""
},
{
"docid": "d5e3b7d29389990154b50087f5c13c88",
"text": "This paper presents two sets of features, shape representation and kinematic structure, for human activity recognition using a sequence of RGB-D images. The shape features are extracted using the depth information in the frequency domain via spherical harmonics representation. The other features include the motion of the 3D joint positions (i.e. the end points of the distal limb segments) in the human body. Both sets of features are fused using the Multiple Kernel Learning (MKL) technique at the kernel level for human activity recognition. Our experiments on three publicly available datasets demonstrate that the proposed features are robust for human activity recognition and particularly when there are similarities",
"title": ""
},
{
"docid": "815e0ad06fdc450aa9ba3f56ab19ab05",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "ad53198bab3ad3002b965914f92ce3c9",
"text": "Adaptive Learning Algorithms for Transferable Visual Recognition by Judith Ho↵man Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor Trevor Darrell, Chair Understanding visual scenes is a crucial piece in many artificial intelligence applications ranging from autonomous vehicles and household robotic navigation to automatic image captioning for the blind. Reliably extracting high-level semantic information from the visual world in real-time is key to solving these critical tasks safely and correctly. Existing approaches based on specialized recognition models are prohibitively expensive or intractable due to limitations in dataset collection and annotation. By facilitating learned information sharing between recognition models these applications can be solved; multiple tasks can regularize one another, redundant information can be reused, and the learning of novel tasks is both faster and easier. In this thesis, I present algorithms for transferring learned information between visual data sources and across visual tasks all with limited human supervision. I will both formally and empirically analyze the adaptation of visual models within the classical domain adaptation setting and extend the use of adaptive algorithms to facilitate information transfer between visual tasks and across image modalities. Most visual recognition systems learn concepts directly from a large collection of manually annotated images/videos. A model which detects pedestrians requires a human to manually go through thousands or millions of images and indicate all instances of pedestrians. However, this model is susceptible to biases in the labeled data and often fails to generalize to new scenarios a detector trained in Palo Alto may have degraded performance in Rome, or a detector trained in sunny weather may fail in the snow. Rather than require human supervision for each new task or scenario, this work draws on deep learning, transformation learning, and convex-concave optimization to produce novel optimization frameworks which transfer information from the large curated databases to real world scenarios.",
"title": ""
},
{
"docid": "79a3631f3ada452ad3193924071211dd",
"text": "The encoder-decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel source-side token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture a correspondence between source and target tokens. The experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. Additionally, we show that our method has an ability to learn a reasonable token-wise correspondence without knowing any true alignments.",
"title": ""
},
{
"docid": "77b9d8a71d5bdd0afdf93cd525950496",
"text": "One of the main tasks of a dialog system is to assign intents to user utterances, which is a form of text classification. Since intent labels are application-specific, bootstrapping a new dialog system requires collecting and annotating in-domain data. To minimize the need for a long and expensive data collection process, we explore ways to improve the performance of dialog systems with very small amounts of training data. In recent years, word embeddings have been shown to provide valuable features for many different language tasks. We investigate the use of word embeddings in a text classification task with little training data. We find that count and vector features complement each other and their combination yields better results than either type of feature alone. We propose a simple alternative, vector extrema, to replace the usual averaging of a sentence’s vectors. We show how taking vector extrema is well suited for text classification and compare it against standard vector baselines in three different applications.",
"title": ""
},
{
"docid": "420fa81c2dbe77622108c978d5c6c019",
"text": "Reasoning about a scene's thermal signature, in addition to its visual appearance and spatial configuration, would facilitate significant advances in perceptual systems. Applications involving the segmentation and tracking of persons, vehicles, and other heat-emitting objects, for example, could benefit tremendously from even coarsely accurate relative temperatures. With the increasing affordability of commercially available thermal cameras, as well as the imminent introduction of new, mobile form factors, such data will be readily and widely accessible. However, in order for thermal processing to complement existing methods in RGBD, there must be an effective procedure for calibrating RGBD and thermal cameras to create RGBDT (red, green, blue, depth, and thermal) data. In this paper, we present an automatic method for the synchronization and calibration of RGBD and thermal cameras in arbitrary environments. While traditional calibration methods fail in our multimodal setting, we leverage invariant features visible by both camera types. We first synchronize the streams with a simple optimization procedure that aligns their motion statistic time series. We then find the relative poses of the cameras by minimizing an objective that measures the alignment between edge maps from the two streams. In contrast to existing methods that use special calibration targets with key points visible to both cameras, our method requires nothing more than some edges visible to both cameras, such as those arising from humans. We evaluate our method and demonstrate that it consistently converges to the correct transform and that it results in high-quality RGBDT data.",
"title": ""
},
{
"docid": "19863150313643b977f72452bb5a8a69",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
}
] | scidocsrr |
e745cdf3341de90bb9b19a4739da8659 | Game design principles in everyday fitness applications | [
{
"docid": "16d949f6915cbb958cb68a26c6093b6b",
"text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "1aeca45f1934d963455698879b1e53e8",
"text": "A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish'n'Steps, which links a player’s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players’ fish tanks included other players’ fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players’ attitudes towards physical activity. Furthermore, although most player’s enthusiasm in the game decreased after the game’s first two weeks, analyzing the results using Prochaska's Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.",
"title": ""
}
] | [
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "928eb797289d2630ff2e701ced782a14",
"text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "70ea3e32d4928e7fd174b417ec8b6d0e",
"text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "b1453c089b5b9075a1b54e4f564f7b45",
"text": "Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal as shown by the recent Tesla autopilot crashes. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks and the ones that can scale to larger networks suffer from high false positives and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks.",
"title": ""
},
{
"docid": "ad4d38ee8089a67353586abad319038f",
"text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domainspecific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both characterlevel and radical-level representations. We are the first to use characterbased BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in BLSTM-CRF architecture and get better performance without carefully designed features. We evaluate our system on the third SIGHAN Bakeoff MSRA data set for simplfied CNER task and achieve state-of-the-art performance 90.95% F1.",
"title": ""
},
{
"docid": "c256283819014d79dd496a3183116b68",
"text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as potential MC transmission scheme is discussed taking into account the requirements for the satellite-specific PHY-Layer like non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions as well as carrier frequency offsets sensitivity (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.",
"title": ""
},
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "7963adab39b58ab0334b8eef4149c59c",
"text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.",
"title": ""
},
{
"docid": "179d8f41102862710595671e5a819d70",
"text": "Detecting changes in time series data is an important data analysis task with application in various scientific domains. In this paper, we propose a novel approach to address the problem of change detection in time series data, which can find both the amplitude and degree of changes. Our approach is based on wavelet footprints proposed originally by the signal processing community for signal compression. We, however, exploit the properties of footprints to efficiently capture discontinuities in a signal. We show that transforming time series data using footprint basis up to degree D generates nonzero coefficients only at the change points with degree up to D. Exploiting this property, we propose a novel change detection query processing scheme which employs footprint-transformed data to identify change points, their amplitudes, and degrees of change efficiently and accurately. We also present two methods for exact and approximate transformation of data. Our analytical and empirical results with both synthetic and real-world data show that our approach outperforms the best known change detection approach in terms of both performance and accuracy. Furthermore, unlike the state of the art approaches, our query response time is independent from the number of change points in the data and the user-defined change threshold.",
"title": ""
},
{
"docid": "c59aaad99023e5c6898243db208a4c3c",
"text": "This paper presents a method for automated vessel segmentation in retinal images. For each pixel in the field of view of the image, a 41-D feature vector is constructed, encoding information on the local intensity structure, spatial properties, and geometry at multiple scales. An AdaBoost classifier is trained on 789 914 gold standard examples of vessel and nonvessel pixels, then used for classifying previously unseen images. The algorithm was tested on the public digital retinal images for vessel extraction (DRIVE) set, frequently used in the literature and consisting of 40 manually labeled images with gold standard. Results were compared experimentally with those of eight algorithms as well as the additional manual segmentation provided by DRIVE. Training was conducted confined to the dedicated training set from the DRIVE database, and feature-based AdaBoost classifier (FABC) was tested on the 20 images from the test set. FABC achieved an area under the receiver operating characteristic (ROC) curve of 0.9561, in line with state-of-the-art approaches, but outperforming their accuracy (0.9597 versus 0.9473 for the nearest performer).",
"title": ""
},
{
"docid": "e11b4a08fc864112d4f68db1ea9703e9",
"text": "Forecasting is an integral part of any organization for their decision-making process so that they can predict their targets and modify their strategy in order to improve their sales or productivity in the coming future. This paper evaluates and compares various machine learning models, namely, ARIMA, Auto Regressive Neural Network(ARNN), XGBoost, SVM, Hy-brid Models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. Training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, linear model such as ARIMA has been applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as Neural Network, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave low RMSE. Then, to further optimize the performance, composite models were designed using hybrid technique and decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM were used and all of them performed better than their respective individual models. Then, the composite model was designed using STL Decomposition where the decomposed components namely seasonal, trend and remainder components were forecasted by Snaive, ARIMA and XGBoost. STL gave better results than individual and hybrid models. This paper evaluates and analyzes why composite models give better results than an individual model and state that decomposition technique is better than the hybrid technique for this application.",
"title": ""
},
{
"docid": "8c2e69380cebdd6affd43c6bfed2fc51",
"text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.",
"title": ""
},
{
"docid": "a1046f5282cf4057fd143fdce79c6990",
"text": "Rheumatoid arthritis is a multisystem disease with underlying immune mechanisms. Osteoarthritis is a debilitating, progressive disease of diarthrodial joints associated with the aging process. Although much is known about the pathogenesis of rheumatoid arthritis and osteoarthritis, our understanding of some immunologic changes remains incomplete. This study tries to examine the numeric changes in the T cell subsets and the alterations in the levels of some cytokines and adhesion molecules in these lesions. To accomplish this goal, peripheral blood and synovial fluid samples were obtained from 24 patients with rheumatoid arthritis, 15 patients with osteoarthritis and six healthy controls. The counts of CD4 + and CD8 + T lymphocytes were examined using flow cytometry. The levels of some cytokines (TNF-α, IL1-β, IL-10, and IL-17) and a soluble intercellular adhesion molecule-1 (sICAM-1) were measured in the sera and synovial fluids using enzyme linked immunosorbant assay. We found some variations in the counts of T cell subsets, the levels of cytokines and sICAM-1 adhesion molecule between the healthy controls and the patients with arthritis. High levels of IL-1β, IL-10, IL-17 and TNF-α (in the serum and synovial fluid) were observed in arthritis compared to the healthy controls. In rheumatoid arthritis, a high serum level of sICAM-1 was found compared to its level in the synovial fluid. A high CD4+/CD8+ T cell ratio was found in the blood of the patients with rheumatoid arthritis. In rheumatoid arthritis, the cytokine levels correlated positively with some clinicopathologic features. To conclude, the development of rheumatoid arthritis and osteoarthritis is associated with alteration of the levels of some cytokines. The assessment of these immunologic changes may have potential prognostic roles.",
"title": ""
},
{
"docid": "15e034d722778575b43394b968be19ad",
"text": "Elections are contests for the highest stakes in national politics and the electoral system is a set of predetermined rules for conducting elections and determining their outcome. Thus defined, the electoral system is distinguishable from the actual conduct of elections as well as from the wider conditions surrounding the electoral contest, such as the state of civil liberties, restraints on the opposition and access to the mass media. While all these aspects are of obvious importance to free and fair elections, the main interest of this study is the electoral system.",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "11a1c92620d58100194b735bfc18c695",
"text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.",
"title": ""
},
{
"docid": "02469f669769f5c9e2a9dc49cee20862",
"text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.",
"title": ""
},
{
"docid": "24e1a6f966594d4230089fc433e38ce6",
"text": "The need for omnidirectional antennas for wireless applications has increased considerably. The antennas are used in a variety of bands anywhere from 1.7 to 2.5 GHz, in different configurations which mainly differ in gain. The omnidirectionality is mostly obtained using back-to-back elements or simply using dipoles in different collinear-array configurations. The antenna proposed in this paper is a patch which was built in a cylindrical geometry rather than a planar one, and which generates an omnidirectional pattern in the H-plane.",
"title": ""
}
] | scidocsrr |
2df5ead9048b5e67787022e54562bf66 | Applying Sensor-Based Technology to Improve Construction Safety Management | [
{
"docid": "8ff8a8ce2db839767adb8559f6d06721",
"text": "Indoor environments present opportunities for a rich set of location-aware applications such as navigation tools for humans and robots, interactive virtual games, resource discovery, asset tracking, location-aware sensor networking etc. Typical indoor applications require better accuracy than what current outdoor location systems provide. Outdoor location technologies such as GPS have poor indoor performance because of the harsh nature of indoor environments. Further, typical indoor applications require different types of location information such as physical space, position and orientation. This dissertation describes the design and implementation of the Cricket indoor location system that provides accurate location in the form of user space, position and orientation to mobile and sensor network applications. Cricket consists of location beacons that are attached to the ceiling of a building, and receivers, called listeners, attached to devices that need location. Each beacon periodically transmits its location information in an RF message. At the same time, the beacon also transmits an ultrasonic pulse. The listeners listen to beacon transmissions and measure distances to nearby beacons, and use these distances to compute their own locations. This active-beacon passive-listener architecture is scalable with respect to the number of users, and enables applications that preserve user privacy. This dissertation describes how Cricket achieves accurate distance measurements between beacons and listeners. Once the beacons are deployed, the MAT and AFL algorithms, described in this dissertation, use measurements taken at a mobile listener to configure the beacons with a coordinate assignment that reflects the beacon layout. This dissertation presents beacon interference avoidance and detection algorithms, as well as outlier rejection algorithms to prevent and filter out outlier distance estimates caused by uncoordinated beacon transmissions. The Cricket listeners can measure distances with an accuracy of 5 cm. The listeners can detect boundaries with an accuracy of 1 cm. Cricket has a position estimation accuracy of 10 cm and an orientation accuracy of 3 degrees. Thesis Supervisor: Hari Balakrishnan Title: Associate Professor of Computer Science and Engineering",
"title": ""
}
] | [
{
"docid": "7cfdad39cebb90cac18a8f9ae6a46238",
"text": "A malware macro (also called \"macro virus\") is the code that exploits the macro functionality of office documents (especially Microsoft Office’s Excel and Word) to carry out malicious action against the systems of the victims that open the file. This type of malware was very popular during the late 90s and early 2000s. After its rise when it was created as a propagation method of other malware in 2014, macro viruses continue posing a threat to the user that is far from being controlled. This paper studies the possibility of improving macro malware detection via machine learning techniques applied to the properties of the code.",
"title": ""
},
{
"docid": "af6b26efef62f3017a0eccc5d2ae3c33",
"text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.",
"title": ""
},
{
"docid": "6fb0aac60ec74b5efca4eeda22be979d",
"text": "Images captured in hazy or foggy weather conditions are seriously degraded by the scattering of atmospheric particles, which directly influences the performance of outdoor computer vision systems. In this paper, a fast algorithm for single image dehazing is proposed based on linear transformation by assuming that a linear relationship exists in the minimum channel between the hazy image and the haze-free image. First, the principle of linear transformation is analyzed. Accordingly, the method of estimating a medium transmission map is detailed and the weakening strategies are introduced to solve the problem of the brightest areas of distortion. To accurately estimate the atmospheric light, an additional channel method is proposed based on quad-tree subdivision. In this method, average grays and gradients in the region are employed as assessment criteria. Finally, the haze-free image is obtained using the atmospheric scattering model. Numerous experimental results show that this algorithm can clearly and naturally recover the image, especially at the edges of sudden changes in the depth of field. It can, thus, achieve a good effect for single image dehazing. Furthermore, the algorithmic time complexity is a linear function of the image size. This has obvious advantages in running time by guaranteeing a balance between the running speed and the processing effect.",
"title": ""
},
{
"docid": "35de54ee9d3d4c117cf4c1d8fc4f4e87",
"text": "On the purpose of managing process models to make them more practical and effective in enterprises, a construction of BPMN-based Business Process Model Base is proposed. Considering Business Process Modeling Notation (BPMN) is used as a standard of process modeling, based on BPMN, the process model transformation is given, and business blueprint modularization management methodology is used for process management. Therefore, BPMN-based Business Process Model Base provides a solution of business process modeling standardization, management and execution so as to enhance the business process reuse.",
"title": ""
},
{
"docid": "9a6249777e0137121df0c02cffe63b73",
"text": "With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.",
"title": ""
},
{
"docid": "dd1fd4f509e385ea8086a45a4379a8b5",
"text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.",
"title": ""
},
{
"docid": "72cff051b5d2bcd8eaf41b6e9ae9eca9",
"text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.",
"title": ""
},
{
"docid": "0890227418a3fca80f280f9fa810f6a3",
"text": "OBJECTIVE\nTo update the likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan.\n\n\nMETHODS\nUltrasound examination of the fetal profile was carried out and the presence or absence of the nasal bone was noted immediately before karyotyping in 5918 fetuses at 11 to 13+6 weeks. Logistic regression analysis was used to examine the effect of maternal ethnic origin and fetal crown-rump length (CRL) and nuchal translucency (NT) on the incidence of absent nasal bone in the chromosomally normal and trisomy 21 fetuses.\n\n\nRESULTS\nThe fetal profile was successfully examined in 5851 (98.9%) cases. In 5223/5851 cases the fetal karyotype was normal and in 628 cases it was abnormal. In the chromosomally normal group the incidence of absent nasal bone was related first to the ethnic origin of the mother, being 2.2% for Caucasians, 9.0% for Afro-Caribbeans and 5.0% for Asians; second to fetal CRL, being 4.7% for CRL of 45-54 mm, 3.4% for CRL of 55-64 mm, 1.4% for CRL of 65-74 mm and 1% for CRL of 75-84 mm; and third to NT, being 1.6% for NT < or = 95th centile, 2.7% for NT > 95th centile-3.4 mm, 5.4% for NT 3.5-4.4 mm, 6% for NT 4.5-5.4 mm and 15% for NT > or = 5.5 mm. In the chromosomally abnormal group there was absent nasal bone in 229/333 (68.8%) cases with trisomy 21 and in 95/295 (32.2%) cases with other chromosomal defects. Logistic regression analysis demonstrated that in the chromosomally normal fetuses significant independent prediction of the likelihood of absent nasal bone was provided by CRL, NT and Afro-Caribbean ethnic group, and in the trisomy 21 fetuses by CRL and NT. The likelihood ratio for trisomy 21 for absent nasal bone was derived by dividing the likelihood in trisomy 21 by that in normal fetuses.\n\n\nCONCLUSION\nAt the 11-14-week scan the incidence of absent nasal bone is related to the presence or absence of chromosomal defects, CRL, NT and ethnic origin.",
"title": ""
},
{
"docid": "d436517b8dd58d67cee91eb3d2c12b93",
"text": "The ability to deploy neural networks in real-world, safety-critical systems is severely limited by the presence of adversarial examples: slightly perturbed inputs that are misclassified by the network. In recent years, several techniques have been proposed for training networks that are robust to such examples; and each time stronger attacks have been devised, demonstrating the shortcomings of existing defenses. This highlights a key difficulty in designing an effective defense: the inability to assess a network’s robustness against future attacks. We propose to address this difficulty through formal verification techniques. We construct ground truths: adversarial examples with provably minimal perturbation. We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced to the ground truths; and also of defense techniques, by measuring the increase in distortion to ground truths in the hardened network versus the original. We use this technique to assess recently suggested attack and defense techniques.",
"title": ""
},
{
"docid": "6cd317113158241a98517ad5a8247174",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "255ede4ccdeeeb32cb09e52fa7d0ca0b",
"text": "Advanced neural machine translation (NMT) models generally implement encoder and decoder as multiple layers, which allows systems to model complex functions and capture complicated linguistic structures. However, only the top layers of encoder and decoder are leveraged in the subsequent process, which misses the opportunity to exploit the useful information embedded in other layers. In this work, we propose to simultaneously expose all of these signals with layer aggregation and multi-layer attention mechanisms. In addition, we introduce an auxiliary regularization term to encourage different layers to capture diverse information. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation data demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "ad49ca31e92eaeb44cbb24206e10c9ee",
"text": "PESQ, Perceptual Evaluation of Speech Quality [5], and POLQA, Perceptual Objective Listening Quality Assessment [1] , are standards comprising a test methodology for automated assessment of voice quality of speech as experienced by human beings. The predictions of those objective measures should come as close as possible to subjective quality scores as obtained in subjective listening tests, usually, a Mean Opinion Score (MOS) is predicted. Wavenet [6] is a deep neural network originally developed as a deep generative model of raw audio waveforms. Wavenet architecture is based on dilated causal convolutions, which exhibit very large receptive fields. In this short paper we suggest using the Wavenet architecture, in particular its large receptive filed in order to mimic PESQ algorithm. By doing so we can use it as a differentiable loss function for speech enhancement. 1 Problem formulation and related work In statistics, the Mean Squared Error (MSE) or Peak Signal to Noise Ratio (PSNR) of an estimator are widely used objective measures and are good distortion indicators (loss functions) between the estimators output and the size that we want to estimate. those loss functions are used for many reconstruction tasks. However, PSNR and MSE do not have good correlation with reliable subjective methods such as Mean Opinion Score (MOS) obtained from expert listeners. A more suitable speech quality assessment can by achieved by using tests that aim to achieve high correlation with MOS tests such as PEAQ or POLQA. However those algorithms are hard to represent as a differentiable function such as MSE moreover, as opposed to MSE that measures the average",
"title": ""
},
{
"docid": "c13aff70c3b080cfd5d374639e5ec0e9",
"text": "Contemporary vehicles are getting equipped with an increasing number of Electronic Control Units (ECUs) and wireless connectivities. Although these have enhanced vehicle safety and efficiency, they are accompanied with new vulnerabilities. In this paper, we unveil a new important vulnerability applicable to several in-vehicle networks including Control Area Network (CAN), the de facto standard in-vehicle network protocol. Specifically, we propose a new type of Denial-of-Service (DoS), called the bus-off attack, which exploits the error-handling scheme of in-vehicle networks to disconnect or shut down good/uncompromised ECUs. This is an important attack that must be thwarted, since the attack, once an ECU is compromised, is easy to be mounted on safety-critical ECUs while its prevention is very difficult. In addition to the discovery of this new vulnerability, we analyze its feasibility using actual in-vehicle network traffic, and demonstrate the attack on a CAN bus prototype as well as on two real vehicles. Based on our analysis and experimental results, we also propose and evaluate a mechanism to detect and prevent the bus-off attack.",
"title": ""
},
{
"docid": "a94f066ec5db089da7fd19ac30fe6ee3",
"text": "Information Centric Networking (ICN) is a new networking paradigm in which the ne twork provides users with content instead of communicatio n channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the co ntinuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN by using SDN concepts. We focus on an ICN framework called CONET, which groun ds its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). Altho ugh some details of our solution have been specifically designed for the CONET architecture, i ts general ideas and concepts are applicable to a c lass of recent ICN proposals, which follow the basic mod e of operation of CCN/NDN. We approach the problem in two complementary ways. First we discuss a general and long term solution based on SDN concepts without taking into account specific limit ations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large scale SDN testbed based on OpenFlow, developed in the context of the OFELIA Eu ropean research project. The current OFELIA testbed is based on OpenFlow 1.0 equipment from a v ariety of vendors, therefore we had to design the experiment taking into account the features that ar e currently available on off-the-shelf OpenFlow equipment.",
"title": ""
},
{
"docid": "7cc362ec57b9b4a8f0e5d9beaf0ed02f",
"text": "Conclusions Trading Framework Deep Learning has become a robust machine learning tool in recent years, and models based on deep learning has been applied to various fields. However, applications of deep learning in the field of computational finance are still limited[1]. In our project, Long Short Term Memory (LSTM) Networks, a time series version of Deep Neural Networks model, is trained on the stock data in order to forecast the next day‘s stock price of Intel Corporation (NASDAQ: INTC): our model predicts next day’s adjusted closing price based on information/features available until the present day. Based on the predicted price, we trade the Intel stock according to the strategy that we developed, which is described below. Locally Weighted Regression has also been performed in lieu of the unsupervised learning model for comparison.",
"title": ""
},
{
"docid": "dd79b1a2269971167c91d42fca98bb55",
"text": "The relationship between berry chemical composition, region of origin and quality grade was investigated for Chardonnay grapes sourced from vineyards located in seven South Australian Geographical Indications (GI). Measurements of basic chemical parameters, amino acids, elements, and free and bound volatiles were conducted for grapes collected during 2015 and 2016. Multiple factor analysis (MFA) was used to determine the sets of data that best discriminated each GI and quality grade. Important components for the discrimination of grapes based on GI were 2-phenylethanol, benzyl alcohol and C6 compounds, as well as Cu, Zn, and Mg, titratable acidity (TA), total soluble solids (TSS), and pH. Discriminant analysis (DA) based on MFA results correctly classified 100% of the samples into GI in 2015 and 2016. Classification according to grade was achieved based on the results for elements such as Cu, Na, Fe, volatiles including C6 and aryl alcohols, hydrolytically-released volatiles such as (Z)-linalool oxide and vitispirane, pH, TSS, alanine and proline. Correct classification through DA according to grade was 100% for both vintages. Significant correlations were observed between climate, GI, grade, and berry composition. Climate influenced the synthesis of free and bound volatiles as well as amino acids, sugars, and acids, as a result of higher temperatures and precipitation.",
"title": ""
},
{
"docid": "b3db73c0398e6c0e6a90eac45bb5821f",
"text": "The task of video grounding, which temporally localizes a natural language description in a video, plays an important role in understanding videos. Existing studies have adopted strategies of sliding window over the entire video or exhaustively ranking all possible clip-sentence pairs in a presegmented video, which inevitably suffer from exhaustively enumerated candidates. To alleviate this problem, we formulate this task as a problem of sequential decision making by learning an agent which regulates the temporal grounding boundaries progressively based on its policy. Specifically, we propose a reinforcement learning based framework improved by multi-task learning and it shows steady performance gains by considering additional supervised boundary information during training. Our proposed framework achieves state-ofthe-art performance on ActivityNet’18 DenseCaption dataset (Krishna et al. 2017) and Charades-STA dataset (Sigurdsson et al. 2016; Gao et al. 2017) while observing only 10 or less clips per video.",
"title": ""
},
{
"docid": "0eabd9e8a9468ebb308e1f578578c8b1",
"text": "Textual documents created and distributed on the Internet are ever changing in various forms. Most of existing works are devoted to topic modeling and the evolution of individual topics, while sequential relations of topics in successive documents published by a specific user are ignored. In this paper, in order to characterize and detect personalized and abnormal behaviors of Internet users, we propose Sequential Topic Patterns (STPs) and formulate the problem of mining User-aware Rare Sequential Topic Patterns (URSTPs) in document streams on the Internet. They are rare on the whole but relatively frequent for specific users, so can be applied in many real-life scenarios, such as real-time monitoring on abnormal user behaviors. We present a group of algorithms to solve this innovative mining problem through three phases: preprocessing to extract probabilistic topics and identify sessions for different users, generating all the STP candidates with (expected) support values for each user by pattern-growth, and selecting URSTPs by making user-aware rarity analysis on derived STPs. Experiments on both real (Twitter) and synthetic datasets show that our approach can indeed discover special users and interpretable URSTPs effectively and efficiently, which significantly reflect users' characteristics.",
"title": ""
},
{
"docid": "fd2da8187978c334d5fe265b4df14487",
"text": "Monopulse is a classical radar technique [1] of precise direction finding of a source or target. The concept can be used both in radar applications as well as in modern communication techniques. The information contained in antenna sidelobes normally disturbs the determination of DOA in the case of a classical monopulse system. The suitable combination of amplitudeand phase-monopulse algorithm leads to the novel complex monopulse algorithm (CMP), which also can utilise information from the sidelobes by using the phase shift of the signals in the sidelobes in relation to the mainlobes.",
"title": ""
},
{
"docid": "4d6bd155102e7431d17f651dc124ffc2",
"text": "Probiotic microorganisms are generally considered to beneficially affect host health when used in adequate amounts. Although generally used in dairy products, they are also widely used in various commercial food products such as fermented meats, cereals, baby foods, fruit juices, and ice creams. Among lactic acid bacteria, Lactobacillus and Bifidobacterium are the most commonly used bacteria in probiotic foods, but they are not resistant to heat treatment. Probiotic food diversity is expected to be greater with the use of probiotics, which are resistant to heat treatment and gastrointestinal system conditions. Bacillus coagulans (B. coagulans) has recently attracted the attention of researchers and food manufacturers, as it exhibits characteristics of both the Bacillus and Lactobacillus genera. B. coagulans is a spore-forming bacterium which is resistant to high temperatures with its probiotic activity. In addition, a large number of studies have been carried out on the low-cost microbial production of industrially valuable products such as lactic acid and various enzymes of B. coagulans which have been used in food production. In this review, the importance of B. coagulans in food industry is discussed. Moreover, some studies on B. coagulans products and the use of B. coagulans as a probiotic in food products are summarized.",
"title": ""
}
] | scidocsrr |
093f2e084435e6cca140c173ff96cad9 | A Model Driven Approach Accelerating Ontology-based IoT Applications Development | [
{
"docid": "49e824c73b62d4c05b28fbd46fde1a28",
"text": "The Advent of Internet-of-Things (IoT) paradigm has brought exciting opportunities to solve many real-world problems. IoT in industries is poised to play an important role not only to increase productivity and efficiency but also to improve customer experiences. Two main challenges that are of particular interest to industry include: handling device heterogeneity and getting contextual information to make informed decisions. These challenges can be addressed by IoT along with proven technologies like the Semantic Web. In this paper, we present our work, SQenIoT: a Semantic Query Engine for Industrial IoT. SQenIoT resides on a commercial product and offers query capabilities to retrieve information regarding the connected things in a given facility. We also propose a things query language, targeted for resource-constrained gateways and non-technical personnel such as facility managers. Two other contributions include multi-level ontologies and mechanisms for semantic tagging in our commercial products. The implementation details of SQenIoT and its performance results are also presented.",
"title": ""
},
{
"docid": "a53caf0e12e25aadb812e9819fa41e27",
"text": "Abstact This paper does not pretend either to transform completely the ontological art in engineering or to enumerate xhaustively the complete set of works that has been reported in this area. Its goal is to clarify to readers interested in building ontologies from scratch, the activities they should perform and in which order, as well as the set of techniques to be used in each phase of the methodology. This paper only presents a set of activities that conform the ontology development process, a life cycle to build ontologies based in evolving prototypes, and METHONTOLOGY, a well-structured methodology used to build ontologies from scratch. This paper gathers the experience of the authors on building an ontology in the domain of chemicals.",
"title": ""
}
] | [
{
"docid": "34e21b8051f3733c077d7087c035be2f",
"text": "This paper deals with the synthesis of a speed control strategy for a DC motor drive based on an output feedback backstepping controller. The backstepping method takes into account the non linearities of the system in the design control law and leads to a system asymptotically stable in the context of Lyapunov theory. Simulated results are displayed to validate the feasibility and the effectiveness of the proposed strategy.",
"title": ""
},
{
"docid": "b382f93bb45e7324afaff9950d814cf3",
"text": "OBJECTIVE\nA vocational rehabilitation program (occupational therapy and supported employment) for promoting the return to the community of long-stay persons with schizophrenia was established at a psychiatric hospital in Japan. The purpose of the study was to evaluate the program in terms of hospitalization rates, community tenure, and social functioning with each individual serving as his or her control.\n\n\nMETHODS\nFifty-two participants, averaging 8.9 years of hospitalization, participated in the vocational rehabilitation program consisting of 2 to 6 hours of in-hospital occupational therapy for 6 days per week and a post-discharge supported employment component. Seventeen years after the program was established, a retrospective study was conducted to evaluate the impact of the program on hospitalizations, community tenure, and social functioning after participants' discharge from hospital, using an interrupted time-series analysis. The postdischarge period was compared with the period from onset of illness to the index discharge on the three outcome variables.\n\n\nRESULTS\nAfter discharge from the hospital, the length of time spent by participants out of the hospital increased, social functioning improved, and risk of hospitalization diminished by 50%. Female participants and those with supportive families spent more time out of the hospital than participants who were male or came from nonsupportive families.\n\n\nCONCLUSION\nA combined program of occupational therapy and supported employment was successful in a Japanese psychiatric hospital when implemented with the continuing involvement of a clinical team. Interventions that improve the emotional and housing supports provided to persons with schizophrenia by their families are likely to enhance the outcome of vocational services.",
"title": ""
},
{
"docid": "9363421f524b4990c5314298a7e56e80",
"text": "hree years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats 1. Google Brain's discovery that the Inter-net is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language. Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again. Such advances make for exciting times in THE LEARNING MACHINES Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence.",
"title": ""
},
{
"docid": "50964057831f482d806bf1c9d46621c0",
"text": "We propose a unified framework for deep density models by formally defining density destructors. A density destructor is an invertible function that transforms a given density to the uniform density—essentially destroying any structure in the original density. This destructive transformation generalizes Gaussianization via ICA and more recent autoregressive models such as MAF and Real NVP. Informally, this transformation can be seen as a generalized whitening procedure or a multivariate generalization of the univariate CDF function. Unlike Gaussianization, our destructive transformation has the elegant property that the density function is equal to the absolute value of the Jacobian determinant. Thus, each layer of a deep density can be seen as a shallow density—uncovering a fundamental connection between shallow and deep densities. In addition, our framework provides a common interface for all previous methods enabling them to be systematically combined, evaluated and improved. Leveraging the connection to shallow densities, we also propose a novel tree destructor based on tree densities and an image-specific destructor based on pixel locality. We illustrate our framework on a 2D dataset, MNIST, and CIFAR-10. Code is available on first author’s website.",
"title": ""
},
{
"docid": "d838819f465fb2bde432666d09f25526",
"text": "Phenyl boronic acid-functionalized CdSe/ZnS quantum dots (QDs) were synthesized. The modified particles bind nicotinamide adenine dinucleotide (NAD(+)) or 1,4-dihydronicotinamide adenine dinucleotide (NADH). The NAD(+)-functionalized QDs are effectively quenched by an electron transfer process, while the NADH-modified QDs are inefficiently quenched by the reduced cofactor. These properties enable the implementation of the QDs for the fluorescence analysis of ethanol in the presence of alcohol dehydrogenase. The NADH-functionalized QDs were used for the optical analysis of the 1,3,5-trinitrotriazine, RDX explosive, with a detection limit that corresponded to 1 x 10(-10) M. We demonstrate cooperative optical and catalytic functions of the core-shell components of the QDs in the analysis of RDX.",
"title": ""
},
{
"docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1",
"text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.",
"title": ""
},
{
"docid": "6f125b0a1f7de3402c1a6e2af72af506",
"text": "The location-based service (LBS) of mobile communication and the personalization of information recommendation are two important trends in the development of electric commerce. However, many previous researches have only emphasized on one of the two trends. In this paper, we integrate the application of LBS with recommendation technologies to present a location-based service recommendation model (LBSRM) and design a prototype system to simulate and measure the validity of LBSRM. Due to the accumulation and variation of preference, in the recommendation model we conduct an adaptive method including long-term and short-term preference adjustment to enhance the result of recommendation. Research results show, with the assessments of relative index, the rate of recommendation precision could be 85.48%. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f65d5366115da23c8acd5bce1f4a9887",
"text": "Effective crisis management has long relied on both the formal and informal response communities. Social media platforms such as Twitter increase the participation of the informal response community in crisis response. Yet, challenges remain in realizing the formal and informal response communities as a cooperative work system. We demonstrate a supportive technology that recognizes the existing capabilities of the informal response community to identify needs (seeker behavior) and provide resources (supplier behavior), using their own terminology. To facilitate awareness and the articulation of work in the formal response community, we present a technology that can bridge the differences in terminology and understanding of the task between the formal and informal response communities. This technology includes our previous work using domain-independent features of conversation to identify indications of coordination within the informal response community. In addition, it includes a domain-dependent analysis of message content (drawing from the ontology of the formal response community and patterns of language usage concerning the transfer of property) to annotate social media messages. The resulting repository of annotated messages is accessible through our social media analysis tool, Twitris. It allows recipients in the formal response community to sort on resource needs and availability along various dimensions including geography and time. Thus, computation indexes the original social media content and enables complex querying to identify contents, players, and locations. Evaluation of the computed annotations for seeker-supplier behavior with human judgment shows fair to moderate agreement. In addition to the potential benefits to the formal emergency response community regarding awareness of the observations and activities of the informal response community, the analysis serves as a point of reference for evaluating more computationally intensive efforts and characterizing the patterns of language behavior during a crisis.",
"title": ""
},
{
"docid": "6358c534b358d47b6611bd2a5ef95134",
"text": "In recent years, query recommendation algorithms have been designed to provide related queries for search engine users. Most of these solutions, which perform extensive analysis of users' search history (or query logs), are largely insufficient for long-tail queries that rarely appear in query logs. To handle such queries, we study a new solution, which makes use of a knowledge base (or KB), such as YAGO and Freebase. A KB is a rich information source that describes how real-world entities are connected. We extract entities from a query, and use these entities to explore new ones in the KB. Those discovered entities are then used to suggest new queries to the user. As shown in our experiments, our approach provides better recommendation results for long-tail queries than existing solutions.",
"title": ""
},
{
"docid": "c891330d08fb8e41d179e803524a1737",
"text": "This article deals with active frequency filter design using signalflow graphs. The procedure of multifunctional circuit design that can realize more types of frequency filters is shown. To design a new circuit the Mason – Coates graphs with undirected self-loops have been used. The voltage conveyors whose properties are dual to the properties of the well-known current conveyors have been used as the active element.",
"title": ""
},
{
"docid": "8c95392ab3cc23a7aa4f621f474d27ba",
"text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.",
"title": ""
},
{
"docid": "45be193fe04064886615367dd9225c92",
"text": "Automatic electrocardiogram (ECG) beat classification is essential to timely diagnosis of dangerous heart conditions. Specifically, accurate detection of premature ventricular contractions (PVCs) is imperative to prepare for the possible onset of life-threatening arrhythmias. Although many groups have developed highly accurate algorithms for detecting PVC beats, results have generally been limited to relatively small data sets. Additionally, many of the highest classification accuracies (>90%) have been achieved in experiments where training and testing sets overlapped significantly. Expanding the overall data set greatly reduces overall accuracy due to significant variation in ECG morphology among different patients. As a result, we believe that morphological information must be coupled with timing information, which is more constant among patients, in order to achieve high classification accuracy for larger data sets. With this approach, we combined wavelet-transformed ECG waves with timing information as our feature set for classification. We used select waveforms of 18 files of the MIT/BIH arrhythmia database, which provides an annotated collection of normal and arrhythmic beats, for training our neural-network classifier. We then tested the classifier on these 18 training files as well as 22 other files from the database. The accuracy was 95.16% over 93,281 beats from all 40 files, and 96.82% over the 22 files outside the training set in differentiating normal, PVC, and other beats",
"title": ""
},
{
"docid": "f55cd152f6c9e32ed33e4cca1a91cf2e",
"text": "This study investigated whether being charged with a child pornography offense is a valid diagnostic indicator of pedophilia, as represented by an index of phallometrically assessed sexual arousal to children. The sample of 685 male patients was referred between 1995 and 2004 for a sexological assessment of their sexual interests and behavior. As a group, child pornography offenders showed greater sexual arousal to children than to adults and differed from groups of sex offenders against children, sex offenders against adults, and general sexology patients. The results suggest child pornography offending is a stronger diagnostic indicator of pedophilia than is sexually offending against child victims. Theoretical and clinical implications are discussed.",
"title": ""
},
{
"docid": "16a18f742d67e4dfb660b4ce3b660811",
"text": "Container-based virtualization has become the de-facto standard for deploying applications in data centers. However, deployed containers frequently include a wide-range of tools (e.g., debuggers) that are not required for applications in the common use-case, but they are included for rare occasions such as in-production debugging. As a consequence, containers are significantly larger than necessary for the common case, thus increasing the build and deployment time. CNTR1 provides the performance benefits of lightweight containers and the functionality of large containers by splitting the traditional container image into two parts: the “fat” image — containing the tools, and the “slim” image — containing the main application. At run-time, CNTR allows the user to efficiently deploy the “slim” image and then expand it with additional tools, when and if necessary, by dynamically attaching the “fat” image. To achieve this, CNTR transparently combines the two container images using a new nested namespace, without any modification to the application, the container manager, or the operating system. We have implemented CNTR in Rust, using FUSE, and incorporated a range of optimizations. CNTR supports the full Linux filesystem API, and it is compatible with all container implementations (i.e., Docker, rkt, LXC, systemd-nspawn). Through extensive evaluation, we show that CNTR incurs reasonable performance overhead while reducing, on average, by 66.6% the image size of the Top-50 images available on Docker Hub.",
"title": ""
},
{
"docid": "36e5cd6aac9b0388f67a9584d9bf0bf6",
"text": "To learn to program, a novice programmer must understand the dynamic, runtime aspect of program code, a so-called notional machine. Understanding the machine can be easier when it is represented graphically, and tools have been developed to this end. However, these tools typically support only one programming language and do not work in a web browser. In this article, we present the functionality and technical implementation of the two visualization tools. First, the language-agnostic and extensible Jsvee library helps educators visualize notional machines and create expression-level program animations for online course materials. Second, the Kelmu toolkit can be used by ebook authors to augment automatically generated animations, for instance by adding annotations such as textual explanations and arrows. Both of these libraries have been used in introductory programming courses, and there is preliminary evidence that students find the animations useful.",
"title": ""
},
{
"docid": "26029eb824fc5ad409f53b15bfa0dc15",
"text": "Detecting contradicting statements is a fundamental and challenging natural language processing and machine learning task, with numerous applications in information extraction and retrieval. For instance, contradictions need to be recognized by question answering systems or multi-document summarization systems. In terms of machine learning, it requires the ability, through supervised learning, to accurately estimate and capture the subtle differences between contradictions and for instance, paraphrases. In terms of natural language processing, it demands a pipeline approach with distinct phases in order to extract as much knowledge as possible from sentences. Previous state-of-the-art systems rely often on semantics and alignment relations. In this work, I move away from the commonly setup used in this domain, and address the problem of detecting contradictions as a classification task. I argue that for such classification, one can heavily rely on features based on those used for detecting paraphrases and recognizing textual entailment, alongside with numeric and string based features. This M.Sc. dissertation provides a system capable of detecting contradictions from a pair of affirmations published across newspapers with both a F1-score and Accuracy of 71%. Furthermore, this M.Sc. dissertation provides an assessment of what are the most informative features for detecting contradictions and paraphrases and infer if exists a correlation between contradiction detection and paraphrase identification.",
"title": ""
},
{
"docid": "bab7a21f903157fcd0d3e70da4e7261a",
"text": "The clinical, electrophysiological and morphological findings (light and electron microscopy of the sural nerve and gastrocnemius muscle) are reported in an unusual case of Guillain-Barré polyneuropathy with an association of muscle hypertrophy and a syndrome of continuous motor unit activity. Fasciculation, muscle stiffness, cramps, myokymia, impaired muscle relaxation and percussion myotonia, with their electromyographic accompaniments, were abolished by peripheral nerve blocking, carbamazepine, valproic acid or prednisone therapy. Muscle hypertrophy, which was confirmed by morphometric data, diminished 2 months after the beginning of prednisone therapy. Electrophysiological and nerve biopsy findings revealed a mixed process of axonal degeneration and segmental demyelination. Muscle biopsy specimen showed a marked predominance and hypertrophy of type-I fibres and atrophy, especially of type-II fibres.",
"title": ""
},
{
"docid": "cfddb85a8c81cb5e370fe016ea8d4c5b",
"text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.",
"title": ""
},
{
"docid": "ba974ef3b1724a0b31331f558ed13e8e",
"text": "The paper presents a simple and effective sketch-based algorithm for large scale image retrieval. One of the main challenges in image retrieval is to localize a region in an image which would be matched with the query image in contour. To tackle this problem, we use the human perception mechanism to identify two types of regions in one image: the first type of region (the main region) is defined by a weighted center of image features, suggesting that we could retrieve objects in images regardless of their sizes and positions. The second type of region, called region of interests (ROI), is to find the most salient part of an image, and is helpful to retrieve images with objects similar to the query in a complicated scene. So using the two types of regions as candidate regions for feature extraction, our algorithm could increase the retrieval rate dramatically. Besides, to accelerate the retrieval speed, we first extract orientation features and then organize them in a hierarchal way to generate global-to-local features. Based on this characteristic, a hierarchical database index structure could be built which makes it possible to retrieve images on a very large scale image database online. Finally a real-time image retrieval system on 4.5 million database is developed to verify the proposed algorithm. The experiment results show excellent retrieval performance of the proposed algorithm and comparisons with other algorithms are also given.",
"title": ""
},
{
"docid": "e7865d56e092376493090efc48a7e238",
"text": "Machine learning techniques are applied to the task of context awareness, or inferring aspects of the user's state given a stream of inputs from sensors worn by the person. We focus on the task of indoor navigation and show that, by integrating information from accelerometers, magnetometers and temperature and light sensors, we can collect enough information to infer the user's location. However, our navigation algorithm performs very poorly, with almost a 50% error rate, if we use only the raw sensor signals. Instead, we introduce a \"data cooking\" module that computes appropriate high-level features from the raw sensor data. By introducing these high-level features, we are able to reduce the error rate to 2% in our example environment.",
"title": ""
}
] | scidocsrr |
cfecd5986b39b4a57b6543db7319bf74 | Classification and Characteristics of Electronic Payment Systems | [
{
"docid": "3fc66dd37228df26f0cae8fa66283ce7",
"text": "Consumers' lack of trust has often been cited as a major barrier to the adoption of electronic commerce (e-commerce). To address this problem, a model of trust was developed that describes what design factors affect consumers' assessment of online vendors' trustworthiness. Six components were identified and regrouped into three categories: Prepurchase Knowledge, Interface Properties and Informational Content. This model also informs the Human-Computer Interaction (HCI) design of e-commerce systems in that its components can be taken as trust-specific high-level user requirements.",
"title": ""
}
] | [
{
"docid": "6661cc34d65bae4b09d7c236d0f5400a",
"text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.",
"title": ""
},
{
"docid": "e04ff1f4c08bc0541da0db5cd7928ef7",
"text": "Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, increasingly neural networks have been adopted by many large-scale information processing applications but there is no a set of well defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task.",
"title": ""
},
{
"docid": "4ec7af75127df22c9cb7bd279cb2bcf3",
"text": "This paper describes a real-time walking control system developed for the biped robots JOHNNIE and LOLA. Walking trajectories are planned on-line using a simplified robot model and modified by a stabilizing controller. The controller uses hybrid position/force control in task space based on a resolved motion rate scheme. Inertial stabilization is achieved by modifying the contact force trajectories. The paper includes an analysis of the dynamics of controlled bipeds, which is the basis for the proposed control system. The system was tested both in forward dynamics simulations and in experiments with JOHNNIE.",
"title": ""
},
{
"docid": "d49ea26480f4170ec3684ddbf3272306",
"text": "Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce “entropy-based” features—approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform, for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.",
"title": ""
},
{
"docid": "dadd9fec98c7dbc05d4e898d282e78fa",
"text": "Managing unanticipated changes in turbulent and dynamic market environments requires organizations to reach an extended level of flexibility, which is known as agility. Agility can be defined as ability to sense environmental changes and to readily respond to those. While information systems are alleged to have a major influence on organizational agility, service-oriented architecture (SOA) poses an opportunity to shape agile information systems and ultimately organizational agility. However, related research studies predominantly comprise theoretical claims only. Seeking a detailed picture and in-depth insights, we conduct a qualitative exploratory case study. The objective of our research-in-progress is therefore to provide first-hand empirical data to contribute insights into SOA’s influence on organizational agility. We contribute to the two related research fields of SOA and organizational agility by addressing lack of empirical research on SOA’s organizational implications.",
"title": ""
},
{
"docid": "8f0da69d48c3d5098018b2e5046b6e8e",
"text": "Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.",
"title": ""
},
{
"docid": "0e1dc67e473e6345be5725f2b06e916f",
"text": "A number of experiments explored the hypothesis that immediate memory span is not constant, but varies with the length of the words to be recalled. Results showed: (1) Memory span is inversely related to word length across a wide range of materials; (2) When number of syllables and number of phonemes are held constant, words of short temporal duration are better recalled than words of long duration; (3) Span could be predicted on the basis of the number of words which the subject can read in approximately 2 sec; (4) When articulation is suppressed by requiring the subject to articulate an irrelevant sound, the word length effect disappears with visual presentation, but remains when presentation is auditory. The results are interpreted in terms of a phonemically-based store of limited temporal capacity, which may function as an output buffer for speech production, and as a supplement to a more central working memory system.",
"title": ""
},
{
"docid": "736a413352df6b0225b4d567a26a5d78",
"text": "This letter presents a compact, single-feed, dual-band antenna covering both the 433-MHz and 2.45-GHz Industrial Scientific and Medical (ISM) bands. The antenna has small dimensions of 51 ×28 mm2. A square-spiral resonant element is printed on the top layer for the 433-MHz band. The remaining space within the spiral is used to introduce an additional parasitic monopole element on the bottom layer that is resonant at 2.45 GHz. Measured results show that the antenna has a 10-dB return-loss bandwidth of 2 MHz at 433 MHz and 132 MHz at 2.45 GHz, respectively. The antenna has omnidirectional radiation characteristics with a peak realized gain (measured) of -11.5 dBi at 433 MHz and +0.5 dBi at 2.45 GHz, respectively.",
"title": ""
},
{
"docid": "122e3e4c10e4e5f2779773bde106d068",
"text": "In recent years, research on image generation methods has been developing fast. The auto-encoding variational Bayes method (VAEs) was proposed in 2013, which uses variational inference to learn a latent space from the image database and then generates images using the decoder. The generative adversarial networks (GANs) came out as a promising framework, which uses adversarial training to improve the generative ability of the generator. However, the generated pictures by GANs are generally blurry. The deep convolutional generative adversarial networks (DCGANs) were then proposed to leverage the quality of generated images. Since the input noise vectors are randomly sampled from a Gaussian distribution, the generator has to map from a whole normal distribution to the images. This makes DCGANs unable to reflect the inherent structure of the training data. In this paper, we propose a novel deep model, called generative adversarial networks with decoder-encoder output noise (DE-GANs), which takes advantage of both the adversarial training and the variational Bayesain inference to improve the performance of image generation. DE-GANs use a pre-trained decoder-encoder architecture to map the random Gaussian noise vectors to informative ones and pass them to the generator of the adversarial networks. Since the decoder-encoder architecture is trained by the same images as the generators, the output vectors could carry the intrinsic distribution information of the original images. Moreover, the loss function of DE-GANs is different from GANs and DCGANs. A hidden-space loss function is added to the adversarial loss function to enhance the robustness of the model. Extensive empirical results show that DE-GANs can accelerate the convergence of the adversarial training process and improve the quality of the generated images.",
"title": ""
},
{
"docid": "0b44782174d1dae460b86810db8301ec",
"text": "We present an overview of Markov chain Monte Carlo, a sampling method for model inference and uncertainty quantification. We focus on the Bayesian approach to MCMC, which allows us to estimate the posterior distribution of model parameters, without needing to know the normalising constant in Bayes’ theorem. Given an estimate of the posterior, we can then determine representative models (such as the expected model, and the maximum posterior probability model), the probability distributions for individual parameters, and the uncertainty about the predictions from these models. We also consider variable dimensional problems in which the number of model parameters is unknown and needs to be inferred. Such problems can be addressed with reversible jump (RJ) MCMC. This leads us to model choice, where we may want to discriminate between models or theories of differing complexity. For problems where the models are hierarchical (e.g. similar structure but with a different number of parameters), the Bayesian approach naturally selects the simpler models. More complex problems require an estimate of the normalising constant in Bayes’ theorem (also known as the evidence) and this is difficult to do reliably for high dimensional problems. We illustrate the applications of RJMCMC with 3 examples from our earlier working involving modelling distributions of geochronological age data, inference of sea-level and sediment supply histories from 2D stratigraphic cross-sections, and identification of spatially discontinuous thermal histories from a suite of apatite fission track samples distributed in 3D. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f282c9ff4afa773af39eb963f4987d09",
"text": "The fast development of computing and communication has reformed the financial markets' dynamics. Nowadays many people are investing and trading stocks through online channels and having access to real-time market information efficiently. There are more opportunities to lose or make money with all the stocks information available throughout the World; however, one should spend a lot of effort and time to follow those stocks and the available instant information. This paper presents a preliminary regarding a multi-agent recommender system for computational investing. This system utilizes a hybrid filtering technique to adaptively recommend the most profitable stocks at the right time according to investor's personal favour. The hybrid technique includes collaborative and content-based filtering. The content-based model uses investor preferences, influencing macro-economic factors, stocks profiles and the predicted trend to tailor to its advices. The collaborative filter assesses the investor pairs' investing behaviours and actions that are proficient in economic market to recommend the similar ones to the target investor.",
"title": ""
},
{
"docid": "7c9b3de1491b8f58b1326b9d91ab688e",
"text": "We characterize the cache behavior of an in-memory tag table and demonstrate that an optimized implementation can typically achieve a near-zero memory traffic overhead. Both industry and academia have repeatedly demonstrated tagged memory as a key mechanism to enable enforcement of powerful security invariants, including capabilities, pointer integrity, watchpoints, and information-flow tracking. A single-bit tag shadowspace is the most commonly proposed requirement, as one bit is the minimum metadata needed to distinguish between an untyped data word and any number of new hardware-enforced types. We survey various tag shadowspace approaches and identify their common requirements and positive features of their implementations. To avoid non-standard memory widths, we identify the most practical implementation for tag storage to be an in-memory table managed next to the DRAM controller. We characterize the caching performance of such a tag table and demonstrate a DRAM traffic overhead below 5% for the vast majority of applications. We identify spatial locality on a page scale as the primary factor that enables surprisingly high table cache-ability. We then demonstrate tag-table compression for a set of common applications. A hierarchical structure with elegantly simple optimizations reduces DRAM traffic overhead to below 1% for most applications. These insights and optimizations pave the way for commercial applications making use of single-bit tags stored in commodity memory.",
"title": ""
},
{
"docid": "450401c2092f881e26210e27d01d6195",
"text": "This article describes what should typically be included in the introduction, method, results, and discussion sections of a meta-analytic review. Method sections include information on literature searches, criteria for inclusion of studies, and a listing of the characteristics recorded for each study. Results sections include information describing the distribution of obtained effect sizes, central tendencies, variability, tests of significance, confidence intervals, tests for heterogeneity, and contrasts (univariate or multivariate). The interpretation of meta-analytic results is often facilitated by the inclusion of the binomial effect size display procedure, the coefficient of robustness, file drawer analysis, and, where overall results are not significant, the counternull value of the obtained effect size and power analysis.",
"title": ""
},
{
"docid": "bb2504b2275a20010c0d5f9050173d40",
"text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.",
"title": ""
},
{
"docid": "a56efa3471bb9e3091fffc6b1585f689",
"text": "Rogowski current transducers combine a high bandwidth, an easy to use thin flexible coil, and low insertion impedance making them an ideal device for measuring pulsed currents in power electronic applications. Practical verification of a Rogowski transducer's ability to measure current transients due to the fastest MOSFET and IGBT switching requires a calibrated test facility capable of generating a pulse with a rise time of the order of a few 10's ns. A flexible 8-module system has been built which gives a 2000A peak current with a rise time of 40ns. The modular approach enables verification for a range of transducer coil sizes and ratings.",
"title": ""
},
{
"docid": "01c8b3612769216c21d8c16567faa430",
"text": "Optimal decision making during the business process execution is crucial for achieving the business goals of an enterprise. Process execution often involves the usage of the decision logic specified in terms of business rules represented as atomic elements of conditions leading to conclusions. However, the question of using and integrating the processand decision-centric approaches, i.e. harmonization of the widely accepted Business Process Model and Notation (BPMN) and the recent Decision Model and Notation (DMN) proposed by the OMG group, is important. In this paper, we propose a four-step approach to derive decision models from process models on the examples of DMN and BPMN: (1) Identification of decision points in a process model; (2) Extraction of decision logic encapsulating the data dependencies affecting the decisions in the process model; (3) Construction of a decision model; (4) Adaptation of the process model with respect to the derived decision logic. Our contribution also consists in proposing an enrichment of the extracted decision logic by taking into account the predictions of process performance measures corresponding to different decision outcomes. We demonstrate the applicability of the approach on an exemplary business process from the banking domain.",
"title": ""
},
{
"docid": "a81e4b95dfaa7887f66066343506d35f",
"text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.",
"title": ""
},
{
"docid": "e6a913ca404c59cd4e0ecffaf18144e5",
"text": "SPARQL is the standard language for querying RDF data. In this article, we address systematically the formal study of the database aspects of SPARQL, concentrating in its graph pattern matching facility. We provide a compositional semantics for the core part of SPARQL, and study the complexity of the evaluation of several fragments of the language. Among other complexity results, we show that the evaluation of general SPARQL patterns is PSPACE-complete. We identify a large class of SPARQL patterns, defined by imposing a simple and natural syntactic restriction, where the query evaluation problem can be solved more efficiently. This restriction gives rise to the class of well-designed patterns. We show that the evaluation problem is coNP-complete for well-designed patterns. Moreover, we provide several rewriting rules for well-designed patterns whose application may have a considerable impact in the cost of evaluating SPARQL queries.",
"title": ""
},
{
"docid": "868501b6dc57751b7a6416d91217f0bd",
"text": "OBJECTIVE\nThe major aim of this research is to determine whether infants who were anxiously/resistantly attached in infancy develop more anxiety disorders during childhood and adolescence than infants who were securely attached. To test different theories of anxiety disorders, newborn temperament and maternal anxiety were included in multiple regression analyses.\n\n\nMETHOD\nInfants participated in Ainsworth's Strange Situation Procedure at 12 months of age. The Schedule for Affective Disorders and Schizophrenia for School-Age Children was administered to the 172 children when they reached 17.5 years of age. Maternal anxiety and infant temperament were assessed near the time of birth.\n\n\nRESULTS\nThe hypothesized relation between anxious/resistant attachment and later anxiety disorders was confirmed. No relations with maternal anxiety and the variables indexing temperament were discovered, except for a composite score of nurses' ratings designed to access \"high reactivity,\" and the Neonatal Behavioral Assessment Scale clusters of newborn range of state and inability to habituate to stimuli. Anxious/resistant attachment continued to significantly predict child/adolescent anxiety disorders, even when entered last, after maternal anxiety and temperament, in multiple regression analyses.\n\n\nCONCLUSION\nThe attachment relationship appears to play an important role in the development of anxiety disorders. Newborn temperament may also contribute.",
"title": ""
},
{
"docid": "bb75aa9bbe07e635493b123eaaadf74d",
"text": "Right ventricular (RV) pacing increases the incidence of atrial fibrillation (AF) and hospitalization rate for heart failure. Many patients with sinus node dysfunction (SND) are implanted with a DDDR pacemaker to ensure the treatment of slowly conducted atrial fibrillation and atrioventricular (AV) block. Many pacemakers are never reprogrammed after implantation. This study aims to evaluate the effectiveness of programming DDIR with a long AV delay in patients with SND and preserved AV conduction as a possible strategy to reduce RV pacing in comparison with a nominal DDDR setting including an AV search hysteresis. In 61 patients (70 ± 10 years, 34 male, PR < 200 ms, AV-Wenckebach rate at ≥130 bpm) with symptomatic SND a DDDR pacemaker was implanted. The cumulative prevalence of right ventricular pacing was assessed according to the pacemaker counter in the nominal DDDR-Mode (AV delay 150/120 ms after atrial pacing/sensing, AV search hysteresis active) during the first postoperative days and in DDIR with an individually programmed long fixed AV delay after 100 days (median). With the nominal DDDR mode the median incidence of right ventricular pacing amounted to 25.2%, whereas with DDIR and long AV delay the median prevalence of RV pacing was significantly reduced to 1.1% (P < 0.001). In 30 patients (49%) right ventricular pacing was almost completely (<1%) eliminated, n = 22 (36%) had >1% <20% and n = 4 (7%) had >40% right ventricular pacing. The median PR interval was 161 ms. The median AV interval with DDIR was 280 ms. The incidence of right ventricular pacing in patients with SND and preserved AV conduction, who are treated with a dual chamber pacemaker, can significantly be reduced by programming DDIR with a long, individually adapted AV delay when compared with a nominal DDDR setting, but nonetheless in some patients this strategy produces a high proportion of disadvantageous RV pacing. The DDIR mode with long AV delay provides an effective strategy to reduce unnecessary right ventricular pacing but the effect has to be verified in every single patient.",
"title": ""
}
] | scidocsrr |
01a61572cc5ab882346b3d1656a9f13d | Scalable Hierarchical Network-on-Chip Architecture for Spiking Neural Network Hardware Implementations | [
{
"docid": "4bce473bb65dfc545d5895c7edb6cea6",
"text": "mathematical framework of the population equations. It will turn out that the results are – of course – consistent with those derived from the population equation. We study a homogeneous network of N identical neurons which are mutually coupled with strength wij = J0/N where J0 > 0 is a positive constant. In other words, the (excitatory) interaction is scaled with one over N so that the total input to a neuron i is of order one even if the number of neurons is large (N →∞). Since we are interested in synchrony we suppose that all neurons have fired simultaneously at t̂ = 0. When will the neurons fire again? Since all neurons are identical we expect that the next firing time will also be synchronous. Let us calculate the period T between one synchronous pulse and the next. We start from the firing condition of SRM0 neurons θ = ui(t) = η(t− t̂i) + ∑",
"title": ""
},
{
"docid": "0ff8c4799b62c70ef6b7d70640f1a931",
"text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest, we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.",
"title": ""
},
{
"docid": "99698fc712b777dfb3d1eb782626586f",
"text": "Looking into the future, when the billion transitor ASICs will become reality, this p per presents Network on a chip (NOC) concept and its associated methodology as solu the design productivity problem. NOC is a network of computational, storage and I/O resou interconnected by a network of switches. Resources communcate with each other usi dressed data packets routed to their destination by the switch fabric. Arguments are pre to justify that in the billion transistor era, the area and performance penalty would be minim A concrete topology for the NOC, a honeycomb structure, is proposed and discussed. A odology to support NOC is presented. This methodology outlines steps from requirements to implementation. As an illustration of the concepts, a plausible mapping of an entire ba tion on hypothetical NOC is discussed.",
"title": ""
}
] | [
{
"docid": "ce7fdc16d6d909a4e0c3294ed55af51d",
"text": "In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any language model, Seq2Seq and RNN-Transducer models both outperform the best reported CTC models with a language model, on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue — RNN-Transducer models rescored with a language model after beam search outperform our best CTC models. These results simplify the speech recognition pipeline so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models — when all encoder layers are forward only, and when encoders downsample the input representation aggressively.",
"title": ""
},
{
"docid": "8b6d5e7526e58ce66cf897d17b094a91",
"text": "Regression testing is an expensive maintenance process used to revalidate modified software. Regression test selection (RTS) techniques try to lower the cost of regression testing by selecting and running a subset of the existing test cases. Many such techniques have been proposed and initial studies show that they can produce savings. We believe, however, that issues such as the frequency with which testing is done have a strong effect on the behavior of these techniques. Therefore, we conducted an experiment to assess the effects of test application frequency on the costs and benefits of regression test selection techniques. Our results expose essential tradeoffs that should be considered when using these techniques over a series of software releases.",
"title": ""
},
{
"docid": "e226452a288c3067ef8ee613f0b64090",
"text": "Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQVAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete bottleneck with EM helps us achieve better image generation results on CIFAR-10, and together with knowledge distillation, allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.",
"title": ""
},
{
"docid": "e913d5a0d898df3db28b97b27757b889",
"text": "Speech-language pathologists tend to rely on the noninstrumental swallowing evaluation in making recommendations about a patient’s diet and management plan. The present study was designed to examine the sensitivity and specificity of the accuracy of using the chin-down posture during the clinical/bedside swallowing assessment. In 15 patients with acute stroke and clinically suspected oropharyngeal dysphagia, the correlation between clinical and videofluoroscopic findings was examined. Results identified that there is a difference in outcome prediction using the chin-down posture during the clinical/bedside assessment of swallowing compared to assessment by videofluoroscopy. Results are discussed relative to statistical and clinical perspectives, including site of lesion and factors to be considered in the design of an overall treatment plan for a patient with disordered swallowing.",
"title": ""
},
{
"docid": "50b0ecff19de467ab8558134fb666a87",
"text": "Real-time video objects detection, tracking, and recognition are challenging issues due to the real-time processing requirements of the machine learning algorithms. In recent years, video processing is performed by deep learning (DL) based techniques that achieve higher accuracy but require higher computations cost. This paper presents a recent survey of the state-of-the-art DL platforms and architectures used for deep vision systems. It highlights the contributions and challenges from over numerous research studies. In particular, this paper first describes the architecture of various DL models such as AutoEncoders, deep Boltzmann machines, convolution neural networks, recurrent neural networks and deep residual learning. Next, deep real-time video objects detection, tracking and recognition studies are highlighted to illustrate the key trends in terms of cost of computation, number of layers and the accuracy of results. Finally, the paper discusses the challenges of applying DL for real-time video processing and draw some directions for the future of DL algorithms.",
"title": ""
},
{
"docid": "e2d63fece5536aa4668cd5027a2f42b9",
"text": "To ensure integrity, trust, immutability and authenticity of software and information (cyber data, user data and attack event data) in a collaborative environment, research is needed for cross-domain data communication, global software collaboration, sharing, access auditing and accountability. Blockchain technology can significantly automate the software export auditing and tracking processes. It allows to track and control what data or software components are shared between entities across multiple security domains. Our blockchain-based solution relies on role-based and attribute-based access control and prevents unauthorized data accesses. It guarantees integrity of provenance data on who updated what software module and when. Furthermore, our solution detects data leakages, made behind the scene by authorized blockchain network participants, to unauthorized entities. Our approach is used for data forensics/provenance, when the identity of those entities who have accessed/ updated/ transferred the sensitive cyber data or sensitive software is determined. All the transactions in the global collaborative software development environment are recorded in the blockchain public ledger and can be verified any time in the future. Transactions can not be repudiated by invokers. We also propose modified transaction validation procedure to improve performance and to protect permissioned IBM Hyperledger-based blockchains from DoS attacks, caused by bursts of invalid transactions.",
"title": ""
},
{
"docid": "0ab28f6fee235eb3e2e0897d7fb2a182",
"text": "Internet of things (IoT) applications have become increasingly popular in recent years, with applications ranging from building energy monitoring to personal health tracking and activity recognition. In order to leverage these data, automatic knowledge extraction – whereby we map from observations to interpretable states and transitions – must be done at scale. As such, we have seen many recent IoT data sets include annotations with a human expert specifying states, recorded as a set of boundaries and associated labels in a data sequence. ese data can be used to build automatic labeling algorithms that produce labels as an expert would. Here, we refer to human-specified boundaries as breakpoints. Traditional changepoint detection methods only look for statistically-detectable boundaries that are defined as abrupt variations in the generative parameters of a data sequence. However, we observe that breakpoints occur on more subtle boundaries that are non-trivial to detect with these statistical methods. In this work, we propose a new unsupervised approach, based on deep learning, that outperforms existing techniques and learns the more subtle, breakpoint boundaries with a high accuracy. rough extensive experiments on various real-world data sets – including human-activity sensing data, speech signals, and electroencephalogram (EEG) activity traces – we demonstrate the effectiveness of our algorithm for practical applications. Furthermore, we show that our approach achieves significantly beer performance than previous methods. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for thirdparty components of this work must be honored. For all other uses, contact the owner/author(s).",
"title": ""
},
{
"docid": "52f95d1c0e198c64455269fd09108703",
"text": "Dynamic control theory has long been used in solving optimal asset allocation problems, and a number of trading decision systems based on reinforcement learning methods have been applied in asset allocation and portfolio rebalancing. In this paper, we extend the existing work in recurrent reinforcement learning (RRL) and build an optimal variable weight portfolio allocation under a coherent downside risk measure, the expected maximum drawdown, E(MDD). In particular, we propose a recurrent reinforcement learning method, with a coherent risk adjusted performance objective function, the Calmar ratio, to obtain both buy and sell signals and asset allocation weights. Using a portfolio consisting of the most frequently traded exchange-traded funds, we show that the expected maximum drawdown risk based objective function yields superior return performance compared to previously proposed RRL objective functions (i.e. the Sharpe ratio and the Sterling ratio), and that variable weight RRL long/short portfolios outperform equal weight RRL long/short portfolios under different transaction cost scenarios. We further propose an adaptive E(MDD) risk based RRL portfolio rebalancing decision system with a transaction cost and market condition stop-loss retraining mechanism, and we show that the ∗Corresponding author: Steve Y. Yang, Postal address: School of Business, Stevens Institute of Technology, 1 Castle Point on Hudson, Hoboken, NJ 07030 USA. Tel.: +1 201 216 3394 Fax: +1 201 216 5385 Email addresses: [email protected] (Saud Almahdi), [email protected] (Steve Y. Yang) Preprint submitted to Expert Systems with Applications June 15, 2017",
"title": ""
},
{
"docid": "55aace95d409340750eecf02c6cc72f3",
"text": "The paper addresses the stability of the co-authorship networks in time. The analysis is done on the networks of Slovenian researchers in two time periods (1991-2000 and 2001-2010). Two researchers are linked if they published at least one scientific bibliographic unit in a given time period. As proposed by Kronegger et al. (2011), the global network structures are examined by generalized blockmodeling with the assumed multi-core--semi-periphery--periphery blockmodel type. The term core denotes a group of researchers who published together in a systematic way with each other. The obtained blockmodels are comprehensively analyzed by visualizations and through considering several statistics regarding the global network structure. To measure the stability of the obtained blockmodels, different adjusted modified Rand and Wallace indices are applied. Those enable to distinguish between the splitting and merging of cores when operationalizing the stability of cores. Also, the adjusted modified indices can be used when new researchers occur in the second time period (newcomers) and when some researchers are no longer present in the second time period (departures). The research disciplines are described and clustered according to the values of these indices. Considering the obtained clusters, the sources of instability of the research disciplines are studied (e.g., merging or splitting of cores, newcomers or departures). Furthermore, the differences in the stability of the obtained cores on the level of scientific disciplines are studied by linear regression analysis where some personal characteristics of the researchers (e.g., age, gender), are also considered.",
"title": ""
},
{
"docid": "5cb8c778f0672d88241cc22da9347415",
"text": "Phishing websites, fraudulent sites that impersonate a trusted third party to gain access to private data, continue to cost Internet users over a billion dollars each year. In this paper, we describe the design and performance characteristics of a scalable machine learning classifier we developed to detect phishing websites. We use this classifier to maintain Google’s phishing blacklist automatically. Our classifier analyzes millions of pages a day, examining the URL and the contents of a page to determine whether or not a page is phishing. Unlike previous work in this field, we train the classifier on a noisy dataset consisting of millions of samples from previously collected live classification data. Despite the noise in the training data, our classifier learns a robust model for identifying phishing pages which correctly classifies more than 90% of phishing pages several weeks after training concludes.",
"title": ""
},
{
"docid": "d45b084040e5f07d39f622fc3543e10b",
"text": "Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmentation on unseen classes in the PASCAL VOC 2012 dataset and is at least 3× faster. The code is publicly available at: https://github.com/lzzcd001/OSLSM.",
"title": ""
},
{
"docid": "d93609853422aed1c326d35ab820095d",
"text": "We present a method for inferring a 4D light field of a hidden scene from 2D shadows cast by a known occluder on a diffuse wall. We do this by determining how light naturally reflected off surfaces in the hidden scene interacts with the occluder. By modeling the light transport as a linear system, and incorporating prior knowledge about light field structures, we can invert the system to recover the hidden scene. We demonstrate results of our inference method across simulations and experiments with different types of occluders. For instance, using the shadow cast by a real house plant, we are able to recover low resolution light fields with different levels of texture and parallax complexity. We provide two experimental results: a human subject and two planar elements at different depths.",
"title": ""
},
{
"docid": "fc9ee686c2a339f2f790074aeee5432b",
"text": "Recent work using auxiliary prediction task classifiers to investigate the properties of LSTM representations has begun to shed light on why pretrained representations, like ELMo (Peters et al., 2018) and CoVe (McCann et al., 2017), are so beneficial for neural language understanding models. We still, though, do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn. With this in mind, we compare four objectives—language modeling, translation, skip-thought, and autoencoding—on their ability to induce syntactic and part-of-speech information. We make a fair comparison between the tasks by holding constant the quantity and genre of the training data, as well as the LSTM architecture. We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data. These results suggest that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information. We also find that the representations from randomly-initialized, frozen LSTMs perform strikingly well on our syntactic auxiliary tasks, but this effect disappears when the amount of training data for the auxiliary tasks is reduced.",
"title": ""
},
{
"docid": "2215fd5b4f1e884a66b62675c8c92d33",
"text": "In the context of structural optimization we propose a new numerical method based on a combination of the classical shape derivative and of the level-set method for front propagation. We implement this method in two and three space dimensions for a model of linear or nonlinear elasticity. We consider various objective functions with weight and perimeter constraints. The shape derivative is computed by an adjoint method. The cost of our numerical algorithm is moderate since the shape is captured on a fixed Eulerian mesh. Although this method is not specifically designed for topology optimization, it can easily handle topology changes. However, the resulting optimal shape is strongly dependent on the initial guess. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b78dfbd9640d53c6bd782af9be1f278a",
"text": "Code analyzers such as Error Prone and FindBugs detect code patterns symptomatic of bugs, performance issues, or bad style. These tools express patterns as quick fixes that detect and rewrite unwanted code. However, it is difficult to come up with new quick fixes and decide which ones are useful and frequently appear in real code. We propose to rely on the collective wisdom of programmers and learn quick fixes from revision histories in software repositories. We present REVISAR, a tool for discovering common Java edit patterns in code repositories. Given code repositories and their revision histories, REVISAR (i) identifies code edits from revisions and (ii) clusters edits into sets that can be described using an edit pattern. The designers of code analyzers can then inspect the patterns and add the corresponding quick fixes to their tools. We ran REVISAR on nine popular GitHub projects, and it discovered 89 useful edit patterns that appeared in 3 or more projects. Moreover, 64% of the discovered patterns did not appear in existing tools. We then conducted a survey with 164 programmers from 124 projects and found that programmers significantly preferred eight out of the nine of the discovered patterns. Finally, we submitted 16 pull requests applying our patterns to 9 projects and, at the time of the writing, programmers accepted 6 (60%) of them. The results of this work aid toolsmiths in discovering quick fixes and making informed decisions about which quick fixes to prioritize based on patterns programmers actually apply in practice.",
"title": ""
},
{
"docid": "dbcdcd2cdf8894f853339b5fef876dde",
"text": "Genicular nerve radiofrequency ablation (RFA) has recently gained popularity as an intervention for chronic knee pain in patients who have failed other conservative or surgical treatments. Long-term efficacy and adverse events are still largely unknown. Under fluoroscopic guidance, thermal RFA targets the lateral superior, medial superior, and medial inferior genicular nerves, which run in close proximity to the genicular arteries that play a crucial role in supplying the distal femur, knee joint, meniscus, and patella. RFA targets nerves by relying on bony landmarks, but fails to provide visualization of vascular structures. Although vascular injuries after genicular nerve RFA have not been reported, genicular vascular complications are well documented in the surgical literature. This article describes the anatomy, including detailed cadaveric dissections and schematic drawings, of the genicular neurovascular bundle. The present investigation also included a comprehensive literature review of genicular vascular injuries involving those arteries which lie near the targets of genicular nerve RFA. These adverse vascular events are documented in the literature as case reports. Of the 27 cases analyzed, 25.9% (7/27) involved the lateral superior genicular artery, 40.7% (11/27) involved the medial superior genicular artery, and 33.3% (9/27) involved the medial inferior genicular artery. Most often, these vascular injuries result in the formation of pseudoaneurysm, arteriovenous fistula (AVF), hemarthrosis, and/or osteonecrosis of the patella. Although rare, these complications carry significant morbidities. Based on the detailed dissections and review of the literature, our investigation suggests that vascular injury is a possible risk of genicular RFA. Lastly, recommendations are offered to minimize potential iatrogenic complications.",
"title": ""
},
{
"docid": "27cf1715f4cf77f098ea4f64b690ff0d",
"text": "Existing mechanical circuit breakers can not satisfy the requirements of fast operation in power system due to noise, electric arc and long switching response time. Moreover the non-grid-connected wind power system is based on the Flexible Direct Current Transmission (FDCT) technique. It is especially necessary to research the Solid-State Circuit Breakers (SSCB) to realize the rapid and automatic control for the circuit breakers in the system. Meanwhile, the newly-developed Solid-State Circuit Breakers (SSCB) operating at the natural zero-crossing point of AC system is not suitable for a DC system. Based on the characteristics of the DC system, a novel circuit scheme has been proposed in this paper. The new scheme makes full use of ideology of soft-switching and current-commutation forced by resonance. This scheme successfully realizes the soft turn-on and fast turn-off. In this paper, the topology of current limiter is presented and analytical mathematical models are derived through comprehensive analysis. Finally, normal turn-on and turn-off experiments and overload delay protection test were conducted. The results show the reliability of the novel theory and feasibility of proposed topology. The proposed scheme can be applied in the grid-connected and non-grid-connected DC transmission and distribution systems.",
"title": ""
},
{
"docid": "de7b16961bb4aa2001a3d0859f68e4c6",
"text": "A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on selfcalibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure since all images are taken from the same point in space. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. The calibration method is evaluated on several sets of synthetic and real image data.",
"title": ""
},
{
"docid": "342d074c84d55b60a617d31026fe23e1",
"text": "Fractured bones heal by a cascade of cellular events in which mesenchymal cells respond to unknown regulators by proliferating, differentiating, and synthesizing extracellular matrix. Current concepts suggest that growth factors may regulate different steps in this cascade (10). Recent studies suggest regulatory roles for PDGF, aFGF, bFGF, and TGF-beta in the initiation and the development of the fracture callus. Fracture healing begins immediately following injury, when growth factors, including TGF-beta 1 and PDGF, are released into the fracture hematoma by platelets and inflammatory cells. TGF-beta 1 and FGF are synthesized by osteoblasts and chondrocytes throughout the healing process. TGF-beta 1 and PDGF appear to have an influence on the initiation of fracture repair and the formation of cartilage and intramembranous bone in the initiation of callus formation. Acidic FGF is synthesized by chondrocytes, chondrocyte precursors, and macrophages. It appears to stimulate the proliferation of immature chondrocytes or precursors, and indirectly regulates chondrocyte maturation and the expression of the cartilage matrix. Presumably, growth factors in the callus at later times regulate additional steps in repair of the bone after fracture. These studies suggest that growth factors are central regulators of cellular proliferation, differentiation, and extracellular matrix synthesis during fracture repair. Abnormal growth factor expression has been implicated as causing impaired or abnormal healing in other tissues, suggesting that altered growth factor expression also may be responsible for abnormal or delayed fracture repair. As a complete understanding of fracture-healing regulation evolves, we expect new insights into the etiology of abnormal or delayed fracture healing, and possibly new therapies for these difficult clinical problems.",
"title": ""
},
{
"docid": "483578f69e60298f5afba28eff514120",
"text": "This paper proposes a multiport power electronic transformer (PET) topology with multiwinding medium-frequency transformer (MW-MFT) isolation along with the associated modeling analysis and control scheme. The power balance at different ports can be controlled using the multiwinding transformer's common flux linkage. The potential applications of the proposed multiport PET are high-power traction systems for locomotives and electric multiple units, marine propulsion, wind power generation, and utility grid distribution applications. The complementary polygon equivalent circuit modeling of an MW-MFT is presented. The current and power characteristics of the virtual circuit branches and the multiports with general-phase-shift control are described. The general current and power analysis for the multiple active bridge (MAB) isolation units is investigated. Power decoupling methods, including nonlinear solution for power balancing are proposed. The zero-voltage-switching conditions for the MAB are discussed. Control strategies including soft-switching-phase-shift control and voltage balancing control based on the power decoupling calculations are described. Simulations and experiments are presented to verify the performance of the proposed topology and control algorithms.",
"title": ""
}
] | scidocsrr |
d9f5438e76dc0fddb745e99e13477dcf | Edgecourier: an edge-hosted personal service for low-bandwidth document synchronization in mobile cloud storage services | [
{
"docid": "2c4babb483ddd52c9f1333cbe71a3c78",
"text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"title": ""
}
] | [
{
"docid": "9d28e5b6ad14595cd2d6b4071a867f6f",
"text": "This paper presents the analysis and the comparison study of a High-voltage High-frequency Ozone Generator using PWM and Phase-Shifted PWM full-bridge inverter as a power supply. The circuits operations of the inverters are fully described. In order to ensure that zero voltage switching (ZVS) mode always operated over a certain range of a frequency variation, a series-compensated resonant inductor is included. The comparison study are ozone quantity and output voltage that supplied by the PWM and Phase-Shifted PWM full-bridge inverter. The ozone generator fed by Phase-Shifted PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency and phase shift angle of the converter whilst the applied voltage to the electrode is kept constant. However, the ozone generator fed by PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency of the converter whilst the applied voltage to the electrode is decreased. As a consequence, the absolute ozone quantity affected by the frequency is possibly achieved.",
"title": ""
},
{
"docid": "423cba015a9cfc247943dd7d3c4ea1cf",
"text": "No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or informa tion storage and retrieval) without permission in writing from the publisher. Preface Probability is common sense reduced to calculation Laplace This book is an outgrowth of our involvement in teaching an introductory prob ability course (\"Probabilistic Systems Analysis'�) at the Massachusetts Institute of Technology. The course is attended by a large number of students with diverse back grounds, and a broad range of interests. They span the entire spectrum from freshmen to beginning graduate students, and from the engineering school to the school of management. Accordingly, we have tried to strike a balance between simplicity in exposition and sophistication in analytical reasoning. Our key aim has been to develop the ability to construct and analyze probabilistic models in a manner that combines intuitive understanding and mathematical precision. In this spirit, some of the more mathematically rigorous analysis has been just sketched or intuitively explained in the text. so that complex proofs do not stand in the way of an otherwise simple exposition. At the same time, some of this analysis is developed (at the level of advanced calculus) in theoretical prob lems, that are included at the end of the corresponding chapter. FUrthermore, some of the subtler mathematical issues are hinted at in footnotes addressed to the more attentive reader. The book covers the fundamentals of probability theory (probabilistic mod els, discrete and continuous random variables, multiple random variables, and limit theorems), which are typically part of a first course on the subject. It also contains, in Chapters 4-6 a number of more advanced topics, from which an instructor can choose to match the goals of a particular course. In particular, in Chapter 4, we develop transforms, a more advanced view of conditioning, sums of random variables, least squares estimation, and the bivariate normal distribu-v vi Preface tion. Furthermore, in Chapters 5 and 6, we provide a fairly detailed introduction to Bernoulli, Poisson, and Markov processes. Our M.LT. course covers all seven chapters in a single semester, with the ex ception of the material on the bivariate normal (Section 4.7), and on continuous time Markov chains (Section 6.5). However, in an alternative course, the material on stochastic processes could be omitted, thereby allowing additional emphasis on foundational material, or coverage of other topics of the instructor's choice. Our …",
"title": ""
},
{
"docid": "7e1712f9e2846862d072c902a84b2832",
"text": "Reinforcement learning is a computational approach to learn from interaction. However, learning from scratch using reinforcement learning requires exorbitant number of interactions with the environment even for simple tasks. One way to alleviate the problem is to reuse previously learned skills as done by humans. This thesis provides frameworks and algorithms to build and reuse Skill Library. Firstly, we extend the Parameterized Action Space formulation using our Skill Library to multi-goal setting and show improvements in learning using hindsight at coarse level. Secondly, we use our Skill Library for exploring at a coarser level to learn the optimal policy for continuous control. We demonstrate the benefits, in terms of speed and accuracy, of the proposed approaches for a set of real world complex robotic manipulation tasks in which some state-of-the-art methods completely fail.",
"title": ""
},
{
"docid": "6f484310532a757a28c427bad08f7623",
"text": "We address the problem of tracking and recognizing faces in real-world, noisy videos. We track faces using a tracker that adaptively builds a target model reflecting changes in appearance, typical of a video setting. However, adaptive appearance trackers often suffer from drift, a gradual adaptation of the tracker to non-targets. To alleviate this problem, our tracker introduces visual constraints using a combination of generative and discriminative models in a particle filtering framework. The generative term conforms the particles to the space of generic face poses while the discriminative one ensures rejection of poorly aligned targets. This leads to a tracker that significantly improves robustness against abrupt appearance changes and occlusions, critical for the subsequent recognition phase. Identity of the tracked subject is established by fusing pose-discriminant and person-discriminant features over the duration of a video sequence. This leads to a robust video-based face recognizer with state-of-the-art recognition performance. We test the quality of tracking and face recognition on real-world noisy videos from YouTube as well as the standard Honda/UCSD database. Our approach produces successful face tracking results on over 80% of all videos without video or person-specific parameter tuning. The good tracking performance induces similarly high recognition rates: 100% on Honda/UCSD and over 70% on the YouTube set containing 35 celebrities in 1500 sequences.",
"title": ""
},
{
"docid": "09e9a3c3ae9552d675aea363b672312d",
"text": "Substrate Integrated Waveguides (SIW) are used for transmission of Electromagnetic waves. They are planar structures belonging to the family of Substrate Integrated Circuits. Because of their planar nature, they can be fabricated on planar circuits like Printed Circuit Boards (PCB) and can be integrated with other planar transmission lines like microstrips. They retain the low loss property of their conventional metallic waveguides and are widely used as interconnection in high speed circuits, filters, directional couplers, antennas. This paper is a comprehensive review of Substrate Integrated Waveguide and its integration with Microstrip line. In this paper, design techniques for SIW and its microstrip interconnect are presented. HFSS is used for simulation results. The objective of this paper is to provide broad perspective of SIW Technology.",
"title": ""
},
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "e92f19a7d99df50321f21ce639a84a35",
"text": "Software tagging has been shown to be an efficient, lightweight social computing mechanism to improve different social and technical aspects of software development. Despite the importance of tags, there exists limited support for automatic tagging for software artifacts, especially during the evolutionary process of software development. We conducted an empirical study on IBM Jazz's repository and found that there are several missing tags in artifacts and more precise tags are desirable. This paper introduces a novel, accurate, automatic tagging recommendation tool that is able to take into account users' feedbacks on tags, and is very efficient in coping with software evolution. The core technique is an automatic tagging algorithm that is based on fuzzy set theory. Our empirical evaluation on the real-world IBM Jazz project shows the usefulness and accuracy of our approach and tool.",
"title": ""
},
{
"docid": "460aa0df99a3e88a752d5f657f1565de",
"text": "Recent case studies have suggested that emotion perception and emotional experience of music have independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is musical anhednia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even to which he had listened pleasantly before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music could be selectively impaired without any disturbance of other musical, neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.",
"title": ""
},
{
"docid": "dfae6cf3df890c8cfba756384c4e88e6",
"text": "In this paper, we propose a second order optimization method to learn models where both the dimensionality of the parameter space and the number of training samples is high. In our method, we construct on each iteratio n a Krylov subspace formed by the gradient and an approximation to the Hess ian matrix, and then use a subset of the training data samples to optimize ove r this subspace. As with the Hessian Free (HF) method of [6], the Hessian matrix i s never explicitly constructed, and is computed using a subset of data. In p ractice, as in HF, we typically use a positive definite substitute for the Hessi an matrix such as the Gauss-Newton matrix. We investigate the effectiveness of o ur proposed method on learning the parameters of deep neural networks, and comp are its performance to widely used methods such as stochastic gradient descent, conjugate gradient descent and L-BFGS, and also to HF. Our method leads to faster convergence than either L-BFGS or HF, and generally performs better than either of them in cross-validation accuracy. It is also simpler and more gene ral than HF, as it does not require a positive semi-definite approximation of the He ssian matrix to work well nor the setting of a damping parameter. The chief drawba ck versus HF is the need for memory to store a basis for the Krylov subspace.",
"title": ""
},
{
"docid": "c92807c973f51ac56fe6db6c2bb3f405",
"text": "Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. The federation allows knowledge to be shared without compromising user privacy, and enables complimentary knowledge to be transferred in the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from a source-domain party. A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation. The framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the nonprivacy-preserving approach. This framework is very flexible and can be effectively adapted to various secure multi-party machine learning tasks.",
"title": ""
},
{
"docid": "c9077052caa804aaa58d43aaf8ba843f",
"text": "Many authors have laid down a concept about organizational learning and the learning organization. Amongst them They contributed an explanation on how organizations learn and provided tools to transfer the theoretical concepts of organizational learning into practice. Regarding the present situation it seems, that organizational learning becomes even more important. This paper provides a complementary view on the learning organization from the perspective of the evolutionary epistemology. The evolutionary epistemology gives an answer, where the subjective structures of cognition come from and why they are similar in all human beings. Applying this evolutionary concept to organizations it could be possible to provide a deeper insight of the cognition processes of organizations and explain the principles that lay behind a learning organization. It also could give an idea, which impediments in learning, caused by natural dispositions, deduced from genetic barriers of cognition in biology are existing and managers must be aware of when trying to facilitate organizational learning within their organizations.",
"title": ""
},
{
"docid": "ad0892ee2e570a8a2f5a90883d15f2d2",
"text": "Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by bootstrapping additional training data. This is done by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from multiple sources. If our system can make a highconfidence extraction of some mentions in such a cluster, it can then acquire diverse training examples by adding the other mentions as well. Our experiments show significant performance improvements on multiple event extractors over ACE 2005 and TAC-KBP 2015 datasets.",
"title": ""
},
{
"docid": "c08fa2224b8a38b572ea546abd084bd1",
"text": "Off-chip main memory has long been a bottleneck for system performance. With increasing memory pressure due to multiple on-chip cores, effective cache utilization is important. In a system with limited cache space, we would ideally like to prevent 1) cache pollution, i.e., blocks with low reuse evicting blocks with high reuse from the cache, and 2) cache thrashing, i.e., blocks with high reuse evicting each other from the cache.\n In this paper, we propose a new, simple mechanism to predict the reuse behavior of missed cache blocks in a manner that mitigates both pollution and thrashing. Our mechanism tracks the addresses of recently evicted blocks in a structure called the Evicted-Address Filter (EAF). Missed blocks whose addresses are present in the EAF are predicted to have high reuse and all other blocks are predicted to have low reuse. The key observation behind this prediction scheme is that if a block with high reuse is prematurely evicted from the cache, it will be accessed soon after eviction. We show that an EAF-implementation using a Bloom filter, which is cleared periodically, naturally mitigates the thrashing problem by ensuring that only a portion of a thrashing working set is retained in the cache, while incurring low storage cost and implementation complexity.\n We compare our EAF-based mechanism to five state-of-the-art mechanisms that address cache pollution or thrashing, and show that it provides significant performance improvements for a wide variety of workloads and system configurations.",
"title": ""
},
{
"docid": "ada1db1673526f98840291977998773d",
"text": "The effect of immediate versus delayed feedback on rule-based and information-integration category learning was investigated. Accuracy rates were examined to isolate global performance deficits, and model-based analyses were performed to identify the types of response strategies used by observers. Feedback delay had no effect on the accuracy of responding or on the distribution of best fitting models in the rule-based category-learning task. However, delayed feedback led to less accurate responding in the information-integration category-learning task. Model-based analyses indicated that the decline in accuracy with delayed feedback was due to an increase in the use of rule-based strategies to solve the information-integration task. These results provide support for a multiple-systems approach to category learning and argue against the validity of single-system approaches.",
"title": ""
},
{
"docid": "fee191728bc0b1fbf11344961be10215",
"text": "In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems. Disciplines Computer Sciences Comments Vanderwende, L., Suzuki, H., Brockett, C., & Nenkova, A., Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion, Information Processing and Management, Special Issue on Summarization Volume 43, Issue 6, 2007, doi: 10.1016/j.ipm.2007.01.023 This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/736",
"title": ""
},
{
"docid": "5528f1ee010e7fba440f1f7ff84a3e8e",
"text": "In presenting this thesis in partial fulfillment of the requirements for a Master's degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this thesis is allowable only for scholarly purposes, consistent with \"fair use\" as prescribed in the U.S. Copyright Law. Any other reproduction for any purposes or by any means shall not be allowed without my written permission. PREFACE Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing; high definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract looking objects of little color distinctions as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not been met yet with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the \"Walkthrough\" at the University of North Carolina. The setup at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 …",
"title": ""
},
{
"docid": "980bc7323411806e6e4faffe0b7303e2",
"text": "The ability to generate intermediate frames between two given images in a video sequence is an essential task for video restoration and video post-processing. In addition, restoration requires robust denoising algorithms, must handle corrupted frames and recover from impaired frames accordingly. In this paper we present a unified framework for all these tasks. In our approach we use a variant of the TV-L denoising algorithm that operates on image sequences in a space-time volume. The temporal derivative is modified to take the pixels’ movement into account. In order to steer the temporal gradient in the desired direction we utilize optical flow to estimate the velocity vectors between consecutive frames. We demonstrate our approach on impaired movie sequences as well as on benchmark datasets where the ground-truth is known.",
"title": ""
},
{
"docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442",
"text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.",
"title": ""
},
{
"docid": "cb1e6d11d372e72f7675a55c8f2c429d",
"text": "We evaluate the performance of a hardware/software architecture designed to perform a wide range of fast image processing tasks. The system ar chitecture is based on hardware featuring a Field Programmable Gate Array (FPGA) co-processor and a h ost computer. A LabVIEW TM host application controlling a frame grabber and an industrial camer a is used to capture and exchange video data with t he hardware co-processor via a high speed USB2.0 chann el, implemented with a standard macrocell. The FPGA accelerator is based on a Altera Cyclone II ch ip and is designed as a system-on-a-programmablechip (SOPC) with the help of an embedded Nios II so ftware processor. The SOPC system integrates the CPU, external and on chip memory, the communication channel and typical image filters appropriate for the evaluation of the system performance. Measured tran sfer rates over the communication channel and processing times for the implemented hardware/softw are logic are presented for various frame sizes. A comparison with other solutions is given and a rang e of applications is also discussed.",
"title": ""
},
{
"docid": "d88ce8a3e9f669c40b21710b69ac11be",
"text": "The smart city concept represents a compelling platform for IT-enabled service innovation. It offers a view of the city where service providers use information technologies to engage with citizens to create more effective urban organizations and systems that can improve the quality of life. The emerging Internet of Things (IoT) model is foundational to the development of smart cities. Integrated cloud-oriented architecture of networks, software, sensors, human interfaces, and data analytics are essential for value creation. IoT smart-connected products and the services they provision will become essential for the future development of smart cities. This paper will explore the smart city concept and propose a strategy development model for the implementation of IoT systems in a smart city context.",
"title": ""
}
] | scidocsrr |
7790af0a9eff3fe9c19cf8bcd0395fef | On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty | [
{
"docid": "7b46cf9aa63423485f4f48d635cb8f5c",
"text": "It sounds good when knowing the multiple criteria decision analysis an integrated approach in this website. This is one of the books that many people looking for. In the past, many people ask about this book as their favourite book to read and collect. And now, we present hat you need quickly. It seems to be so happy to offer you this famous book. It will not become a unity of the way for you to get amazing benefits at all. But, it will serve something that will let you get the best time and moment to spend for reading the book.",
"title": ""
}
] | [
{
"docid": "d03f900c785a5d6abf8bb16434693e4d",
"text": "Juvenile gigantomastia is a benign disorder of the breast in which one or both of the breasts undergo a massive increase in size during adolescence. The authors present a series of four cases of juvenile gigantomastia, advances in endocrine management, and the results of surgical therapy. Three patients were treated for initial management of juvenile gigantomastia and one patient was evaluated for a gestationally induced recurrence of juvenile gigantomastia. The three women who presented for initial management had a complete evaluation to rule out other etiologies of breast enlargement. Endocrine therapy was used in 2 patients, one successfully. A 17-year-old girl had unilateral hypertrophy treated with reduction surgery. She had no recurrence and did not require additional surgery. Two patients, ages 10 and 12 years, were treated at a young age with reduction mammaplasty, and both of these girls required secondary surgery for treatment. One patient underwent subtotal mastectomy with implant reconstruction but required two subsequent operations for removal of recurrent hypertrophic breast tissue. The second patient started a course of tamoxifen followed by reduction surgery. While on tamoxifen, the second postoperative result remained stable, and the contralateral breast, which had exhibited some minor hypertrophy, regressed in size. The fourth patient was a gravid 24-year-old who had been treated for juvenile gigantomastia at age 14, and presented with gestationally induced recurrent hypertrophy. The authors' experience has been that juvenile gigantomastia in young patients is prone to recurrence, and is in agreement with previous studies that subcutaneous mastectomy provides definitive treatment. However, tamoxifen may be a useful adjunct and may allow stable results when combined with reduction mammaplasty. If successful, the use of tamoxifen would eliminate the potential complications of breast prostheses. Lastly, the 17-year-old patient did not require secondary surgery, suggesting that older patients may be treated definitively with reduction surgery alone.",
"title": ""
},
{
"docid": "7ef20dc3eb5ec7aee75f41174c9fae12",
"text": "As the data and ontology layers of the Semantic Web stack have achieved a certain level of maturity in standard recommendations such as RDF and OWL, the current focus lies on two related aspects. On the one hand, the definition of a suitable query language for RDF, SPARQL, is close to recommendation status within the W3C. The establishment of the rules layer on top of the existing stack on the other hand marks the next step to be taken, where languages with their roots in Logic Programming and Deductive Databases are receiving considerable attention. The purpose of this paper is threefold. First, we discuss the formal semantics of SPARQLextending recent results in several ways. Second, weprovide translations from SPARQL to Datalog with negation as failure. Third, we propose some useful and easy to implement extensions of SPARQL, based on this translation. As it turns out, the combination serves for direct implementations of SPARQL on top of existing rules engines as well as a basis for more general rules and query languages on top of RDF.",
"title": ""
},
{
"docid": "ad1000d0975bb0c605047349267c5e47",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "4261e44dad03e8db3c0520126b9c7c4d",
"text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a may that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.",
"title": ""
},
{
"docid": "c34b6fac632c05c73daee2f0abce3ae8",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "19361b2d5e096f26e650b25b745e5483",
"text": "Multispectral pedestrian detection has attracted increasing attention from the research community due to its crucial competence for many around-the-clock applications (e.g., video surveillance and autonomous driving), especially under insufficient illumination conditions. We create a human baseline over the KAIST dataset and reveal that there is still a large gap between current top detectors and human performance. To narrow this gap, we propose a network fusion architecture, which consists of a multispectral proposal network to generate pedestrian proposals, and a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. The unified network is learned by jointly optimizing pedestrian detection and semantic segmentation tasks. The final detections are obtained by integrating the outputs from different modalities as well as the two stages. The approach significantly outperforms state-of-the-art methods on the KAIST dataset while remain fast. Additionally, we contribute a sanitized version of training annotations for the KAIST dataset, and examine the effects caused by different kinds of annotation errors. Future research of this problem will benefit from the sanitized version which eliminates the interference of annotation errors.",
"title": ""
},
{
"docid": "106ec8b5c3f5bff145be2bbadeeafe68",
"text": "Objective: To provide a parsimonious clustering pipeline that provides comparable performance to deep learning-based clustering methods, but without using deep learning algorithms, such as autoencoders. Materials and methods: Clustering was performed on six benchmark datasets, consisting of five image datasets used in object, face, digit recognition tasks (COIL20, COIL100, CMU-PIE, USPS, and MNIST) and one text document dataset (REUTERS-10K) used in topic recognition. K-means, spectral clustering, Graph Regularized Non-negative Matrix Factorization, and K-means with principal components analysis algorithms were used for clustering. For each clustering algorithm, blind source separation (BSS) using Independent Component Analysis (ICA) was applied. Unsupervised feature learning (UFL) using reconstruction cost ICA (RICA) and sparse filtering (SFT) was also performed for feature extraction prior to the cluster algorithms. Clustering performance was assessed using the normalized mutual information and unsupervised clustering accuracy metrics. Results: Performing, ICA BSS after the initial matrix factorization step provided the maximum clustering performance in four out of six datasets (COIL100, CMU-PIE, MNIST, and REUTERS-10K). Applying UFL as an initial processing component helped to provide the maximum performance in three out of six datasets (USPS, COIL20, and COIL100). Compared to state-of-the-art non-deep learning clustering methods, ICA BSS and/ or UFL with graph-based clustering algorithms outperformed all other methods. With respect to deep learning-based clustering algorithms, the new methodology presented here obtained the following rankings: COIL20, 2nd out of 5; COIL100, 2nd out of 5; CMU-PIE, 2nd out of 5; USPS, 3rd out of 9; MNIST, 8th out of 15; and REUTERS-10K, 4th out of 5. Discussion: By using only ICA BSS and UFL using RICA and SFT, clustering accuracy that is better or on par with many deep learning-based clustering algorithms was achieved. For instance, by applying ICA BSS to spectral clustering on the MNIST dataset, we obtained an accuracy of 0.882. This is better than the well-known Deep Embedded Clustering algorithm that had obtained an accuracy of 0.818 using stacked denoising autoencoders in its model. Open Access © The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. RESEARCH Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 https://doi.org/10.1186/s13673-018-0148-3 *Correspondence: [email protected] Department of Electrical and Computer Engineering, University of Ontario Institute of Technology, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada Page 2 of 19 Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 Conclusion: Using the new clustering pipeline presented here, effective clustering performance can be obtained without employing deep clustering algorithms and their accompanying hyper-parameter tuning procedure.",
"title": ""
},
{
"docid": "1ae3bacfff3bffad223eb6cad7250fc3",
"text": "The effects of a human head on the performance of small planar ultra-wideband (UWB) antennas in proximity of the head are investigated numerically and experimentally. In simulation, a numerical head model is used in the XFDTD software package. The head model developed by REMCOM is with the frequency-dependent dielectric constant and conductivity obtained from the average data of anatomical human heads. Two types of planar antennas printed on printed circuit board (PCB) are designed to cover the UWB band. The impedance and radiation performance of the antennas are examined when the antennas are placed very close to the human head. The study shows that the human head slightly affects the impedance performance of the antennas. The radiated field distributions and the gain of the antennas demonstrate that the human head significantly blocks and absorbs the radiation from the antennas so that the radiation patterns are directional in the horizontal planes and the average gain greatly decreases. The information derived from the study is helpful to engineers who are applying UWB devices around/on human heads.",
"title": ""
},
{
"docid": "e8758a9e2b139708ca472dd60397dc2e",
"text": "Multiple photovoltaic (PV) modules feeding a common load is the most common form of power distribution used in solar PV systems. In such systems, providing individual maximum power point tracking (MPPT) schemes for each of the PV modules increases the cost. Furthermore, its v-i characteristic exhibits multiple local maximum power points (MPPs) during partial shading, making it difficult to find the global MPP using conventional single-stage (CSS) tracking. To overcome this difficulty, the authors propose a novel MPPT algorithm by introducing a particle swarm optimization (PSO) technique. The proposed algorithm uses only one pair of sensors to control multiple PV arrays, thereby resulting in lower cost, higher overall efficiency, and simplicity with respect to its implementation. The validity of the proposed algorithm is demonstrated through experimental studies. In addition, a detailed performance comparison with conventional fixed voltage, hill climbing, and Fibonacci search MPPT schemes are presented. Algorithm robustness was verified for several complicated partial shading conditions, and in all cases this method took about 2 s to find the global MPP.",
"title": ""
},
{
"docid": "0af8cffabf74b5955e1a7bb6edf48cdf",
"text": "One of the main challenges in game AI is building agents that can intelligently react to unforeseen game situations. In real-time strategy games, players create new strategies and tactics that were not anticipated during development. In order to build agents capable of adapting to these types of events, we advocate the development of agents that reason about their goals in response to unanticipated game events. This results in a decoupling between the goal selection and goal execution logic in an agent. We present a reactive planning implementation of the Goal-Driven Autonomy conceptual model and demonstrate its application in StarCraft. Our system achieves a win rate of 73% against the builtin AI and outranks 48% of human players on a competitive ladder server.",
"title": ""
},
{
"docid": "f2fc46012fa4b767f514b9d145227ec7",
"text": "Derivation of backpropagation in convolutional neural network (CNN) is conducted based on an example with two convolutional layers. The step-by-step derivation is helpful for beginners. First, the feedforward procedure is claimed, and then the backpropagation is derived based on the example. 1 Feedforward",
"title": ""
},
{
"docid": "a712b6efb5c869619864cd817c2e27e1",
"text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.",
"title": ""
},
{
"docid": "6264a8e43070f686375150b4beadaee7",
"text": "A control law for an integrated power/attitude control system (IPACS) for a satellite is presented. Four or more energy/momentum wheels in an arbitrary noncoplanar con guration and a set of three thrusters are used to implement the torque inputs. The energy/momentum wheels are used as attitude-control actuators, as well as an energy storage mechanism, providing power to the spacecraft. In that respect, they can replace the currently used heavy chemical batteries. The thrusters are used to implement the torques for large and fast (slew) maneuvers during the attitude-initialization and target-acquisition phases and to implement the momentum management strategies. The energy/momentum wheels are used to provide the reference-tracking torques and the torques for spinning up or down the wheels for storing or releasing kinetic energy. The controller published in a previous work by the authors is adopted here for the attitude-tracking function of the wheels. Power tracking for charging and discharging the wheels is added to complete the IPACS framework. The torques applied by the energy/momentum wheels are decomposed into two spaces that are orthogonal to each other, with the attitude-control torques and power-tracking torques in each space. This control law can be easily incorporated in an IPACS system onboard a satellite. The possibility of the occurrence of singularities, in which no arbitrary energy pro le can be tracked, is studied for a generic wheel cluster con guration. A standard momentum management scheme is considered to null the total angular momentum of the wheels so as to minimize the gyroscopic effects and prevent the singularity from occurring. A numerical example for a satellite in a low Earth near-polar orbit is provided to test the proposed IPACS algorithm. The satellite’s boresight axis is required to track a ground station, and the satellite is required to rotate about its boresight axis so that the solar panel axis is perpendicular to the satellite–sun vector.",
"title": ""
},
{
"docid": "0e153353fb8af1511de07c839f6eaca5",
"text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.",
"title": ""
},
{
"docid": "18c517f26bceeb7930a4418f7a6b2f30",
"text": "BACKGROUND\nWe aimed to study whether pulmonary hypertension (PH) and elevated pulmonary vascular resistance (PVR) could be predicted by conventional echo Doppler and novel tissue Doppler imaging (TDI) in a population of chronic obstructive pulmonary disease (COPD) free of LV disease and co-morbidities.\n\n\nMETHODS\nEchocardiography and right heart catheterization was performed in 100 outpatients with COPD. By echocardiography the time-integral of the TDI index, right ventricular systolic velocity (RVSmVTI) and pulmonary acceleration-time (PAAcT) were measured and adjusted for heart rate. The COPD patients were randomly divided in a derivation (n = 50) and a validation cohort (n = 50).\n\n\nRESULTS\nPH (mean pulmonary artery pressure (mPAP) ≥ 25mmHg) and elevated PVR ≥ 2Wood unit (WU) were predicted by satisfactory area under the curve for RVSmVTI of 0.93 and 0.93 and for PAAcT of 0.96 and 0.96, respectively. Both echo indices were 100% feasible, contrasting 84% feasibility for parameters relying on contrast enhanced tricuspid-regurgitation. RVSmVTI and PAAcT showed best correlations to invasive measured mPAP, but less so to PVR. PAAcT was accurate in 90- and 78% and RVSmVTI in 90- and 84% in the calculation of mPAP and PVR, respectively.\n\n\nCONCLUSIONS\nHeart rate adjusted-PAAcT and RVSmVTI are simple and reproducible methods that correlate well with pulmonary artery pressure and PVR and showed high accuracy in detecting PH and increased PVR in patients with COPD. Taken into account the high feasibility of these two echo indices, they should be considered in the echocardiographic assessment of COPD patients.",
"title": ""
},
{
"docid": "0141a93f93a7cf3c8ee8fd705b0a9657",
"text": "We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT’14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact [email protected]. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "cdaa99f010b20906fee87d8de08e1106",
"text": "We propose a novel hierarchical clustering algorithm for data-sets in which only pairwise distances between the points are provided. The classical Hungarian method is an efficient algorithm for solving the problem of minimal-weight cycle cover. We utilize the Hungarian method as the basic building block of our clustering algorithm. The disjoint cycles, produced by the Hungarian method, are viewed as a partition of the data-set. The clustering algorithm is formed by hierarchical merging. The proposed algorithm can handle data that is arranged in non-convex sets. The number of the clusters is automatically found as part of the clustering process. We report an improved performance of our algorithm in a variety of examples and compare it to the spectral clustering algorithm.",
"title": ""
},
{
"docid": "e938ad7500cecd5458e4f68e564e6bc4",
"text": "In this article, an adaptive fuzzy sliding mode control (AFSMC) scheme is derived for robotic systems. In the AFSMC design, the sliding mode control (SMC) concept is combined with fuzzy control strategy to obtain a model-free fuzzy sliding mode control. The equivalent controller has been replaced by a fuzzy system and the uncertainties are estimated online. The approach of the AFSMC has the learning ability to generate the fuzzy control actions and adaptively compensates for the uncertainties. Despite the high nonlinearity and coupling effects, the control input of the proposed control algorithm has been decoupled leading to a simplified control mechanism for robotic systems. Simulations have been carried out on a two link planar robot. Results show the effectiveness of the proposed control system.",
"title": ""
}
] | scidocsrr |
bf7cb5713c1bc22b3c7b27902d580a24 | Why Users Disintermediate Peer-to-Peer Marketplaces | [
{
"docid": "00b98536f0ecd554442a67fb31f77f4c",
"text": "We use a large, nationally-representative sample of working-age adults to demonstrate that personality (as measured by the Big Five) is stable over a four-year period. Average personality changes are small and do not vary substantially across age groups. Intra-individual personality change is generally unrelated to experiencing adverse life events and is unlikely to be economically meaningful. Like other non-cognitive traits, personality can be modeled as a stable input into many economic decisions. JEL classi cation: J3, C18.",
"title": ""
},
{
"docid": "a8ca6ef7b99cca60f5011b91d09e1b06",
"text": "When virtual teams need to establish trust at a distance, it is advantageous for them to use rich media to communicate. We studied the emergence of trust in a social dilemma game in four different communication situations: face-to-face, video, audio, and text chat. All three of the richer conditions were significant improvements over text chat. Video and audio conferencing groups were nearly as good as face-to-face, but both did show some evidence of what we term delayed trust (slower progress toward full cooperation) and fragile trust (vulnerability to opportunistic behavior)",
"title": ""
},
{
"docid": "76034cd981a64059f749338a2107e446",
"text": "We examine how financial assurance structures and the clearly defined financial transaction at the core of monetized network hospitality reduce uncertainty for Airbnb hosts and guests. We apply the principles of social exchange and intrinsic and extrinsic motivation to a qualitative study of Airbnb hosts to 1) describe activities that are facilitated by the peer-to-peer exchange platform and 2) how the assurance of the initial financial exchange facilitates additional social exchanges between hosts and guests. The study illustrates that the financial benefits of hosting do not necessarily crowd out intrinsic motivations for hosting but instead strengthen them and even act as a gateway to further social exchange and interpersonal interaction. We describe the assurance structures in networked peer-to-peer exchange, and explain how such assurances can reconcile contention between extrinsic and intrinsic motivations. We conclude with implications for design and future research.",
"title": ""
}
] | [
{
"docid": "12680d4fcf57a8a18d9c2e2b1107bf2d",
"text": "Recent advances in computer and technology resulted into ever increasing set of documents. The need is to classify the set of documents according to the type. Laying related documents together is expedient for decision making. Researchers who perform interdisciplinary research acquire repositories on different topics. Classifying the repositories according to the topic is a real need to analyze the research papers. Experiments are tried on different real and artificial datasets such as NEWS 20, Reuters, emails, research papers on different topics. Term Frequency-Inverse Document Frequency algorithm is used along with fuzzy K-means and hierarchical algorithm. Initially experiment is being carried out on small dataset and performed cluster analysis. The best algorithm is applied on the extended dataset. Along with different clusters of the related documents the resulted silhouette coefficient, entropy and F-measure trend are presented to show algorithm behavior for each data set.",
"title": ""
},
{
"docid": "d31ba2b9ca7f5a33619fef33ade3b75a",
"text": "We present ARPKI, a public-key infrastructure that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI is the first such infrastructure that systematically takes into account requirements identified by previous research. Moreover, ARPKI is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing all features required for deployment. ARPKI efficiently handles the certification process with low overhead and without incurring additional latency to TLS.\n ARPKI offers extremely strong security guarantees, where compromising n-1 trusted signing and verifying entities is insufficient to launch an impersonation attack. Moreover, it deters misbehavior as all its operations are publicly visible.",
"title": ""
},
{
"docid": "8b205549e43d355174e8d8fce645ca99",
"text": "In recent era, the weighted matrix rank minimization is used to reduce image noise, promisingly. However, low-rank weighted conditions may cause oversmoothing or oversharpening of the denoised image. This demands a clever engineering algorithm. Particularly, to remove heavy noise in image is always a challenging task, specially, when there is need to preserve the fine edge structures. To attain a reliable estimate of heavy noise image, a norm weighted fusion estimators method is proposed in wavelet domain. This holds the significant geometric structure of the given noisy image during the denoising process. Proposed method is applied on standard benchmark images, and simulation results outperform the most popular rivals of noise reduction approaches, such as BM3D, EPLL, LSSC, NCSR, SAIST, and WNNM in terms of the quality measurement metric PSNR (dB) and structural analysis SSIM indices.",
"title": ""
},
{
"docid": "7f575dd097ac747eddd2d7d0dc1055d5",
"text": "It has been widely believed that biometric template aging does not occur for iris biometrics. We compare the match score distribution for short time-lapse iris image pairs, with a mean of approximately one month between the enrollment image and the verification image, to the match score distributions for image pairs with one, two and three years of time lapse. We find clear and consistent evidence of a template aging effect that is noticeable at one year and that increases with increasing time lapse. For a state-of-the-art iris matcher, and three years of time lapse, at a decision threshold corresponding to a one in two million false match rate, we observe an 153% increase in the false non-match rate, with a bootstrap estimated 95% confidence interval of 85% to 307%.",
"title": ""
},
{
"docid": "2a1c5ba1c1057364420fd220995a74ff",
"text": "A multicell rectifier (MC) structure with N + 2 redundancy is presented. The topology is based on power cells implemented with the integrated gate commuted thyristors (IGCTs) to challenge the SCR standard industry solution for the past 35 years. This rectifier is a reliable, compact, efficient, nonpolluting alternative and cost-effective solution for electrolytic applications. Its structure, based on power cells, enables load shedding to ensure power delivery even in the event of power cell failures. It injects quasi-sinusoidal input currents and provides unity power factor without the use of passive or active filters. A complete evaluation based on IEEE standards 493-1997 and IEEE C57.18.10 for average downtime, failures rates, and efficiency is included. For comparison purposes, results are shown against conventional systems known for their high efficiency and reliability.",
"title": ""
},
{
"docid": "36da2b6102762c80b3ae8068d764e220",
"text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional British Journal of Educational Technology (2009) doi:10.1111/j.1467-8535.2009.01007.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. 
We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, 2 British Journal of Educational Technology © 2009 The Authors. Journal compilation © 2009 Becta. only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach. Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors. 
Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. Expectancy-value 3 © 2009 The Authors. Journal compilation © 2009 Becta. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move",
"title": ""
},
{
"docid": "dedc509f31c9b7e6c4409d655a158721",
"text": "Envelope tracking (ET) is by now a well-established technique that improves the efficiency of microwave power amplifiers (PAs) compared to what can be obtained with conventional class-AB or class-B operation for amplifying signals with a time-varying envelope, such as most of those used in present wireless communication systems. ET is poised to be deployed extensively in coming generations of amplifiers for cellular handsets because it can reduce power dissipation for signals using the long-term evolution (LTE) standard required for fourthgeneration (4G) wireless systems, which feature high peak-to-average power ratios (PAPRs). The ET technique continues to be actively developed for higher carrier frequencies and broader bandwidths. This article reviews the concepts and history of ET, discusses several applications currently on the drawing board, presents challenges for future development, and highlights some directions for improving the technique.",
"title": ""
},
{
"docid": "c5bc0cd14aa51c24a00107422fc8ca10",
"text": "This paper proposes a new high-voltage Pulse Generator (PG), fed from low voltage dc supply Vs. This input supply voltage is utilized to charge two arms of N series-connected modular multilevel converter sub-module capacitors sequentially through a resistive-inductive branch, such that each arm is charged to NVS. With a step-up nano-crystalline transformer of n turns ratio, the proposed PG is able to generate bipolar rectangular pulses of peak ±nNVs, at high repetition rates. However, equal voltage-second area of consecutive pulse pair polarities should be assured to avoid transformer saturation. Not only symmetrical pulses can be generated, but also asymmetrical pulses with equal voltage-second areas are possible. The proposed topology is tested via simulations and a scaled-down experimentation, which establish the viability of the topology for water treatment applications.",
"title": ""
},
{
"docid": "c5ae1d66d31128691e7e7d8e2ccd2ba8",
"text": "The scope of this paper is two-fold: firstly it proposes the application of a 1-2-3 Zones approach to Internet of Things (IoT)-related Digital Forensics (DF) investigations. Secondly, it introduces a Next-Best-Thing Triage (NBT) Model for use in conjunction with the 1-2-3 Zones approach where necessary and vice versa. These two `approaches' are essential for the DF process from an IoT perspective: the atypical nature of IoT sources of evidence (i.e. Objects of Forensic Interest - OOFI), the pervasiveness of the IoT environment and its other unique attributes - and the combination of these attributes - dictate the necessity for a systematic DF approach to incidents. The two approaches proposed are designed to serve as a beacon to incident responders, increasing the efficiency and effectiveness of their IoT-related investigations by maximizing the use of the available time and ensuring relevant evidence identification and acquisition. The approaches can also be applied in conjunction with existing, recognised DF models, methodologies and frameworks.",
"title": ""
},
{
"docid": "4f3936b753abd2265d867c0937aec24c",
"text": "A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway’s game of life. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.",
"title": ""
},
{
"docid": "7159d958139d684e4a74abe252788a40",
"text": "Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.",
"title": ""
},
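The passage above describes overcoming sparse-reward exploration by combining off-policy RL with a small set of demonstrations. As a hedged illustration of one core ingredient — keeping demonstration transitions in their own buffer and mixing a fixed fraction of them into every training batch — here is a minimal Python sketch; the buffers, transition format, and mixing ratio are all invented for illustration and are not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the authors' code): mix demonstration transitions
# into every training batch, as in demonstration-augmented off-policy RL.
rng = np.random.default_rng(0)

class ReplayBuffer:
    def __init__(self):
        self.data = []          # each item: (state, action, reward, next_state)
    def add(self, transition):
        self.data.append(transition)
    def sample(self, n):
        idx = rng.integers(0, len(self.data), size=n)
        return [self.data[i] for i in idx]

agent_buffer = ReplayBuffer()   # filled by the agent's own rollouts
demo_buffer = ReplayBuffer()    # filled once from recorded demonstrations

# Hypothetical filler data so the sketch runs end to end.
for _ in range(100):
    agent_buffer.add((rng.normal(size=4), rng.normal(size=2), 0.0, rng.normal(size=4)))
for _ in range(20):
    demo_buffer.add((rng.normal(size=4), rng.normal(size=2), 1.0, rng.normal(size=4)))

def sample_mixed_batch(batch_size=64, demo_fraction=0.25):
    """Draw a batch in which a fixed fraction comes from demonstrations."""
    n_demo = int(batch_size * demo_fraction)
    return demo_buffer.sample(n_demo) + agent_buffer.sample(batch_size - n_demo)

batch = sample_mixed_batch()
print(len(batch), "transitions, of which", sum(t[2] == 1.0 for t in batch), "are demos")
```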
{
"docid": "7618fa5b704c892b6b122f3602893d75",
"text": "At the dawn of the second automotive century it is apparent that the competitive realm of the automotive industry is shifting away from traditional classifications based on firms’ production systems or geographical homes. Companies across the regional and volume spectrum have adopted a portfolio of manufacturing concepts derived from both mass and lean production paradigms, and the recent wave of consolidation means that regional comparisons can no longer be made without considering the complexities induced by the diverse ownership structure and plethora of international collaborations. In this chapter we review these dynamics and propose a double helix model illustrating how the basis of competition has shifted from cost-leadership during the heyday of Ford’s original mass production, to variety and choice following Sloan’s portfolio strategy, to diversification through leadership in design, technology or manufacturing excellence, as in the case of Toyota, and to mass customisation, which marks the current competitive frontier. We will explore how the production paradigms that have determined much of the competition in the first automotive century have evolved, what trends shape the industry today, and what it will take to succeed in the automotive industry of the future. 1 This chapter provides a summary of research conducted as part of the ILIPT Integrated Project and the MIT International Motor Vehicle Program (IMVP), and expands on earlier works, including the book The second century: reconnecting customer and value chain through build-toorder (Holweg and Pil 2004) and the paper Beyond mass and lean production: on the dynamics of competition in the automotive industry (Économies et Sociétés: Série K: Économie de l’Enterprise, 2005, 15:245–270).",
"title": ""
},
{
"docid": "411f47c2edaaf3696d44521d4a97eb28",
"text": "An energy-efficient 3 Gb/s current-mode interface scheme is proposed for on-chip global interconnects and silicon interposer channels. The transceiver core consists of an open-drain transmitter with one-tap pre-emphasis and a current sense amplifier load as the receiver. The current sense amplifier load is formed by stacking a PMOS diode stage and a cross-coupled NMOS stage, providing an optimum current-mode receiver without any bias current. The proposed scheme is verified with two cases of transceivers implemented in 65 nm CMOS. A 10 mm point-to-point data-only channel shows an energy efficiency of 9.5 fJ/b/mm, and a 20 mm four-drop source-synchronous link achieves 29.4 fJ/b/mm including clock and data channels.",
"title": ""
},
{
"docid": "227d8ad4000e6e1d9fd1aa6bff8ed64c",
"text": "Recently, speed sensorless control of Induction Motor (IM) drives received great attention to avoid the different problems associated with direct speed sensors. Among different rotor speed estimation techniques, Model Reference Adaptive System (MRAS) schemes are the most common strategies employed due to their relative simplicity and low computational effort. In this paper a novel adaptation mechanism is proposed which replaces normally used conventional Proportional-Integral (PI) controller in MRAS adaptation mechanism by a Fractional Order PI (FOPI) controller. The performance of two adaptation mechanism controllers has been verified through simulation results using MATLAB/SIMULINK software. It is seen that the performance of the induction motor has improved when FOPI controller is used in place of classical PI controller.",
"title": ""
},
{
"docid": "8f65f1971405e0c225e3625bb682a2d4",
"text": "We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet (Chang et al. Shapenet: an information-rich 3d model repository, 2015. arXiv:1512.03012) and ModelNet (Wu et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2015) as well as on real robotics data from KITTI (Geiger et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2012) and Kinect (Yang et al., 3d object dense reconstruction from a single depth view, 2018. arXiv:1802.00411), we demonstrate that the proposed amortized maximum likelihood approach is able to compete with the fully supervised baseline of Dai et al. (in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2017) and outperforms the data-driven approach of Engelmann et al. (in: Proceedings of the German conference on pattern recognition (GCPR), 2016), while requiring less supervision and being significantly faster.",
"title": ""
},
{
"docid": "4bce887df71f59085938c8030e7b0c1c",
"text": "Context plays an important role in human language understanding, thus it may also be useful for machines learning vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required. After that, we designed a model which still keeps an RNN as the encoder, while using a non-autoregressive convolutional decoder. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabelled corpora, and in both cases the transferability is evaluated on a set of downstream NLP tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.",
"title": ""
},
{
"docid": "370d8cccec6964954154e796f0e558c8",
"text": "We present optical imaging-based methods to measure vital physiological signals, including breathing frequency (BF), exhalation flow rate, heart rate (HR), and pulse transit time (PTT). The breathing pattern tracking was based on the detection of body movement associated with breathing using a differential signal processing approach. A motion-tracking algorithm was implemented to correct random body movements that were unrelated to breathing. The heartbeat pattern was obtained from the color change in selected region of interest (ROI) near the subject's mouth, and the PTT was determined by analyzing pulse patterns at different body parts of the subject. The measured BF, exhaled volume flow rate and HR are consistent with those measured simultaneously with reference technologies (r = 0.98, p <; 0.001 for HR; r = 0.93, p <; 0.001 for breathing rate), and the measured PTT difference (30-40 ms between mouth and palm) is comparable to the results obtained with other techniques in the literature. The imaging-based methods are suitable for tracking vital physiological parameters under free-living condition and this is the first demonstration of using noncontact method to obtain PTT difference and exhalation flow rate.",
"title": ""
},
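As a rough illustration of the kind of signal processing the passage above relies on, the sketch below estimates heart rate from the average intensity of an ROI over time by locating the spectral peak in a plausible heart-rate band. The frame rate, band limits, and synthetic signal are assumptions; the paper's motion-tracking and differential-processing steps are not reproduced here.

```python
import numpy as np

# Minimal sketch (assumed parameters, synthetic data): estimate heart rate from
# the average colour intensity of a region of interest over time via an FFT peak.
fs = 30.0                                 # assumed camera frame rate in Hz
t = np.arange(0, 30, 1 / fs)              # 30 s of frames
true_hr_hz = 72 / 60.0
roi_signal = 0.02 * np.sin(2 * np.pi * true_hr_hz * t) + 0.005 * np.random.randn(t.size)

sig = roi_signal - roi_signal.mean()      # remove the DC component
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, d=1 / fs)

band = (freqs >= 0.7) & (freqs <= 4.0)    # plausible heart-rate band: 42-240 bpm
peak_freq = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {peak_freq * 60:.1f} bpm")
```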
{
"docid": "7bd0d55e08ff4d94c021dd53142ef5aa",
"text": "From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.",
"title": ""
},
{
"docid": "57889499aaa45b38754d9d6cebff96b8",
"text": "ion speeds up the DTW algorithm by operating on a reduced representation of the data. These algorithms include IDDTW [3], PDTW [13], and COW [2] . The left side of Figure 5 shows a full-resolution cost matrix for which a minimum-distance warp path must be found. Rather than running DTW on the full resolution (1/1) cost matrix, the time series are reduced in size to make the number of cells in the cost matrix more manageable. A warp path is found for the lowerresolution time series and is mapped back to full resolution. The resulting speedup depends on how much abstraction is used. Obviously, the calculated warp path becomes increasingly inaccurate as the level of abstraction increases. Projecting the low resolution warp path to the full resolution usually creates a warp path that is far from optimal. This is because even IF an optimal warp path passes through the low-resolution cell, projecting the warp path to the higher resolution ignores local variations in the warp path that can be very significant. Indexing [9][14] uses lower-bounding functions to prune the number of times DTW is run for similarity search [17]. Indexing speeds up applications in which DTW is used, but it does not make up the actual DTW calculation any more efficient. Our FastDTW algorithm uses ideas from both the constraints and abstraction approaches. Using a combination of both overcomes many limitations of using either method individually, and yields an accurate algorithm that is O(N) in both time and space complexity. Our multi-level approach is superficially similar to IDDTW [3] because they both evaluate several different resolutions. However, IDDTW simply executes PDTW [13] at increasingly higher resolutions until a desired “accuracy” is achieved. IDDTW does not project low resolution solutions to higher resolutions. In Section 4, we will demonstrate that these methods are more inaccurate than our method given the same amount of execution time. Projecting warp paths to higher resolutions is also done in the construction of “Match Webs” [15]. However, their approach is still O(N) due to the simultaneous search for many warp paths (they call them “chains”). A multi-resolution approach in their application also could not continue down to the low resolutions without severely reducing the number of “chains” that could be found. Some recent research [18] asserts that there is no need to speed up the original DTW algorithm. However, this is only true under the following (common) conditions: 1) Tight Constraints A relatively strict near-linear warp path is allowable. 2) Short Time Series All time series are short enough for the DTW algorithm to execute quickly. (~3,000 points if a warp path is needed, or ~100,000 if no warp path is needed and the user has a lot of patience).",
"title": ""
},
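For reference, the sketch below is the plain quadratic dynamic-programming DTW that the abstraction- and constraint-based methods discussed above (such as FastDTW) aim to speed up; it is a generic textbook implementation, not code from the paper.

```python
import numpy as np

# Baseline dynamic-programming DTW: the O(N^2) algorithm whose cost
# abstraction and constraint methods try to reduce. Sketch only.
def dtw_distance(x, y):
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

a = np.sin(np.linspace(0, 2 * np.pi, 50))
b = np.sin(np.linspace(0, 2 * np.pi, 80))   # same shape, different length
print(dtw_distance(a, b))
```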
{
"docid": "97ac64bb4d06216253eacb17abfcb103",
"text": "UIMA Ruta is a rule-based system designed for information extraction tasks, but it is also applicable for many natural language processing use cases. This demonstration gives an overview of the UIMA Ruta Workbench, which provides a development environment and tooling for the rule language. It was developed to ease every step in engineering rule-based applications. In addition to the full-featured rule editor, the user is supported by explanation of the rule execution, introspection in results, automatic validation and rule induction. Furthermore, the demonstration covers the usage and combination of arbitrary components for natural language processing.",
"title": ""
}
] | scidocsrr |
dd11c94df91e2ee7de70f1a5eb43b719 | CRITICAL SUCCESS FACTORS (CSFS) OF ENTERPRISE RESOURCE PLANNING (ERP) SYSTEM IMPLEMENTATION IN HIGHER EDUCATION INSTITUTIONS (HEIS): CONCEPTS AND LITERATURE REVIEW | [
{
"docid": "1746b22d663eb477fb429783ce89f07e",
"text": "Enterprise resource planning (ERP) system has been one of the most popular business management systems, providing benefits of real-time capabilities and seamless communication for business in large organizations. However, not all ERP implementations have been successful. Since ERP implementation affects entire organizations such as process, people, and culture, there are a number of challenges that companies may encounter in implementing ERP systems. Recently, some universities have begun replacing their legacy systems with ERP systems to improve management and administration. This thesis focuses on challenges of ERP implementation between corporate and university environment. I review previous studies that determine Critical Successful Factors (CSFs) and risk factors to implement ERP in both environments. Particularly, case studies in this thesis emphasize the organizational dynamics involved in ERP implementation by using CSFs and three phases of framework by Miles and Huberman (1994): antecedent condition, implementation process, and outcomes. This study uses findings from the case studies to assess ERP readiness and CSFs' fulfillment. The results from this study contribute to contextual understanding of distinctive challenges in ERP implementation between corporate and university environment. Thesis Supervisor: Professor Stuart Madnick Title: John Norris Maguire Professor of Information Technologies, MIT Sloan School of Management and Professor of Engineering Systems, School of Engineering",
"title": ""
}
] | [
{
"docid": "58c0456c8ae9045898aca67de9954659",
"text": "Channel sensing and spectrum allocation has long been of interest as a prospective addition to cognitive radios for wireless communications systems occupying license-free bands. Conventional approaches to cyclic spectral analysis have been proposed as a method for classifying signals for applications where the carrier frequency and bandwidths are unknown, but is, however, computationally complex and requires a significant amount of observation time for adequate performance. Neural networks have been used for signal classification, but only for situations where the baseband signal is present. By combining these techniques a more efficient and reliable classifier can be developed where a significant amount of processing is performed offline, thus reducing online computation. In this paper we take a renewed look at signal classification using spectral coherence and neural networks, the performance of which is characterized by Monte Carlo simulations",
"title": ""
},
{
"docid": "c3a7d3fa13bed857795c4cce2e992b87",
"text": "Healthcare consumers, researchers, patients and policy makers increasingly use systematic reviews (SRs) to aid their decision-making process. However, the conduct of SRs can be a time-consuming and resource-intensive task. Often, clinical practice guideline developers or other decision-makers need to make informed decisions in a timely fashion (e.g. outbreaks of infection, hospital-based health technology assessments). Possible approaches to address the issue of timeliness in the production of SRs are to (a) implement process parallelisation, (b) adapt and apply innovative technologies, and/or (c) modify SR processes (e.g. study eligibility criteria, search sources, data extraction or quality assessment). Highly parallelised systematic reviewing requires substantial resources to support a team of experienced information specialists, reviewers and methodologists working alongside with clinical content experts to minimise the time for completing individual review steps while maximising the parallel progression of multiple steps. Effective coordination and management within the team and across external stakeholders are essential elements of this process. Emerging innovative technologies have a great potential for reducing workload and improving efficiency of SR production. The most promising areas of application would be to allow automation of specific SR tasks, in particular if these tasks are time consuming and resource intensive (e.g. language translation, study selection, data extraction). Modification of SR processes involves restricting, truncating and/or bypassing one or more SR steps, which may risk introducing bias to the review findings. Although the growing experiences in producing various types of rapid reviews (RR) and the accumulation of empirical studies exploring potential bias associated with specific SR tasks have contributed to the methodological development for expediting SR production, there is still a dearth of research examining the actual impact of methodological modifications and comparing the findings between RRs and SRs. This evidence would help to inform as to which SR tasks can be accelerated or truncated and to what degree, while maintaining the validity of review findings. Timely delivered SRs can be of value in informing healthcare decisions and recommendations, especially when there is practical urgency and there is no other relevant synthesised evidence.",
"title": ""
},
{
"docid": "dbc253488a9f5d272e75b38dc98ea101",
"text": "A new form of a hybrid design of a microstrip-fed parasitic coupled ring fractal monopole antenna with semiellipse ground plane is proposed for modern mobile devices having a wireless local area network (WLAN) module along with a Worldwide Interoperability for Microwave Access (WiMAX) function. In comparison to the previous monopole structures, the miniaturized antenna dimension is only about 25 × 25 × 1 mm3 , which is 15 times smaller than the previous proposed design. By only increasing the fractal iterations, very good impedance characteristics are obtained. Throughout this letter, the improvement process of the impedance and radiation properties is completely presented and discussed.",
"title": ""
},
{
"docid": "2bf928d7701602a9827ffcf9e4ac984e",
"text": "Recommender systems are helpful tools which provide an adaptive Web environment for Web users. Recently, a number of Web page recommender systems have been developed to extract the user behavior from the user’s navigational path and predict the next request as he/she visits Web pages. Web Usage Mining (WUM) is a kind of data mining method that can be used to discover this behavior of user and his/her access patterns from Web log data. This paper first presents an overview of the used concepts and techniques of WUM to design Web recommender systems. Then it is shown that how WUM can be applied to Web server logs for discovering access patterns. Afterward, we analyze some of the problems and challenges in deploying recommender systems. Finally, we propose the solutions which address these problems.",
"title": ""
},
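A hedged illustration of the simplest form of usage-based next-page prediction mentioned above: a first-order transition model mined from sessionized log data. The sessions below are invented, and real Web Usage Mining systems use much richer pattern discovery than this.

```python
from collections import defaultdict

# Illustrative first-order Markov model over page transitions, a simple Web
# Usage Mining baseline for next-page recommendation (hypothetical sessions).
sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "products", "product_a", "cart"],
    ["home", "blog", "products", "product_a"],
]

transitions = defaultdict(lambda: defaultdict(int))
for session in sessions:
    for current_page, next_page in zip(session, session[1:]):
        transitions[current_page][next_page] += 1

def recommend(current_page, k=2):
    """Return the k most frequent follow-up pages seen after current_page."""
    counts = transitions.get(current_page, {})
    return sorted(counts, key=counts.get, reverse=True)[:k]

print(recommend("products"))   # e.g. ['cart', 'product_a']
```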
{
"docid": "2d146e411e1a1068f6e907709d542a4f",
"text": "Plug-in vehicles can behave either as loads or as a distributed energy and power resource in a concept known as vehicle-to-grid (V2G) connection. This paper reviews the current status and implementation impact of V2G/grid-to-vehicle (G2V) technologies on distributed systems, requirements, benefits, challenges, and strategies for V2G interfaces of both individual vehicles and fleets. The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability. A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural, and technical obstacles. Although V2G operation can reduce the lifetime of vehicle batteries, it is projected to become economical for vehicle owners and grid operators. Components and unidirectional/bidirectional power flow technologies of V2G systems, individual and aggregated structures, and charging/recharging frequency and strategies (uncoordinated/coordinated smart) are addressed. Three elements are required for successful V2G operation: power connection to the grid, control and communication between vehicles and the grid operator, and on-board/off-board intelligent metering. Success of the V2G concept depends on standardization of requirements and infrastructure decisions, battery technology, and efficient and smart scheduling of limited fast-charge infrastructure. A charging/discharging infrastructure must be deployed. Economic benefits of V2G technologies depend on vehicle aggregation and charging/recharging frequency and strategies. The benefits will receive increased attention from grid operators and vehicle owners in the future.",
"title": ""
},
{
"docid": "614b45e8802497bdd61df63a9745c115",
"text": "Wireless sensor networks have potential to monitor environments for both military and civil applications. Due to inhospitable conditions these sensors are not always deployed uniformly in the area of interest. Since sensors are generally constrained in on-board energy supply, efficient management of the network is crucial to extend the life of the sensors. Sensors energy cannot support long haul communication to reach a remote command site and thus requires many levels of hops or a gateway to forward the data on behalf of the sensor. In this paper we propose an algorithm to network these sensors in to well define clusters with less-energy-constrained gateway nodes acting as clusterheads, and balance load among these gateways. Simulation results show how our approach can balance the load and improve the lifetime of the system.",
"title": ""
},
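As a simplified sketch of the load-balanced clustering idea described above (not the paper's algorithm), the code below greedily assigns each sensor to the nearest gateway whose cluster still has room; the coordinates and the capacity rule are assumptions made for illustration.

```python
import numpy as np

# Simplified sketch of load-balanced clustering: assign each sensor to the
# nearest gateway whose cluster is not yet full (capacity chosen arbitrarily).
rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(60, 2))     # sensor coordinates
gateways = np.array([[25.0, 25.0], [75.0, 25.0], [50.0, 80.0]])
capacity = int(np.ceil(len(sensors) / len(gateways)))

clusters = {g: [] for g in range(len(gateways))}
for s, pos in enumerate(sensors):
    dists = np.linalg.norm(gateways - pos, axis=1)
    for g in np.argsort(dists):                 # try the nearest gateway first
        if len(clusters[g]) < capacity:         # respect the load balance
            clusters[g].append(s)
            break

print({g: len(members) for g, members in clusters.items()})
```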
{
"docid": "8bfd08fabeba8593b833773d28d78605",
"text": "Like many social variables, gender pervasively influences how people communicate with one another. However, prior computational work has largely focused on linguistic gender difference and communications about gender, rather than communications directed to people of that gender, in part due to lack of data. Here, we fill a critical need by introducing a multi-genre corpus of more than 25M comments from five socially and topically diverse sources tagged for the gender of the addressee. Using these data, we describe pilot studies on how differential responses to gender can be measured and analyzed and present 30k annotations for the sentiment and relevance of these responses, showing that across our datasets responses to women are more likely to be emotive and about the speaker as an individual (rather than about the content being responded to). Our dataset enables studying socially important questions like gender bias, and has potential uses for downstream applications such as dialogue systems, gender detection or obfuscation, and debiasing language generation.",
"title": ""
},
{
"docid": "da4f95cc061e7f2433ffa37a8e34437e",
"text": "Active learning has been proven to be quite effective in reducing the human labeling efforts by actively selecting the most informative examples to label. In this paper, we present a batch-mode active learning method based on logistic regression. Our key motivation is an out-of-sample bound on the estimation error of class distribution in logistic regression conditioned on any fixed training sample. It is different from a typical PACstyle passive learning error bound, that relies on the i.i.d. assumption of example-label pairs. In addition, it does not contain the class labels of the training sample. Therefore, it can be immediately used to design an active learning algorithm by minimizing this bound iteratively. We also discuss the connections between the proposed method and some existing active learning approaches. Experiments on benchmark UCI datasets and text datasets demonstrate that the proposed method outperforms the state-of-the-art active learning methods significantly.",
"title": ""
},
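The bound-minimization criterion itself is not detailed in the passage, so the sketch below uses plain uncertainty sampling as a stand-in to show what a batch-mode selection loop with logistic regression looks like; it assumes scikit-learn and a synthetic dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Sketch of a batch-mode active learning loop with logistic regression. The
# selection rule here is simple uncertainty sampling, used only as a stand-in
# for the paper's bound-minimization criterion (not specified above).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(20))                      # small initial labeled pool
unlabeled = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])[:, 1]
    uncertainty = -np.abs(proba - 0.5)         # closest to the decision boundary
    batch = [unlabeled[i] for i in np.argsort(uncertainty)[-10:]]
    labeled.extend(batch)                      # "query" labels for the batch
    unlabeled = [i for i in unlabeled if i not in batch]

print("labeled pool size:", len(labeled))
```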
{
"docid": "855b34b0db99446f980ddb9b96e52001",
"text": "based, companies increasingly derive revenue from the creation and sustenance of long-term relationships with their customers. In such an environment, marketing serves the purpose of maximizing customer lifetime value (CLV) and customer equity, which is the sum of the lifetime values of the company’s customers. This article reviews a number of implementable CLV models that are useful for market segmentation and the allocation of marketing resources for acquisition, retention, and crossselling. The authors review several empirical insights that were obtained from these models and conclude with an agenda of areas that are in need of further research.",
"title": ""
},
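As a worked illustration of the kind of quantity such models estimate, the sketch below computes a textbook retention-based CLV; the margin, retention, and discount figures are invented, and the article's own models are richer than this.

```python
# Worked illustration of a simple retention-based CLV calculation (all numbers
# are hypothetical); the surveyed models are considerably more elaborate.
def customer_lifetime_value(margin, retention, discount, horizon=20):
    """Sum of expected discounted margins over a finite horizon."""
    return sum(
        margin * retention ** t / (1 + discount) ** t
        for t in range(horizon)
    )

clv = customer_lifetime_value(margin=100.0, retention=0.8, discount=0.1)
print(f"CLV ≈ {clv:.2f}")   # customer equity = sum of CLV over all customers
```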
{
"docid": "f21ac64e54b23ab671c5fc038bef4686",
"text": "This paper presents the working methodology and results on Code Mix Entity Extraction in Indian Languages (CMEE-IL) shared the task of FIRE-2016. The aim of the task is to identify various entities such as a person, organization, movie and location names in a given code-mixed tweets. The tweets in code mix are written in English mixed with Hindi or Tamil. In this work, Entity Extraction system is implemented for both Hindi-English and Tamil-English code-mix tweets. The system employs context based character embedding features to train Support Vector Machine (SVM) classifier. The training data was tokenized such that each line containing a single word. These words were further split into characters. Embedding vectors of these characters are appended with the I-O-B tags and used for training the system. During the testing phase, we use context embedding features to predict the entity tags for characters in test data. We observed that the cross-validation accuracy using character embedding gave better results for Hindi-English twitter dataset compare to TamilEnglish twitter dataset. CCS Concepts • Information Retrieval ➝ Retrieval tasks and goals ➝ Information Extraction • Machine Learning ➝ Machine Learning approaches ➝ Kernel Methods ➝ Support Vector Machines",
"title": ""
},
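A rough sketch of the general recipe described above: character-level features per token feeding an SVM that predicts IOB entity tags. Character n-gram counts stand in for the learned character embeddings, the tiny code-mixed corpus is invented, and scikit-learn is assumed.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Character-level features per token feeding an SVM that predicts IOB tags.
# Character n-gram counts substitute for learned embeddings; data is invented.
tokens = ["Shahrukh", "Khan", "ki", "movie", "dekhi", "Mumbai", "mein", "yaar"]
tags   = ["B-PER",    "I-PER", "O",  "O",     "O",     "B-LOC",  "O",    "O"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # character context
    LinearSVC(),
)
model.fit(tokens, tags)

print(model.predict(["Salman", "Delhi", "dekho"]))  # toy predictions only
```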
{
"docid": "d66f86ac2b42d13ba2199e41c85d3c93",
"text": "We introduce Fluid Annotation, an intuitive human-machine collaboration interface for annotating the class label and outline of every object and background region in an image. Fluid annotation is based on three principles:(I) Strong Machine-Learning aid. We start from the output of a strong neural network model, which the annotator can edit by correcting the labels of existing regions, adding new regions to cover missing objects, and removing incorrect regions.The edit operations are also assisted by the model.(II) Full image annotation in a single pass. As opposed to performing a series of small annotation tasks in isolation [51,68], we propose a unified interface for full image annotation in a single pass.(III) Empower the annotator.We empower the annotator to choose what to annotate and in which order. This enables concentrating on what the ma-chine does not already know, i.e. putting human effort only on the errors it made. This helps using the annotation budget effectively.\n Through extensive experiments on the COCO+Stuff dataset [11,51], we demonstrate that Fluid Annotation leads to accurate an-notations very efficiently, taking 3x less annotation time than the popular LabelMe interface [70].",
"title": ""
},
{
"docid": "c91057dab0cd143042e180e5e432a4fa",
"text": "The topic of this paper is a Genetic Algorithm solution to the Vehicle Routing Problem with Time Windows, a variant of one of the most common problems in contemporary operations research. The paper will introduce the problem starting with more general Traveling Salesman and Vehicle Routing problems and present some of the prevailing strategies for solving them, focusing on Genetic Algorithms. At the end, it will summarize the Genetic Algorithm solution proposed by K.Q. Zhu which was used in the programming part of the project.",
"title": ""
},
{
"docid": "efeab631d5750bad4138c69b7f866194",
"text": "In the practice of Digital Marketing (DM), Web Analytics (WA) and Key Performance Indicators (KPIs) can and should play an important role in marketing strategy formulation. It is the aim of this article to survey the various DM metrics to determine and address the following question: What are the most relevant metrics and KPIs that companies need to understand and manage in order to increase the effectiveness of their DM strategies? Therefore, to achieve these objectives, a Systematic Literature Review has been carried out based on two main themes (i) Digital Marketing and (ii) Web Analytics. The search terms consulted in the databases have been (i) DM and (ii) WA obtaining a result total of n = 378 investigations. The databases that have been consulted for the extraction of data were Scopus, PubMed, PsyINFO, ScienceDirect and Web of Science. In this study, we define and identify the main KPIs in measuring why, how and for what purpose users interact with web pages and ads. The main contribution of the study is to lay out and clarify quantitative and qualitative KPIs and indicators for DM performance in order to achieve a consensus on the use and measurement of these indicators.",
"title": ""
},
{
"docid": "bf5280b0c76ffe4b02976df1d2c1ec93",
"text": "5G Technology stands for Fifth Generation Mobile technology. From generation 1G to 2.5G and from 3G to 5G this world of telecommunication has seen a number of improvements along with improved performance with every passing day. Fifth generation network provide affordable broadband wireless connectivity (very high speed). The paper throws light on network architecture of fifth generation technology. Currently 5G term is not officially used. In fifth generation researches are being made on development of World Wide Wireless Web (WWWW), Dynamic Adhoc Wireless Networks (DAWN) and Real Wireless World. Fifth generation focus on (Voice over IP) VOIP-enabled devices that user will experience a high level of call volume and data transmission. Wire-less system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore has started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. The proposed network is enforced by nanotechnology, cloud computing and based on all IP Platform. The main features in 5G mobile network is that user can simultaneously connect to the multiple wireless technologies and can switch between them. This forthcoming mobile technology will support IPv6 and flat IP.",
"title": ""
},
{
"docid": "9970a23aedeb1a613a0909c28c35222e",
"text": "Imaging radars incorporating digital beamforming (DBF) typically require a uniform linear antenna array (ULA). However, using a large number of parallel receivers increases system complexity and cost. A switched antenna array can provide a similar performance at a lower expense. This paper describes an active switched antenna array with 32 integrated planar patch antennas illuminating a cylindrical lens. The array can be operated over a frequency range from 73 GHz–81 GHz. Together with a broadband FMCW frontend (Frequency Modulated Continuous Wave) a DBF radar was implemented. The design of the array is presented together with measurement results.",
"title": ""
},
{
"docid": "41a4e84cf6dfc073c962dd9c6c13d6fe",
"text": "Pteridinone-based Toll-like receptor 7 (TLR7) agonists were identified as potent and selective alternatives to the previously reported adenine-based agonists, leading to the discovery of GS-9620. Analogues were optimized for the immunomodulatory activity and selectivity versus other TLRs, based on differential induction of key cytokines including interferon α (IFN-α) and tumor necrosis factor α (TNF-α). In addition, physicochemical properties were adjusted to achieve desirable in vivo pharmacokinetic and pharmacodynamic properties. GS-9620 is currently in clinical evaluation for the treatment of chronic hepatitis B (HBV) infection.",
"title": ""
},
{
"docid": "8b05f1d48e855580a8b0b91f316e89ab",
"text": "The demand for improved service delivery requires new approaches and attitudes from local government. Implementation of knowledge sharing practices in local government is one of the critical processes that can help to establish learning organisations. The main purpose of this paper is to investigate how knowledge management systems can be used to improve the knowledge sharing culture among local government employees. The study used an inductive research approach which included a thorough literature review and content analysis. The technology-organisation-environment theory was used as the theoretical foundation of the study. Making use of critical success factors, the study advises how existing knowledge sharing practices can be supported and how new initiatives can be developed, making use of a knowledge management system. The study recommends that local government must ensure that knowledge sharing practices and initiatives are fully supported and promoted by top management.",
"title": ""
},
{
"docid": "c9878a454c91fec094fce02e1ac49348",
"text": "Autonomous walking bipedal machines, possibly useful for rehabilitation and entertainment purposes, need a high energy efficiency, offered by the concept of ‘Passive Dynamic Walking’ (exploitation of the natural dynamics of the robot). 2D passive dynamic bipeds have been shown to be inherently stable, but in the third dimension two problematic degrees of freedom are introduced: yaw and roll. We propose a design for a 3D biped with a pelvic body as a passive dynamic compensator, which will compensate for the undesired yaw and roll motion, and allow the rest of the robot to move as if it were a 2D machine. To test our design, we perform numerical simulations on a multibody model of the robot. With limit cycle analysis we calculate the stability of the robot when walking at its natural speed. The simulation shows that the compensator, indeed, effectively compensates for both the yaw and the roll motion, and that the walker is stable.",
"title": ""
},
{
"docid": "6d5429ddf4050724432da73af60274d6",
"text": "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the subsentence or “concept”-level, to a sentencelevel model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.",
"title": ""
},
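To make the concept-coverage objective concrete, the sketch below greedily selects sentences to maximize the weight of covered concepts under a length budget; the paper solves this exactly with an ILP, so this greedy loop is only a stand-in, and the concepts, weights, and sentences are invented.

```python
# Greedy stand-in for the concept-coverage objective described above: pick
# sentences to maximize the total weight of newly covered concepts under a
# length budget. The paper uses an exact ILP; this only illustrates the objective.
concept_weights = {"flood": 3.0, "rescue": 2.0, "damage": 2.0, "weather": 1.0}
sentences = {
    "s1": ({"flood", "damage"}, 10),      # (concepts covered, length in words)
    "s2": ({"rescue", "flood"}, 8),
    "s3": ({"weather"}, 5),
    "s4": ({"damage", "rescue", "weather"}, 12),
}
budget = 20

covered, summary, used = set(), [], 0
while True:
    def gain(item):
        concepts, _length = item[1]
        return sum(concept_weights[c] for c in concepts - covered)
    candidates = [s for s in sentences.items()
                  if s[0] not in summary and used + s[1][1] <= budget]
    if not candidates:
        break
    best = max(candidates, key=gain)
    if gain(best) == 0:
        break
    summary.append(best[0])
    covered |= best[1][0]
    used += best[1][1]

print(summary, covered, used)
```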
{
"docid": "4143ffda9aefc24130cf14d1a55b7330",
"text": "The abundance of opinions on the web has kindled the study of opinion summarization over the last few years. People have introduced various techniques and paradigms to solving this special task. This survey attempts to systematically investigate the different techniques and approaches used in opinion summarization. We provide a multi-perspective classification of the approaches used and highlight some of the key weaknesses of these approaches. This survey also covers evaluation techniques and data sets used in studying the opinion summarization problem. Finally, we provide insights into some of the challenges that are left to be addressed as this will help set the trend for future research in this area.",
"title": ""
}
] | scidocsrr |
9519f2e4936beaf1ed2a04eda5c253c7 | Innovation Search Strategy and Predictable Returns : A Bias for Novelty | [
{
"docid": "b72c83f9daa1c46c7455c27f193bd0af",
"text": "This paper shows that over time, expected market illiquidity positively affects ex ante stock excess return, suggesting that expected stock excess return partly represents an illiquidity premium. This complements the cross-sectional positive return–illiquidity relationship. Also, stock returns are negatively related over time to contemporaneous unexpected illiquidity. The illiquidity measure here is the average across stocks of the daily ratio of absolute stock return to dollar volume, which is easily obtained from daily stock data for long time series in most stock markets. Illiquidity affects more strongly small firm stocks, thus explaining time series variations in their premiums over time. r 2002 Elsevier Science B.V. All rights reserved. JEL classificaion: G12",
"title": ""
}
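A minimal sketch of the illiquidity ratio described above — the average of |daily return| over dollar volume — computed per stock with pandas on invented prices and volumes.

```python
import pandas as pd

# Sketch of the illiquidity measure described above: the average over days of
# |daily return| divided by dollar volume, computed per stock on invented data.
prices = pd.DataFrame({"A": [10.0, 10.2, 10.1, 10.4],
                       "B": [50.0, 49.5, 49.9, 50.3]})
dollar_volume = pd.DataFrame({"A": [1e6, 1.2e6, 0.9e6, 1.1e6],
                              "B": [5e6, 4.8e6, 5.2e6, 5.1e6]})

returns = prices.pct_change().abs()
illiq = (returns / dollar_volume).mean()          # per-stock illiquidity ratio
market_illiq = illiq.mean()                       # equal-weighted market average

print(illiq)
print("market illiquidity:", market_illiq)
```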
] | [
{
"docid": "10a6ba088f09fc02679532e8155d571c",
"text": "Now a day's internet is the most valuable source of learning, getting ideas, reviews for a product or a service. Everyday millions of reviews are generated in the internet about a product, person or a place. Because of their huge number and size it is very difficult to handle and understand such reviews. Sentiment analysis is such a research area which understands and extracts the opinion from the given review and the analysis process includes natural language processing (NLP), computational linguistics, text analytics and classifying the polarity of the opinion. In the field of sentiment analysis there are many algorithms exist to tackle NLP problems. Each algorithm is used by several applications. In this paper we have shown the taxonomy of various sentiment analysis methods. This paper also shows that Support vector machine (SVM) gives high accuracy compared to Naïve bayes and maximum entropy methods.",
"title": ""
},
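As a hedged illustration of the comparison mentioned above, the sketch below trains two of the named classifiers on a toy review set with a standard scikit-learn pipeline; the data are invented and no conclusion about relative accuracy should be drawn from it.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Minimal sketch comparing two of the classifiers named above on a toy review
# set (the survey compares them on real corpora, not on data like this).
reviews = ["great product, works perfectly", "terrible service, very slow",
           "loved it, highly recommend", "waste of money, poor quality"]
labels = ["pos", "neg", "pos", "neg"]

for clf in (LinearSVC(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(reviews, labels)
    print(type(clf).__name__, model.predict(["slow and poor", "works great"]))
```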
{
"docid": "66451aa5a41ec7f9246d749c0983fa60",
"text": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.",
"title": ""
},
{
"docid": "c61d8a536241f2cb0bf8adb53131a511",
"text": "A method to design a microstrip-fed antipodal tapered-slot antenna, which has ultrawideband (UWB) performance and miniaturized dimensions, is presented. The proposed method modifies the antenna's structure to establish a direct connection between the microstrip feeder and the radiator. That modification, which removes the need to use any transitions and/or baluns in the feeding structure, is the first step in the proposed miniaturization. In the second step of miniaturization, the radiator and ground plane are corrugated to enable further reduction in the antenna's size without jeopardizing its performance. The simulated and measured results confirm the benefits of the adopted method in reducing the surface area of the antenna, while maintaining the ultrawideband performance.",
"title": ""
},
{
"docid": "1b6ddffacc50ad0f7e07675cfe12c282",
"text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.",
"title": ""
},
{
"docid": "b5b91947716e3594e3ddbb300ea80d36",
"text": "In this paper, a novel drive method, which is different from the traditional motor drive techniques, for high-speed brushless DC (BLDC) motor is proposed and verified by a series of experiments. It is well known that the BLDC motor can be driven by either pulse-width modulation (PWM) techniques with a constant dc-link voltage or pulse-amplitude modulation (PAM) techniques with an adjustable dc-link voltage. However, to our best knowledge, there is rare study providing a proper drive method for a high-speed BLDC motor with a large power over a wide speed range. Therefore, the detailed theoretical analysis comparison of the PWM control and the PAM control for high-speed BLDC motor is first given. Then, a conclusion that the PAM control is superior to the PWM control at high speed is obtained because of decreasing the commutation delay and high-frequency harmonic wave. Meanwhile, a new high-speed BLDC motor drive method based on the hybrid approach combining PWM and PAM is proposed. Finally, the feasibility and effectiveness of the performance analysis comparison and the new drive method are verified by several experiments.",
"title": ""
},
{
"docid": "fca58dee641af67f9bb62958b5b088f2",
"text": "This work explores the possibility of mixing two different fingerprints, pertaining to two different fingers, at the image level in order to generate a new fingerprint. To mix two fingerprints, each fingerprint pattern is decomposed into two different components, viz., the continuous and spiral components. After prealigning the components of each fingerprint, the continuous component of one fingerprint is combined with the spiral component of the other fingerprint. Experiments on the West Virginia University (WVU) and FVC2002 datasets show that mixing fingerprints has several benefits: (a) it can be used to generate virtual identities from two different fingers; (b) it can be used to obscure the information present in an individual's fingerprint image prior to storing it in a central database; and (c) it can be used to generate a cancelable fingerprint template, i.e., the template can be reset if the mixed fingerprint is compromised.",
"title": ""
},
{
"docid": "d68068c949d6c5b8cf4445007c4d6287",
"text": "This paper furthers the development of methods to distinguish truth from deception in textual data. We use rhetorical structure theory (RST) as the analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A sample of 36 elicited personal stories, self-ranked as truthful or deceptive, is manually analyzed by assigning RST discourse relations among each story’s constituent parts. A vector space model (VSM) assesses each story’s position in multidimensional RST space with respect to its distance from truthful and deceptive centers as measures of the story’s level of deception and truthfulness. Ten human judges evaluate independently whether each story is deceptive and assign their confidence levels (360 evaluations total), producing measures of the expected human ability to recognize deception. As a robustness check, a test sample of 18 truthful stories (with 180 additional evaluations) is used to determine the reliability of our RST-VSM method in determining deception. The contribution is in demonstration of the discourse structure analysis as a significant method for automated deception detection and an effective complement to lexicosemantic analysis. The potential is in developing novel discourse-based tools to alert information users to potential deception in computermediated texts. Introduction",
"title": ""
},
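A minimal sketch of the vector-space step described above: each story is represented as a vector of RST relation frequencies, and a new story is labeled by whichever class centroid it is closer to; all vectors here are invented.

```python
import numpy as np

# Sketch of the vector-space step: stories are vectors of RST relation
# frequencies, scored by distance to the truthful and deceptive centroids.
truthful = np.array([[3, 1, 2], [4, 0, 2], [3, 2, 1]], dtype=float)
deceptive = np.array([[1, 3, 0], [0, 4, 1], [1, 2, 0]], dtype=float)

truth_center = truthful.mean(axis=0)
decep_center = deceptive.mean(axis=0)

def score(story_vector):
    d_truth = np.linalg.norm(story_vector - truth_center)
    d_decep = np.linalg.norm(story_vector - decep_center)
    return "deceptive" if d_decep < d_truth else "truthful"

print(score(np.array([1.0, 3.0, 0.0])))   # closer to the deceptive centroid
```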
{
"docid": "29ba9c1cccea263a4db25da4352da754",
"text": "Distance metric learning is a fundamental problem in data mining and knowledge discovery. Many representative data mining algorithms, such as $$k$$ k -nearest neighbor classifier, hierarchical clustering and spectral clustering, heavily rely on the underlying distance metric for correctly measuring relations among input data. In recent years, many studies have demonstrated, either theoretically or empirically, that learning a good distance metric can greatly improve the performance of classification, clustering and retrieval tasks. In this survey, we overview existing distance metric learning approaches according to a common framework. Specifically, depending on the available supervision information during the distance metric learning process, we categorize each distance metric learning algorithm as supervised, unsupervised or semi-supervised. We compare those different types of metric learning methods, point out their strength and limitations. Finally, we summarize open challenges in distance metric learning and propose future directions for distance metric learning.",
"title": ""
},
{
"docid": "0c67afcb351c53c1b9e2b4bcf3b0dc08",
"text": "The Scrum methodology is an agile software development process that works as a project management wrapper around existing engineering practices to iteratively and incrementally develop software. With Scrum, for a developer to receive credit for his or her work, he or she must demonstrate the new functionality provided by a feature at the end of each short iteration during an iteration review session. Such a short-term focus without the checks and balances of sound engineering practices may lead a team to neglect quality. In this paper we present the experiences of three teams at Microsoft using Scrum with an additional nine sound engineering practices. Our results indicate that these teams were able to improve quality, productivity, and estimation accuracy through the combination of Scrum and nine engineering practices.",
"title": ""
},
{
"docid": "1bd058af9437119fc2aee4678c848802",
"text": "In this article we gave an overview of vision-based measurement (VBM), its various components, and uncertainty in the correct IM (instrumentation and measurement) metrological perspective. VBM is a fast rising technology due to the increasing affordability and capability of camera and computing hardware/software. While originally a specialized application, VBM is expected to become more ubiquitous in our everyday lives as apparent from the applications described in this article.",
"title": ""
},
{
"docid": "aca04e624f1c3dcd3f0ab9f9be1ef384",
"text": "In this paper, a novel three-phase parallel grid-connected multilevel inverter topology with a novel switching strategy is proposed. This inverter is intended to feed a microgrid from renewable energy sources (RES) to overcome the problem of the polluted sinusoidal output in classical inverters and to reduce component count, particularly for generating a multilevel waveform with a large number of levels. The proposed power converter consists of <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula> two-level <inline-formula> <tex-math notation=\"LaTeX\">$(n+1)$</tex-math></inline-formula> phase inverters connected in parallel, where <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula> is the number of RES. The more the number of RES, the more the number of voltage levels, the more faithful is the output sinusoidal waveform. In the proposed topology, both voltage pulse width and height are modulated and precalculated by using a pulse width and height modulation so as to reduce the number of switching states (i.e., switching losses) and the total harmonic distortion. The topology is investigated through simulations and validated experimentally with a laboratory prototype. Compliance with the <inline-formula><tex-math notation=\"LaTeX\">$\\text{IEEE 519-1992}$</tex-math></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$\\text{IEC 61000-3-12}$</tex-math></inline-formula> standards is presented and an exhaustive comparison of the proposed topology is made against the classical cascaded H-bridge topology.",
"title": ""
},
{
"docid": "2bed91cd91b2958eb46af613a8cb4978",
"text": "Millions of HTML tables containing structured data can be found on the Web. With their wide coverage, these tables are potentially very useful for filling missing values and extending cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph. As a prerequisite for being able to use table data for knowledge base extension, the HTML tables need to be matched with the knowledge base, meaning that correspondences between table rows/columns and entities/schema elements of the knowledge base need to be found. This paper presents the T2D gold standard for measuring and comparing the performance of HTML table to knowledge base matching systems. T2D consists of 8 700 schema-level and 26 100 entity-level correspondences between the WebDataCommons Web Tables Corpus and the DBpedia knowledge base. In contrast related work on HTML table to knowledge base matching, the Web Tables Corpus (147 million tables), the knowledge base, as well as the gold standard are publicly available. The gold standard is used afterward to evaluate the performance of T2K Match, an iterative matching method which combines schema and instance matching. T2K Match is designed for the use case of matching large quantities of mostly small and narrow HTML tables against large cross-domain knowledge bases. The evaluation using the T2D gold standard shows that T2K Match discovers table-to-class correspondences with a precision of 94%, row-to-entity correspondences with a precision of 90%, and column-to-property correspondences with a precision of 77%.",
"title": ""
},
{
"docid": "ba4f3060a36021ef60f7bc6c9cde9d35",
"text": "Neural Networks (NN) are today increasingly used in Machine Learning where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development however also gives rise to important data privacy risks. This observation motives Microsoft researchers to propose a framework, called Cryptonets. The core idea is to combine simplifications of the NN with Fully Homomorphic Encryptions (FHE) techniques to get both confidentiality of the manipulated data and efficiency of the processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (eg 2), Cryptonets unfortunately becomes ineffective for deeper NNs which let the problem of privacy preserving matching open in these contexts. This work successfully addresses this problem by combining the original ideas of Cryptonets’ solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with 6 non-linear layers. When applied to the MNIST database, it competes the accuracy of the best non-secure versions, thus significantly improving Cryptonets.",
"title": ""
},
{
"docid": "935d22c1fdddaab40d8c94384f08fab2",
"text": "Face biometrics is widely used in various applications including border control and facilitating the verification of travellers' identity claim with respect to his electronic passport (ePass). As in most countries, passports are issued to a citizen based on the submitted photo which allows the applicant to provide a morphed face photo to conceal his identity during the application process. In this work, we propose a novel approach leveraging the transferable features from a pre-trained Deep Convolutional Neural Networks (D-CNN) to detect both digital and print-scanned morphed face image. Thus, the proposed approach is based on the feature level fusion of the first fully connected layers of two D-CNN (VGG19 and AlexNet) that are specifically fine-tuned using the morphed face image database. The proposed method is extensively evaluated on the newly constructed database with both digital and print-scanned morphed face images corresponding to bona fide and morphed data reflecting a real-life scenario. The obtained results consistently demonstrate improved detection performance of the proposed scheme over previously proposed methods on both the digital and the print-scanned morphed face image database.",
"title": ""
},
{
"docid": "18c8fcba57c295568942fa40b605c27e",
"text": "The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. Existing hardware, software, and network protection methods, however, are designed against fraction of real security issues and lack the capability to trace the provenance and history information of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in IoT supply chain. We take advantage of the connection between RFID tag and control chip in an IoT device to enable data transfer from tag memory to centralized database for authentication once deployed. Finally, we evaluate the security of our proposed scheme against various attacks.",
"title": ""
},
{
"docid": "8fe6e954db9080e233bbc6dbf8117914",
"text": "This document defines a deterministic digital signature generation procedure. Such signatures are compatible with standard Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures and can be processed with unmodified verifiers, which need not be aware of the procedure described therein. Deterministic signatures retain the cryptographic security features associated with digital signatures but can be more easily implemented in various environments, since they do not need access to a source of high-quality randomness. Status of This Memo This document is not an Internet Standards Track specification; it is published for informational purposes. This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.",
"title": ""
},
{
"docid": "6b83827500e4ea22c9fed3288d0506a7",
"text": "This study develops a high-performance stand-alone photovoltaic (PV) generation system. To make the PV generation system more flexible and expandable, the backstage power circuit is composed of a high step-up converter and a pulsewidth-modulation (PWM) inverter. In the dc-dc power conversion, the high step-up converter is introduced to improve the conversion efficiency in conventional boost converters to allow the parallel operation of low-voltage PV arrays, and to decouple and simplify the control design of the PWM inverter. Moreover, an adaptive total sliding-mode control system is designed for the voltage control of the PWM inverter to maintain a sinusoidal output voltage with lower total harmonic distortion and less variation under various output loads. In addition, an active sun tracking scheme without any light sensors is investigated to make the PV modules face the sun directly for capturing the maximum irradiation and promoting system efficiency. Experimental results are given to verify the validity and reliability of the high step-up converter, the PWM inverter control, and the active sun tracker for the high-performance stand-alone PV generation system.",
"title": ""
},
{
"docid": "d39f806d1a8ecb33fab4b5ebb49b0dd1",
"text": "Texture analysis has been a particularly dynamic field with different computer vision and image processing applications. Most of the existing texture analysis techniques yield to significant results in different applications but fail in difficult situations with high sensitivity to noise. Inspired by previous works on texture analysis by structure layer modeling, this paper deals with representing the texture's structure layer using the structure tensor field. Based on texture pattern size approximation, the proposed algorithm investigates the adaptability of the structure tensor to the local geometry of textures by automatically estimating the sub-optimal structure tensor size. An extension of the algorithm targeting non-structured textures is also proposed. Results show that using the proposed tensor size regularization method, relevant local information can be extracted by eliminating the need of repetitive tensor field computation with different tensor size to reach an acceptable performance.",
"title": ""
},
{
"docid": "924146534d348e7a44970b1d78c97e9c",
"text": "Little is known of the extent to which heterosexual couples are satisfied with their current frequency of sex and the degree to which this predicts overall sexual and relationship satisfaction. A population-based survey of 4,290 men and 4,366 women was conducted among Australians aged 16 to 64 years from a range of sociodemographic backgrounds, of whom 3,240 men and 3,304 women were in regular heterosexual relationships. Only 46% of men and 58% of women were satisfied with their current frequency of sex. Dissatisfied men were overwhelmingly likely to desire sex more frequently; among dissatisfied women, only two thirds wanted sex more frequently. Age was a significant factor but only for men, with those aged 35-44 years tending to be least satisfied. Men and women who were dissatisfied with their frequency of sex were also more likely to express overall lower sexual and relationship satisfaction. The authors' findings not only highlight desired frequency of sex as a major factor in satisfaction, but also reveal important gender and other sociodemographic differences that need to be taken into account by researchers and therapists seeking to understand and improve sexual and relationship satisfaction among heterosexual couples. Other issues such as length of time spent having sex and practices engaged in may also be relevant, particularly for women.",
"title": ""
},
{
"docid": "ecd7da1f742b4c92f3c748fd19098159",
"text": "Abstract. Today, a paradigm shift is being observed in science, where the focus is gradually shifting toward the cloud environments to obtain appropriate, robust and affordable services to deal with Big Data challenges (Sharma et al. 2014, 2015a, 2015b). Cloud computing avoids any need to locally maintain the overly scaled computing infrastructure that include not only dedicated space, but the expensive hardware and software also. In this paper, we study the evolution of as-a-Service modalities, stimulated by cloud computing, and explore the most complete inventory of new members beyond traditional cloud computing stack.",
"title": ""
}
] | scidocsrr |
bc04f53bf1928db5a36744a94216ce73 | Smart e-Health Gateway: Bringing intelligence to Internet-of-Things based ubiquitous healthcare systems | [
{
"docid": "be99f6ba66d573547a09d3429536049e",
"text": "With the development of sensor, wireless mobile communication, embedded system and cloud computing, the technologies of Internet of Things have been widely used in logistics, Smart Meter, public security, intelligent building and so on. Because of its huge market prospects, Internet of Things has been paid close attention by several governments all over the world, which is regarded as the third wave of information technology after Internet and mobile communication network. Bridging between wireless sensor networks with traditional communication networks or Internet, IOT Gateway plays an important role in IOT applications, which facilitates the seamless integration of wireless sensor networks and mobile communication networks or Internet, and the management and control with wireless sensor networks. In this paper, we proposed an IOT Gateway system based on Zigbee and GPRS protocols according to the typical IOT application scenarios and requirements from telecom operators, presented the data transmission between wireless sensor networks and mobile communication networks, protocol conversion of different sensor network protocols, and control functionalities for sensor networks, and finally gave an implementation of prototyping system and system validation.",
"title": ""
}
] | [
{
"docid": "ec6f53bd2cbc482c1450934b1fd9e463",
"text": "Cloud computing providers have setup several data centers at different geographical locations over the Internet in order to optimally serve needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine optimal location for hosting application services to achieve reasonable QoS levels. Further, the Cloud computing providers are unable to predict geographic distribution of users consuming their services, hence the load coordination must happen automatically, and distribution of services must change in response to changes in the load. To counter this problem, we advocate creation of federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that federated Cloud computing model has immense potential as it offers significant performance gains as regards to response time and cost saving under dynamic workload scenarios.",
"title": ""
},
{
"docid": "fb7f079d104e81db41b01afe67cdf3b0",
"text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.",
"title": ""
},
{
"docid": "c142745b4f4fe202fb4c59a494060f70",
"text": "We created a comprehensive set of health-system performance measurements for China nationally and regionally, with health-system coverage and catastrophic medical spending as major indicators. With respect to performance of health-care delivery, China has done well in provision of maternal and child health services, but poorly in addressing non-communicable diseases. For example, coverage of hospital delivery increased from 20% in 1993 to 62% in 2003 for women living in rural areas. However, effective coverage of hypertension treatment was only 12% for patients living in urban areas and 7% for those in rural areas in 2004. With respect to performance of health-care financing, 14% of urban and 16% of rural households incurred catastrophic medical expenditure in 2003. Furthermore, 15% of urban and 22% of rural residents had affordability difficulties when accessing health care. Although health-system coverage improved for both urban and rural areas from 1993 to 2003, affordability difficulties had worsened in rural areas. Additionally, substantial inter-regional and intra-regional inequalities in health-system coverage and health-care affordability measures exist. People with low income not only receive lower health-system coverage than those with high income, but also have an increased probability of either not seeking health care when ill or undergoing catastrophic medical spending. China's current health-system reform efforts need to be assessed for their effect on performance indicators, for which substantial data gaps exist.",
"title": ""
},
{
"docid": "9a7786a5f05876bc9265246531077c81",
"text": "PURPOSE\nThe aim of this in vivo study was to evaluate the clinical performance of porcelain veneers after 5 and 10 years of clinical service.\n\n\nMATERIALS AND METHODS\nA single operator placed porcelain laminates on 87 maxillary anterior teeth in 25 patients. All restorations were recalled at 5 years and 93% of the restorations at 10 years. Clinical performance was assessed in terms of esthetics, marginal integrity, retention, clinical microleakage, caries recurrence, fracture, vitality, and patient satisfaction. Failures were recorded either as \"clinically unacceptable but repairable\" or as \"clinically unacceptable with replacement needed\".\n\n\nRESULTS\nPorcelain veneers maintained their esthetic appearance after 10 years of clinical service. None of the veneers were lost. The percentage of restorations that remained \"clinically acceptable\" (without need for intervention) significantly decreased from an average of 92% (95 CI: 90% to 94%) at 5 years to 64% (95 CI: 51% to 77%) at 10 years. Fractures of porcelain (11%) and large marginal defects (20%) were the main reason for failure. Marginal defects were especially noticed at locations where the veneer ended in an existing composite filling. At such vulnerable locations, severe marginal discoloration (19%) and caries recurrence (10%) were frequently observed. Most of the restorations that present one or more \"clinically unacceptable\" problems (28%) were repairable. Only 4% of the restorations needed to be replaced at the 10-year recall.\n\n\nCONCLUSION\nIt was concluded that labial porcelain veneers represent a reliable, effective procedure for conservative treatment of unesthetic anterior teeth. Occlusion, preparation design, presence of composite fillings, and the adhesive used to bond veneers to tooth substrate are covariables that contribute to the clinical outcome of these restorations in the long-term.",
"title": ""
},
{
"docid": "ee4ebafe1b40e3d2020b2fb9a4b881f6",
"text": "Probing the lowest energy configuration of a complex system by quantum annealing was recently found to be more effective than its classical, thermal counterpart. By comparing classical and quantum Monte Carlo annealing protocols on the two-dimensional random Ising model (a prototype spin glass), we confirm the superiority of quantum annealing relative to classical annealing. We also propose a theory of quantum annealing based on a cascade of Landau-Zener tunneling events. For both classical and quantum annealing, the residual energy after annealing is inversely proportional to a power of the logarithm of the annealing time, but the quantum case has a larger power that makes it faster.",
"title": ""
},
{
"docid": "eb52b00d6aec954e3c64f7043427709c",
"text": "The paper presents a ball on plate balancing system useful for various educational purposes. A touch-screen placed on the plate is used for ball's position sensing and two servomotors are employed for balancing the plate in order to control ball's Cartesian coordinates. The design of control embedded systems is demonstrated for different control algorithms in compliance with FreeRTOS real time operating system and dsPIC33 microcontroller. On-line visualizations useful for system monitoring are provided by a PC host application connected with the embedded application. The measurements acquired during real-time execution and the parameters of the system are stored in specific data files, as support for any desired additional analysis. Taking into account the properties of this controlled system (instability, fast dynamics) and the capabilities of the embedded architecture (diversity of the involved communication protocols, diversity of employed hardware components, usage of an open source real time operating system), this educational setup allows a good illustration of numerous theoretical and practical aspects related to system engineering and applied informatics.",
"title": ""
},
{
"docid": "da270ae9b62c04d785ea6aad02db2ae9",
"text": "We study the complexity problem in artificial feedforward neural networks designed to approximate real valued functions of several real variables; i.e., we estimate the number of neurons in a network required to ensure a given degree of approximation to every function in a given function class. We indicate how to construct networks with the indicated number of neurons evaluating standard activation functions. Our general theorem shows that the smoother the activation function, the better the rate of approximation.",
"title": ""
},
{
"docid": "de17b1fcae6336947e82adab0881b5ba",
"text": "Presence of duplicate documents in the World Wide Web adversely affects crawling, indexing and relevance, which are the core building blocks of web search. In this paper, we present a set of techniques to mine rules from URLs and utilize these learnt rules for de-duplication using just URL strings without fetching the content explicitly. Our technique is composed of mining the crawl logs and utilizing clusters of similar pages to extract specific rules from URLs belonging to each cluster. Preserving each mined rules for de-duplication is not efficient due to the large number of specific rules. We present a machine learning technique to generalize the set of rules, which reduces the resource footprint to be usable at web-scale. The rule extraction techniques are robust against web-site specific URL conventions. We demonstrate the effectiveness of our techniques through experimental evaluation.",
"title": ""
},
{
"docid": "0ebecd74a2b4e5df55cbb51016c060c9",
"text": "Because of the increasing detail and size of virtual worlds, designers are more and more urged to consider employing procedural methods to alleviate part of their modeling work. However, such methods are often unintuitive to use, difficult to integrate, and provide little user control, making their application far from straightforward.\n In our declarative modeling approach, designers are provided with a more productive and simplified virtual world modeling workflow that matches better with their iterative way of working. Using interactive procedural sketching, they can quickly layout a virtual world, while having proper user control at the level of large terrain features. However, in practice, designers require a finer level of control. Integrating procedural techniques with manual editing in an iterative modeling workflow is an important topic that has remained relatively unaddressed until now.\n This paper identifies challenges of this integration and discusses approaches to combine these methods in such a way that designers can freely mix them, while the virtual world model is kept consistent during all modifications. We conclude that overcoming the challenges mentioned, for example in a declarative modeling context, is instrumental to achieve the much desired adoption of procedural modeling in mainstream virtual world modeling.",
"title": ""
},
{
"docid": "d95cd76008dd65d5d7f00c82bad013d3",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
},
{
"docid": "c86f477a1a2900a1b3d5dc80974c6f7c",
"text": "The understanding of the metal and transition metal dichalcogenide (TMD) interface is critical for future electronic device technologies based on this new class of two-dimensional semiconductors. Here, we investigate the initial growth of nanometer-thick Pd, Au, and Ag films on monolayer MoS2. Distinct growth morphologies are identified by atomic force microscopy: Pd forms a uniform contact, Au clusters into nanostructures, and Ag forms randomly distributed islands on MoS2. The formation of these different interfaces is elucidated by large-scale spin-polarized density functional theory calculations. Using Raman spectroscopy, we find that the interface homogeneity shows characteristic Raman shifts in E2g(1) and A1g modes. Interestingly, we show that insertion of graphene between metal and MoS2 can effectively decouple MoS2 from the perturbations imparted by metal contacts (e.g., strain), while maintaining an effective electronic coupling between metal contact and MoS2, suggesting that graphene can act as a conductive buffer layer in TMD electronics.",
"title": ""
},
{
"docid": "187c696aeb78607327fd817dfa9446ba",
"text": "OBJECTIVE\nThe integration of SNOMED CT into the Unified Medical Language System (UMLS) involved the alignment of two views of synonymy that were different because the two vocabulary systems have different intended purposes and editing principles. The UMLS is organized according to one view of synonymy, but its structure also represents all the individual views of synonymy present in its source vocabularies. Despite progress in knowledge-based automation of development and maintenance of vocabularies, manual curation is still the main method of determining synonymy. The aim of this study was to investigate the quality of human judgment of synonymy.\n\n\nDESIGN\nSixty pairs of potentially controversial SNOMED CT synonyms were reviewed by 11 domain vocabulary experts (six UMLS editors and five noneditors), and scores were assigned according to the degree of synonymy.\n\n\nMEASUREMENTS\nThe synonymy scores of each subject were compared to the gold standard (the overall mean synonymy score of all subjects) to assess accuracy. Agreement between UMLS editors and noneditors was measured by comparing the mean synonymy scores of editors to noneditors.\n\n\nRESULTS\nAverage accuracy was 71% for UMLS editors and 75% for noneditors (difference not statistically significant). Mean scores of editors and noneditors showed significant positive correlation (Spearman's rank correlation coefficient 0.654, two-tailed p < 0.01) with a concurrence rate of 75% and an interrater agreement kappa of 0.43.\n\n\nCONCLUSION\nThe accuracy in the judgment of synonymy was comparable for UMLS editors and nonediting domain experts. There was reasonable agreement between the two groups.",
"title": ""
},
{
"docid": "a7cc577ae2a09a5ff18333b7bfb47001",
"text": "Metacercariae of an unidentified species of Apophallus Lühe, 1909 are associated with overwinter mortality in coho salmon, Oncorhynchus kisutch (Walbaum, 1792), in the West Fork Smith River, Oregon. We infected chicks with these metacercariae in order to identify the species. The average size of adult worms was 197 × 57 μm, which was 2 to 11 times smaller than other described Apophallus species. Eggs were also smaller, but larger in proportion to body size, than in other species of Apophallus. Based on these morphological differences, we describe Apophallus microsoma n. sp. In addition, sequences from the cytochrome c oxidase 1 gene from Apophallus sp. cercariae collected in the study area, which are likely conspecific with experimentally cultivated A. microsoma, differ by >12% from those we obtained from Apophallus donicus ( Skrjabin and Lindtrop, 1919 ) and from Apophallus brevis Ransom, 1920 . The taxonomy and pathology of Apophallus species is reviewed.",
"title": ""
},
{
"docid": "886c284d72a01db9bc4eb9467e14bbbb",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "d13145bc68472ed9a06bafd86357c5dd",
"text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple yarn-level ambient occlusion approximation and self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.",
"title": ""
},
{
"docid": "2e0585860c1fa533412ff1fea76632cb",
"text": "Author Co-citation Analysis (ACA) has long been used as an effective method for identifying the intellectual structure of a research domain, but it relies on simple co-citation counting, which does not take the citation content into consideration. The present study proposes a new method for measuring the similarity between co-cited authors by considering author's citation content. We collected the full-text journal articles in the information science domain and extracted the citing sentences to calculate their similarity distances. We compared our method with traditional ACA and found out that our approach, while displaying a similar intellectual structure for the information science domain as the other baseline methods, also provides more details about the sub-disciplines in the domain than with traditional ACA.",
"title": ""
},
{
"docid": "3669d58dc1bed1d83e5d0d6747771f0e",
"text": "To cite: He A, Kwatra SG, Kim N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016214761 DESCRIPTION A 26-year-old woman with a reported history of tinea versicolour presented for persistent hypopigmentation on her bilateral forearms. Detailed examination revealed multiple small (5–10 mm), irregularly shaped white macules on the extensor surfaces of the bilateral forearms overlying slightly erythaematous skin. The surrounding erythaematous skin blanched with pressure and with elevation of the upper extremities the white macules were no longer visible (figures 1 and 2). A clinical diagnosis of Bier spots was made based on the patient’s characteristic clinical features. Bier spots are completely asymptomatic and are often found on the extensor surfaces of the upper and lower extremities, although they are sometimes generalised. They are a benign physiological vascular anomaly, arising either from cutaneous vessels responding to venous hypertension or from small vessel vasoconstriction leading to tissue hypoxia. 3 Our patient had neither personal nor family history of vascular disease. Bier spots are easily diagnosed by a classic sign on physical examination: the pale macules disappear with pressure applied on the surrounding skin or by elevating the affected limbs (figure 2). However, Bier spots can be easily confused with a variety of other disorders associated with hypopigmented macules. The differential diagnosis includes vitiligo, postinflammatory hypopigmentation and tinea versicolour, which was a prior diagnosis in this case. Bier spots are often idiopathic and regress spontaneously, although there are reports of Bier spots heralding systemic diseases, such as scleroderma renal crisis, mixed cryoglobulinaemia or lymphoma. Since most Bier spots are idiopathic and transient, no treatment is required.",
"title": ""
},
{
"docid": "4d832a8716aebf7c36ae6894ce1bac33",
"text": "Autonomous vehicles require a reliable perception of their environment to operate in real-world conditions. Awareness of moving objects is one of the key components for the perception of the environment. This paper proposes a method for detection and tracking of moving objects (DATMO) in dynamic environments surrounding a moving road vehicle equipped with a Velodyne laser scanner and GPS/IMU localization system. First, at every time step, a local 2.5D grid is built using the last sets of sensor measurements. Along time, the generated grids combined with localization data are integrated into an environment model called local 2.5D map. In every frame, a 2.5D grid is compared with an updated 2.5D map to compute a 2.5D motion grid. A mechanism based on spatial properties is presented to suppress false detections that are due to small localization errors. Next, the 2.5D motion grid is post-processed to provide an object level representation of the scene. The detected moving objects are tracked over time by applying data association and Kalman filtering. The experiments conducted on different sequences from KITTI dataset showed promising results, demonstrating the applicability of the proposed method.",
"title": ""
},
{
"docid": "ac34478a54d67abce7c892e058295e63",
"text": "The popularity of the term \"integrated curriculum\" has grown immensely in medical education over the last two decades, but what does this term mean and how do we go about its design, implementation, and evaluation? Definitions and application of the term vary greatly in the literature, spanning from the integration of content within a single lecture to the integration of a medical school's comprehensive curriculum. Taking into account the integrated curriculum's historic and evolving base of knowledge and theory, its support from many national medical education organizations, and the ever-increasing body of published examples, we deem it necessary to present a guide to review and promote further development of the integrated curriculum movement in medical education with an international perspective. We introduce the history and theory behind integration and provide theoretical models alongside published examples of common variations of an integrated curriculum. In addition, we identify three areas of particular need when developing an ideal integrated curriculum, leading us to propose the use of a new, clarified definition of \"integrated curriculum\", and offer a review of strategies to evaluate the impact of an integrated curriculum on the learner. This Guide is presented to assist educators in the design, implementation, and evaluation of a thoroughly integrated medical school curriculum.",
"title": ""
}
] | scidocsrr |
673ff1460830ec05f4c68c46a6b0b84e | Impact Of Employee Participation On Job Satisfaction , Employee Commitment And Employee Productivity | [
{
"docid": "3fd9fd52be3153fe84f2ea6319665711",
"text": "The theories of supermodular optimization and games provide a framework for the analysis of systems marked by complementarity. We summarize the principal results of these theories and indicate their usefulness by applying them to study the shift to 'modern manufacturing'. We also use them to analyze the characteristic features of the Lincoln Electric Company's strategy and structure.",
"title": ""
}
] | [
{
"docid": "c252cca4122984aac411a01ce28777f7",
"text": "An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller, that ensures exponential convergence of the system considered, is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.",
"title": ""
},
{
"docid": "7ad244791a1ef91495aa3e0f4cf43f0c",
"text": "T he education and research communities are abuzz with new (or at least re-discovered) ideas about the nature of cognition and learning. Terms like situated cognition,\" \"distributed cognition,\" and \"communities of practice\" fill the air. Recent dialogue in Educational Researcher (Anderson, Reder, & Simon, 1996, 1997; Greeno, 1997) typifies this discussion. Some have argued that the shifts in world view that these discussions represent are even more fundamental than the now-historical shift from behaviorist to cognitive views of learning (Shuell, 1986). These new iaeas about the nature of knowledge, thinking, and learning--which are becoming known as the \"situative perspective\" (Greeno, 1997; Greeno, Collins, & Resnick, 1996)--are interacting with, and sometimes fueling, current reform movements in education. Most discussions of these ideas and their implications for educational practice have been cast primarily in terms of students. Scholars and policymakers have considered, for example, how to help students develop deep understandings of subject matter, situate students' learning in meaningful contexts, and create learning communities in which teachers and students engage in rich discourse about important ideas (e.g., National Council of Teachers of Mathematics, 1989; National Education Goals Panel, 1991; National Research Council, 1993). Less attention has been paid to teachers--either to their roles in creating learning experiences consistent with the reform agenda or to how they themselves learn new ways of teaching. In this article we focus on the latter. Our purpose in considering teachers' learning is twofold. First, we use these ideas about the nature of learning and knowing as lenses for understanding recent research on teacher learning. Second, we explore new issues about teacher learning and teacher education that this perspective brings to light. We begin with a brief overview of three conceptual themes that are central to the situative perspect ive-that cognition is (a) situated in particular physical and social contexts; (b) social in nature; and (c) distributed across the individual, other persons, and tools.",
"title": ""
},
{
"docid": "2136c0e78cac259106d5424a2985e5d7",
"text": "Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 and Racchmaninof-Oct2010. The former is a constrained Markov model and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédédric Chopin (1810-1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system called Experiments in Musical Intelligence (EMI). Judges’ responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure. Data and code to accompany this paper are available from www.tomcollinsresearch.net",
"title": ""
},
{
"docid": "4b8470edc0d643e9baeceae7d15a3c8b",
"text": "The authors have investigated potential applications of artificial neural networks for electrocardiographic QRS detection and beat classification. For the task of QRS detection, the authors used an adaptive multilayer perceptron structure to model the nonlinear background noise so as to enhance the QRS complex. This provided more reliable detection of QRS complexes even in a noisy environment. For electrocardiographic QRS complex pattern classification, an artificial neural network adaptive multilayer perceptron was used as a pattern classifier to distinguish between normal and abnormal beat patterns, as well as to classify 12 different abnormal beat morphologies. Preliminary results using the MIT/BIH (Massachusetts Institute of Technology/Beth Israel Hospital, Cambridge, MA) arrhythmia database are encouraging.",
"title": ""
},
{
"docid": "d2146f1821812ca65cfd56f557252200",
"text": "This paper presents an automatic annotation tool AATOS for providing documents with semantic annotations. The tool links entities found from the texts to ontologies defined by the user. The application is highly configurable and can be used with different natural language Finnish texts. The application was developed as a part of the WarSampo and Semantic Finlex projects and tested using Kansa Taisteli magazine articles and consolidated Finnish legislation of Semantic Finlex. The quality of the automatic annotation was evaluated by measuring precision and recall against existing manual annotations. The results showed that the quality of the input text, as well as the selection and configuration of the ontologies impacted the results.",
"title": ""
},
{
"docid": "e6c0aa517c857ed217fc96aad58d7158",
"text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.",
"title": ""
},
{
"docid": "42b6aca92046022cf77b724e2704348b",
"text": "We explore a model-based approach to reinforcement learning where partially or totally unknown dynamics are learned and explicit planning is performed. We learn dynamics with neural networks, and plan behaviors with differential dynamic programming (DDP). In order to handle complicated dynamics, such as manipulating liquids (pouring), we consider temporally decomposed dynamics. We start from our recent work [1] where we used locally weighted regression (LWR) to model dynamics. The major contribution of this paper is making use of deep learning in the form of neural networks with stochastic DDP, and showing the advantages of neural networks over LWR. For this purpose, we extend neural networks for: (1) modeling prediction error and output noise, (2) computing an output probability distribution for a given input distribution, and (3) computing gradients of output expectation with respect to an input. Since neural networks have nonlinear activation functions, these extensions were not easy. We provide an analytic solution for these extensions using some simplifying assumptions. We verified this method in pouring simulation experiments. The learning performance with neural networks was better than that of LWR. The amount of spilled materials was reduced. We also present early results of robot experiments using a PR2. Accompanying video: https://youtu.be/aM3hE1J5W98.",
"title": ""
},
{
"docid": "a470aa1ba955cdb395b122daf2a17b6a",
"text": "Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is great need for reinforcement learning methods that can tackle such problems given only a stream of rewards and incomplete and noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari we show that our method outperforms previous approaches relying on recurrent neural networks to encode the past.",
"title": ""
},
{
"docid": "cfec098f84e157a2e12f0ff40551c977",
"text": "In this paper, an online news recommender system for the popular social network, Facebook, is described. This system provides daily newsletters for communities on Facebook. The system fetches the news articles and filters them based on the community description to prepare the daily news digest. Explicit survey feedback from the users show that most users found the application useful and easy to use. They also indicated that they could get some community specific articles that they would not have got otherwise.",
"title": ""
},
{
"docid": "1d0d5ad5371a3f7b8e90fad6d5299fa7",
"text": "Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics has so far failed to predict any realistic vascular pattern. We present in this article a theoretical modeling of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (\"dead ends\"), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.",
"title": ""
},
{
"docid": "729cb5a59c1458ce6c9ef3fa29ca1d98",
"text": "The Simulink/Stateflow toolset is an integrated suite enabling model-based design and has become popular in the automotive and aeronautics industries. We have previously developed a translator called Simtolus from Simulink to the synchronous language Lustre and we build upon that work by encompassing Stateflow as well. Stateflow is problematical for synchronous languages because of its unbounded behaviour so we propose analysis techniques to define a subset of Stateflow for which we can define a synchronous semantics. We go further and define a \"safe\" subset of Stateflow which elides features which are potential sources of errors in Stateflow designs. We give an informal presentation of the Stateflow to Lustre translation process and show how our model-checking tool Lesar can be used to verify some of the semantical checks we have proposed. Finally, we present a small case-study.",
"title": ""
},
{
"docid": "18c56e9d096ba4ea48a0579626f83edc",
"text": "PURPOSE\nThe purpose of this study was to provide an overview of platelet-rich plasma (PRP) injected into the scalp for the management of androgenic alopecia.\n\n\nMATERIALS AND METHODS\nA literature review was performed to evaluate the benefits of PRP in androgenic alopecia.\n\n\nRESULTS\nHair restoration has been increasing. PRP's main components of platelet-derived growth factor, transforming growth factor, and vascular endothelial growth factor have the potential to stimulate hard and soft tissue wound healing. In general, PRP showed a benefit on patients with androgenic alopecia, including increased hair density and quality. Currently, different PRP preparations are being used with no standard technique.\n\n\nCONCLUSION\nThis review found beneficial effects of PRP on androgenic alopecia. However, more rigorous study designs, including larger samples, quantitative measurements of effect, and longer follow-up periods, are needed to solidify the utility of PRP for treating patients with androgenic alopecia.",
"title": ""
},
{
"docid": "bff21b4a0bc4e7cc6918bc7f107a5ca5",
"text": "This paper discusses driving system design based on traffic rules. This allows fully automated driving in an environment with human drivers, without necessarily changing equipment on other vehicles or infrastructure. It also facilitates cooperation between the driving system and the host driver during highly automated driving. The concept, referred to as legal safety, is illustrated for highly automated driving on highways with distance keeping, intelligent speed adaptation, and lane-changing functionalities. Requirements by legal safety on perception and control components are discussed. This paper presents the actual design of a legal safety decision component, which predicts object trajectories and calculates optimal subject trajectories. System implementation on automotive electronic control units and results on vehicle and simulator are discussed.",
"title": ""
},
{
"docid": "2252d2fd9955cac5e16304cb90f9dd60",
"text": "A number of solutions have been proposed to address the free-riding problem in peer-to-peer file sharing systems. The solutions are either imperfect-they allow some users to cheat the system with malicious behavior, or expensive-they require human intervention, require servers, or incur high mental transaction costs. The authors proposed a method to address these weaknesses. Specifically, a utility function was introduced to capture contributions made by a user and an auditing scheme to ensure the integrity of a utility function's values. The method enabled to reduce cheating by a malicious peer: it is shown that this approach can efficiently detect malicious peers with a probability over 98%.",
"title": ""
},
{
"docid": "7e8c99297dd2f9f73f8d50d92115090b",
"text": "This paper proposes a new wrist mechanism for robot manipulation. To develop multi-dof wrist mechanisms that can emulate human wrists, compactness and high torque density are the major challenges. Traditional wrist mechanisms consist of series of rotary motors that require gearing to amplify the output torque. This often results in a bulky wrist mechanism. Instead, large linear force can be easily realized in a compact space by using lead screw motors. Inspired by the muscle-tendon actuation pattern, the proposed mechanism consists of two parallel placed linear motors. Their linear motions are transmitted to two perpendicular rotations through a spherical mechanism and two slider crank mechanisms. High torque density can be achieved. Static and dynamic models are developed to design the wrist mechanism. A wrist prototype and its position control experiments will be presented with results discussed. The novel mechanism is expected to serve as an alternative for robot manipulators in applications that require human-friendly interactions.",
"title": ""
},
{
"docid": "eede682da157ac788a300e9c3080c460",
"text": "We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.",
"title": ""
},
{
"docid": "116fd1ecd65f7ddfdfad6dca09c12876",
"text": "Malicious hardware Trojan circuitry inserted in safety-critical applications is a major threat to national security. In this work, we propose a novel application of a key-based obfuscation technique to achieve security against hardware Trojans. The obfuscation scheme is based on modifying the state transition function of a given circuit by expanding its reachable state space and enabling it to operate in two distinct modes -- the normal mode and the obfuscated mode. Such a modification obfuscates the rareness of the internal circuit nodes, thus making it difficult for an adversary to insert hard-to-detect Trojans. It also makes some inserted Trojans benign by making them activate only in the obfuscated mode. The combined effect leads to higher Trojan detectability and higher level of protection against such attack. Simulation results for a set of benchmark circuits show that the scheme is capable of achieving high levels of security at modest design overhead.",
"title": ""
},
{
"docid": "73b81ca84f4072188e1a263e9a7ea330",
"text": "The digital workplace is widely acknowledged as an important organizational asset for optimizing knowledge worker productivity. While there is no particular research stream on the digital workplace, scholars have conducted intensive research on related topics. This study aims to summarize the practical implications of the current academic body of knowledge on the digital workplace. For this purpose, a screening of academic-practitioner literature was conducted, followed by a systematic review of academic top journal literature. The screening revealed four main research topics on the digital workplace that are present in academic-practitioner literature: 1) Collaboration, 2) Compliance, 3) Mobility, and 4) Stress and overload. Based on the four topics, this study categorizes practical implications on the digital workplace into 15 concepts. Thereby, it provides two main contributions. First, the study delivers condensed information for practitioners about digital workplace design. Second, the results shed light on the relevance of IS research.",
"title": ""
},
{
"docid": "127405febe57f4df6f8f16d42e0ac762",
"text": "In the recent years there has been an increase in scientific papers publications in Albania and its neighboring countries that have large communities of Albanian speaking researchers. Many of these papers are written in Albanian. It is a very time consuming task to find papers related to the researchers’ work, because there is no concrete system that facilitates this process. In this paper we present the design of a modular intelligent search system for articles written in Albanian. The main part of it is the recommender module that facilitates searching by providing relevant articles to the users (in comparison with a given one). We used a cosine similarity based heuristics that differentiates the importance of term frequencies based on their location in the article. We did not notice big differences on the recommendation results when using different combinations of the importance factors of the keywords, title, abstract and body. We got similar results when using only theand body. We got similar results when using only the title and abstract in comparison with the other combinations. Because we got fairly good results in this initial approach, we believe that similar recommender systems for documents written in Albanian can be built also in contexts not related to scientific publishing. Keywords—recommender system; Albanian; information retrieval; intelligent search; digital library",
"title": ""
},
{
"docid": "ac46286c7d635ccdcd41358666026c12",
"text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.",
"title": ""
}
] | scidocsrr |
e6aec168a0bbe2d4a24e748b979b78b4 | Metaphor Clusters, Metaphor Chains: Analyzing the Multifunctionality of Metaphor in Text | [
{
"docid": "52f82d83f27ad19978088158420527c6",
"text": "This paper discusses some principles of critical discourse analysis, such as the explicit sociopolitical stance of discourse analysts, and a focus on dominance relations by elite groups and institutions as they are being enacted, legitimated or otherwise reproduced by text and talk. One of the crucial elements of this analysis of the relations between power and discourse is the patterns of access to (public) discourse for different social groups. Theoretically it is shown that in order to be able to relate power and discourse in an explicit way, we need the cognitive interface of models. knowledge, attitudes and ideologies and other social representations of the social mind, which also relate the individual and the social, and the microand the macro-levels of social structure. Finally, the argument is illustrated with an analysis of parliamentary debates about ethnic affairs.",
"title": ""
}
] | [
{
"docid": "58c488555240ded980033111a9657be4",
"text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.",
"title": ""
},
{
"docid": "f0db74061a2befca317f9333a0712ab9",
"text": "This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep ()learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.",
"title": ""
},
{
"docid": "40ad6bf9f233b58e13cf6a709daba2ca",
"text": "While syntactic dependency annotations concentrate on the surface or functional structure of a sentence, semantic dependency annotations aim to capture betweenword relationships that are more closely related to the meaning of a sentence, using graph-structured representations. We extend the LSTM-based syntactic parser of Dozat and Manning (2017) to train on and generate these graph structures. The resulting system on its own achieves stateof-the-art performance, beating the previous, substantially more complex stateof-the-art system by 1.9% labeled F1. Adding linguistically richer input representations pushes the margin even higher, allowing us to beat it by 2.6% labeled F1.",
"title": ""
},
{
"docid": "3e25691aaf82d03ab61cc3b45baa6420",
"text": "The use of deceptive techniques in user-generated video portals is ubiquitous. Unscrupulous uploaders deliberately mislabel video descriptors aiming at increasing their views and subsequently their ad revenue. This problem, usually referred to as \"clickbait,\" may severely undermine user experience. In this work, we study the clickbait problem on YouTube by collecting metadata for 206k videos. To address it, we devise a deep learning model based on variational autoencoders that supports the diverse modalities of data that videos include. The proposed model relies on a limited amount of manually labeled data to classify a large corpus of unlabeled data. Our evaluation indicates that the proposed model offers improved performance when compared to other conventional models. Our analysis of the collected data indicates that YouTube recommendation engine does not take into account clickbait. Thus, it is susceptible to recommending misleading videos to users.",
"title": ""
},
{
"docid": "add72d66c626f1a4df3e0820c629c75f",
"text": "Cybersecurity is a complex and dynamic area where multiple actors act against each other through computer networks largely without any commonly accepted rules of engagement. Well-managed cybersecurity operations need a clear terminology to describe threats, attacks and their origins. In addition, cybersecurity tools and technologies need semantic models to be able to automatically identify threats and to predict and detect attacks. This paper reviews terminology and models of cybersecurity operations, and proposes approaches for semantic modelling of cybersecurity threats and attacks.",
"title": ""
},
{
"docid": "e14d4405a6da0cd4f1ee1beaeeed0fba",
"text": "Source code search plays an important role in software maintenance. The effectiveness of source code search not only relies on the search technique, but also on the quality of the query. In practice, software systems are large, thus it is difficult for a developer to format an accurate query to express what really in her/his mind, especially when the maintainer and the original developer are not the same person. When a query performs poorly, it has to be reformulated. But the words used in a query may be different from those that have similar semantics in the source code, i.e., the synonyms, which will affect the accuracy of code search results. To address this issue, we propose an approach that extends a query with synonyms generated from WordNet. Our approach extracts natural language phrases from source code identifiers, matches expanded queries with these phrases, and sorts the search results. It allows developers to explore word usage in a piece of software, helps them quickly identify relevant program elements for investigation or quickly recognize alternative words for query reformulation. Our initial empirical study on search tasks performed on the JavaScript/ECMAScript interpreter and compiler, Rhino, shows that the synonyms used to expand the queries help recommend good alternative queries. Our approach also improves the precision and recall of Conquer, a state-of-the-art query expansion/reformulation technique, by 5% and 8% respectively.",
"title": ""
},
{
"docid": "c95e58c054855c60b16db4816c626ecb",
"text": "Markerless tracking of human pose is a hard yet relevant problem. In this paper, we derive an efficient filtering algorithm for tracking human pose using a stream of monocular depth images. The key idea is to combine an accurate generative model — which is achievable in this setting using programmable graphics hardware — with a discriminative model that provides data-driven evidence about body part locations. In each filter iteration, we apply a form of local model-based search that exploits the nature of the kinematic chain. As fast movements and occlusion can disrupt the local search, we utilize a set of discriminatively trained patch classifiers to detect body parts. We describe a novel algorithm for propagating this noisy evidence about body part locations up the kinematic chain using the un-scented transform. The resulting distribution of body configurations allows us to reinitialize the model-based search. We provide extensive experimental results on 28 real-world sequences using automatic ground-truth annotations from a commercial motion capture system.",
"title": ""
},
{
"docid": "3a322129019eed67686018404366fe0b",
"text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.",
"title": ""
},
{
"docid": "d1b46a2dd14cc225c5e53bef4ba2d774",
"text": "In this project, we developed a deep learning system applied to human retina images for medical diagnostic decision support. The retina images were provided by EyePACS (Eyepacs, LLC). These images were used in the framework of a Kaggle contest (Kaggle INC, 2017), whose purpose to identify diabetic retinopathy signs through an automatic detection system. Using as inspiration one of the solutions proposed in the contest, we implemented a model that successfully detects diabetic retinopathy from retina images. After a carefully designed preprocessing, the images were used as input to a deep convolutional neural network (CNN). The CNN performed a feature extraction process followed by a classification stage, which allowed the system to differentiate between healthy and ill patients using five categories. Our model was able to identify diabetic retinopathy in the patients with an agreement rate of 76.73% with respect to the medical expert's labels for the test data.",
"title": ""
},
{
"docid": "495be81dda82d3e4d90a34b6716acf39",
"text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.",
"title": ""
},
{
"docid": "1bc13bb6f4f2598a2271240bc2963250",
"text": "A complex nature of big data resources demands new methods for structuring especially for textual content. WordNet is a good knowledge source for comprehensive abstraction of natural language as its good implementations exist for many languages. Since WordNet embeds natural language in the form of a complex network, a transformation mechanism WordNet2Vec is proposed in the paper. It creates vectors for each word from WordNet. These vectors encapsulate general position role of a given word towards all other words in the natural language. Any list or set of such vectors contains knowledge about the context of its component within the whole language. Such word representation can be easily applied to many analytic tasks like classification or clustering. The usefulness of the WordNet2Vec method was demonstrated in sentiment analysis, i.e. classification with transfer learning for the real Amazon opinion textual dataset.",
"title": ""
},
{
"docid": "4952d426d0f2aed1daea234595dcd901",
"text": "Clustering analysis is a primary method for data mining. Density clustering has such advantages as: its clusters are easy to understand and it does not limit itself to shapes of clusters. But existing density-based algorithms have trouble in finding out all the meaningful clusters for datasets with varied densities. This paper introduces a new algorithm called VDBSCAN for the purpose of varied-density datasets analysis. The basic idea of VDBSCAN is that, before adopting traditional DBSCAN algorithm, some methods are used to select several values of parameter Eps for different densities according to a k-dist plot. With different values of Eps, it is possible to find out clusters with varied densities simultaneity. For each value of Eps, DBSCAN algorithm is adopted in order to make sure that all the clusters with respect to corresponding density are clustered. And for the next process, the points that have been clustered are ignored, which avoids marking both denser areas and sparser ones as one cluster. Finally, a synthetic database with 2-dimension data is used for demonstration, and experiments show that VDBSCAN is efficient in successfully clustering uneven datasets.",
"title": ""
},
{
"docid": "fbbf7c30f7ebcd2b9bbc9cc7877309b1",
"text": "People detection is essential in a lot of different systems. Many applications nowadays tend to require people detection to achieve certain tasks. These applications come under many disciplines, such as robotics, ergonomics, biomechanics, gaming and automotive industries. This wide range of applications makes human body detection an active area of research. With the release of depth sensors or RGB-D cameras such as Micosoft Kinect, this area of research became more active, specially with their affordable price. Human body detection requires the adaptation of many scenarios and situations. Various conditions such as occlusions, background cluttering and props attached to the human body require training on custom built datasets. In this paper we present an approach to prepare training datasets to detect and track human body with attached props. The proposed approach uses rigid body physics simulation to create and animate different props attached to the human body. Three scenarios are implemented. In the first scenario the prop is closely attached to the human body, such as a person carrying a backpack. In the second scenario, the prop is slightly attached to the human body, such as a person carrying a briefcase. In the third scenario the prop is not attached to the human body, such as a person dragging a trolley bag. Our approach gives results with accuracy of 93% in identifying both the human body parts and the attached prop in all the three scenarios.",
"title": ""
},
{
"docid": "e1809a71403ee675197ef5a23945053f",
"text": "Working memory (WM) capacity is the ability to retain and manipulate information during a short period of time. This ability underlies complex reasoning and has generally been regarded as a fixed trait of the individual. Children with attention deficit hyperactivity disorder (ADHD) represent one group of subjects with a WM deficit, attributed to an impairment of the frontal lobe. In the present study, we used a new training paradigm with intensive and adaptive training of WM tasks and evaluated the effect of training with a double blind, placebo controlled design. Training significantly enhanced performance on the trained WM tasks. More importantly, the training significantly improved performance on a nontrained visuo-spatial WM task and on Raven's Progressive Matrices, which is a nonverbal complex reasoning task. In addition, motor activity--as measured by the number of head movements during a computerized test--was significantly reduced in the treatment group. A second experiment showed that similar training-induced improvements on cognitive tasks are also possible in young adults without ADHD. These results demonstrate that performance on WM tasks can be significantly improved by training, and that the training effect also generalizes to nontrained tasks requiring WM. Training improved performance on tasks related to prefrontal functioning and had also a significant effect on motor activity in children with ADHD. The results thus suggest that WM training potentially could be of clinical use for ameliorating the symptoms in ADHD.",
"title": ""
},
{
"docid": "f1b3831db9900a2f573b76113cd4068c",
"text": "Digital signature has been widely employed in wireless mobile networks to ensure the authenticity of messages and identity of nodes. A paramount concern in signature verification is reducing the verification delay to ensure the network QoS. To address this issue, researchers have proposed the batch cryptography technology. However, most of the existing works focus on designing batch verification algorithms without sufficiently considering the impact of invalid signatures. The performance of batch verification could dramatically drop, if there are verification failures caused by invalid signatures. In this paper, we propose a Game-theory-based Batch Identification Model (GBIM) for wireless mobile networks, enabling nodes to find invalid signatures with the optimal delay under heterogeneous and dynamic attack scenarios. Specifically, we design an incomplete information game model between a verifier and its attackers, and prove the existence of Nash Equilibrium, to select the dominant algorithm for identifying invalid signatures. Moreover, we propose an auto-match protocol to optimize the identification algorithm selection, when the attack strategies can be estimated based on history information. Comprehensive simulation results demonstrate that GBIM can identify invalid signatures more efficiently than existing algorithms.",
"title": ""
},
{
"docid": "023285cbd5d356266831fc0e8c176d4f",
"text": "The two authorsLakoff, a linguist and Nunez, a psychologistpurport to introduce a new field of study, i.e. \"mathematical idea analysis\", with this book. By \"mathematical idea analysis\", they mean to give a scientifically plausible account of mathematical concepts using the apparatus of cognitive science. This approach is meant to be a contribution to academics and possibly education as it helps to illuminate how we cognitise mathematical concepts, which are supposedly undecipherable and abstruse to laymen. The analysis of mathematical ideas, the authors claim, cannot be done within mathematics, for even metamathematicsrecursive theory, model theory, set theory, higherorder logic still requires mathematical idea analysis in itself! Formalism, by its very nature, voids symbols of their meanings and thus cognition is required to imbue meaning. Thus, there is a need for this new field, in which the authors, if successful, would become pioneers.",
"title": ""
},
{
"docid": "205e03f589758316987e3eaacee13430",
"text": "Motivated by the technology evolutions and the corresponding changes in user-consumer behavioral patterns, this study applies a Location Based Services (LBS) environmental determinants’ integrated theoretical framework by investigating its role on classifying, profiling and predicting user-consumer behavior. For that purpose, a laboratory LBS application was developed and tested with 110 subjects within the context of a field trial setting in the entertainment industry. Users are clustered into two main types having the “physical” and the “social density” determinants to best discriminate between the resulting clusters. Also, the two clusters differ in terms of their spatial and verbal ability and attitude towards the LBS environment. Similarly, attitude is predicted by the “location”, the “device” and the “mobile connection” LBS environmental determinants for the “walkers in place” (cluster #1) and by all LBS environmental determinants (i.e. those determinants of cluster #1 plus the “digital” and the “social environment” ones) for the “walkers in space” (cluster #2). Finally, the attitude of both clusters’ participants towards the LBS environment affects their behavioral intentions towards using LBS applications, with limited, however, predicting power observed in this relationship.",
"title": ""
},
{
"docid": "fde2aefec80624ff4bc21d055ffbe27b",
"text": "Object detector with region proposal networks such as Fast/Faster R-CNN [1, 2] have shown the state-of-the art performance on several benchmarks. However, they have limited success for detecting small objects. We argue the limitation is related to insufficient performance of Fast R-CNN block in Faster R-CNN. In this paper, we propose a refining block for Fast R-CNN. We further merge the block and Faster R-CNN into a single network (RF-RCNN). The RF-RCNN was applied on plate and human detection in RoadView image that consists of high resolution street images (over 30M pixels). As a result, the RF-RCNN showed great improvement over the Faster-RCNN.",
"title": ""
},
{
"docid": "9c4e381959ff612102d57b43f792f431",
"text": "Air quality monitoring has attracted a lot of attention from governments, academia and industry, especially for PM2.5 due to its significant impact on our respiratory systems. In this paper, we present the design, implementation, and evaluation of Mosaic, a low cost urban PM2.5 monitoring system based on mobile sensing. In Mosaic, a small number of air quality monitoring nodes are deployed on city buses to measure air quality. Current low-cost particle sensors based on light-scattering, however, are vulnerable to airflow disturbance on moving vehicles. In order to address this problem, we build our air quality monitoring nodes, Mosaic-Nodes, with a novel constructive airflow-disturbance design based on a carefully tuned airflow structure and a GPS-assisted filtering method. Further, the buses used for system deployment are selected by a novel algorithm which achieves both high coverage and low computation overhead. The collected sensor data is also used to calculate the PM2.5 of locations without direct measurements by an existing inference model. We apply the Mosaic system in a testing urban area which includes more than 70 point-of-interests. Results show that the Mosaic system can accurately obtain the urban air quality with high coverage and low cost.",
"title": ""
},
{
"docid": "f12931426173073fcbde9a2fe101dfcb",
"text": "In this letter, a novel compact electromagnetic band-gap (EBG) structure constructed by etching a complementary split ring resonator (CSRR) on the patch of a conventional mushroom-type EBG (CMT-EBG) is proposed. The first bandgap is defined in all directions in the surface structure. Compared to the CMT-EBG structure, the CSRR-based EBG presents a 28% size reduction in the start frequency of the first bandgap. However, asymmetrical frames of the CSRR-based EBG result in different properties at X and Y-directions. Another two tunable bandgaps in Y-direction are observed. Thus, the proposed EBG can be used for multi-band applications, such as dual/triple-band antennas. The EBGs have been constructed and measured.",
"title": ""
}
] | scidocsrr |
1a0cf2df8115efde02aff34cc50b31a8 | "Snapchat is more personal": An exploratory study on Snapchat behaviors and young adult interpersonal relationships | [
{
"docid": "bb709a5fd20c517769312787b82911b8",
"text": "Over the past decade, technology has become increasingly important in the lives of adolescents. As a group, adolescents are heavy users of newer electronic communication forms such as instant messaging, e-mail, and text messaging, as well as communication-oriented Internet sites such as blogs, social networking, and sites for sharing photos and videos. Kaveri Subrahmanyam and Patricia Greenfield examine adolescents' relationships with friends, romantic partners, strangers, and their families in the context of their online communication activities. The authors show that adolescents are using these communication tools primarily to reinforce existing relationships, both with friends and romantic partners. More and more they are integrating these tools into their \"offline\" worlds, using, for example, social networking sites to get more information about new entrants into their offline world. Subrahmanyam and Greenfield note that adolescents' online interactions with strangers, while not as common now as during the early years of the Internet, may have benefits, such as relieving social anxiety, as well as costs, such as sexual predation. Likewise, the authors demonstrate that online content itself can be both positive and negative. Although teens find valuable support and information on websites, they can also encounter racism and hate messages. Electronic communication may also be reinforcing peer communication at the expense of communication with parents, who may not be knowledgeable enough about their children's online activities on sites such as the enormously popular MySpace. Although the Internet was once hailed as the savior of education, the authors say that schools today are trying to control the harmful and distracting uses of electronic media while children are at school. The challenge for schools is to eliminate the negative uses of the Internet and cell phones in educational settings while preserving their significant contributions to education and social connection.",
"title": ""
}
] | [
{
"docid": "0a6a170d3ebec3ded7c596d768f9ce85",
"text": "This paper presents the method of our submission for THUMOS15 action recognition challenge. We propose a new action recognition system by exploiting very deep twostream ConvNets and Fisher vector representation of iDT features. Specifically, we utilize those successful very deep architectures in images such as GoogLeNet and VGGNet to design the two-stream ConvNets. From our experiments, we see that deeper architectures obtain higher performance for spatial nets. However, for temporal net, deeper architectures could not yield better recognition accuracy. We analyze that the UCF101 dataset is relatively very small and it is very hard to train such deep networks on the current action datasets. Compared with traditional iDT features, our implemented two-stream ConvNets significantly outperform them. We further combine the recognition scores of both two-stream ConvNets and iDT features, and achieve 68% mAP value on the validation dataset of THUMOS15.",
"title": ""
},
{
"docid": "d157e462a13515132e73888101d48ab6",
"text": "This paper describes the development of a fuzzy gain scheduling scheme of PID controllers for process control. Fuzzy rules and reasoning are utilized on-line to determine the controller parameters based on the error signal and its first difference. Simulation results demonstrate that better control performance can be achieved in comparison with ZieglerNichols controllers and Kitamori’s PID controllers.",
"title": ""
},
{
"docid": "ff75699519c0df47220624db263b483a",
"text": "We present BeThere, a proof-of-concept system designed to explore 3D input for mobile collaborative interactions. With BeThere, we explore 3D gestures and spatial input which allow remote users to perform a variety of virtual interactions in a local user's physical environment. Our system is completely self-contained and uses depth sensors to track the location of a user's fingers as well as to capture the 3D shape of objects in front of the sensor. We illustrate the unique capabilities of our system through a series of interactions that allow users to control and manipulate 3D virtual content. We also provide qualitative feedback from a preliminary user study which confirmed that users can complete a shared collaborative task using our system.",
"title": ""
},
{
"docid": "8b0a90d4f31caffb997aced79c59e50c",
"text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. 
Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the",
"title": ""
},
{
"docid": "db54908608579efd067853fed5d3e4e8",
"text": "The detection of moving objects from stationary cameras is usually approached by background subtraction, i.e. by constructing and maintaining an up-to-date model of the background and detecting moving objects as those that deviate from such a model. We adopt a previously proposed approach to background subtraction based on self-organization through artificial neural networks, that has been shown to well cope with several of the well known issues for background maintenance. Here, we propose a spatial coherence variant to such approach to enhance robustness against false detections and formulate a fuzzy model to deal with decision problems typically arising when crisp settings are involved. We show through experimental results and comparisons that higher accuracy values can be reached for color video sequences that represent typical situations critical for moving object detection.",
"title": ""
},
{
"docid": "4c1b42e12fd4f19870b5fc9e2f9a5f07",
"text": "Similar to face-to-face communication in daily life, more and more evidence suggests that human emotions also spread in online social media through virtual interactions. However, the mechanism underlying the emotion contagion, like whether different feelings spread unlikely or how the spread is coupled with the social network, is rarely investigated. Indeed, due to the costly expense and spatio-temporal limitations, it is challenging for conventional questionnaires or controlled experiments. While given the instinct of collecting natural affective responses of massive connected individuals, online social media offer an ideal proxy to tackle this issue from the perspective of computational social science. In this paper, based on the analysis of millions of tweets in Weibo, a Twitter-like service in China, we surprisingly find that anger is more contagious than joy, indicating that it can sparkle more angry follow-up tweets; and anger prefers weaker ties than joy for the dissemination in social network, indicating that it can penetrate different communities and break local traps by more sharings between strangers. Through a simple diffusion model, it is unraveled that easier contagion and weaker ties function cooperatively in speeding up anger’s spread, which is further testified by the diffusion of realistic bursty events with different dominant emotions. To our best knowledge, for the first time we quantificationally provide the long-term evidence to disclose the difference between joy and anger in dissemination mechanism and our findings would shed lights on personal anger management in human communication and collective outrage control in cyber space.",
"title": ""
},
{
"docid": "cc6cf6557a8be12d8d3a4550163ac0a9",
"text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.",
"title": ""
},
{
"docid": "728bc76467b7f4ddf7c8c368cdf2d44b",
"text": "SQL is the de facto language for manipulating relational data. Though powerful, many users find it difficult to write SQL queries due to highly expressive constructs. \n While using the programming-by-example paradigm to help users write SQL queries is an attractive proposition, as evidenced by online help forums such as Stack Overflow, developing techniques for synthesizing SQL queries from given input-output (I/O) examples has been difficult, due to the large space of SQL queries as a result of its rich set of operators. \n \n In this paper, we present a new scalable and efficient algorithm for synthesizing SQL queries based on I/O examples. The key innovation of our algorithm is development of a language for abstract queries, i.e., queries with uninstantiated operators, that can be used to express a large space of SQL queries efficiently. Using abstract queries to represent the search space nicely decomposes the synthesis problem into two tasks: 1) searching for abstract queries that can potentially satisfy the given I/O examples, and 2) instantiating the found abstract queries and ranking the results. \n \n We have implemented this algorithm in a new tool called Scythe and evaluated it using 193 benchmarks collected from Stack Overflow. Our evaluation shows that Scythe can efficiently solve 74% of the benchmarks, most in just a few seconds, and the queries range from simple ones involving a single selection to complex queries with 6 nested subqueires.",
"title": ""
},
{
"docid": "53d4995e474fb897fb6f10cef54c894f",
"text": "The measurement of different target parameters using radar systems has been an active research area for the last decades. Particularly target angle measurement is a very demanding topic, because obtaining good measurement results often goes hand in hand with extensive hardware effort. Especially for sensors used in the mass market, e.g. in automotive applications like adaptive cruise control this may be prohibitive. Therefore we address target localization using a compact frequency-modulated continuous-wave (FMCW) radar sensor. The angular measurement results are improved compared to standard beamforming methods using an adaptive beamforming approach. This approach will be applied to the FMCW principle in a way that allows the use of well known methods for the determination of other target parameters like range or velocity. The applicability of the developed theory will be shown on different measurement scenarios using a 24-GHz prototype radar system.",
"title": ""
},
{
"docid": "f9119710fb15af38bc823e25eec5653b",
"text": "The emergence of knowledge-based economies has placed an importance on effective management of knowledge. The effective management of knowledge has been described as a critical ingredient for organisation seeking to ensure sustainable strategic competitive advantage. This paper reviews literature in the area of knowledge management to bring out the importance of knowledge management in organisation. The paper is able to demonstrate that knowledge management is a key driver of organisational performance and a critical tool for organisational survival, competitiveness and profitability. Therefore creating, managing, sharing and utilizing knowledge effectively is vital for organisations to take full advantage of the value of knowledge. The paper also contributes that, in order for organisations to manage knowledge effectively, attention must be paid on three key components people, processes and technology. In essence, to ensure organisation’s success, the focus should be to connect people, processes, and technology for the purpose of leveraging knowledge.",
"title": ""
},
{
"docid": "0801dc8a870053ba36c0db9d25314cfb",
"text": "Crowdsourcing is a new emerging distributed computing and business model on the backdrop of Internet blossoming. With the development of crowdsourcing systems, the data size of crowdsourcers, contractors and tasks grows rapidly. The worker quality evaluation based on big data analysis technology has become a critical challenge. This paper first proposes a general worker quality evaluation algorithm that is applied to any critical tasks such as tagging, matching, filtering, categorization and many other emerging applications, without wasting resources. Second, we realize the evaluation algorithm in the Hadoop platform using the MapReduce parallel programming model. Finally, to effectively verify the accuracy and the effectiveness of the algorithm in a wide variety of big data scenarios, we conduct a series of experiments. The experimental results demonstrate that the proposed algorithm is accurate and effective. It has high computing performance and horizontal scalability. And it is suitable for large-scale worker quality evaluations in a big data environment.",
"title": ""
},
{
"docid": "88e7cbdb4704320cd40b2e0b566c0e42",
"text": "UNLABELLED\nSince 2009, catfish farming in the southeastern United States has been severely impacted by a highly virulent and clonal population of Aeromonas hydrophila causing motile Aeromonas septicemia (MAS) in catfish. The possible origin of this newly emerged highly virulent A. hydrophila strain is unknown. In this study, we show using whole-genome sequencing and comparative genomics that A. hydrophila isolates from diseased grass carp in China and catfish in the United States have highly similar genomes. Our phylogenomic analyses suggest that U.S. catfish isolates emerged from A. hydrophila populations of Asian origin. Furthermore, we identified an A. hydrophila strain isolated in 2004 from a diseased catfish in Mississippi, prior to the onset of the major epidemic outbreaks in Alabama starting in 2009, with genomic characteristics that are intermediate between those of the Asian and Alabama fish isolates. Investigation of A. hydrophila strain virulence demonstrated that the isolate from the U.S. catfish epidemic is significantly more virulent to both channel catfish and grass carp than is the Chinese carp isolate. This study implicates the importation of fish or fishery products into the United States as the source of highly virulent A. hydrophila that has caused severe epidemic outbreaks in United States-farmed catfish and further demonstrates the potential for invasive animal species to disseminate bacterial pathogens worldwide.\n\n\nIMPORTANCE\nCatfish aquaculture farming in the southeastern United States has been severely affected by the emergence of virulent Aeromonas hydrophila responsible for epidemic disease outbreaks, resulting in the death of over 10 million pounds of catfish. Because the origin of this newly emerged A. hydrophila strain is unknown, this study used a comparative genomics approach to conduct a phylogenomic analysis of A. hydrophila isolates obtained from the United States and Asia. Our results suggest that the virulent isolates from United States-farmed catfish have a recent common ancestor with A. hydrophila isolates from diseased Asian carp. We have also observed that an Asian carp isolate, like recent U.S. catfish isolates, is virulent in catfish. The results from this study suggest that the highly virulent U.S. epidemic isolates emerged from an Asian source and provide another example of the threat that invasive species pose in the dissemination of bacterial pathogens.",
"title": ""
},
{
"docid": "63a9ff660f9d6192c1633f5fca0bc28c",
"text": "The natural world provides numerous cases for inspiration in engineering design. Biological organisms, phenomena, and strategies, which we refer to as biological systems, provide a rich set of analogies. These systems provide insight into sustainable and adaptable design and offer engineers billions of years of valuable experience, which can be used to inspire engineering innovation. This research presents a general method for functionally representing biological systems through systematic design techniques, leading to the conceptualization of biologically inspired engineering designs. Functional representation and abstraction techniques are used to translate biological systems into an engineering context. The goal is to make the biological information accessible to engineering designers who possess varying levels of biological knowledge but have a common understanding of engineering design. Creative or novel engineering designs may then be discovered through connections made between biology and engineering. To assist with making connections between the two domains concept generation techniques that use biological information, engineering knowledge, and automatic concept generation software are employed. Two concept generation approaches are presented that use a biological model to discover corresponding engineering components that mimic the biological system and use a repository of engineering and biological information to discover which biological components inspire functional solutions to fulfill engineering requirements. Discussion includes general guidelines for modeling biological systems at varying levels of fidelity, advantages, limitations, and applications of this research. The modeling methodology and the first approach for concept generation are illustrated by a continuous example of lichen.",
"title": ""
},
{
"docid": "ad33994b26dad74e6983c860c0986504",
"text": "Accurate software effort estimation has been a challenge for many software practitioners and project managers. Underestimation leads to disruption in the project's estimated cost and delivery. On the other hand, overestimation causes outbidding and financial losses in business. Many software estimation models exist; however, none have been proven to be the best in all situations. In this paper, a decision tree forest (DTF) model is compared to a traditional decision tree (DT) model, as well as a multiple linear regression model (MLR). The evaluation was conducted using ISBSG and Desharnais industrial datasets. Results show that the DTF model is competitive and can be used as an alternative in software effort prediction.",
"title": ""
},
{
"docid": "2bdfa392f14724fa1e349943e173e802",
"text": "Article history: Received 28 September 2008 Received in revised form 14 May 2009 Accepted 2 August 2009",
"title": ""
},
{
"docid": "ff20e5cd554cd628eba07776fa9a5853",
"text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. We also discuss our experience in using our log parser to assist the log sanitization.",
"title": ""
},
{
"docid": "1b6c6f7a24d1e44bce19cfb38b32e023",
"text": "The purpose of this study was to explore the ways in which audiences build parasocial relationships with media characters via reality TV and social media, and its implications for celebrity endorsement and purchase intentions. Using an online survey, this study collected 401 responses from the Korean Wave fans in Singapore. The results showed that reality TV viewing and SNS use to interact with media characters were positively associated with parasocial relationships between media characters and viewers. Parasocial relationships, in turn, were positively associated with the viewers' perception of endorser and brand credibility, and purchase intention of the brand endorsed by favorite media characters. The results also indicated that self-disclosure played an important role in forming parasocial relationships and in mediating the effectiveness of celebrity endorsement. This study specifies the links between an emerging media genre, a communication technology, and audiences' interaction with the mediated world.",
"title": ""
},
{
"docid": "3a41bcb297b991d155626af5fdbf7e92",
"text": "In this study, we compared the performance limits of building codes and assessed whether these limits could be applied to wide beams. We performed a parametric study, investigating the longitudinal reinforcement ratio, beam width, steel class, and type and spacing of the transverse reinforcement. We obtained the following results from these analyses: Increasing the longitudinal reinforcement ratio reduced the deformation limit corresponding to code-based performance limits. Changing the confinement properties significantly affected the TEC 2007 and ASCE/SEI 41 limits. The performance limits defined by EC8 and ASCE/SEI 41 had similar corresponding plastic deformations, lower than those corresponding to the TEC 2007 strain-based damage limits. The results show that assessing the performance of wide beams by using the performance limits from different codes could produce contradictory results.",
"title": ""
},
{
"docid": "5be572ea448bfe40654956112cecd4e1",
"text": "BACKGROUND\nBeta blockers reduce mortality in patients who have chronic heart failure, systolic dysfunction, and are on background treatment with diuretics and angiotensin-converting enzyme inhibitors. We aimed to compare the effects of carvedilol and metoprolol on clinical outcome.\n\n\nMETHODS\nIn a multicentre, double-blind, and randomised parallel group trial, we assigned 1511 patients with chronic heart failure to treatment with carvedilol (target dose 25 mg twice daily) and 1518 to metoprolol (metoprolol tartrate, target dose 50 mg twice daily). Patients were required to have chronic heart failure (NYHA II-IV), previous admission for a cardiovascular reason, an ejection fraction of less than 0.35, and to have been treated optimally with diuretics and angiotensin-converting enzyme inhibitors unless not tolerated. The primary endpoints were all-cause mortality and the composite endpoint of all-cause mortality or all-cause admission. Analysis was done by intention to treat.\n\n\nFINDINGS\nThe mean study duration was 58 months (SD 6). The mean ejection fraction was 0.26 (0.07) and the mean age 62 years (11). The all-cause mortality was 34% (512 of 1511) for carvedilol and 40% (600 of 1518) for metoprolol (hazard ratio 0.83 [95% CI 0.74-0.93], p=0.0017). The reduction of all-cause mortality was consistent across predefined subgroups. The composite endpoint of mortality or all-cause admission occurred in 1116 (74%) of 1511 on carvedilol and in 1160 (76%) of 1518 on metoprolol (0.94 [0.86-1.02], p=0.122). Incidence of side-effects and drug withdrawals did not differ by much between the two study groups.\n\n\nINTERPRETATION\nOur results suggest that carvedilol extends survival compared with metoprolol.",
"title": ""
},
{
"docid": "924dbc783bf8743a28c2cd4563d50de9",
"text": "This paper studies the off-policy evaluation problem, where one aims to estimate the value of a target policy based on a sample of observations collected by another policy. We first consider the multi-armed bandit case, establish a minimax risk lower bound, and analyze the risk of two standard estimators. It is shown, and verified in simulation, that one is minimax optimal up to a constant, while another can be arbitrarily worse, despite its empirical success and popularity. The results are applied to related problems in contextual bandits and fixed-horizon Markov decision processes, and are also related to semi-supervised learning.",
"title": ""
}
] | scidocsrr |
eefe7afe1ccb4fd18d014b7c2be8d1e4 | Learning scrum by doing real-life projects | [
{
"docid": "8d8e7c9777f02c6a4a131f21a66ee870",
"text": "Teaching agile practices is becoming a priority in Software engineering curricula as a result of the increasing use of agile methods (AMs) such as Scrum in the software industry. Limitations in time, scope, and facilities within academic contexts hinder students’ hands-on experience in the use of professional AMs. To enhance students’ exposure to Scrum, we have developed Virtual Scrum, an educational virtual world that simulates a Scrum-based team room through virtual elements such as blackboards, a Web browser, document viewers, charts, and a calendar. A preliminary version of Virtual Scrum was tested with a group of 45 students running a capstone project with and without Virtual Scrum support. Students’ feedback showed that Virtual Scrum is a viable and effective tool to implement the different elements in a Scrum team room and to perform activities throughout the Scrum process. 2013 Wiley Periodicals, Inc. Comput Appl Eng Educ 23:147–156, 2015; View this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21588",
"title": ""
}
] | [
{
"docid": "df125587d0529bafa1be721801a67f77",
"text": "This paper describes a self-aligned SiGe heterojunction bipolar transistor (HBT) based on a standard double-polysilicon architecture and nonselective epitaxial growth (i.e. DPSA-NSEG). Emitter-base self-alignment is realized by polysilicon reflow in a hydrogen ambient after emitter window patterning. The fabricated self-aligned SiGe HBTs, with emitter widths of 0.3-0.4 μm, exhibit 20% lower base resistance and 15% higher maximum oscillation frequency fmax than non-self-aligned reference devices. The minimum noise figure of a Ku-band low-noise amplifier is reduced from 0.9 to 0.8 dB by emitter-base self-alignment.",
"title": ""
},
{
"docid": "1f15775000a1837cfc168a91c4c1a2ae",
"text": "In the recent aging society, studies on health care services have been actively conducted to provide quality services to medical consumers in wire and wireless environments. However, there are some problems in these health care services due to the lack of personalized service and the uniformed way in services. For solving these issues, studies on customized services in medical markets have been processed. However, because a diet recommendation service is only focused on the personal disease information, it is difficult to provide specific customized services to users. This study provides a customized diet recommendation service for preventing and managing coronary heart disease in health care services. This service provides a customized diet to customers by considering the basic information, vital sign, family history of diseases, food preferences according to seasons and intakes for the customers who are concerning about the coronary heart disease. The users who receive this service can use a customized diet service differed from the conventional service and that supports continuous services and helps changes in customers living habits.",
"title": ""
},
{
"docid": "05fe74d25c84e46b8044faca8a350a2f",
"text": "BACKGROUND\nAn observational study was conducted in 12 European countries by the European Federation of Clinical Chemistry and Laboratory Medicine Working Group for the Preanalytical Phase (EFLM WG-PRE) to assess the level of compliance with the CLSI H3-A6 guidelines.\n\n\nMETHODS\nA structured checklist including 29 items was created to assess the compliance of European phlebotomy procedures with the CLSI H3-A6 guideline. A risk occurrence chart of individual phlebotomy steps was created from the observed error frequency and severity of harm of each guideline key issue. The severity of errors occurring during phlebotomy was graded using the risk occurrence chart.\n\n\nRESULTS\nTwelve European countries participated with a median of 33 (18-36) audits per country, and a total of 336 audits. The median error rate for the total phlebotomy procedure was 26.9 % (10.6-43.8), indicating a low overall compliance with the recommended CLSI guideline. Patient identification and test tube labelling were identified as the key guideline issues with the highest combination of probability and potential risk of harm. Administrative staff did not adhere to patient identification procedures during phlebotomy, whereas physicians did not adhere to test tube labelling policy.\n\n\nCONCLUSIONS\nThe level of compliance of phlebotomy procedures with the CLSI H3-A6 guidelines in 12 European countries was found to be unacceptably low. The most critical steps in need of immediate attention in the investigated countries are patient identification and tube labelling.",
"title": ""
},
{
"docid": "17c8766c5fcc9b6e0d228719291dcea5",
"text": "In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three tradic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.",
"title": ""
},
{
"docid": "86d4296be61308ec93920d2d84f0694f",
"text": "by Jian Xu Our world produces massive data every day; they exist in diverse forms, from pairwise data and matrix to time series and trajectories. Meanwhile, we have access to the versatile toolkit of network analysis. Networks also have different forms; from simple networks to higher-order network, each representation has different capabilities in carrying information. For researchers who want to leverage the power of the network toolkit, and apply it beyond networks data to sequential data, diffusion data, and many more, the question is: how to represent big data and networks? This dissertation makes a first step to answering the question. It proposes the higherorder network, which is a critical piece for representing higher-order interaction data; it introduces a scalable algorithm for building the network, and visualization tools for interactive exploration. Finally, it presents broad applications of the higher-order network in the real-world. Dedicated to those who strive to be better persons.",
"title": ""
},
{
"docid": "682921e4e2f000384fdcb9dc6fbaa61a",
"text": "The use of Cloud Computing for computation offloading in the robotics area has become a field of interest today. The aim of this work is to demonstrate the viability of cloud offloading in a low level and intensive computing task: a vision-based navigation assistance of a service mobile robot. In order to do so, a prototype, running over a ROS-based mobile robot (Erratic by Videre Design LLC) is presented. The information extracted from on-board stereo cameras will be used by a private cloud platform consisting of five bare-metal nodes with AMD Phenom 965 × 4 CPU, with the cloud middleware Openstack Havana. The actual task is the shared control of the robot teleoperation, that is, the smooth filtering of the teleoperated commands with the detected obstacles to prevent collisions. All the possible offloading models for this case are presented and analyzed. Several performance results using different communication technologies and offloading models are explained as well. In addition to this, a real navigation case in a domestic circuit was done. The tests demonstrate that offloading computation to the Cloud improves the performance and navigation results with respect to the case where all processing is done by the robot.",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "5086a13a84d3de2d5c340c5808e03e53",
"text": "The unstructured scenario, the extraction of significant features, the imprecision of sensors along with the impossibility of using GPS signals are some of the challenges encountered in underwater environments. Given this adverse context, the Simultaneous Localization and Mapping techniques (SLAM) attempt to localize the robot in an efficient way in an unknown underwater environment while, at the same time, generate a representative model of the environment. In this paper, we focus on key topics related to SLAM applications in underwater environments. Moreover, a review of major studies in the literature and proposed solutions for addressing the problem are presented. Given the limitations of probabilistic approaches, a new alternative based on a bio-inspired model is highlighted.",
"title": ""
},
{
"docid": "d7cb103c0dd2e7c8395438950f83da3f",
"text": "We address the effects of packaging on performance, reliability and cost of photonic devices. For silicon photonics we address some specific packaging aspects. Finally we propose an approach for integration of photonics and ASICs.",
"title": ""
},
{
"docid": "fe3775919e0a88dcabdc98bd8c34e6b8",
"text": "In this work, we study the 1-bit convolutional neural networks (CNNs), of which both the weights and activations are binary. While being efficient, the classification accuracy of the current 1-bit CNNs is much worse compared to their counterpart real-valued CNN models on the large-scale dataset, like ImageNet. To minimize the performance gap between the 1-bit and real-valued CNN models, we propose a novel model, dubbed Bi-Real net, which connects the real activations (after the 1-bit convolution and/or BatchNorm layer, before the sign function) to activations of the consecutive block, through an identity shortcut. Consequently, compared to the standard 1-bit CNN, the representational capability of the Bi-Real net is significantly enhanced and the additional cost on computation is negligible. Moreover, we develop a specific training algorithm including three technical novelties for 1bit CNNs. Firstly, we derive a tight approximation to the derivative of the non-differentiable sign function with respect to activation. Secondly, we propose a magnitude-aware gradient with respect to the weight for updating the weight parameters. Thirdly, we pre-train the real-valued CNN model with a clip function, rather than the ReLU function, to better initialize the Bi-Real net. Experiments on ImageNet show that the Bi-Real net with the proposed training algorithm achieves 56.4% and 62.2% top-1 accuracy with 18 layers and 34 layers, respectively. Compared to the state-of-the-arts (e.g., XNOR Net), Bi-Real net achieves up to 10% higher top-1 accuracy with more memory saving and lower computational cost. 4",
"title": ""
},
{
"docid": "5635f52c3e02fd9e9ea54c9ea1ff0329",
"text": "As a digital version of word-of-mouth, online review has become a major information source for consumers and has very important implications for a wide range of management activities. While some researchers focus their studies on the impact of online product review on sales, an important assumption remains unexamined, that is, can online product review reveal the true quality of the product? To test the validity of this key assumption, this paper first empirically tests the underlying distribution of online reviews with data from Amazon. The results show that 53% of the products have a bimodal and non-normal distribution. For these products, the average score does not necessarily reveal the product's true quality and may provide misleading recommendations. Then this paper derives an analytical model to explain when the mean can serve as a valid representation of a product's true quality, and discusses its implication on marketing practices.",
"title": ""
},
{
"docid": "e4c2fcc09b86dc9509a8763e7293cfe9",
"text": "This paperinvestigatesthe useof particle (sub-word) -grams for languagemodelling. One linguistics-basedand two datadriven algorithmsare presentedand evaluatedin termsof perplexity for RussianandEnglish. Interpolatingword trigramand particle6-grammodelsgivesup to a 7.5%perplexity reduction over thebaselinewordtrigrammodelfor Russian.Latticerescor ing experimentsarealsoperformedon1997DARPA Hub4evaluationlatticeswheretheinterpolatedmodelgivesa 0.4%absolute reductionin worderrorrateoverthebaselinewordtrigrammodel.",
"title": ""
},
{
"docid": "fe2ef685733bae2737faa04e8a10087d",
"text": "Federal health agencies are currently developing regulatory strategies for Artificial Intelligence based medical products. Regulatory regimes need to account for the new risks and benefits that come with modern AI, along with safety concerns and potential for continual autonomous learning that makes AI non-static and dramatically different than the drugs and products that agencies are used to regulating. Currently, the U.S. Food and Drug Administration (FDA) and other regulatory agencies treat AI-enabled products as medical devices. Alternatively, we propose that AI regulation in the medical domain can analogously adopt aspects of the models used to regulate medical providers.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "845190bc7aa800405358d9a7c5b38504",
"text": "We describe a sparse Bayesian regression method for recovering 3D human body motion directly from silhouettes extracted from monocular video sequences. No detailed body shape model is needed, and realism is ensured by training on real human motion capture data. The tracker estimates 3D body pose by using Relevance Vector Machine regression to combine a learned autoregressive dynamical model with robust shape descriptors extracted automatically from image silhouettes. We studied several different combination methods, the most effective being to learn a nonlinear observation-update correction based on joint regression with respect to the predicted state and the observations. We demonstrate the method on a 54-parameter full body pose model, both quantitatively using motion capture based test sequences, and qualitatively on a test video sequence.",
"title": ""
},
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
},
{
"docid": "186d9fc899fdd92c7e74615a2a054a03",
"text": "In this paper, we propose an illumination-robust face recognition system via local directional pattern images. Usually, local pattern descriptors including local binary pattern and local directional pattern have been used in the field of the face recognition and facial expression recognition, since local pattern descriptors have important properties to be robust against the illumination changes and computational simplicity. Thus, this paper represents the face recognition approach that employs the local directional pattern descriptor and twodimensional principal analysis algorithms to achieve enhanced recognition accuracy. In particular, we propose a novel methodology that utilizes the transformed image obtained from local directional pattern descriptor as the direct input image of two-dimensional principal analysis algorithms, unlike that most of previous works employed the local pattern descriptors to acquire the histogram features. The performance evaluation of proposed system was performed using well-known approaches such as principal component analysis and Gabor-wavelets based on local binary pattern, and publicly available databases including the Yale B database and the CMU-PIE database were employed. Through experimental results, the proposed system showed the best recognition accuracy compared to different approaches, and we confirmed the effectiveness of the proposed method under varying lighting conditions.",
"title": ""
},
{
"docid": "64ac8a4b315656bfb6a5e73f3072347f",
"text": "Understanding the flow characteristic in fishways is crucial for efficient fish migration. Flow characteristic measurements can generally provide quantitative information of velocity distributions in such passages; Particle Image Velocimetry (PIV) has become one of the most versatile techniques to disclose flow fields in general and in fishways, in particular. This paper firstly gives an overview of fish migration along with fish ladders and then the application of PIV measurements on the fish migration process. The overview shows that the quantitative and detailed turbulent flow information in fish ladders obtained by PIV is critical for analyzing turbulent properties and validating numerical results.",
"title": ""
},
{
"docid": "fff6fe0a87a750e83745428b630149d2",
"text": "From 1960 through 1987, 89 patients with stage I (44 patients) or II (45 patients) vaginal carcinoma (excluding melanomas) were treated primarily at the Mayo Clinic. Treatment consisted of surgery alone in 52 patients, surgery plus radiation in 14, and radiation alone in 23. The median duration of follow-up was 4.4 years. The 5-year survival (Kaplan-Meier method) was 82% for patients with stage I disease and 53% for those with stage II disease (p = 0.009). Analysis of survival according to treatment did not show statistically significant differences. This report is consistent with previous studies showing that stage is an important prognostic factor and that treatment can be individualized, including surgical treatment for primary early-stage vaginal cancer.",
"title": ""
},
{
"docid": "dbafea1fbab901ff5a53f752f3bfb4b8",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
}
] | scidocsrr |
f51af67d9df8e757e944da109fe53a9c | Exploring the Usability of Video Game Heuristics for Pervasive Game Development in Smart Home Environments | [
{
"docid": "bdadf0088654060b3f1c749ead0eea6e",
"text": "This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world.The field of pervasive games is diverse in the approaches and technologies used to create new and exciting gaming experiences that profit by the blend of real and virtual game elements. We explicitly look at the pervasive gaming sub-genres of smart toys, affective games, tabletop games, location-aware games, and augmented reality games, and discuss them in terms of their benefits and critical issues, as well as the relevant technology base.",
"title": ""
}
] | [
{
"docid": "62fa4f8712a4fcc1a3a2b6148bd3589b",
"text": "In this paper we discuss the development and application of a large formal ontology to the semantic web. The Suggested Upper Merged Ontology (SUMO) (Niles & Pease, 2001) (SUMO, 2002) is a “starter document” in the IEEE Standard Upper Ontology effort. This upper ontology is extremely broad in scope and can serve as a semantic foundation for search, interoperation, and communication on the semantic web.",
"title": ""
},
{
"docid": "f1e36a749d456326faeda90bc744b70d",
"text": "In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitome’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets.",
"title": ""
},
{
"docid": "6c270eaa2b9b9a0e140e0d8879f5d383",
"text": "More than 75% of hospital-acquired or nosocomial urinary tract infections are initiated by urinary catheters, which are used during the treatment of 15-25% of hospitalized patients. Among other purposes, urinary catheters are primarily used for draining urine after surgeries and for urinary incontinence. During catheter-associated urinary tract infections, bacteria travel up to the bladder and cause infection. A major cause of catheter-associated urinary tract infection is attributed to the use of non-ideal materials in the fabrication of urinary catheters. Such materials allow for the colonization of microorganisms, leading to bacteriuria and infection, depending on the severity of symptoms. The ideal urinary catheter is made out of materials that are biocompatible, antimicrobial, and antifouling. Although an abundance of research has been conducted over the last forty-five years on the subject, the ideal biomaterial, especially for long-term catheterization of more than a month, has yet to be developed. The aim of this review is to highlight the recent advances (over the past 10years) in developing antimicrobial materials for urinary catheters and to outline future requirements and prospects that guide catheter materials selection and design.\n\n\nSTATEMENT OF SIGNIFICANCE\nThis review article intends to provide an expansive insight into the various antimicrobial agents currently being researched for urinary catheter coatings. According to CDC, approximately 75% of urinary tract infections are caused by urinary catheters and 15-25% of hospitalized patients undergo catheterization. In addition to these alarming statistics, the increasing cost and health related complications associated with catheter associated UTIs make the research for antimicrobial urinary catheter coatings even more pertinent. This review provides a comprehensive summary of the history, the latest progress in development of the coatings and a brief conjecture on what the future entails for each of the antimicrobial agents discussed.",
"title": ""
},
{
"docid": "eee51fc5cd3bee512b01193fa396e19a",
"text": "Croston’s method is a widely used to predict inventory demand when it is inter mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop erties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]",
"title": ""
},
{
"docid": "9698bfe078a32244169cbe50a04ebb00",
"text": "Maximum power point tracking (MPPT) controllers play an important role in photovoltaic systems. They maximize the output power of a PV array for a given set of conditions. This paper presents an overview of the different MPPT techniques. Each technique is evaluated on its ability to detect multiple maxima, convergence speed, ease of implementation, efficiency over a wide output power range, and cost of implementation. The perturbation and observation (P & O), and incremental conductance (IC) algorithms are widely used techniques, with many variants and optimization techniques reported. For this reason, this paper evaluates the performance of these two common approaches from a dynamic and steady state perspective.",
"title": ""
},
{
"docid": "f7c52b0076e306dff0823a38cbf103bb",
"text": "A processor architecture attempts to compromise between the needs of programs hosted on the architecture and the performance attainable in implementing the architecture. The needs of programs are most accurately reflected by the dynamic use of the instruction set as the target for a high level language compiler. In VLSI, the issue of implementation of an instruction set architecture is significant in determining the features of the architecture. Recent processor architectures have focused on two major trends: large microcoded instruction sets and simplified, or reduced, instruction sets. The attractiveness of these two approaches is affected by the choice of a single-chip implementation. The two different styles require different tradeoffs to attain an implementation in silicon with a reasonable area. The two styles consume the chip area for different purposes, thus achieving performance by different strategies. In a VLSI implementation of an architecture, many problems can arise from the base technology and its limitations. Although circuit design techniques can help alleviate many of these problems, the architects must be aware of these limitations and understand their implications at the instruction set level.",
"title": ""
},
{
"docid": "e897ab9c0f9f850582fbcb172aa8b904",
"text": "Facial expression recognition is in general a challenging problem, especially in the presence of weak expression. Most recently, deep neural networks have been emerging as a powerful tool for expression recognition. However, due to the lack of training samples, existing deep network-based methods cannot fully capture the critical and subtle details of weak expression, resulting in unsatisfactory results. In this paper, we propose Deeper Cascaded Peak-piloted Network (DCPN) for weak expression recognition. The technique of DCPN has three main aspects: (1) Peak-piloted feature transformation, which utilizes the peak expression (easy samples) to supervise the non-peak expression (hard samples) of the same type and subject; (2) the back-propagation algorithm is specially designed such that the intermediate-layer feature maps of non-peak expression are close to those of the corresponding peak expression; and (3) an novel integration training method, cascaded fine-tune, is proposed to prevent the network from overfitting. Experimental results on two popular facial expression databases, CK$$+$$ + and Oulu-CASIA, show the superiority of the proposed DCPN over state-of-the-art methods.",
"title": ""
},
{
"docid": "c7251ce24e8fd6cf1fd0261d54b0ba76",
"text": "Today there is considerable interest for making use of the latest technological advancements for several healthcare applications. However, there are several challenges for making use of different technologies for healthcare applications. In particular, there is a need to ensure that the healthcare related services receive priority during events, such as legitimate failures of devices, congestion, and attacks in the networks. In this paper, we discuss some of the requirements for making use of technology for healthcare applications and propose techniques for secure monitoring of patients with wandering behavior in a hospital or elderly care environment. One of the aims of our work is to use technology for secure monitoring of patients with wandering behavior to keep them away from danger, or detect if the behavior of the patient violates the policies of the hospital, or even violates privacy policies of other patients. Our approach makes use of software defined networking (SDN), Wireless LAN (WLAN), and wearable devices for the patients. Our approach incurs low cost since WLAN is widely deployed. However, there are some challenges for making use of WLAN for monitoring dementia patients, since it is primarily used for accessing the Internet and its open nature is vulnerable to different types of security attacks. Hence we make use of SDN to solve some of these challenges and provide priority for the monitoring services. We have developed a security application for an SDN controller that can be used to enforce fine granular policies for communication between the hosts, real time location tracking of the patients, and deal with attacks on the hospital networks. The policy-based security enforcement helps to differentiate healthcare related traffic from other traffic and provide priority to the healthcare traffic. The real time location tracking detects wandering by patients and if necessary can raise alarms to the staff. The attack detection component makes use of attack signatures and behavior-based intrusion detection to deal with attacks on hospital networks. We will also present the prototype implementation of our model using ONOS SDN controller and OpenFlow Access Points.",
"title": ""
},
{
"docid": "3dbf5b2b03f90667d5602f3121c526fb",
"text": "In order to identify the parasitic diseases, this paper propose the automatic identification of Human Parasite Eggs to eight different species : Ascaris, Uncinarias, Trichuris, Dyphillobothrium-Pacificum, Taenia-Solium, Fasciola Hepetica and Enterobius-Vermicularis from their microscopic images based on Multitexton Histogram - MTH using new structures of textons. This proposed system includes two stages. In first stage, a feature extraction mechanism that is based on MTH descriptor retrieving the relationships between textons. In second stage, an CBIR system has been implemented in orden to detect their correct species of helminths. Finally, simulation results show overall success rates of 94,78% in the detection.",
"title": ""
},
{
"docid": "cb408e52b5e96669e08f70888b11b3e3",
"text": "Centrality is one of the most studied concepts in social network analysis. There is a huge literature regarding centrality measures, as ways to identify the most relevant users in a social network. The challenge is to find measures that can be computed efficiently, and that can be able to classify the users according to relevance criteria as close as possible to reality. We address this problem in the context of the Twitter network, an online social networking service with millions of users and an impressive flow of messages that are published and spread daily by interactions between users. Twitter has different types of users, but the greatest utility lies in finding the most influential ones. The purpose of this article is to collect and classify the different Twitter influence measures that exist so far in literature. These measures are very diverse. Some are based on simple metrics provided by the Twitter API, while others are based on complex mathematical models. Several measures are based on the PageRank algorithm, traditionally used to rank the websites on the Internet. Some others consider the timeline of publication, others the content of the messages, some are focused on specific topics, and others try to make predictions. We consider all these aspects, and some additional ones. Furthermore, we include measures of activity and popularity, the traditional mechanisms to correlate measures, and some important aspects of computational complexity for this particular context.",
"title": ""
},
{
"docid": "3eeacf0fb315910975e5ff0ffc4fe800",
"text": "Social networks are rich in various kinds of contents such as text and multimedia. The ability to apply text mining algorithms effectively in the context of text data is critical for a wide variety of applications. Social networks require text mining algorithms for a wide variety of applications such as keyword search, classi cation, and clustering. While search and classi cation are well known applications for a wide variety of scenarios, social networks have a much richer structure both in terms of text and links. Much of the work in the area uses either purely the text content or purely the linkage structure. However, many recent algorithms use a combination of linkage and content information for mining purposes. In many cases, it turns out that the use of a combination of linkage and content information provides much more effective results than a system which is based purely on either of the two. This paper provides a survey of such algorithms, and the advantages observed by using such algorithms in different scenarios. We also present avenues for future research in this area.",
"title": ""
},
{
"docid": "b5205513c021eabf6798c568759799f6",
"text": "Fillers belong to the most frequently used beautifying products. They are generally well tolerated, but any one of them may occasionally produce adverse side effects. Adverse effects usually last as long as the filler is in the skin, which means that short-lived fillers have short-term side effects and permanent fillers may induce life-long adverse effects. The main goal is to prevent them, however, this is not always possible. Utmost care has to be given to the prevention of infections and the injection technique has to be perfect. Treatment of adverse effects is often with hyaluronidase or steroid injections and in some cases together with 5-fluorouracil plus allopurinol orally. Histological examination of biopsy specimens often helps to identify the responsible filler allowing a specific treatment to be adapted.",
"title": ""
},
{
"docid": "3c89e7c5fdd2269ffb17adcaec237d6c",
"text": "Numerical simulation of quantum systems is crucial to further our understanding of natural phenomena. Many systems of key interest and importance, in areas such as superconducting materials and quantum chemistry, are thought to be described by models which we cannot solve with sufficient accuracy, neither analytically nor numerically with classical computers. Using a quantum computer to simulate such quantum systems has been viewed as a key application of quantum computation from the very beginning of the field in the 1980s. Moreover, useful results beyond the reach of classical computation are expected to be accessible with fewer than a hundred qubits, making quantum simulation potentially one of the earliest practical applications of quantum computers. In this paper we survey the theoretical and experimental development of quantum simulation using quantum computers, from the first ideas to the intense research efforts currently underway.",
"title": ""
},
{
"docid": "29fa75e49d4179072ec25b8aab6b48e2",
"text": "We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituentand dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",
"title": ""
},
{
"docid": "8e3b73204d1d62337c4b2aabdbaa8973",
"text": "The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.",
"title": ""
},
{
"docid": "7e7d6eac8e70bbdd008209aeb21c5e10",
"text": "Recent research on Internet traffic classification has produced a number of approaches for distinguishing types of traffic. However, a rigorous comparison of such proposed algorithms still remains a challenge, since every proposal considers a different benchmark for its experimental evaluation. A lack of clear consensus on an objective and cientific way for comparing results has made researchers uncertain of fundamental as well as relative contributions and limitations of each proposal. In response to the growing necessity for an objective method of comparing traffic classifiers and to shed light on scientifically grounded traffic classification research, we introduce an Internet traffic classification benchmark tool, NeTraMark. Based on six design guidelines (Comparability, Reproducibility, Efficiency, Extensibility, Synergy, and Flexibility/Ease-of-use), NeTraMark is the first Internet traffic lassification benchmark where eleven different state-of-the-art traffic classifiers are integrated. NeTraMark allows researchers and practitioners to easily extend it with new classification algorithms and compare them with other built-in classifiers, in terms of three categories of performance metrics: per-whole-trace flow accuracy, per-application flow accuracy, and computational performance.",
"title": ""
},
{
"docid": "f59ed11cd56f48e7ff25e5ad21d27ded",
"text": "Recent research has begun to focus on the factors that cause people to respond to phishing attacks as well as affect user behavior on social networks. This study examines the correlation between the Big Five personality traits and email phishing response. Another aspect examined is how these factors relate to users' tendency to share information and protect their privacy on Facebook (which is one of the most popular social networking sites).\n This research shows that when using a prize phishing email, neuroticism is the factor most correlated to responding to this email, in addition to a gender-based difference in the response. This study also found that people who score high on the openness factor tend to both post more information on Facebook as well as have less strict privacy settings, which may cause them to be susceptible to privacy attacks. In addition, this work detected no correlation between the participants estimate of being vulnerable to phishing attacks and actually being phished, which suggests susceptibility to phishing is not due to lack of awareness of the phishing risks and that real-time response to phishing is hard to predict in advance by online users.\n The goal of this study is to better understand the traits that contribute to online vulnerability, for the purpose of developing customized user interfaces and secure awareness education, designed to increase users' privacy and security in the future.",
"title": ""
},
{
"docid": "85e51ac7980deac92e140d0965a35708",
"text": "Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional ‘governor’ that assesses options the system has, and prunes them to select the most ethical choices is well understood. Recent work has produced such a governor consisting of a ‘consequence engine’ that assesses the likely future outcomes of actions then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.",
"title": ""
},
{
"docid": "88505b670f71895ccc9d61e6b501a1b2",
"text": "We have been promoting a project of musculoskeletal humanoids. The project aims at the long-term goal of human-symbiotic robots as well as the mid-term goal of necessary design and control concepts for musculoskeletal robots. This paper presents the concepts and aim of the project and also shows the outline of our latest results about development of new musculoskeletal humanoid Kojiro, which is the succeeding version of our previous Kotaro.",
"title": ""
},
{
"docid": "dd085dce8ff78ed1843639f8dd3e04d1",
"text": "This article presents a new framework for synthesizing motion of a virtual character in response to the actions performed by a user-controlled character in real time. In particular, the proposed method can handle scenes in which the characters are closely interacting with each other such as those in partner dancing and fighting. In such interactions, coordinating the virtual characters with the human player automatically is extremely difficult because the system has to predict the intention of the player character. In addition, the style variations from different users affect the accuracy in recognizing the movements of the player character when determining the responses of the virtual character. To solve these problems, our framework makes use of the spatial relationship-based representation of the body parts called interaction mesh, which has been proven effective for motion adaptation. The method is computationally efficient, enabling real-time character control for interactive applications. We demonstrate its effectiveness and versatility in synthesizing a wide variety of motions with close interactions.",
"title": ""
}
] | scidocsrr |
3b46cc183e665388fea152ba35f5fc4a | Relative Analysis of Hierarchical Routing in Wireless Sensor Networks Using Cuckoo Search | [
{
"docid": "1d53b01ee1a721895a17b7d0f3535a28",
"text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.",
"title": ""
}
] | [
{
"docid": "624ddac45b110bc809db198d60f3cf97",
"text": "Poisson regression models provide a standard framework for the analysis of count data. In practice, however, count data are often overdispersed relative to the Poisson distribution. One frequent manifestation of overdispersion is that the incidence of zero counts is greater than expected for the Poisson distribution and this is of interest because zero counts frequently have special status. For example, in counting disease lesions on plants, a plant may have no lesions either because it is resistant to the disease, or simply because no disease spores have landed on it. This is the distinction between structural zeros, which are inevitable, and sampling zeros, which occur by chance. In recent years there has been considerable interest in models for count data that allow for excess zeros, particularly in the econometric literature. These models complement more conventional models for overdispersion that concentrate on modelling the variance-mean relationship correctly. Application areas are diverse and have included manufacturing defects (Lambert, 1992), patent applications (Crepon & Duguet, 1997), road safety (Miaou, 1994), species abundance (Welsh et al., 1996; Faddy, 1998), medical consultations",
"title": ""
},
{
"docid": "bb3216e89fd98751de0f187b349ba123",
"text": "This study explored the relationships among dispositional self-consciousness, situationally induced-states of self-awareness, ego-mvolvement, and intrinsic motivation Cognitive evaluation theory, as applied to both the interpersonal and intrapersonal spheres, was used as the basis for making predictions about the effects of various types of self-focus Public selfconsciousness, social anxiety, video surveillance and mirror manipulations of self-awareness, and induced ego-involvement were predicted and found to have negative effects on intrinsic motivation since all were hypothesized to involve controlling forms of regulation In contrast, dispositional pnvate self-consciousness and a no-self-focus condition were both found to be unrelated to intrinsic motivation The relationship among these constructs and manipulations was discussed in the context of both Carver and Scheier's (1981) control theory and Deci and Ryan's (1985) motivation theory Recent theory and research on self-awareness, stimulated bv the initial findings of Duval and Wicklund (1972), has suggested that qualitatively distinct styles of attention and consciousness can be involved in the ongoing process of self-regulation (Carver & Scheier, 1981) In particular, Fenigstein, Scheier, and Buss (1975) have distinguished between private self-consciousness and pubhc selfconsciousness as two independent, but not necessarily exclusive, types of attentional focus with important behavioral, cognitive, and affective implications for regulatory processes Pnvate self-consciousness refers to the tendency to be aware of one's thoughts. This research was supported by Research Grant BSN-8018628 from the National Science Foundation The authors are grateful to the following persons who helped this project to reach fruition James Connell Edward Deci, Paul Tero, Shirlev Tracey We are also grateful for the experimental assistance of Margot Cohen Scott Cohen, Loren Feldman, and Darrell Mazlish Thanks also to Eileen Plant and Miriam Gale Requests for reprints should be sent to Richard M Ryan, Department of Psychology, Uniyersity of Rochester, Rochester, NY 14627 Journal of Personality 53 3, September 1985 Copyright © 1985 by Duke University",
"title": ""
},
{
"docid": "873056ee4f2a4fff473dca4e104a4798",
"text": "Key Summary Points Health information technology has been shown to improve quality by increasing adherence to guidelines, enhancing disease surveillance, and decreasing medication errors. Much of the evidence on quality improvement relates to primary and secondary preventive care. The major efficiency benefit has been decreased utilization of care. Effect on time utilization is mixed. Empirically measured cost data are limited and inconclusive. Most of the high-quality literature regarding multifunctional health information technology systems comes from 4 benchmark research institutions. Little evidence is available on the effect of multifunctional commercially developed systems. Little evidence is available on interoperability and consumer health information technology. A major limitation of the literature is its generalizability. Health care experts, policymakers, payers, and consumers consider health information technologies, such as electronic health records and computerized provider order entry, to be critical to transforming the health care industry (1-7). Information management is fundamental to health care delivery (8). Given the fragmented nature of health care, the large volume of transactions in the system, the need to integrate new scientific evidence into practice, and other complex information management activities, the limitations of paper-based information management are intuitively apparent. While the benefits of health information technology are clear in theory, adapting new information systems to health care has proven difficult and rates of use have been limited (9-11). Most information technology applications have centered on administrative and financial transactions rather than on delivering clinical care (12). The Agency for Healthcare Research and Quality asked us to systematically review evidence on the costs and benefits associated with use of health information technology and to identify gaps in the literature in order to provide organizations, policymakers, clinicians, and consumers an understanding of the effect of health information technology on clinical care (see evidence report at www.ahrq.gov). From among the many possible benefits and costs of implementing health information technology, we focus here on 3 important domains: the effects of health information technology on quality, efficiency, and costs. Methods Analytic Frameworks We used expert opinion and literature review to develop analytic frameworks (Table) that describe the components involved with implementing health information technology, types of health information technology systems, and the functional capabilities of a comprehensive health information technology system (13). We modified a framework for clinical benefits from the Institute of Medicine's 6 aims for care (2) and developed a framework for costs using expert consensus that included measures such as initial costs, ongoing operational and maintenance costs, fraction of health information technology penetration, and productivity gains. Financial benefits were divided into monetized benefits (that is, benefits expressed in dollar terms) and nonmonetized benefits (that is, benefits that could not be directly expressed in dollar terms but could be assigned dollar values). Table. 
Health Information Technology Frameworks Data Sources and Search Strategy We performed 2 searches (in November 2003 and January 2004) of the English-language literature indexed in MEDLINE (1995 to January 2004) using a broad set of terms to maximize sensitivity. (See the full list of search terms and sequence of queries in the full evidence report at www.ahrq.gov.) We also searched the Cochrane Central Register of Controlled Trials, the Cochrane Database of Abstracts of Reviews of Effects, and the Periodical Abstracts Database; hand-searched personal libraries kept by content experts and project staff; and mined bibliographies of articles and systematic reviews for citations. We asked content experts to identify unpublished literature. Finally, we asked content experts and peer reviewers to identify newly published articles up to April 2005. Study Selection and Classification Two reviewers independently selected for detailed review the following types of articles that addressed the workings or implementation of a health technology system: systematic reviews, including meta-analyses; descriptive qualitative reports that focused on exploration of barriers; and quantitative reports. We classified quantitative reports as hypothesis-testing if the investigators compared data between groups or across time periods and used statistical tests to assess differences. We further categorized hypothesis-testing studies (for example, randomized and nonrandomized, controlled trials, controlled before-and-after studies) according to whether a concurrent comparison group was used. Hypothesis-testing studies without a concurrent comparison group included those using simple prepost, time-series, and historical control designs. Remaining hypothesis-testing studies were classified as cross-sectional designs and other. We classified quantitative reports as a predictive analysis if they used methods such as statistical modeling or expert panel estimates to predict what might happen with implementation of health information technology rather than what has happened. These studies typically used hybrid methodsfrequently mixing primary data collection with secondary data collection plus expert opinion and assumptionsto make quantitative estimates for data that had otherwise not been empirically measured. Cost-effectiveness and cost-benefit studies generally fell into this group. Data Extraction and Synthesis Two reviewers independently appraised and extracted details of selected articles using standardized abstraction forms and resolved discrepancies by consensus. We then used narrative synthesis methods to integrate findings into descriptive summaries. Each institution that accounted for more than 5% of the total sample of 257 papers was designated as a benchmark research leader. We grouped syntheses by institution and by whether the systems were commercially or internally developed. Role of the Funding Sources This work was produced under Agency for Healthcare Research and Quality contract no. 2002. In addition to the Agency for Healthcare Research and Quality, this work was also funded by the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, and the Office of Disease Prevention and Health Promotion, U.S. Department of Health and Human Services. The funding sources had no role in the design, analysis, or interpretation of the study or in the decision to submit the manuscript for publication. 
Data Synthesis Literature Selection Overview Of 867 articles, we rejected 141 during initial screening: 124 for not having health information technology as the subject, 4 for not reporting relevant outcomes, and 13 for miscellaneous reasons (categories not mutually exclusive). Of the remaining 726 articles, we excluded 469 descriptive reports that did not examine barriers (Figure). We recorded details of and summarized each of the 257 articles that we did include in an interactive database (healthit.ahrq.gov/tools/rand) that serves as the evidence table for our report (14). Twenty-four percent of all studies came from the following 4 benchmark institutions: 1) the Regenstrief Institute, 2) Brigham and Women's Hospital/Partners Health Care, 3) the Department of Veterans Affairs, and 4) LDS Hospital/Intermountain Health Care. Figure. Search flow for health information technology ( HIT ) literature. Pediatrics Types and Functions of Technology Systems The reports addressed the following types of primary systems: decision support aimed at providers (63%), electronic health records (37%), and computerized provider order entry (13%). Specific functional capabilities of systems that were described in reports included electronic documentation (31%), order entry (22%), results management (19%), and administrative capabilities (18%). Only 8% of the described systems had specific consumer health capabilities, and only 1% had capabilities that allowed systems from different facilities to connect with each other and share data interoperably. Most studies (n= 125) assessed the effect of the systems in the outpatient setting. Of the 213 hypothesis-testing studies, 84 contained some data on costs. Several studies assessed interventions with limited functionality, such as stand-alone decision support systems (15-17). Such studies provide limited information about issues that today's decision makers face when selecting and implementing health information technology. Thus, we preferentially highlight in the following paragraphs studies that were conducted in the United States, that had empirically measured data on multifunctional systems, and that included health information and data storage in the form of electronic documentation or order-entry capabilities. Predictive analyses were excluded. Seventy-six studies met these criteria: 54 from the 4 benchmark leaders and 22 from other institutions. Data from Benchmark Institutions The health information technology systems evaluated by the benchmark leaders shared many characteristics. All the systems were multifunctional and included decision support, all were internally developed by research experts at the respective academic institutions, and all had capabilities added incrementally over several years. Furthermore, most reported studies of these systems used research designs with high internal validity (for example, randomized, controlled trials). Appendix Table 1 (18-71) provides a structured summary of each study from the 4 benchmark institutions. This table also includes studies that met inclusion criteria not highlighted in this synthesis (26, 27, 30, 39, 40, 53, 62, 65, 70, 71). The data supported 5 primary themes (3 directly r",
"title": ""
},
{
"docid": "560577e6abcccdb399d437cbd52ad266",
"text": "With smart devices, particular smartphones, becoming our everyday companions, the ubiquitous mobile Internet and computing applications pervade people’s daily lives. With the surge demand on high-quality mobile services at anywhere, how to address the ubiquitous user demand and accommodate the explosive growth of mobile traffics is the key issue of the next generation mobile networks. The Fog computing is a promising solution towards this goal. Fog computing extends cloud computing by providing virtualized resources and engaged location-based services to the edge of the mobile networks so as to better serve mobile traffics. Therefore, Fog computing is a lubricant of the combination of cloud computing and mobile applications. In this article, we outline the main features of Fog computing and describe its concept, architecture and design goals. Lastly, we discuss some of the future research issues from the networking perspective.",
"title": ""
},
{
"docid": "398040041440f597b106c49c79be27ea",
"text": "BACKGROUND\nRecently, human germinal center-associated lymphoma (HGAL) gene protein has been proposed as an adjunctive follicular marker to CD10 and BCL6.\n\n\nMETHODS\nOur aim was to evaluate immunoreactivity for HGAL in 82 cases of follicular lymphomas (FLs)--67 nodal, 5 cutaneous and 10 transformed--which were all analysed histologically, by immunohistochemistry and PCR.\n\n\nRESULTS\nImmunostaining for HGAL was more frequently positive (97.6%) than that for BCL6 (92.7%) and CD10 (90.2%) in FLs; the cases negative for bcl6 and/or for CD10 were all positive for HGAL, whereas the two cases negative for HGAL were positive with BCL6; no difference in HGAL immunostaining was found among different malignant subtypes or grades.\n\n\nCONCLUSIONS\nTherefore, HGAL can be used in the immunostaining of FLs as the most sensitive germinal center (GC)-marker; when applied alone, it would half the immunostaining costs, reserving the use of the other two markers only to HGAL-negative cases.",
"title": ""
},
{
"docid": "ef8ba8ae9696333f5da066813a4b79d7",
"text": "Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping regions to words is a black box and therefore difficult to explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object classification, but cannot use a natural language sentence as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach can produce spatial or spatiotemporal heatmaps for both predicted captions, and for arbitrary query sentences. It recovers saliency without the overhead of introducing explicit attention layers, and can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets demonstrates that our approach achieves comparable captioning performance with existing methods while providing more accurate saliency heatmaps. Our code is available at visionlearninggroup.github.io/caption-guided-saliency/.",
"title": ""
},
{
"docid": "054fcf065915118bbfa3f12759cb6912",
"text": "Automatization of the diagnosis of any kind of disease is of great importance and its gaining speed as more and more deep learning solutions are applied to different problems. One of such computer-aided systems could be a decision support tool able to accurately differentiate between different types of breast cancer histological images – normal tissue or carcinoma (benign, in situ or invasive). In this paper authors present a deep learning solution, based on convolutional capsule network, for classification of four types of images of breast tissue biopsy when hematoxylin and eosin staining is applied. The crossvalidation accuracy, averaged over four classes, was achieved to be 87 % with equally high sensitivity.",
"title": ""
},
{
"docid": "e1315cfdc9c1a33b7b871c130f34d6ce",
"text": "TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages, or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization.",
"title": ""
},
{
"docid": "aeb19f8f9c6e5068fc602682e4ae04d3",
"text": "Received: 29 November 2004 Revised: 26 July 2005 Accepted: 4 November 2005 Abstract Interpretive research in information systems (IS) is now a well-established part of the field. However, there is a need for more material on how to carry out such work from inception to publication. I published a paper a decade ago (Walsham, 1995) which addressed the nature of interpretive IS case studies and methods for doing such research. The current paper extends this earlier contribution, with a widened scope of all interpretive research in IS, and through further material on carrying out fieldwork, using theory and analysing data. In addition, new topics are discussed on constructing and justifying a research contribution, and on ethical issues and tensions in the conduct of interpretive work. The primary target audience for the paper is lessexperienced IS researchers, but I hope that the paper will also stimulate reflection for the more-experienced IS researcher and be of relevance to interpretive researchers in other social science fields. European Journal of Information Systems (2006) 15, 320–330. doi:10.1057/palgrave.ejis.3000589",
"title": ""
},
{
"docid": "6bfdd78045816085cd0fa5d8bb91fd18",
"text": "Contextual factors can greatly influence the users' preferences in listening to music. Although it is hard to capture these factors directly, it is possible to see their effects on the sequence of songs liked by the user in his/her current interaction with the system. In this paper, we present a context-aware music recommender system which infers contextual information based on the most recent sequence of songs liked by the user. Our approach mines the top frequent tags for songs from social tagging Web sites and uses topic modeling to determine a set of latent topics for each song, representing different contexts. Using a database of human-compiled playlists, each playlist is mapped into a sequence of topics and frequent sequential patterns are discovered among these topics. These patterns represent frequent sequences of transitions between the latent topics representing contexts. Given a sequence of songs in a user's current interaction, the discovered patterns are used to predict the next topic in the playlist. The predicted topics are then used to post-filter the initial ranking produced by a traditional recommendation algorithm. Our experimental evaluation suggests that our system can help produce better recommendations in comparison to a conventional recommender system based on collaborative or content-based filtering. Furthermore, the topic modeling approach proposed here is also useful in providing better insight into the underlying reasons for song selection and in applications such as playlist construction and context prediction.",
"title": ""
},
{
"docid": "638f7bf2f47895274995df166564ecc1",
"text": "In recent years, the video game market has embraced augmented reality video games, a class of video games that is set to grow as gaming technologies develop. Given the widespread use of video games among children and adolescents, the health implications of augmented reality technology must be closely examined. Augmented reality technology shows a potential for the promotion of healthy behaviors and social interaction among children. However, the full immersion and physical movement required in augmented reality video games may also put users at risk for physical and mental harm. Our review article and commentary emphasizes both the benefits and dangers of augmented reality video games for children and adolescents.",
"title": ""
},
{
"docid": "8924c1551030dc7e9aaf5611fd0a9ae2",
"text": "The term affordance describes an object’s utilitarian function or actionable possibilities. Product designers have taken great interest in the concept of affordances because of the bridge they provide relating to design, the interpretation of design and, ultimately, functionality in the hands of consumers. These concepts have been widely studied and applied in the field of psychology but have had limited formal application to packaging design and evaluation. We believe that the concepts related to affordances will reveal novel opportunities for packaging innovation. To catalyse this, presented work had the following objectives: (a) to propose a method by which packaging designers can purposefully consider affordances during the design process; (b) to explain this method in the context of a packaging-related case study; and (c) to measure the effect on package usability when an affordance-based design approach is employed. © 2014 The Authors. Packaging Technology and Science published by John Wiley & Sons Ltd.",
"title": ""
},
{
"docid": "8e077186aef0e7a4232eec0d8c73a5a2",
"text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fddadfbc6c1b34a8ac14f8973f052da5",
"text": "Abstract. Centroidal Voronoi tessellations are useful for subdividing a region in Euclidean space into Voronoi regions whose generators are also the centers of mass, with respect to a prescribed density function, of the regions. Their extensions to general spaces and sets are also available; for example, tessellations of surfaces in a Euclidean space may be considered. In this paper, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of an “energy.” Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. Computational examples are provided which serve to illustrate the high quality of CCVT point sets. Finally, CCVT point sets are applied to polynomial interpolation and numerical integration on the sphere.",
"title": ""
},
{
"docid": "bf272aa2413f1bc186149e814604fb03",
"text": "Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories.",
"title": ""
},
{
"docid": "7c901910ead6c3e4723803085b7495d5",
"text": "Lenneberg (1967) hypothesized that language could be acquired only within a critical period, extending from early infancy until puberty. In its basic form, the critical period hypothesis need only have consequences for first language acquisition. Nevertheless, it is essential to our understanding of the nature of the hypothesized critical period to determine whether or not it extends as well to second language acquisition. If so, it should be the case that young children are better second language learners than adults and should consequently reach higher levels of final proficiency in the second language. This prediction was tested by comparing the English proficiency attained by 46 native Korean or Chinese speakers who had arrived in the United States between the ages of 3 and 39, and who had lived in the United States between 3 and 26 years by the time of testing. These subjects were tested on a wide variety of structures of English grammar, using a grammaticality judgment task. Both correlational and t-test analyses demonstrated a clear and strong advantage for earlier arrivals over the later arrivals. Test performance was linearly related to age of arrival up to puberty; after puberty, performance was low but highly variable and unrelated to age of arrival. This age effect was shown not to be an inadvertent result of differences in amount of experience with English, motivation, self-consciousness, or American identification. The effect also appeared on every grammatical structure tested, although the structures varied markedly in the degree to which they were well mastered by later learners. The results support the conclusion that a critical period for language acquisition extends its effects to second language acquisition.",
"title": ""
},
{
"docid": "584e84ac1a061f1bf7945ab4cf54d950",
"text": "Paul White, PhD, MD§ Acupuncture has been used in China and other Asian countries for the past 3000 yr. Recently, this technique has been gaining increased popularity among physicians and patients in the United States. Even though acupuncture-induced analgesia is being used in many pain management programs in the United States, the mechanism of action remains unclear. Studies suggest that acupuncture and related techniques trigger a sequence of events that include the release of neurotransmitters, endogenous opioid-like substances, and activation of c-fos within the central nervous system. Recent developments in central nervous system imaging techniques allow scientists to better evaluate the chain of events that occur after acupuncture-induced stimulation. In this review article we examine current biophysiological and imaging studies that explore the mechanisms of acupuncture analgesia.",
"title": ""
},
{
"docid": "5c5225b5e66d49f17a881ed1843e944c",
"text": "The organic-inorganic hybrid perovskites methylammonium lead iodide (CH3NH3PbI3) and the partially chlorine-substituted mixed halide CH3NH3PbI3-xClx emit strong and broad photoluminescence (PL) around their band gap energy of ∼1.6 eV. However, the nature of the radiative decay channels behind the observed emission and, in particular, the spectral broadening mechanisms are still unclear. Here we investigate these processes for high-quality vapor-deposited films of CH3NH3PbI3-xClx using time- and excitation-energy dependent photoluminescence spectroscopy. We show that the PL spectrum is homogenously broadened with a line width of 103 meV most likely as a consequence of phonon coupling effects. Further analysis reveals that defects or trap states play a minor role in radiative decay channels. In terms of possible lasing applications, the emission spectrum of the perovskite is sufficiently broad to have potential for amplification of light pulses below 100 fs pulse duration.",
"title": ""
},
{
"docid": "9097bf29a9ad2b33919e0667d20bf6d7",
"text": "Object detection, though gaining popularity, has largely been limited to detection from the ground or from satellite imagery. Aerial images, where the target may be obfuscated from the environmental conditions, angle-of-attack, and zoom level, pose a more significant challenge to correctly detect targets in. This paper describes the implementation of a regional convolutional neural network to locate and classify objects across several categories in complex, aerial images. Our current results show promise in detecting and classifying objects. Further adjustments to the network and data input should increase the localization and classification accuracies.",
"title": ""
},
{
"docid": "08ee3e3191ac1b56b3c41e89df62d047",
"text": "This article presents a gesture recognition/adaptation system for human--computer interaction applications that goes beyond activity classification and that, as a complement to gesture labeling, characterizes the movement execution. We describe a template-based recognition method that simultaneously aligns the input gesture to the templates using a Sequential Monte Carlo inference technique. Contrary to standard template-based methods based on dynamic programming, such as Dynamic Time Warping, the algorithm has an adaptation process that tracks gesture variation in real time. The method continuously updates, during execution of the gesture, the estimated parameters and recognition results, which offers key advantages for continuous human--machine interaction. The technique is evaluated in several different ways: Recognition and early recognition are evaluated on 2D onscreen pen gestures; adaptation is assessed on synthetic data; and both early recognition and adaptation are evaluated in a user study involving 3D free-space gestures. The method is robust to noise, and successfully adapts to parameter variation. Moreover, it performs recognition as well as or better than nonadapting offline template-based methods.",
"title": ""
}
] | scidocsrr |
43800ddb4f124a9f1c20037d29855fd0 | Usability measurement for speech systems : SASSI revisited | [
{
"docid": "5750ebcfd885097aeeef66582380c286",
"text": "In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During these experiments, subjective judgments of quality have been collected by two questionnaire methods (ITU-T Rec. P.851 and SASSI), and parameters describing the interaction have been logged and annotated. Both metrics served the derivation of prediction models according to the PARADISE approach. Although the limited database allows only tentative conclusions to be drawn, the results suggest that both questionnaire methods provide valid measurements of a large number of different quality aspects; most of the perceptive dimensions underlying the subjective judgments can also be measured with a high reliability. The extracted parameters mainly describe quality aspects which are directly linked to the system, environmental and task characteristics. Used as an input to prediction models, the parameters provide helpful information for system design and optimization, but not general predictions of system usability and acceptability. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "106b7450136b9eafdddbaca5131be2f5",
"text": "This paper describes the main features of a low cost and compact Ka-band satcom terminal being developed within the ESA-project LOCOMO. The terminal will be compliant with all capacities associated with communication on the move supplying higher quality, better performance and faster speed services than the current available solutions in Ku band. The terminal will be based on a dual polarized low profile Ka-band antenna with TX and RX capabilities.",
"title": ""
},
{
"docid": "dd37e97635b0ded2751d64cafcaa1aa4",
"text": "DEVICES, AND STRUCTURES By S.E. Lyshevshi, CRC Press, 2002. This book is the first of the CRC Press “Nanoand Microscience, Engineering, Technology, and Medicine Series,” of which the author of this book is also the editor. This book could be a textbook of a semester course on microelectro mechanical systems (MEMS) and nanoelectromechanical systems (NEMS). The objective is to cover the topic from basic theory to the design and development of structures of practical devices and systems. The idea of MEMS and NEMS is to utilize and further extend the technology of integrated circuits (VLSI) to nanometer structures of mechanical and biological devices for potential applications in molecular biology and medicine. MEMS and NEMS (nanotechnology) are hot topics in the future development of electronics. The interest is not limited to electrical engineers. In fact, many scientists and researchers are interested in developing MEMS and NEMS for biological and medical applications. Thus, this field has attracted researchers from many different fields. Many new books are coming out. This book seems to be the first one aimed to be a textbook for this field, but it is very hard to write a book for readers with such different backgrounds. The author of this book has emphasized computer modeling, mostly due to his research interest in this field. It would be good to provide coverage on biological and medical MEMS, for example, by reporting a few gen or DNA-related cases. Furthermore, the mathematical modeling in term of a large number of nonlinear coupled differential equations, as used in many places in the book, does not appear to have any practical value to the actual physical structures.",
"title": ""
},
{
"docid": "4f3d2b869322125a8fad8a39726c99f8",
"text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).",
"title": ""
},
{
"docid": "3b0f5d827a58fc6077e7c304cd2d35b8",
"text": "BACKGROUND\nPatients suffering from depression experience significant mood, anxiety, and cognitive symptoms. Currently, most antidepressants work by altering neurotransmitter activity in the brain to improve these symptoms. However, in the last decade, research has revealed an extensive bidirectional communication network between the gastrointestinal tract and the central nervous system, referred to as the \"gut-brain axis.\" Advances in this field have linked psychiatric disorders to changes in the microbiome, making it a potential target for novel antidepressant treatments. The aim of this review is to analyze the current body of research assessing the effects of probiotics, on symptoms of depression in humans.\n\n\nMETHODS\nA systematic search of five databases was performed and study selection was completed using the preferred reporting items for systematic reviews and meta-analyses process.\n\n\nRESULTS\nTen studies met criteria and were analyzed for effects on mood, anxiety, and cognition. Five studies assessed mood symptoms, seven studies assessed anxiety symptoms, and three studies assessed cognition. The majority of the studies found positive results on all measures of depressive symptoms; however, the strain of probiotic, the dosing, and duration of treatment varied widely and no studies assessed sleep.\n\n\nCONCLUSION\nThe evidence for probiotics alleviating depressive symptoms is compelling but additional double-blind randomized control trials in clinical populations are warranted to further assess efficacy.",
"title": ""
},
{
"docid": "81cf3581955988c71b58e7a097ea00bd",
"text": "Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.",
"title": ""
},
{
"docid": "b6e62590995a41adb1128703060e0e2d",
"text": "Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in the arena of special education. Although 3D printing is beginning to infiltrate mainstream education, little to no research has explored 3D printing in the context of students with special support needs. We present a formative study exploring the use of 3D printing at three locations serving populations with varying ability, including individuals with cognitive, motor, and visual impairments. We found that 3D design and printing performs three functions in special education: developing 3D design and printing skills encourages STEM engagement; 3D printing can support the creation of educational aids for providing accessible curriculum content; and 3D printing can be used to create custom adaptive devices. In addition to providing opportunities to students, faculty, and caregivers in their efforts to integrate 3D printing in special education settings, our investigation also revealed several concerns and challenges. We present our investigation at three diverse sites as a case study of 3D printing in the realm of special education, discuss obstacles to efficient 3D printing in this context, and offer suggestions for designers and technologists.",
"title": ""
},
{
"docid": "95633e39a6f1dee70317edfc56e248f4",
"text": "We construct a deep portfolio theory. By building on Markowitz’s classic risk-return trade-off, we develop a self-contained four-step routine of encode, calibrate, validate and verify to formulate an automated and general portfolio selection process. At the heart of our algorithm are deep hierarchical compositions of portfolios constructed in the encoding step. The calibration step then provides multivariate payouts in the form of deep hierarchical portfolios that are designed to target a variety of objective functions. The validate step trades-off the amount of regularization used in the encode and calibrate steps. The verification step uses a cross validation approach to trace out an ex post deep portfolio efficient frontier. We demonstrate all four steps of our portfolio theory numerically.",
"title": ""
},
{
"docid": "25ad61565be2eb2490b3bbc03b196d09",
"text": "To reduce energy costs and emissions of microgrids, daily operation is critical. The problem is to commit and dispatch distributed devices with renewable generation to minimize the total energy and emission cost while meeting the forecasted energy demand. The problem is challenging because of the intermittent nature of renewables. In this paper, photovoltaic (PV) uncertainties are modeled by a Markovian process. For effective coordination, other devices are modeled as Markov processes with states depending on PV states. The entire problem is Markovian. This combinatorial problem is solved using branch-and-cut. Beyond energy and emission costs, to consider capital and maintenance costs in the long run, microgrid design is also essential. The problem is to decide device sizes with given types to minimize the lifetime cost while meeting energy demand. Its complexity increases exponentially with the problem size. To evaluate the lifetime cost including the reliability cost and the classic components such as capital and fuel costs, a linear model is established. By selecting a limited number of possible combinations of device sizes, exhaustive search is used to find the optimized design. The results show that the operation method is efficient in saving cost and scalable, and microgrids have lower lifetime costs than conventional energy systems. Implications for regulators and distribution utilities are also discussed.",
"title": ""
},
{
"docid": "b5a4b5b3e727dde52a9c858d3360a2e7",
"text": "Differential privacy is a recent framework for computation on sensitive data, which has shown considerable promise in the regime of large datasets. Stochastic gradient methods are a popular approach for learning in the data-rich regime because they are computationally tractable and scalable. In this paper, we derive differentially private versions of stochastic gradient descent, and test them empirically. Our results show that standard SGD experiences high variability due to differential privacy, but a moderate increase in the batch size can improve performance significantly.",
"title": ""
},
{
"docid": "924d833125453fa4c525df5f607724e1",
"text": "Strong stubborn sets have recently been analyzed and successfully applied as a pruning technique for planning as heuristic search. Strong stubborn sets are defined declaratively as constraints over operator sets. We show how these constraints can be relaxed to offer more freedom in choosing stubborn sets while maintaining the correctness and optimality of the approach. In general, many operator sets satisfy the definition of stubborn sets. We study different strategies for selecting among these possibilities and show that existing approaches can be considerably improved by rather simple strategies, eliminating most of the overhead of the previous",
"title": ""
},
{
"docid": "ec6d6d6f8dc3db0bdae42ee0173b1639",
"text": "AIMS\nWe investigated the population-level relationship between exposure to brand-specific advertising and brand-specific alcohol use among US youth.\n\n\nMETHODS\nWe conducted an internet survey of a national sample of 1031 youth, ages 13-20, who had consumed alcohol in the past 30 days. We ascertained all of the alcohol brands respondents consumed in the past 30 days, as well as which of 20 popular television shows they had viewed during that time period. Using a negative binomial regression model, we examined the relationship between aggregated brand-specific exposure to alcohol advertising on the 20 television shows [ad stock, measured in gross rating points (GRPs)] and youth brand-consumption prevalence, while controlling for the average price and overall market share of each brand.\n\n\nRESULTS\nBrands with advertising exposure on the 20 television shows had a consumption prevalence about four times higher than brands not advertising on those shows. Brand-level advertising elasticity of demand varied by exposure level, with higher elasticity in the lower exposure range. The estimated advertising elasticity of 0.63 in the lower exposure range indicates that for each 1% increase in advertising exposure, a brand's youth consumption prevalence increases by 0.63%.\n\n\nCONCLUSIONS\nAt the population level, underage youths' exposure to brand-specific advertising was a significant predictor of the consumption prevalence of that brand, independent of each brand's price and overall market share. The non-linearity of the observed relationship suggests that youth advertising exposure may need to be lowered substantially in order to decrease consumption of the most heavily advertised brands.",
"title": ""
},
{
"docid": "6de2b5fa5c8d3db9f9d599b6ebb56782",
"text": "Extreme sensitivity of soil organic carbon (SOC) to climate and land use change warrants further research in different terrestrial ecosystems. The aim of this study was to investigate the link between aggregate and SOC dynamics in a chronosequence of three different land uses of a south Chilean Andisol: a second growth Nothofagus obliqua forest (SGFOR), a grassland (GRASS) and a Pinus radiataplantation (PINUS). Total carbon content of the 0–10 cm soil layer was higher for GRASS (6.7 kg C m −2) than for PINUS (4.3 kg C m−2), while TC content of SGFOR (5.8 kg C m−2) was not significantly different from either one. High extractable oxalate and pyrophosphate Al concentrations (varying from 20.3–24.4 g kg −1, and 3.9– 11.1 g kg−1, respectively) were found in all sites. In this study, SOC and aggregate dynamics were studied using size and density fractionation experiments of the SOC, δ13C and total carbon analysis of the different SOC fractions, and C mineralization experiments. The results showed that electrostatic sorption between and among amorphous Al components and clay minerals is mainly responsible for the formation of metal-humus-clay complexes and the stabilization of soil aggregates. The process of ligand exchange between SOC and Al would be of minor importance resulting in the absence of aggregate hierarchy in this soil type. Whole soil C mineralization rate constants were highest for SGFOR and PINUS, followed by GRASS (respectively 0.495, 0.266 and 0.196 g CO 2-C m−2 d−1 for the top soil layer). In contrast, incubation experiments of isolated macro organic matter fractions gave opposite results, showing that the recalcitrance of the SOC decreased in another order: PINUS>SGFOR>GRASS. We deduced that electrostatic sorption processes and physical protection of SOC in soil aggregates were the main processes determining SOC stabilization. As a result, high aggregate carbon concentraCorrespondence to: D. Huygens ([email protected]) tions, varying from 148 till 48 g kg −1, were encountered for all land use sites. Al availability and electrostatic charges are dependent on pH, resulting in an important influence of soil pH on aggregate stability. Recalcitrance of the SOC did not appear to largely affect SOC stabilization. Statistical correlations between extractable amorphous Al contents, aggregate stability and C mineralization rate constants were encountered, supporting this hypothesis. Land use changes affected SOC dynamics and aggregate stability by modifying soil pH (and thus electrostatic charges and available Al content), root SOC input and management practices (such as ploughing and accompanying drying of the soil).",
"title": ""
},
{
"docid": "59a4bf897006a0bcadb562ff6446e4e5",
"text": "As the number and variety of cyber threats increase, it becomes more critical to share intelligence information in a fast and efficient manner. However, current cyber threat intelligence data do not contain sufficient information about how to specify countermeasures or how institutions should apply countermeasures automatically on their networks. A flexible and agile network architecture is required in order to determine and deploy countermeasures quickly. Software-defined networks facilitate timely application of cyber security measures thanks to their programmability. In this work, we propose a novel model for producing software-defined networking-based solutions against cyber threats and configuring networks automatically using risk analysis. We have developed a prototype implementation of the proposed model and demonstrated the applicability of the model. Furthermore, we have identified and presented future research directions in this area.",
"title": ""
},
{
"docid": "db55d7b7e0185d872b27c89c3892a289",
"text": "Bitcoin relies on the Unspent Transaction Outputs (UTXO) set to efficiently verify new generated transactions. Every unspent output, no matter its type, age, value or length is stored in every full node. In this paper we introduce a tool to study and analyze the UTXO set, along with a detailed description of the set format and functionality. Our analysis includes a general view of the set and quantifies the difference between the two existing formats up to the date. We also provide an accurate analysis of the volume of dust and unprofitable outputs included in the set, the distribution of the block height in which the outputs where included, and the use of non-standard outputs.",
"title": ""
},
{
"docid": "b34485c65c4e6780166ea0af5f13c08a",
"text": "The rise of the Internet of Things (IoT) and the recent focus on a gamut of 'Smart City' initiatives world-wide have pushed for new advances in data stream systems to (1) support complex analytics and evolving graph applications as continuous queries, and (2) deliver fast and scalable processing on large data streams. Unfortunately current continuous query languages (CQL) lack the features and constructs needed to support the more advanced applications. For example recursive queries are now part of SQL, Datalog, and other query languages, but they are not supported by most CQLs, a fact that caused a significant loss of expressive power, which is further aggravated by the limitation that only non-blocking queries can be supported. To overcome these limitations we have developed an a dvanced st ream r easo ning system ASTRO that builds on recent advances in supporting aggregates in recursive queries. In this demo, we will briefly elucidate the formal Streamlog semantics, which combined with the Pre-Mappability (PreM) concept, allows the declarative specification of many complex continuous queries, which are then efficiently executed in real-time by the portable ASTRO architecture. Using different case studies, we demonstrate (i) the ease-of-use, (ii) the expressive power and (iii) the robustness of our system, as compared to other state-of-the-art declarative CQL systems.",
"title": ""
},
{
"docid": "afb3098f38a8a3f0daad4d9e0e314ca2",
"text": "We have developed a genetic approach to visualize axons from olfactory sensory neurons expressing a given odorant receptor, as they project to the olfactory bulb. Neurons expressing a specific receptor project to only two topographically fixed loci among the 1800 glomeruli in the mouse olfactory bulb. Our data provide direct support for a model in which a topographic map of receptor activation encodes odor quality in the olfactory bulb. Receptor swap experiments suggest that the olfactory receptor plays an instructive role in the guidance process but cannot be the sole determinant in the establishment of this map. This genetic approach may be more broadly applied to visualize the development and plasticity of projections in the mammalian nervous system.",
"title": ""
},
{
"docid": "b25e35dd703d19860bbbd8f92d80bd26",
"text": "Business analytics (BA) systems are an important strategic investment for many organisations and can potentially contribute significantly to firm performance. Establishing strong BA capabilities is currently one of the major concerns of chief information officers. This research project aims to develop a BA capability maturity model (BACMM). The BACMM will help organisations to scope and evaluate their BA initiatives. This research-in-progress paper describes the current BACMM, relates it to existing capability maturity models and explains its theoretical base. It also discusses the design science research approach being used to develop the BACMM and provides details of further work within the research project. Finally, the paper concludes with a discussion of how the BACMM might be used in practice.",
"title": ""
},
{
"docid": "c26339c1a74de4797096d2ea58e60f25",
"text": "Existing systems deliver high accuracy and F1-scores for detecting paraphrase and semantic similarity on traditional clean-text corpus. For instance, on the clean-text Microsoft Paraphrase benchmark database, the existing systems attain an accuracy as high as 0.8596. However, existing systems for detecting paraphrases and semantic similarity on user-generated short-text content on microblogs such as Twitter, comprising of noisy and ad hoc short-text, needs significant research attention. In this paper, we propose a machine learning based approach towards this. We propose a set of features that, although well-known in the NLP literature for solving other problems, have not been explored for detecting paraphrase or semantic similarity, on noisy user-generated short-text data such as Twitter. We apply support vector machine (SVM) based learning. We use the benchmark Twitter paraphrase data, released as a part of SemEval 2015, for experiments. Our system delivers a paraphrase detection F1-score of 0.717 and semantic similarity detection F1-score of 0.741, thereby significantly outperforming the existing systems, that deliver F1-scores of 0.696 and 0.724 for the two problems respectively. Our features also allow us to obtain a rank among the top-10, when trained on the Microsoft Paraphrase corpus and tested on the corresponding test data, thereby empirically establishing our approach as ubiquitous across the different paraphrase detection databases.",
"title": ""
},
{
"docid": "bbd378407abb1c2a9a5016afee40c385",
"text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.",
"title": ""
},
{
"docid": "a57e470ad16c025f6b0aae99de25f498",
"text": "Purpose To establish the efficacy and safety of botulinum toxin in the treatment of Crocodile Tear Syndrome and record any possible complications.Methods Four patients with unilateral aberrant VII cranial nerve regeneration following an episode of facial paralysis consented to be included in this study after a comprehensive explanation of the procedure and possible complications was given. On average, an injection of 20 units of botulinum toxin type A (Dysport®) was given to the affected lacrimal gland. The effect was assessed with a Schirmer’s test during taste stimulation. Careful recording of the duration of the effect and the presence of any local or systemic complications was made.Results All patients reported a partial or complete disappearance of the reflex hyperlacrimation following treatment. Schirmer’s tests during taste stimulation documented a significant decrease in tear secretion. The onset of effect of the botulinum toxin was typically 24–48 h after the initial injection and lasted 4–5 months. One patient had a mild increase in his preexisting upper lid ptosis, but no other local or systemic side effects were experienced.Conclusions The injection of botulinum toxin type A into the affected lacrimal glands of patients with gusto-lacrimal reflex is a simple, effective and safe treatment.",
"title": ""
}
] | scidocsrr |
89614a6ddc0d9dedd24685c5b6a1164b | Short-term load forecasting in smart grid: A combined CNN and K-means clustering approach | [
{
"docid": "f9b56de3658ef90b611c78bdb787d85b",
"text": "Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.",
"title": ""
},
{
"docid": "0254d49cb759e163a032b6557f969bd3",
"text": "The smart electricity grid enables a two-way flow of power and data between suppliers and consumers in order to facilitate the power flow optimization in terms of economic efficiency, reliability and sustainability. This infrastructure permits the consumers and the micro-energy producers to take a more active role in the electricity market and the dynamic energy management (DEM). The most important challenge in a smart grid (SG) is how to take advantage of the users’ participation in order to reduce the cost of power. However, effective DEM depends critically on load and renewable production forecasting. This calls for intelligent methods and solutions for the real-time exploitation of the large volumes of data generated by a vast amount of smart meters. Hence, robust data analytics, high performance computing, efficient data network management, and cloud computing techniques are critical towards the optimized operation of SGs. This research aims to highlight the big data issues and challenges faced by the DEM employed in SG networks. It also provides a brief description of the most commonly used data processing methods in the literature, and proposes a promising direction for future research in the field.",
"title": ""
},
{
"docid": "26032527ca18ef5a8cdeff7988c6389c",
"text": "This paper aims to develop a load forecasting method for short-term load forecasting, based on an adaptive two-stage hybrid network with self-organized map (SOM) and support vector machine (SVM). In the first stage, a SOM network is applied to cluster the input data set into several subsets in an unsupervised manner. Then, groups of 24 SVMs for the next day's load profile are used to fit the training data of each subset in the second stage in a supervised way. The proposed structure is robust with different data types and can deal well with the nonstationarity of load series. In particular, our method has the ability to adapt to different models automatically for the regular days and anomalous days at the same time. With the trained network, we can straightforwardly predict the next-day hourly electricity load. To confirm the effectiveness, the proposed model has been trained and tested on the data of the historical energy load from New York Independent System Operator.",
"title": ""
}
] | [
{
"docid": "1232e633a941b7aa8cccb28287b56e5b",
"text": "This paper presents a complete system for constructing panoramic image mosaics from sequences of images. Our mosaic representation associates a transformation matrix with each input image, rather than explicitly projecting all of the images onto a common surface (e.g., a cylinder). In particular, to construct a full view panorama, we introduce a rotational mosaic representation that associates a rotation matrix (and optionally a focal length) with each input image. A patch-based alignment algorithm is developed to quickly align two images given motion models. Techniques for estimating and refining camera focal lengths are also presented. In order to reduce accumulated registration errors, we apply global alignment (block adjustment) to the whole sequence of images, which results in an optimally registered image mosaic. To compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions, we use a local alignment (deghosting) technique which warps each image based on the results of pairwise local image registrations. By combining both global and local alignment, we significantly improve the quality of our image mosaics, thereby enabling the creation of full view panoramic mosaics with hand-held cameras. We also present an inverse texture mapping algorithm for efficiently extracting environment maps from our panoramic image mosaics. By mapping the mosaic onto an arbitrary texture-mapped polyhedron surrounding the origin, we can explore the virtual environment using standard 3D graphics viewers and hardware without requiring special-purpose players.",
"title": ""
},
{
"docid": "8ee0764d45e512bfc6b0273f7e90d2c1",
"text": "This work introduces a new dataset and framework for the exploration of topological data analysis (TDA) techniques applied to time-series data. We examine the end-toend TDA processing pipeline for persistent homology applied to time-delay embeddings of time series – embeddings that capture the underlying system dynamics from which time series data is acquired. In particular, we consider stability with respect to time series length, the approximation accuracy of sparse filtration methods, and the discriminating ability of persistence diagrams as a feature for learning. We explore these properties across a wide range of time-series datasets spanning multiple domains for single source multi-segment signals as well as multi-source single segment signals. Our analysis and dataset captures the entire TDA processing pipeline and includes time-delay embeddings, persistence diagrams, topological distance measures, as well as kernels for similarity learning and classification tasks for a broad set of time-series data sources. We outline the TDA framework and rationale behind the dataset and provide insights into the role of TDA for time-series analysis as well as opportunities for new work.",
"title": ""
},
{
"docid": "75f8f0d89bdb5067910a92553275b0d7",
"text": "It is well known that recognition performance degrades signi cantly when moving from a speakerdependent to a speaker-independent system. Traditional hidden Markov model (HMM) systems have successfully applied speaker-adaptation approaches to reduce this degradation. In this paper we present and evaluate some techniques for speaker-adaptation of a hybrid HMM-arti cial neural network (ANN) continuous speech recognition system. These techniques are applied to a well trained, speaker-independent, hybrid HMM-ANN system and the recognizer parameters are adapted to a new speaker through o -line procedures. The techniques are evaluated on the DARPA RM corpus using varying amounts of adaptation material and different ANN architectures. The results show that speaker-adaptation within the hybrid framework can substantially improve system performance.",
"title": ""
},
{
"docid": "cfeb97a848766269c2088d8191206cc8",
"text": "We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.",
"title": ""
},
{
"docid": "c67ffe3dfa6f0fe0449f13f1feb20300",
"text": "The associations between giving a history of physical, emotional, and sexual abuse in children and a range of mental health, interpersonal, and sexual problems in adult life were examined in a community sample of women. Abuse was defined to establish groups giving histories of unequivocal victimization. A history of any form of abuse was associated with increased rates of psychopathology, sexual difficulties, decreased self-esteem, and interpersonal problems. The similarities between the three forms of abuse in terms of their association with negative adult outcomes was more apparent than any differences, though there was a trend for sexual abuse to be particularly associated to sexual problems, emotional abuse to low self-esteem, and physical abuse to marital breakdown. Abuse of all types was more frequent in those from disturbed and disrupted family backgrounds. The background factors associated with reports of abuse were themselves often associated to the same range of negative adult outcomes as for abuse. Logistic regressions indicated that some, though not all, of the apparent associations between abuse and adult problems was accounted for by this matrix of childhood disadvantage from which abuse so often emerged.",
"title": ""
},
{
"docid": "a5e01cfeb798d091dd3f2af1a738885b",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "a37aae87354ff25bf7937adc7a9f8e62",
"text": "Vectorizing hand-drawn sketches is an important but challenging task. Many businesses rely on fashion, mechanical or structural designs which, sooner or later, need to be converted in vectorial form. For most, this is still a task done manually. This paper proposes a complete framework that automatically transforms noisy and complex hand-drawn sketches with different stroke types in a precise, reliable and highly-simplified vectorized model. The proposed framework includes a novel line extraction algorithm based on a multi-resolution application of Pearson’s cross correlation and a new unbiased thinning algorithm that can get rid of scribbles and variable-width strokes to obtain clean 1-pixel lines. Other contributions include variants of pruning, merging and edge linking procedures to post-process the obtained paths. Finally, a modification of the original Schneider’s vectorization algorithm is designed to obtain fewer control points in the resulting Bézier splines. All the steps presented in this framework have been extensively tested and compared with state-of-the-art algorithms, showing (both qualitatively and quantitatively) their outperformance. Moreover they exhibit fast real-time performance, making them suitable for integration in any computer graphics toolset.",
"title": ""
},
{
"docid": "b17e909f1301880e93797ed75d26ce57",
"text": "We propose a simple, yet effective, Word Sense Disambiguation method that uses a combination of a lexical knowledge-base and embeddings. Similar to the classic Lesk algorithm, it exploits the idea that overlap between the context of a word and the definition of its senses provides information on its meaning. Instead of counting the number of words that overlap, we use embeddings to compute the similarity between the gloss of a sense and the context. Evaluation on both Dutch and English datasets shows that our method outperforms other Lesk methods and improves upon a state-of-theart knowledge-based system. Additional experiments confirm the effect of the use of glosses and indicate that our approach works well in different domains.",
"title": ""
},
{
"docid": "c1d75b9a71f373a6e44526adf3694f37",
"text": "Segmentation means segregating area of interest from the image. The aim of image segmentation is to cluster the pixels into salient image regions i.e. regions corresponding to individual surfaces, objects, or natural parts of objects. Automatic Brain tumour segmentation is a sensitive step in medical field. A significant medical informatics task is to perform the indexing of the patient databases according to image location, size and other characteristics of brain tumours based on magnetic resonance (MR) imagery. This requires segmenting tumours from different MR imaging modalities. Automated brain tumour segmentation from MR modalities is a challenging, computationally intensive task.Image segmentation plays an important role in image processing. MRI is generally more useful for brain tumour detection because it provides more detailed information about its type, position and size. For this reason, MRI imaging is the choice of study for the diagnostic purpose and, thereafter, for surgery and monitoring treatment outcomes. This paper presents a review of the various methods used in brain MRI image segmentation. The review covers imaging modalities, magnetic resonance imaging and methods for segmentation approaches. The paper concludes with a discussion on the upcoming trend of advanced researches in brain image segmentation. Keywords-Region growing, Level set method, Split and merge algorithm, MRI images",
"title": ""
},
{
"docid": "51c0d682dd0d9c24e23696ba09dc4f49",
"text": "Graph embedding methods represent nodes in a continuous vector space, preserving information from the graph (e.g. by sampling random walks). There are many hyper-parameters to these methods (such as random walk length) which have to be manually tuned for every graph. In this paper, we replace random walk hyperparameters with trainable parameters that we automatically learn via backpropagation. In particular, we learn a novel attention model on the power series of the transition matrix, which guides the random walk to optimize an upstream objective. Unlike previous approaches to attention models, the method that we propose utilizes attention parameters exclusively on the data (e.g. on the random walk), and not used by the model for inference. We experiment on link prediction tasks, as we aim to produce embeddings that best-preserve the graph structure, generalizing to unseen information. We improve state-of-the-art on a comprehensive suite of real world datasets including social, collaboration, and biological networks. Adding attention to random walks can reduce the error by 20% to 45% on datasets we attempted. Further, our learned attention parameters are different for every graph, and our automatically-found values agree with the optimal choice of hyper-parameter if we manually tune existing methods.",
"title": ""
},
{
"docid": "e45e49fb299659e2e71f5c4eb825aff6",
"text": "We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledgebase. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the HDRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft.",
"title": ""
},
{
"docid": "19a02cb59a50f247663acc77b768d7ec",
"text": "Machine learning is a useful technology for decision support systems and assumes greater importance in research and practice. Whilst much of the work focuses technical implementations and the adaption of machine learning algorithms to application domains, the factors of machine learning design affecting the usefulness of decision support are still understudied. To enhance the understanding of machine learning and its use in decision support systems, we report the results of our content analysis of design-oriented research published between 1994 and 2013 in major Information Systems outlets. The findings suggest that the usefulness of machine learning for supporting decision-makers is dependent on the task, the phase of decision-making, and the applied technologies. We also report about the advantages and limitations of prior research, the applied evaluation methods and implications for future decision support research. Our findings suggest that future decision support research should shed more light on organizational and people-related evaluation criteria.",
"title": ""
},
{
"docid": "a90dd405d9bd2ed912cacee098c0f9db",
"text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.",
"title": ""
},
{
"docid": "9b53d96025c26254b38a4325c9d2da15",
"text": "The parameter spaces of hierarchical systems such as multilayer perceptrons include singularities due to the symmetry and degeneration of hidden units. A parameter space forms a geometrical manifold, called the neuromanifold in the case of neural networks. Such a model is identified with a statistical model, and a Riemannian metric is given by the Fisher information matrix. However, the matrix degenerates at singularities. Such a singular structure is ubiquitous not only in multilayer perceptrons but also in the gaussian mixture probability densities, ARMA time-series model, and many other cases. The standard statistical paradigm of the Cramr-Rao theorem does not hold, and the singularity gives rise to strange behaviors in parameter estimation, hypothesis testing, Bayesian inference, model selection, and in particular, the dynamics of learning from examples. Prevailing theories so far have not paid much attention to the problem caused by singularity, relying only on ordinary statistical theories developed for regular (nonsingular) models. Only recently have researchers remarked on the effects of singularity, and theories are now being developed. This article gives an overview of the phenomena caused by the singularities of statistical manifolds related to multilayer perceptrons and gaussian mixtures. We demonstrate our recent results on these problems. Simple toy models are also used to show explicit solutions. We explain that the maximum likelihood estimator is no longer subject to the gaussian distribution even asymptotically, because the Fisher information matrix degenerates, that the model selection criteria such as AIC, BIC, and MDL fail to hold in these models, that a smooth Bayesian prior becomes singular in such models, and that the trajectories of dynamics of learning are strongly affected by the singularity, causing plateaus or slow manifolds in the parameter space. The natural gradient method is shown to perform well because it takes the singular geometrical structure into account. The generalization error and the training error are studied in some examples.",
"title": ""
},
{
"docid": "06c0ee8d139afd11aab1cc0883a57a68",
"text": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.",
"title": ""
},
{
"docid": "af89b3636290235e0b241c6cced2a336",
"text": "Assume we were to come up with a family of distributions parameterized by θ in order to approximate the posterior, qθ(ω). Our goal is to set θ such that qθ(ω) is as similar to the true posterior p(ω|D) as possible. For clarity, qθ(ω) is a distribution over stochastic parameters ω that is determined by a set of learnable parameters θ and some source of randomness. The approximation is therefore limited by our choice of parametric function qθ(ω) as well as the randomness.1 Given ω and an input x, an output distribution p(y|x,ω) = p(y|fω(x)) = fω(x,y) is induced by observation noise (the conditionality of which is omitted for brevity).",
"title": ""
},
{
"docid": "5ffb3e630e5f020365e471e94d678cbb",
"text": "This paper presents one perspective on recent developments related to software engineering in the industrial automation sector that spans from manufacturing factory automation to process control systems and energy automation systems. The survey's methodology is based on the classic SWEBOK reference document that comprehensively defines the taxonomy of software engineering domain. This is mixed with classic automation artefacts, such as the set of the most influential international standards and dominating industrial practices. The survey focuses mainly on research publications which are believed to be representative of advanced industrial practices as well.",
"title": ""
},
{
"docid": "56fb6fe1f6999b5d7a9dab19e8b877ef",
"text": "Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images.",
"title": ""
},
{
"docid": "e6e91ce66120af510e24a10dee6d64b7",
"text": "AI plays an increasingly prominent role in society since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of deciding bank loans, criminals’ incarceration, and the hiring of new employees, and it’s not difficult to envision that they will in the future underpin most of the decisions in society. Despite the high complexity entailed by this task, there is still not much understanding of basic properties of such systems. For instance, we currently cannot detect (neither explain nor correct) whether an AI system is operating fairly (i.e., is abiding by the decision-constraints agreed by society) or it is reinforcing biases and perpetuating a preceding prejudicial practice. Issues of discrimination have been discussed extensively in legal circles, but there exists still not much understanding of the formal conditions that a system must adhere to be deemed fair. In this paper, we use the language of structural causality (Pearl, 2000) to fill in this gap. We start by introducing three new fine-grained measures of transmission of change from stimulus to effect, which we called counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. We then derive the causal explanation formula, which allows the AI designer to quantitatively evaluate fairness and explain the total observed disparity of decisions through different discriminatory mechanisms. We apply these results to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. We conclude studying the trade-off between different types of fairness criteria (outcome and procedural), and provide a quantitative approach to policy implementation and the design of fair decision-making systems.",
"title": ""
}
] | scidocsrr |
1fd05aeb40bf9b7817dcce44a15a0459 | Understanding Cybercrime | [
{
"docid": "090887c325fa3bf3ed928011f6b14c72",
"text": "R apid advances in electronic networks and computerbased information systems have given us enormous capabilities to process, store, and transmit digital data in most business sectors. This has transformed the way we conduct trade, deliver government services, and provide health care. Changes in communication and information technologies and particularly their confluence has raised a number of concerns connected with the protection of organizational information assets. Achieving consensus regarding safeguards for an information system, among different stakeholders in an organization, has become more difficult than solving many technical problems that might arise. This “Technical Opinion” focuses on understanding the nature of information security in the next millennium. Based on this understanding it suggests a set of principles that would help in managing information security in the future.",
"title": ""
}
] | [
{
"docid": "b99c42f412408610e1bfd414f4ea6b9f",
"text": "ADPfusion combines the usual high-level, terse notation of Haskell with an underlying fusion framework. The result is a parsing library that allows the user to write algorithms in a style very close to the notation used in formal languages and reap the performance benefits of automatic program fusion. Recent developments in natural language processing and computational biology have lead to a number of works that implement algorithms that process more than one input at the same time. We provide an extension of ADPfusion that works on extended index spaces and multiple input sequences, thereby increasing the number of algorithms that are amenable to implementation in our framework. This allows us to implement even complex algorithms with a minimum of overhead, while enjoying all the guarantees that algebraic dynamic programming provides to the user.",
"title": ""
},
{
"docid": "016f1963dc657a6148afc7f067ad7c82",
"text": "Privacy protection is a crucial problem in many biomedical signal processing applications. For this reason, particular attention has been given to the use of secure multiparty computation techniques for processing biomedical signals, whereby nontrusted parties are able to manipulate the signals although they are encrypted. This paper focuses on the development of a privacy preserving automatic diagnosis system whereby a remote server classifies a biomedical signal provided by the client without getting any information about the signal itself and the final result of the classification. Specifically, we present and compare two methods for the secure classification of electrocardiogram (ECG) signals: the former based on linear branching programs (a particular kind of decision tree) and the latter relying on neural networks. The paper deals with all the requirements and difficulties related to working with data that must stay encrypted during all the computation steps, including the necessity of working with fixed point arithmetic with no truncation while guaranteeing the same performance of a floating point implementation in the plain domain. A highly efficient version of the underlying cryptographic primitives is used, ensuring a good efficiency of the two proposed methods, from both a communication and computational complexity perspectives. The proposed systems prove that carrying out complex tasks like ECG classification in the encrypted domain efficiently is indeed possible in the semihonest model, paving the way to interesting future applications wherein privacy of signal owners is protected by applying high security standards.",
"title": ""
},
{
"docid": "143a4fcc0f2949e797e6f51899e811e2",
"text": "A long-standing problem at the interface of artificial intelligence and applied mathematics is to devise an algorithm capable of achieving human level or even superhuman proficiency in transforming observed data into predictive mathematical models of the physical world. In the current era of abundance of data and advanced machine learning capabilities, the natural question arises: How can we automatically uncover the underlying laws of physics from high-dimensional data generated from experiments? In this work, we put forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time. Specifically, we approximate the unknown solution as well as the nonlinear dynamics by two deep neural networks. The first network acts as a prior on the unknown solution and essentially enables us to avoid numerical differentiations which are inherently ill-conditioned and unstable. The second network represents the nonlinear dynamics and helps us distill the mechanisms that govern the evolution of a given spatiotemporal data-set. We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers’, Kortewegde Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and NavierStokes equations.",
"title": ""
},
{
"docid": "3c548cf1888197545dc8b9cee100039a",
"text": "Williams syndrome is caused by a microdeletion of at least 16 genes on chromosome 7q11.23. The syndrome results in mild to moderate mental retardation or learning disability. The behavioral phenotype for Williams syndrome is characterized by a distinctive cognitive profile and an unusual personality profile. Relative to overall level of intellectual ability, individuals with Williams syndrome typically show a clear strength in auditory rote memory, a strength in language, and an extreme weakness in visuospatial construction. The personality of individuals with Williams syndrome involves high sociability, overfriendliness, and empathy, with an undercurrent of anxiety related to social situations. The adaptive behavior profile for Williams syndrome involves clear strength in socialization skills (especially interpersonal skills related to initiating social interaction), strength in communication, and clear weakness in daily living skills and motor skills, relative to overall level of adaptive behavior functioning. Literature relevant to each of the components of the Williams syndrome behavioral phenotype is reviewed, including operationalizations of the Williams syndrome cognitive profile and the Williams syndrome personality profile. The sensitivity and specificity of these profiles for Williams syndrome, relative to individuals with other syndromes or mental retardation or borderline normal intelligence of unknown etiology, is considered. The adaptive behavior profile is discussed in relation to the cognitive and personality profiles. The importance of operationalizations of crucial components of the behavioral phenotype for the study of genotype/phenotype correlations in Williams syndrome is stressed. MRDD Research Reviews 2000;6:148-158.",
"title": ""
},
{
"docid": "a3a1fbfc6db97824132107d2f56386e6",
"text": "There is indirect evidence that heightened exposure to early androgen may increase the probability that a girl will develop a homosexual orientation in adulthood. One such putative marker of early androgen exposure is the ratio of the length of the index finger (2D) to the ring finger (4D), which is smaller in male humans than in females, and is smaller in lesbians than in heterosexual women. Yet there is also evidence that women may have different sexual orientations at different times in their lives, which suggests that other influences on female sexual orientation, presumably social, are at work as well. We surveyed individuals from a gay pride street fair and found that lesbians who identified themselves as \"butch\" had a significantly smaller 2D:4D than did those who identified themselves as \"femme.\" We conclude that increased early androgen exposure plays a role in only some cases of female homosexuality, and that the sexual orientation of \"femme\" lesbians is unlikely to have been influenced by early androgens.",
"title": ""
},
{
"docid": "1856090b401a304f1172c2958d05d6b3",
"text": "The Iranian government operates one of the largest and most sophisticated Internet censorship regimes in the world, but the mechanisms it employs have received little research attention, primarily due to lack of access to network connections within the country and personal risks to Iranian citizens who take part. In this paper, we examine the status of Internet censorship in Iran based on network measurements conducted from a major Iranian ISP during the lead up to the June 2013 presidential election. We measure the scope of the censorship by probing Alexa’s top 500 websites in 18 different categories. We investigate the technical mechanisms used for HTTP Host–based blocking, keyword filtering, DNS hijacking, and protocol-based throttling. Finally, we map the network topology of the censorship infrastructure and find evidence that it relies heavily on centralized equipment, a property that might be fruitfully exploited by next generation approaches to censorship circumvention.",
"title": ""
},
{
"docid": "d75f15a8b7eff7e74b95abb59cc488cc",
"text": "Up to recently autonomous mobile robots were mostly designed to run within an indoor, yet partly structured and flat, environment. In rough terrain many problems arise and position tracking becomes more difficult. The robot has to deal with wheel slippage and large orientation changes. In this paper we will first present the recent developments on the off-road rover Shrimp. Then a new method, called 3D-Odometry, which extends the standard 2D odometry to the 3D space will be developed. Since it accounts for transitions, the 3D-Odometry provides better position estimates. It will certainly help to go towards real 3D navigation for outdoor robots.",
"title": ""
},
{
"docid": "20d1cb8d2f416c1dc07e5a34c2ec43ba",
"text": "Significant research and development of algorithms in intelligent transportation has grabbed more attention in recent years. An automated, fast, accurate and robust vehicle plate recognition system has become need for traffic control and law enforcement of traffic regulations; and the solution is ANPR. This paper is dedicated on an improved technique of OCR based license plate recognition using neural network trained dataset of object features. A blended algorithm for recognition of license plate is proposed and is compared with existing methods for improve accuracy. The whole system can be categorized under three major modules, namely License Plate Localization, Plate Character Segmentation, and Plate Character Recognition. The system is simulated on 300 national and international motor vehicle LP images and results obtained justifies the main requirement.",
"title": ""
},
{
"docid": "a5f9b7b7b25ccc397acde105c39c3d9d",
"text": "Processors with multiple cores and complex cache coherence protocols are widely employed to improve the overall performance. It is a major challenge to verify the correctness of a cache coherence protocol since the number of reachable states grows exponentially with the number of cores. In this paper, we propose an efficient test generation technique, which can be used to achieve full state and transition coverage in simulation based verification for a wide variety of cache coherence protocols. Based on effective analysis of the state space structure, our method can generate more efficient test sequences (50% shorter) compared with tests generated by breadth first search. Moreover, our proposed approach can generate tests on-the-fly due to its space efficient design.",
"title": ""
},
{
"docid": "16fb241da0ca43b3cc2769332bebff97",
"text": "The safety of meat has been at the forefront of societal concerns in recent years, and indications exist that challenges to meat safety will continue in the future. Major meat safety issues and related challenges include the need to control traditional as well as \"new,\" \"emerging,\" or \"evolving\" pathogenic microorganisms, which may be of increased virulence and low infectious doses, or of resistance to antibiotics or food related stresses. Other microbial pathogen related concerns include cross-contamination of other foods and water with enteric pathogens of animal origin, meat animal manure treatment and disposal issues, foodborne illness surveillance and food attribution activities, and potential use of food safety programs at the farm. Other issues and challenges include food additives and chemical residues, animal identification and traceability issues, the safety and quality of organic and natural products, the need for and development of improved and rapid testing and pathogen detection methodologies for laboratory and field use, regulatory and inspection harmonization issues at the national and international level, determination of responsibilities for zoonotic diseases between animal health and regulatory public health agencies, establishment of risk assessment based food safety objectives, and complete and routine implementation of HACCP at the production and processing level on the basis of food handler training and consumer education. Viral pathogens will continue to be of concern at food service, bacterial pathogens such as Escherichia coli O157:H7, Salmonella and Campylobacter will continue affecting the safety of raw meat and poultry, while Listeria monocytogenes will be of concern in ready-to-eat processed products. These challenges become more important due to changes in animal production, product processing and distribution; increased international trade; changing consumer needs and increased preference for minimally processed products; increased worldwide meat consumption; higher numbers of consumers at-risk for infection; and, increased interest, awareness and scrutiny by consumers, news media, and consumer activist groups. Issues such as bovine sponginform encephalopathy will continue to be of interest mostly as a target for eradication, while viral agents affecting food animals, such as avian influenza, will always need attention for prevention or containment.",
"title": ""
},
{
"docid": "ba4d30e7ea09d84f8f7d96c426e50f34",
"text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.",
"title": ""
},
{
"docid": "56c7c065c390d1ed5f454f663289788d",
"text": "This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76% and the accuracy of 95.30% for character identification.",
"title": ""
},
{
"docid": "b3bda9c0a0ec22c5d244f8c538ab6056",
"text": "Knowledge assets represent a special set of resources for a firm and as such, their management is of great importance to academics and managers. The purpose of this paper is to review the literature as it pertains to knowledge assets and provide a suggested model for intellectual capital management that can be of benefit to both academics and practitioners. In doing so, a set of research propositions are suggested to provide guidance for future research.",
"title": ""
},
{
"docid": "e541ae262655b7f5affefb32ce9267ee",
"text": "Internet of Things (IoT) is a revolutionary technology for the modern society. IoT can connect every surrounding objects for various applications like security, medical fields, monitoring and other industrial applications. This paper considers the application of IoT in the field of medicine. IoT in E-medicine can take the advantage of emerging technologies to provide immediate treatment to the patient as well as monitors and keeps track of health record for healthy person. IoT then performs complex computations on these collected data and can provide health related advice. Though IoT can provide a cost effective medical services to any people of all age groups, there are several key issues that need to be addressed. System security, IoT interoperability, dynamic storage facility and unified access mechanisms are some of the many fundamental issues associated with IoT. This paper proposes a system level design solution for security and flexibility aspect of IoT. In this paper, the functional components are bound in security function group which ensures the management of privacy and secure operation of the system. The security function group comprises of components which offers secure communication using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Since CP-ABE are delegated to unconstrained devices with the assumption that these devices are trusted, the producer encrypts data using AES and the ABE scheme is protected through symmetric key solutions.",
"title": ""
},
{
"docid": "bfeff1e1ef24d0cb92d1844188f87cc8",
"text": "While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users’ profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users’ attributes based on their tweets.1",
"title": ""
},
{
"docid": "4076b5d1338a7552453e284019406129",
"text": "Knowledge bases (KBs) are paramount in NLP. We employ multiview learning for increasing accuracy and coverage of entity type information in KBs. We rely on two metaviews: language and representation. For language, we consider high-resource and lowresource languages from Wikipedia. For representation, we consider representations based on the context distribution of the entity (i.e., on its embedding), on the entity’s name (i.e., on its surface form) and on its description in Wikipedia. The two metaviews language and representation can be freely combined: each pair of language and representation (e.g., German embedding, English description, Spanish name) is a distinct view. Our experiments on entity typing with fine-grained classes demonstrate the effectiveness of multiview learning. We release MVET, a large multiview – and, in particular, multilingual – entity typing dataset we created. Monoand multilingual finegrained entity typing systems can be evaluated on this dataset.",
"title": ""
},
{
"docid": "db622838ba5f6c76f66125cf76c47b40",
"text": "In recent years, the study of lightweight symmetric ciphers has gained interest due to the increasing demand for security services in constrained computing environments, such as in the Internet of Things. However, when there are several algorithms to choose from and different implementation criteria and conditions, it becomes hard to select the most adequate security primitive for a specific application. This paper discusses the hardware implementations of Present, a standardized lightweight cipher called to overcome part of the security issues in extremely constrained environments. The most representative realizations of this cipher are reviewed and two novel designs are presented. Using the same implementation conditions, the two new proposals and three state-of-the-art designs are evaluated and compared, using area, performance, energy, and efficiency as metrics. From this wide experimental evaluation, to the best of our knowledge, new records are obtained in terms of implementation size and energy consumption. In particular, our designs result to be adequate in regards to energy-per-bit and throughput-per-slice.",
"title": ""
},
{
"docid": "66ca4bacfbae3ff32b105565dace5194",
"text": "In this paper, we analyze and systematize the state-ofthe-art graph data privacy and utility techniques. Specifically, we propose and develop SecGraph (available at [1]), a uniform and open-source Secure Graph data sharing/publishing system. In SecGraph, we systematically study, implement, and evaluate 11 graph data anonymization algorithms, 19 data utility metrics, and 15 modern Structure-based De-Anonymization (SDA) attacks. To the best of our knowledge, SecGraph is the first such system that enables data owners to anonymize data by state-of-the-art anonymization techniques, measure the data’s utility, and evaluate the data’s vulnerability against modern De-Anonymization (DA) attacks. In addition, SecGraph enables researchers to conduct fair analysis and evaluation of existing and newly developed anonymization/DA techniques. Leveraging SecGraph, we conduct extensive experiments to systematically evaluate the existing graph data anonymization and DA techniques. The results demonstrate that (i) most anonymization schemes can partially or conditionally preserve most graph utilities while losing some application utility; (ii) no DA attack is optimum in all scenarios. The DA performance depends on several factors, e.g., similarity between anonymized and auxiliary data, graph density, and DA heuristics; and (iii) all the state-of-the-art anonymization schemes are vulnerable to several or all of the modern SDA attacks. The degree of vulnerability of each anonymization scheme depends on how much and which data utility it preserves.",
"title": ""
},
{
"docid": "3a2bd70e7db01515aa0dbc8924a13e3c",
"text": "While packet capture has been observed in real implementations of 802.11 devices, there is a lack of accurate models that describe the phenomenon. We present a general analytical model and an iterative method that predicts error probabilities and throughputs of packet transmissions with multiple senderreceiver pairs. Our model offers a more accurate prediction than previous work by taking into account the cumulative strength of interference signals and using the BER model to convert a signal to interference and noise ratio value to a bit error probability. This permits the analysis of packet reception at any transmission rate with interference from neighbors at any set of locations. We also prove that our iterative method converges, and we verify the accuracy of our model through simulations in Qualnet. Last, we present a rate assignment algorithm to reduce the average delay as an application of our analysis.",
"title": ""
}
] | scidocsrr |
47bc10208873706fd75e142b84e15dd7 | Policy development and implementation in health promotion--from theory to practice: the ADEPT model. | [
{
"docid": "1137cdf90ff6229865ae20980739afc5",
"text": "This paper addresses the role of policy and evidence in health promotion. The concept of von Wright’s “logic of events” is introduced and applied to health policy impact analysis. According to von Wright (1976), human action can be explained by a restricted number of determinants: wants, abilities, duties, and opportunities. The dynamics of action result from changes in opportunities (logic of events). Applied to the policymaking process, the present model explains personal wants as subordinated to political goals. Abilities of individual policy makers are part of organisational resources. Also, personal duties are subordinated to institutional obligations. Opportunities are mainly related to political context and public support. The present analysis suggests that policy determinants such as concrete goals, sufficient resources and public support may be crucial for achieving an intended behaviour change on the population level, while other policy determinants, e.g., personal commitment and organisational capacities, may especially relate to the policy implementation process. The paper concludes by indicating ways in which future research using this theoretical framework might contribute to health promotion practice for improved health outcomes across populations.",
"title": ""
}
] | [
{
"docid": "c47525f2456de0b9b87a5ebbb5a972fb",
"text": "This article reviews the potential use of visual feedback, focusing on mirror visual feedback, introduced over 15 years ago, for the treatment of many chronic neurological disorders that have long been regarded as intractable such as phantom pain, hemiparesis from stroke and complex regional pain syndrome. Apart from its clinical importance, mirror visual feedback paves the way for a paradigm shift in the way we approach neurological disorders. Instead of resulting entirely from irreversible damage to specialized brain modules, some of them may arise from short-term functional shifts that are potentially reversible. If so, relatively simple therapies can be devised--of which mirror visual feedback is an example--to restore function.",
"title": ""
},
{
"docid": "0869fee5888a97f424856570f2b9dc2c",
"text": "This paper evaluates the four leading techniques proposed in the literature for construction of prediction intervals (PIs) for neural network point forecasts. The delta, Bayesian, bootstrap, and mean-variance estimation (MVE) methods are reviewed and their performance for generating high-quality PIs is compared. PI-based measures are proposed and applied for the objective and quantitative assessment of each method's performance. A selection of 12 synthetic and real-world case studies is used to examine each method's performance for PI construction. The comparison is performed on the basis of the quality of generated PIs, the repeatability of the results, the computational requirements and the PIs variability with regard to the data uncertainty. The obtained results in this paper indicate that: 1) the delta and Bayesian methods are the best in terms of quality and repeatability, and 2) the MVE and bootstrap methods are the best in terms of low computational load and the width variability of PIs. This paper also introduces the concept of combinations of PIs, and proposes a new method for generating combined PIs using the traditional PIs. Genetic algorithm is applied for adjusting the combiner parameters through minimization of a PI-based cost function subject to two sets of restrictions. It is shown that the quality of PIs produced by the combiners is dramatically better than the quality of PIs obtained from each individual method.",
"title": ""
},
{
"docid": "208b4cb4dc4cee74b9357a5ebb2f739c",
"text": "We report improved AMR parsing results by adding a new action to a transitionbased AMR parser to infer abstract concepts and by incorporating richer features produced by auxiliary analyzers such as a semantic role labeler and a coreference resolver. We report final AMR parsing results that show an improvement of 7% absolute in F1 score over the best previously reported result. Our parser is available at: https://github.com/ Juicechuan/AMRParsing",
"title": ""
},
{
"docid": "bc272e837f1071fabcc7056134bae784",
"text": "Parental vaccine hesitancy is a growing problem affecting the health of children and the larger population. This article describes the evolution of the vaccine hesitancy movement and the individual, vaccine-specific and societal factors contributing to this phenomenon. In addition, potential strategies to mitigate the rising tide of parent vaccine reluctance and refusal are discussed.",
"title": ""
},
{
"docid": "5c2b73276c9f845d7eef5c9dc4cea2a1",
"text": "The detection of QR codes, a type of 2D barcode, as described in the literature consists merely in the determination of the boundaries of the symbol region in images obtained with the specific intent of highlighting the symbol. However, many important applications such as those related with accessibility technologies or robotics, depends on first detecting the presence of a barcode in an environment. We employ Viola-Jones rapid object detection framework to address the problem of finding QR codes in arbitrarily acquired images. This framework provides an efficient way to focus the detection process in promising regions of the image and a very fast feature calculation approach for pattern classification. An extensive study of variations in the parameters of the framework for detecting finder patterns, present in three corners of every QR code, was carried out. Detection accuracy superior to 90%, with controlled number of false positives, is achieved. We also propose a post-processing algorithm that aggregates the results of the first step and decides if the detected finder patterns are part of QR code symbols. This two-step processing is done in real time.",
"title": ""
},
{
"docid": "43654115b3c64eef7b3a26d90c092e9b",
"text": "We investigate the problem of domain adaptation for parallel data in Statistical Machine Translation (SMT). While techniques for domain adaptation of monolingual data can be borrowed for parallel data, we explore conceptual differences between translation model and language model domain adaptation and their effect on performance, such as the fact that translation models typically consist of several features that have different characteristics and can be optimized separately. We also explore adapting multiple (4–10) data sets with no a priori distinction between in-domain and out-of-domain data except for an in-domain development set.",
"title": ""
},
{
"docid": "3266a3d561ee91e8f08d81e1aac6ac1b",
"text": "The seminal work of Dwork et al. [ITCS 2012] introduced a metric-based notion of individual fairness. Given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of approximate metric-fairness: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metricfairness does generalize, and leverage these generalization guarantees to construct polynomialtime PACF learning algorithms for the classes of linear and logistic predictors. [email protected]. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17). [email protected]. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17).",
"title": ""
},
{
"docid": "b640ed2bd02ba74ee0eb925ef6504372",
"text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.",
"title": ""
},
{
"docid": "30750e5ee653ee623f6ec38e957f4843",
"text": "Chroma is a widespread feature for cover song recognition, as it is robust against non-tonal components and independent of timbre and specific instruments. However, Chroma is derived from spectrogram, thus it provides a coarse approximation representation of musical score. In this paper, we proposed a similar but more effective feature Note Class Profile (NCP) derived with music transcription techniques. NCP is a multi-dimensional time serie, each column of which denotes the energy distribution of 12 note classes. Experimental results on benchmark datasets demonstrated its superior performance over existing music features. In addition, NCP feature can be enhanced further with the development of music transcription techniques. The source code can be found in github1.",
"title": ""
},
{
"docid": "ace30c4ad4a74f1ba526b4868e47b5c5",
"text": "China and India are home to two of the world's largest populations, and both populations are aging rapidly. Our data compare health status, risk factors, and chronic diseases among people age forty-five and older in China and India. By 2030, 65.6 percent of the Chinese and 45.4 percent of the Indian health burden are projected to be borne by older adults, a population with high levels of noncommunicable diseases. Smoking (26 percent in both China and India) and inadequate physical activity (10 percent and 17.7 percent, respectively) are highly prevalent. Health policy and interventions informed by appropriate data will be needed to avert this burden.",
"title": ""
},
{
"docid": "c4ccb674a07ba15417f09b81c1255ba8",
"text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.",
"title": ""
},
{
"docid": "9a8901f5787bf6db6900ad2b4b6291c5",
"text": "MOTIVATION\nAs biological inquiry produces ever more network data, such as protein-protein interaction networks, gene regulatory networks and metabolic networks, many algorithms have been proposed for the purpose of pairwise network alignment-finding a mapping from the nodes of one network to the nodes of another in such a way that the mapped nodes can be considered to correspond with respect to both their place in the network topology and their biological attributes. This technique is helpful in identifying previously undiscovered homologies between proteins of different species and revealing functionally similar subnetworks. In the past few years, a wealth of different aligners has been published, but few of them have been compared with one another, and no comprehensive review of these algorithms has yet appeared.\n\n\nRESULTS\nWe present the problem of biological network alignment, provide a guide to existing alignment algorithms and comprehensively benchmark existing algorithms on both synthetic and real-world biological data, finding dramatic differences between existing algorithms in the quality of the alignments they produce. Additionally, we find that many of these tools are inconvenient to use in practice, and there remains a need for easy-to-use cross-platform tools for performing network alignment.",
"title": ""
},
{
"docid": "df3cad5eb68df1bc5d6770f4f700ac65",
"text": "Substrate integrated waveguide (SIW) cavity-backed antenna arrays have advantages of low-profile, high-gain and low-cost fabrication. However, traditional SIW cavity-backed antenna arrays usually load with extra feeding networks, which make the whole arrays larger and more complex. A novel 4 × 4 SIW cavity-backed antenna array without using individual feeding network is presented in this letter. The proposed antenna array consists of sixteen SIW cavities connected by inductive windows as feeding network and wide slots on the surface of each cavity as radiating part. Without loading with extra feeding network, the array is compact.",
"title": ""
},
{
"docid": "710febdd18f40c9fc82f8a28039362cc",
"text": "The paper deals with engineering an electric wheelchair from a common wheelchair and then developing a Brain Computer Interface (BCI) between the electric wheelchair and the human brain. A portable EEG headset and firmware signal processing together facilitate the movement of the wheelchair integrating mind activity and frequency of eye blinks of the patient sitting on the wheelchair with the help of Microcontroller Unit (MCU). The target population for the mind controlled wheelchair is the patients who are paralyzed below the neck and are unable to use conventional wheelchair interfaces. This project aims at creating a cost efficient solution, later intended to be distributed as an add-on conversion unit for a common manual wheelchair. A Neurosky mind wave headset is used to pick up EEG signals from the brain. This is a commercialized version of the Open-EEG Project. The signal obtained from EEG sensor is processed by the ARM microcontroller FRDM KL-25Z, a Freescale board. The microcontroller takes decision for determining the direction of motion of wheelchair based on floor detection and obstacle avoidance sensors mounted on wheelchair’s footplate. The MCU shows real time information on a color LCD interfaced to it. Joystick control of the wheelchair is also provided as an additional interface option that can be chosen from the menu system of the project.",
"title": ""
},
{
"docid": "e30ae0b5cd90d091223ab38596de3109",
"text": "1 Abstract We describe a consistent hashing algorithm which performs multiple lookups per key in a hash table of nodes. It requires no additional storage beyond the hash table, and achieves a peak-to-average load ratio of 1 + ε with just 1 + 1 ε lookups per key.",
"title": ""
},
{
"docid": "60cfdc554e1078263370514ec3f04a90",
"text": "Stylistic variation is critical to render the utterances generated by conversational agents natural and engaging. In this paper, we focus on sequence-to-sequence models for open-domain dialogue response generation and propose a new method to evaluate the extent to which such models are able to generate responses that reflect different personality traits.",
"title": ""
},
{
"docid": "3d2666ab3b786fd02bb15e81b0eaeb37",
"text": "BACKGROUND\n The analysis of nursing errors in clinical management highlighted that clinical handover plays a pivotal role in patient safety. Changes to handover including conducting handover at the bedside and the use of written handover summary sheets were subsequently implemented.\n\n\nAIM\n The aim of the study was to explore nurses' perspectives on the introduction of bedside handover and the use of written handover sheets.\n\n\nMETHOD\n Using a qualitative approach, data were obtained from six focus groups containing 30 registered and enrolled (licensed practical) nurses. Thematic analysis revealed several major themes.\n\n\nFINDINGS\n Themes identified included: bedside handover and the strengths and weaknesses; patient involvement in handover, and good communication is about good communicators. Finally, three sources of patient information and other issues were also identified as key aspects.\n\n\nCONCLUSIONS\n How bedside handover is delivered should be considered in relation to specific patient caseloads (patients with cognitive impairments), the shift (day, evening or night shift) and the model of service delivery (team versus patient allocation).\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n Flexible handover methods are implicit within clinical setting issues especially in consideration to nursing teamwork. Good communication processes continue to be fundamental for successful handover processes.",
"title": ""
},
{
"docid": "b6eeb0f99ae856acb1bf2fef4d73c517",
"text": "We propose a probabilistic matrix factorization model for collaborative filtering that learns from data that is missing not at random (MNAR). Matrix factorization models exhibit state-of-the-art predictive performance in collaborative filtering. However, these models usually assume that the data is missing at random (MAR), and this is rarely the case. For example, the data is not MAR if users rate items they like more than ones they dislike. When the MAR assumption is incorrect, inferences are biased and predictive performance can suffer. Therefore, we model both the generative process for the data and the missing data mechanism. By learning these two models jointly we obtain improved performance over state-of-the-art methods when predicting the ratings and when modeling the data observation process. We present the first viable MF model for MNAR data. Our results are promising and we expect that further research on NMAR models will yield large gains in collaborative filtering.",
"title": ""
},
{
"docid": "cc6c485fdd8d4d61c7b68bfd94639047",
"text": "Passive geolocaton of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.",
"title": ""
},
{
"docid": "f91e1638e4812726ccf96f410da2624b",
"text": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and -greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.",
"title": ""
}
] | scidocsrr |
a17978c2b85c8efb21ea7c0c5172f9cf | System Characteristics, Satisfaction and E-Learning Usage: A Structural Equation Model (SEM) | [
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
},
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
},
{
"docid": "4f51f8907402f9859a77988f967c755f",
"text": "As a promising solution, electronic learning (e-learning) has been widely adopted by many companies to offer learning-on-demand opportunities to individual employees for reducing training time and cost. While information systems (IS) success models have received much attention among researchers, little research has been conducted to assess the success and/or effectiveness of e-learning systems in an organizational context. Whether traditional information systems success models can be extended to investigating e-learning systems success is rarely addressed. Based on the previous IS success literature, this study develops and validates a multidimensional model for assessing e-learning systems success (ELSS) from employee (e-learner) perspectives. The procedures used in conceptualizing an ELSS construct, generating items, collecting data, and validating a multiple-item scale for measuring ELSS are described. This paper presents evidence of the scale’s factor structure, reliability, content validity, criterion-related validity, convergent validity, and discriminant validity on the basis of analyzing data from a sample of 206 respondents. Theoretical and managerial implications of our results are then discussed. This empirically validated instrument will be useful to researchers in developing and testing e-learning systems theories, as well as to organizations in implementing successful e-learning systems.",
"title": ""
}
] | [
{
"docid": "b41d8ca866268133f2af88495dad6482",
"text": "Text clustering is an important area of interest in the field of Text summarization, sentiment analysis etc. There have been a lot of algorithms experimented during the past years, which have a wide range of performances. One of the most popular method used is k-means, where an initial assumption is made about k, which is the number of clusters to be generated. Now a new method is introduced where the number of clusters is found using a modified spectral bisection and then the output is given to a genetic algorithm where the final solution is obtained. Keywords— Cluster, Spectral Bisection, Genetic Algorithm, kmeans.",
"title": ""
},
{
"docid": "d59b64b96cc79a2e21e705c021473f2a",
"text": "Bovine colostrum (first milk) contains very high concentrations of IgG, and on average 1 kg (500 g/liter) of IgG can be harvested from each immunized cow immediately after calving. We used a modified vaccination strategy together with established production systems from the dairy food industry for the large-scale manufacture of broadly neutralizing HIV-1 IgG. This approach provides a low-cost mucosal HIV preventive agent potentially suitable for a topical microbicide. Four cows were vaccinated pre- and/or postconception with recombinant HIV-1 gp140 envelope (Env) oligomers of clade B or A, B, and C. Colostrum and purified colostrum IgG were assessed for cross-clade binding and neutralization against a panel of 27 Env-pseudotyped reporter viruses. Vaccination elicited high anti-gp140 IgG titers in serum and colostrum with reciprocal endpoint titers of up to 1 × 10(5). While nonimmune colostrum showed some intrinsic neutralizing activity, colostrum from 2 cows receiving a longer-duration vaccination regimen demonstrated broad HIV-1-neutralizing activity. Colostrum-purified polyclonal IgG retained gp140 reactivity and neutralization activity and blocked the binding of the b12 monoclonal antibody to gp140, showing specificity for the CD4 binding site. Colostrum-derived anti-HIV antibodies offer a cost-effective option for preparing the substantial quantities of broadly neutralizing antibodies that would be needed in a low-cost topical combination HIV-1 microbicide.",
"title": ""
},
{
"docid": "5623321fb6c3a7c0b22980ce663632cd",
"text": "Vector representations for language have been shown to be useful in a number of Natural Language Processing (NLP) tasks. In this thesis, we aim to investigate the effectiveness of word vector representations for the research problem of Aspect-Based Sentiment Analysis (ABSA), which attempts to capture both semantic and sentiment information encoded in user generated content such as product reviews. In particular, we target three ABSA sub-tasks: aspect term extraction, aspect category detection, and aspect sentiment prediction. We investigate the effectiveness of vector representations over different text data, and evaluate the quality of domain-dependent vectors. We utilize vector representations to compute various vector-based features and conduct extensive experiments to demonstrate their effectiveness. Using simple vector-based features, we achieve F1 scores of 79.9% for aspect term extraction, 86.7% for category detection, and 72.3% for aspect sentiment prediction. Co Thesis Supervisor: James Glass Title: Senior Research Scientist Co Thesis Supervisor: Mitra Mohtarami Title: Postdoctoral Associate 3",
"title": ""
},
{
"docid": "f8c7fcba6d0cb889836dc868f3ba12c8",
"text": "This article reviews dominant media portrayals of mental illness, the mentally ill and mental health interventions, and examines what social, emotional and treatment-related effects these may have. Studies consistently show that both entertainment and news media provide overwhelmingly dramatic and distorted images of mental illness that emphasise dangerousness, criminality and unpredictability. They also model negative reactions to the mentally ill, including fear, rejection, derision and ridicule. The consequences of negative media images for people who have a mental illness are profound. They impair self-esteem, help-seeking behaviours, medication adherence and overall recovery. Mental health advocates blame the media for promoting stigma and discrimination toward people with a mental illness. However, the media may also be an important ally in challenging public prejudices, initiating public debate, and projecting positive, human interest stories about people who live with mental illness. Media lobbying and press liaison should take on a central role for mental health professionals, not only as a way of speaking out for patients who may not be able to speak out for themselves, but as a means of improving public education and awareness. Also, given the consistency of research findings in this field, it may now be time to shift attention away from further cataloguing of media representations of mental illness to the more challenging prospect of how to use the media to improve the life chances and recovery possibilities for the one in four people living with mental disorders.",
"title": ""
},
{
"docid": "8d30afbccfa76492b765f69d34cd6634",
"text": "Commonsense knowledge is vital to many natural language processing tasks. In this paper, we present a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. Given a user post, the model retrieves relevant knowledge graphs from a knowledge base and then encodes the graphs with a static graph attention mechanism, which augments the semantic information of the post and thus supports better understanding of the post. Then, during word generation, the model attentively reads the retrieved knowledge graphs and the knowledge triples within each graph to facilitate better generation through a dynamic graph attention mechanism. This is the first attempt that uses large-scale commonsense knowledge in conversation generation. Furthermore, unlike existing models that use knowledge triples (entities) separately and independently, our model treats each knowledge graph as a whole, which encodes more structured, connected semantic information in the graphs. Experiments show that the proposed model can generate more appropriate and informative responses than stateof-the-art baselines.",
"title": ""
},
{
"docid": "fb31665935c1a0964e70c864af8ff46f",
"text": "In the context of object and scene recognition, state-of-the-art performances are obtained with visual Bag-of-Words (BoW) models of mid-level representations computed from dense sampled local descriptors (e.g., Scale-Invariant Feature Transform (SIFT)). Several methods to combine low-level features and to set mid-level parameters have been evaluated recently for image classification. In this chapter, we study in detail the different components of the BoW model in the context of image classification. Particularly, we focus on the coding and pooling steps and investigate the impact of the main parameters of the BoW pipeline. We show that an adequate combination of several low (sampling rate, multiscale) and mid-level (codebook size, normalization) parameters is decisive to reach good performances. Based on this analysis, we propose a merging scheme that exploits the specificities of edge-based descriptors. Low and high contrast regions are pooled separately and combined to provide a powerful representation of images. We study the impact on classification performance of the contrast threshold that determines whether a SIFT descriptor corresponds to a low contrast region or a high contrast region. Successful experiments are provided on the Caltech-101 and Scene-15 datasets. M. T. Law (B) · N. Thome · M. Cord LIP6, UPMC—Sorbonne University, Paris, France e-mail: [email protected] N. Thome e-mail: [email protected] M. Cord e-mail: [email protected] B. Ionescu et al. (eds.), Fusion in Computer Vision, Advances in Computer 29 Vision and Pattern Recognition, DOI: 10.1007/978-3-319-05696-8_2, © Springer International Publishing Switzerland 2014",
"title": ""
},
{
"docid": "ed5b6ea3b1ccc22dff2a43bea7aaf241",
"text": "Testing is an important process that is performed to support quality assurance. Testing activities support quality assurance by gathering information about the nature of the software being studied. These activities consist of designing test cases, executing the software with those test cases, and examining the results produced by those executions. Studies indicate that more than fifty percent of the cost of software development is devoted to testing, with the percentage for testing critical software being even higher. As software becomes more pervasive and is used more often to perform critical tasks, it will be required to be of higher quality. Unless we can find efficient ways to perform effective testing, the percentage of development costs devoted to testing will increase significantly. This report briefly assesses the state of the art in software testing, outlines some future directions in software testing, and gives some pointers to software testing resources.",
"title": ""
},
{
"docid": "a602a532a7b95eae050d084e10606951",
"text": "Municipal solid waste management has emerged as one of the greatest challenges facing environmental protection agencies in developing countries. This study presents the current solid waste management practices and problems in Nigeria. Solid waste management is characterized by inefficient collection methods, insufficient coverage of the collection system and improper disposal. The waste density ranged from 280 to 370 kg/m3 and the waste generation rates ranged from 0.44 to 0.66 kg/capita/day. The common constraints faced environmental agencies include lack of institutional arrangement, insufficient financial resources, absence of bylaws and standards, inflexible work schedules, insufficient information on quantity and composition of waste, and inappropriate technology. The study suggested study of institutional, political, social, financial, economic and technical aspects of municipal solid waste management in order to achieve sustainable and effective solid waste management in Nigeria.",
"title": ""
},
{
"docid": "1c66d84dfc8656a23e2a4df60c88ab51",
"text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.",
"title": ""
},
{
"docid": "6eca26209b9fcca8a9df76307108a3a8",
"text": "Transform-based lossy compression has a huge potential for hyperspectral data reduction. Hyperspectral data are 3-D, and the nature of their correlation is different in each dimension. This calls for a careful design of the 3-D transform to be used for compression. In this paper, we investigate the transform design and rate allocation stage for lossy compression of hyperspectral data. First, we select a set of 3-D transforms, obtained by combining in various ways wavelets, wavelet packets, the discrete cosine transform, and the Karhunen-Loegraveve transform (KLT), and evaluate the coding efficiency of these combinations. Second, we propose a low-complexity version of the KLT, in which complexity and performance can be balanced in a scalable way, allowing one to design the transform that better matches a specific application. Third, we integrate this, as well as other existing transforms, in the framework of Part 2 of the Joint Photographic Experts Group (JPEG) 2000 standard, taking advantage of the high coding efficiency of JPEG 2000, and exploiting the interoperability of an international standard. We introduce an evaluation framework based on both reconstruction fidelity and impact on image exploitation, and evaluate the proposed algorithm by applying this framework to AVIRIS scenes. It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes as to rate-distortion performance. As for impact on exploitation, we consider multiclass hard classification, spectral unmixing, binary classification, and anomaly detection as benchmark applications",
"title": ""
},
{
"docid": "d2836880ac69bf35e53f5bc6de8bc5dc",
"text": "There is currently significant interest in freeform, curve-based authoring of graphic images. In particular, \"diffusion curves\" facilitate graphic image creation by allowing an image designer to specify naturalistic images by drawing curves and setting colour values along either side of those curves. Recently, extensions to diffusion curves based on the biharmonic equation have been proposed which provide smooth interpolation through specified colour values and allow image designers to specify colour gradient constraints at curves. We present a Boundary Element Method (BEM) for rendering diffusion curve images with smooth interpolation and gradient constraints, which generates a solved boundary element image representation. The diffusion curve image can be evaluated from the solved representation using a novel and efficient line-by-line approach. We also describe \"curve-aware\" upsampling, in which a full resolution diffusion curve image can be upsampled from a lower resolution image using formula evaluated orrections near curves. The BEM solved image representation is compact. It therefore offers advantages in scenarios where solved image representations are transmitted to devices for rendering and where PDE solving at the device is undesirable due to time or processing constraints.",
"title": ""
},
{
"docid": "235e1f328a847fa7b6e074a58defed0b",
"text": "A stemming algorithm, a procedure to reduce all words with the same stem to a common form, is useful in many areas of computational linguistics and information-retrieval work. While the form of the algorithm varies with its application, certain linguistic problems are common to any stemming procedure. As a basis for evaluation of previous attempts to deal with these problems, this paper first discusses the theoretical and practical attributes of stemming algorithms. Then a new version of a context-sensitive, longest-match stemming algorithm for English is proposed; though developed for use in a library information transfer system, it is of general application. A major linguistic problem in stemming, variation in spelling of stems, is discussed in some detail and several feasible programmed solutions are outlined, along with sample results of one of these methods.",
"title": ""
},
{
"docid": "8e50613e8aab66987d650cd8763811e5",
"text": "Along with the great increase of internet and e-commerce, the use of credit card is an unavoidable one. Due to the increase of credit card usage, the frauds associated with this have also increased. There are a lot of approaches used to detect the frauds. In this paper, behavior based classification approach using Support Vector Machines are employed and efficient feature extraction method also adopted. If any discrepancies occur in the behaviors transaction pattern then it is predicted as suspicious and taken for further consideration to find the frauds. Generally credit card fraud detection problem suffers from a large amount of data, which is rectified by the proposed method. Achieving finest accuracy, high fraud catching rate and low false alarms are the main tasks of this approach.",
"title": ""
},
{
"docid": "4a9474c0813646708400fc02c344a976",
"text": "Over the years, the Web has shrunk the world, allowing individuals to share viewpoints with many more people than they are able to in real life. At the same time, however, it has also enabled anti-social and toxic behavior to occur at an unprecedented scale. Video sharing platforms like YouTube receive uploads from millions of users, covering a wide variety of topics and allowing others to comment and interact in response. Unfortunately, these communities are periodically plagued with aggression and hate attacks. In particular, recent work has showed how these attacks often take place as a result of “raids,” i.e., organized efforts coordinated by ad-hoc mobs from third-party communities. Despite the increasing relevance of this phenomenon, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human reviews. In this paper, we propose an automated solution to identify videos that are likely to be targeted by coordinated harassers. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of raid victims. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with high accuracy (AUC up to 94%). Overall, our work paves the way for providing video platforms like YouTube with proactive systems to detect and mitigate coordinated hate attacks.",
"title": ""
},
{
"docid": "733a7a024f5e408323f9b037828061bb",
"text": "Hidden Markov model (HMM) is one of the popular techniques for story segmentation, where hidden Markov states represent the topics, and the emission distributions of n-gram language model (LM) are dependent on the states. Given a text document, a Viterbi decoder finds the hidden story sequence, with a change of topic indicating a story boundary. In this paper, we propose a discriminative approach to story boundary detection. In the HMM framework, we use deep neural network (DNN) to estimate the posterior probability of topics given the bag-ofwords in the local context. We call it the DNN-HMM approach. We consider the topic dependent LM as a generative modeling technique, and the DNN-HMM as the discriminative solution. Experiments on topic detection and tracking (TDT2) task show that DNN-HMM outperforms traditional n-gram LM approach significantly and achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "3ab4c2383569fc02f0395e79070dc16d",
"text": "A report released last week by the US National Academies makes recommendations for tackling the issues surrounding the era of petabyte science.",
"title": ""
},
{
"docid": "f006fff7ddfaed4b6016d59377144b7a",
"text": "In this paper I consider whether traditional behaviors of animals, like traditions of humans, are transmitted by imitation learning. Review of the literature on problem solving by captive primates, and detailed consideration of two widely cited instances of purported learning by imitation and of culture in free-living primates (sweet-potato washing by Japanese macaques and termite fishing by chimpanzees), suggests that nonhuman primates do not learn to solve problems by imitation. It may, therefore, be misleading to treat animal traditions and human culture as homologous (rather than analogous) and to refer to animal traditions as cultural.",
"title": ""
},
{
"docid": "745451b3ca65f3388332232b370ea504",
"text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.",
"title": ""
},
{
"docid": "a00acd7a9a136914bf98478ccd85e812",
"text": "Deep-learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, thus resulting in sub-optimal performance. In order to mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function or the Dice loss function, have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.",
"title": ""
},
{
"docid": "26aee4feb558468d571138cd495f51d3",
"text": "A 300-MHz, custom 64-bit VLSI, second-generation Alpha CPU chip has been developed. The chip was designed in a 0.5-um CMOS technology using four levels of metal. The die size is 16.5 mm by 18.1 mm, contains 9.3 million transistors, operates at 3.3 V, and supports 3.3-V/5.0-V interfaces. Power dissipation is 50 W. It contains an 8-KB instruction cache; an 8-KB data cache; and a 96-KB unified second-level cache. The chip can issue four instructions per cycle and delivers 1,200 mips/600 MFLOPS (peak). Several noteworthy circuit and implementation techniques were used to attain the target operating frequency.",
"title": ""
}
] | scidocsrr |
68e258b3686c79a5539d85d4f4c9ec1f | Enterprise Architecture Principles: Literature Review and Research Directions | [
{
"docid": "92b61bc041b3b35687ba1cd6f5468941",
"text": "Many organizations adopt cyclical processes to articulate and engineer technological responses to their business needs. Their objective is to increase competitive advantage and add value to the organization's processes, services and deliverables, in line with the organization's vision and strategy. The major challenges in achieving these objectives include the rapid changes in the business and technology environments themselves, such as changes to business processes, organizational structure, architectural requirements, technology infrastructure and information needs. No activity or process is permanent in the organization. To achieve their objectives, some organizations have adopted an Enterprise Architecture (EA) approach, others an Information Technology (IT) strategy approach, and yet others have adopted both EA and IT strategy for the same primary objectives. The deployment of EA and IT strategy for the same aims and objectives raises question whether there is conflict in adopting both approaches. The paper and case study presented here, aimed at both academics and practitioners, examines how EA could be employed as IT strategy to address both business and IT needs and challenges.",
"title": ""
},
{
"docid": "adcb28fcc215a74313d583c520ed3036",
"text": "I t’s rare that a business stays just as it began year after year.So if we agree that businesses evolve, it follows that information systems must evolve to keep pace.So far,so good.The disconnect occurs when an enterprise’s management knows that the information systems must evolve,but keeps patching and whipping the legacy systems to meet one more requirement. If you put the problem in its simplest terms,management has a choice about how it will grow its information systems.If there is a clear strategic vision for the enterprise, it seems logical to have an equally broad vision for the systems that support that strategy. Managers can thus choose to plan evolution,or they can react when reality hits and “evolve”parts of the information system according to the latest crisis. It’s a bit of a no-brainer as to which is the better choice. But it’s also easy to understand why few enterprises pick it. Conceiving, planning, and monitoring systems that support a long-range strategic vision is not trivial. Enterprise-wide information systems typically start from a base of legacy systems.And not just any legacy systems.They are typically unwieldy systems of systems with a staggering array of hardware,software, design strategies, and implementation platforms. To make the job even more difficult, “enterprise-wide” often means city to city, state to state, or even country to country. Getting these pieces to seamlessly interact and evolve according to long-range strategic business objectives may seem like mission impossible; for a large distributed organization,however, it is mission critical.",
"title": ""
}
] | [
{
"docid": "00309acd08acb526f58a70ead2d99249",
"text": "As mainstream news media and political campaigns start to pay attention to the political discourse online, a systematic analysis of political speech in social media becomes more critical. What exactly do people say on these sites, and how useful is this data in estimating political popularity? In this study we examine Twitter discussions surrounding seven US Republican politicians who were running for the US Presidential nomination in 2011. We show this largely negative rhetoric to be laced with sarcasm and humor and dominated by a small portion of users. Furthermore, we show that using out-of-the-box classification tools results in a poor performance, and instead develop a highly optimized multi-stage approach designed for general-purpose political sentiment classification. Finally, we compare the change in sentiment detected in our dataset before and after 19 Republican debates, concluding that, at least in this case, the Twitter political chatter is not indicative of national political polls.",
"title": ""
},
{
"docid": "198967b505c9ded9255bff7b82fb2781",
"text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.",
"title": ""
},
{
"docid": "134ecc62958fa9bb930ff934c5fad7a3",
"text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.",
"title": ""
},
{
"docid": "69a32a7a206284ca5f749ffe456bc6dc",
"text": "Urinary incontinence is the inability to willingly control bladder voiding. Stress urinary incontinence (SUI) is the most frequently occurring type of incontinence in women. No widely accepted or approved drug therapy is yet available for the treatment of stress urinary incontinence. Numerous studies have implicated the neurotransmitters, serotonin and norepinephrine in the central neural control of the lower urinary tract function. The pudendal somatic motor nucleus of the spinal cord is densely innervated by 5HT and NE terminals. Pharmacological studies confirm central modulation of the lower urinary tract activity by 5HT and NE receptor agonists and antagonists. Duloxetine is a combined serotonin/norepinephrine reuptake inhibitor currently under clinical investigation for the treatment of women with stress urinary incontinence. Duloxetine exerts balanced in vivo reuptake inhibition of 5HT and NE and exhibits no appreciable binding affinity for receptors of neurotransmitters. The action of duloxetine in the treatment of stress urinary incontinence is associated with reuptake inhibition of serotonin and norepinephrine at the presynaptic neuron in Onuf’s nucleus of the sacral spinal cord. In cats, whose bladder had initially been irritated with acetic acid, a dose–dependent improvement of the bladder capacity (5–fold) and periurethral EMG activity (8–fold) of the striated sphincter muscles was found. In a double blind, randomized, placebocontrolled, clinical trial in women with stress urinary incontinence, there was a significant reduction in urinary incontinence episodes under duloxetine treatment. In summary, the pharmacological effect of duloxetine to increase the activity of the striated urethral sphincter together with clinical results indicate that duloxetine has an interesting therapeutic potential in patients with stress urinary incontinence.",
"title": ""
},
{
"docid": "4a4a0dde01536789bd53ec180a136877",
"text": "CONTEXT\nCurrent assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice.\n\n\nOBJECTIVES\nTo propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment.\n\n\nDATA SOURCES\nWe searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents.\n\n\nSTUDY SELECTION\nWe excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations.\n\n\nDATA EXTRACTION\nData were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs.\n\n\nDATA SYNTHESIS\nWe generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes.\n\n\nCONCLUSIONS\nIn addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.",
"title": ""
},
{
"docid": "5e7297c25f2aafe8dbb733944ddc29e7",
"text": "Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity (\"alpha matte\") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.",
"title": ""
},
{
"docid": "4ec74a91814f1e63aace2ac43b236b9a",
"text": "This paper discusses the status of research on detection of fraud undertaken as part of the European Commission-funded ACTS ASPeCT (Advanced Security for Personal Communications Technologies) project. A first task has been the identification of possible fraud scenarios and of typical fraud indicators which can be mapped to data in toll tickets. Currently, the project is exploring the detection of fraudulent behaviour based on a combination of absolute and differential usage. Three approaches are being investigated: a rule-based approach, an approach based on neural network, where both supervised and unsupervised learning are considered. Special attention is being paid to the feasibility of the implementations.",
"title": ""
},
{
"docid": "55cfcee1d1e83600ad88a1faef13f684",
"text": "In spite of amazing progress in food supply and nutritional science, and a striking increase in life expectancy of approximately 2.5 months per year in many countries during the previous 150 years, modern nutritional research has a great potential of still contributing to improved health for future generations, granted that the revolutions in molecular and systems technologies are applied to nutritional questions. Descriptive and mechanistic studies using state of the art epidemiology, food intake registration, genomics with single nucleotide polymorphisms (SNPs) and epigenomics, transcriptomics, proteomics, metabolomics, advanced biostatistics, imaging, calorimetry, cell biology, challenge tests (meals, exercise, etc.), and integration of all data by systems biology, will provide insight on a much higher level than today in a field we may name molecular nutrition research. To take advantage of all the new technologies scientists should develop international collaboration and gather data in large open access databases like the suggested Nutritional Phenotype database (dbNP). This collaboration will promote standardization of procedures (SOP), and provide a possibility to use collected data in future research projects. The ultimate goals of future nutritional research are to understand the detailed mechanisms of action for how nutrients/foods interact with the body and thereby enhance health and treat diet-related diseases.",
"title": ""
},
{
"docid": "47e0b0fad94270b705d013364a6932e4",
"text": "This paper introduces for the first time a novel flexible magnetic composite material for RF identification (RFID) and wearable RF antennas. First, one conformal RFID tag working at 480 MHz is designed and fabricated as a benchmarking prototype and the miniaturization concept is verified. Then, the impact of the material is thoroughly investigated using a hybrid method involving electromagnetic and statistical tools. Two separate statistical experiments are performed, one for the analysis of the impact of the relative permittivity and permeability of the proposed material and the other for the evaluation of the impact of the dielectric and magnetic loss on the antenna performance. Finally, the effect of the bending of the antenna is investigated, both on the S-parameters and on the radiation pattern. The successful implementation of the flexible magnetic composite material enables the significant miniaturization of RF passives and antennas in UHF frequency bands, especially when conformal modules that can be easily fine-tuned are required in critical biomedical and pharmaceutical applications.",
"title": ""
},
{
"docid": "2542d745b0ed5c3501db4aaf8e3cc528",
"text": "We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations which is a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches.",
"title": ""
},
{
"docid": "8c2adc6112d3eedc8175a61555496760",
"text": "What does a user do when he logs in to the Twitter website? Does he merely browse through the tweets of all his friends as a source of information for his own tweets, or does he simply tweet a message of his own personal interest? Does he skim through the tweets of all his friends or only of a selected few? A number of factors might influence a user in these decisions. Does this social influence vary across cultures? In our work, we propose a simple yet effective model to predict the behavior of a user - in terms of which hashtag or named entity he might include in his future tweets. We have approached the problem as a classification task with the various influences contributing as features. Further, we analyze the contribution of the weights of the different features. Using our model we analyze data from different cultures and discover interesting differences in social influence.",
"title": ""
},
{
"docid": "731c5544759a958272e08f928bd364eb",
"text": "A key method of reducing morbidity and mortality is childhood immunization, yet in 2003 only 69% of Filipino children received all suggested vaccinations. Data from the 2003 Philippines Demographic Health Survey were used to identify risk factors for non- and partial-immunization. Results of the multinomial logistic regression analyses indicate that mothers who have less education, and who have not attended the minimally-recommended four antenatal visits are less likely to have fully immunized children. To increase immunization coverage in the Philippines, knowledge transfer to mothers must improve.",
"title": ""
},
{
"docid": "36f928b473faf1e8751abbcbd61acdcd",
"text": "Normal operations of the neocortex depend critically on several types of inhibitory interneurons, but the specific function of each type is unknown. One possibility is that interneurons are differentially engaged by patterns of activity that vary in frequency and timing. To explore this, we studied the strength and short-term dynamics of chemical synapses interconnecting local excitatory neurons (regular-spiking, or RS, cells) with two types of inhibitory interneurons: fast-spiking (FS) cells, and low-threshold spiking (LTS) cells of layer 4 in the rat barrel cortex. We also tested two other pathways onto the interneurons: thalamocortical connections and recurrent collaterals from corticothalamic projection neurons of layer 6. The excitatory and inhibitory synapses interconnecting RS cells and FS cells were highly reliable in response to single stimuli and displayed strong short-term depression. In contrast, excitatory and inhibitory synapses interconnecting the RS and LTS cells were less reliable when initially activated. Excitatory synapses from RS cells onto LTS cells showed dramatic short-term facilitation, whereas inhibitory synapses made by LTS cells onto RS cells facilitated modestly or slightly depressed. Thalamocortical inputs strongly excited both RS and FS cells but rarely and only weakly contacted LTS cells. Both types of interneurons were strongly excited by facilitating synapses from axon collaterals of corticothalamic neurons. We conclude that there are two parallel but dynamically distinct systems of synaptic inhibition in layer 4 of neocortex, each defined by its intrinsic spiking properties, the short-term plasticity of its chemical synapses, and (as shown previously) an exclusive set of electrical synapses. Because of their unique dynamic properties, each inhibitory network will be recruited by different temporal patterns of cortical activity.",
"title": ""
},
{
"docid": "13452d0ceb4dfd059f1b48dba6bf5468",
"text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6afb6140edbfdabb2f2c1a0cbee23665",
"text": "The advent of Web 2.0 has led to an increase in the amount of sentimental content available in the Web. Such content is often found in social media web sites in the form of movie or product reviews, user comments, testimonials, messages in discussion forums etc. Timely discovery of the sentimental or opinionated web content has a number of advantages, the most important of all being monetization. Understanding of the sentiments of human masses towards different entities and products enables better services for contextual advertisements, recommendation systems and analysis of market trends. The focus of our project is sentiment focussed web crawling framework to facilitate the quick discovery of sentimental contents of movie reviews and hotel reviews and analysis of the same. We use statistical methods to capture elements of subjective style and the sentence polarity. The paper elaborately discusses two supervised machine learning algorithms: K-Nearest Neighbour(KNN) and Naïve Bayes‘ and compares their overall accuracy, precisions as well as recall values. It was seen that in case of movie reviews Naïve Bayes‘ gave far better results than K-NN but for hotel reviews these algorithms gave lesser, almost same accuracies.",
"title": ""
},
{
"docid": "ced0328f339248158e8414c3315330c5",
"text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoughtfully examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f 0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB",
"title": ""
},
{
"docid": "aafda1cab832f1fe92ce406676e3760f",
"text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.",
"title": ""
},
{
"docid": "8b6d3b5fb8af809619119ee0f75cb3c6",
"text": "This paper mainly discusses how to use histogram projection and LBDM (Learning Based Digital Matting) to extract a tongue from a medical image, which is one of the most important steps in diagnosis of traditional Chinese Medicine. We firstly present an effective method to locate the tongue body, getting the convinced foreground and background area in form of trimap. Then, use this trimap as the input for LBDM algorithm to implement the final segmentation. Experiment was carried out to evaluate the proposed scheme, using 480 samples of pictures with tongue, the results of which were compared with the corresponding ground truth. Experimental results and analysis demonstrated the feasibility and effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "268ccb986855aabafa9de8f95668e7c4",
"text": "This paper investigates the performance of South Africa’s commercial banking sector for the period 20052009. Financial ratios are employed to measure the profitability, liquidity and credit quality performance of five large South African based commercial banks. The study found that overall bank performance increased considerably in the first two years of the analysis. A significant change in trend is noticed at the onset of the global financial crisis in 2007, reaching its peak during 2008-2009. This resulted in falling profitability, low liquidity and deteriorating credit quality in the South African Banking sector.",
"title": ""
},
{
"docid": "686045e2dae16aba16c26b8ccd499731",
"text": "It has been argued that platform technology owners cocreate business value with other firms in their platform ecosystems by encouraging complementary invention and exploiting indirect network effects. In this study, we examine whether participation in an ecosystem partnership improves the business performance of small independent software vendors (ISVs) in the enterprise software industry and how appropriability mechanisms influence the benefits of partnership. By analyzing the partnering activities and performance indicators of a sample of 1,210 small ISVs over the period 1996–2004, we find that joining a major platform owner’s platform ecosystem is associated with an increase in sales and a greater likelihood of issuing an initial public offering (IPO). In addition, we show that these impacts are greater when ISVs have greater intellectual property rights or stronger downstream capabilities. This research highlights the value of interoperability between software products, and stresses that value cocreation and appropriation are not mutually exclusive strategies in interfirm collaboration.",
"title": ""
}
] | scidocsrr |
91397d2975dc5c569dd936f71b13ba8a | Risks and Resilience of Collaborative Networks | [
{
"docid": "d12ba2f4c25bb7555475ac9fc6550df8",
"text": "Supply networks are composed of large numbers of firms from multiple interrelated industries. Such networks are subject to shifting strategies and objectives within a dynamic environment. In recent years, when faced with a dynamic environment, several disciplines have adopted the Complex Adaptive System (CAS) perspective to gain insights into important issues within their domains of study. Research investigations in the field of supply networks have also begun examining the merits of complexity theory and the CAS perspective. In this article, we bring the applicability of complexity theory and CAS into sharper focus, highlighting its potential for integrating existing supply chain management (SCM) research into a structured body of knowledge while also providing a framework for generating, validating, and refining new theories relevant to real-world supply networks. We suggest several potential research questions to emphasize how a ∗We sincerely thank Professors Thomas Choi (Arizona State University), David Dilts (Vanderbilt University), and Kevin Dooley (Arizona State University) for their help, guidance, and support. †Corresponding author.",
"title": ""
}
] | [
{
"docid": "f9e67768e59ba9c4048be2b78f3d2823",
"text": "Ontologies are a widely accepted tool for the modeling of context information. We view the identification of the benefits and challenges of ontologybased models to be an important next step to further improve the usability of ontologies in context-aware applications. We outline a set of criteria with respect to ontology engineering and context modeling and discuss some recent achievements in the area of ontology-based context modeling in order to determine the important next steps necessary to fully exploit ontologies in pervasive computing.",
"title": ""
},
{
"docid": "3a855c3c3329ff63037711e8d17249e3",
"text": "In this work, we present an adaptation of the sequence-tosequence model for structured vision tasks. In this model, the output variables for a given input are predicted sequentially using neural networks. The prediction for each output variable depends not only on the input but also on the previously predicted output variables. The model is applied to spatial localization tasks and uses convolutional neural networks (CNNs) for processing input images and a multi-scale deconvolutional architecture for making spatial predictions at each step. We explore the impact of weight sharing with a recurrent connection matrix between consecutive predictions, and compare it to a formulation where these weights are not tied. Untied weights are particularly suited for problems with a fixed sized structure, where different classes of output are predicted at different steps. We show that chain models achieve top performing results on human pose estimation from images and videos.",
"title": ""
},
{
"docid": "f6d3157155868f5fafe2533dfd8768b8",
"text": "Over the past few years, the task of conceiving effective attacks to complex networks has arisen as an optimization problem. Attacks are modelled as the process of removing a number k of vertices, from the graph that represents the network, and the goal is to maximise or minimise the value of a predefined metric over the graph. In this work, we present an optimization problem that concerns the selection of nodes to be removed to minimise the maximum betweenness centrality value of the residual graph. This metric evaluates the participation of the nodes in the communications through the shortest paths of the network. To address the problem we propose an artificial bee colony algorithm, which is a swarm intelligence approach inspired in the foraging behaviour of honeybees. In this framework, bees produce new candidate solutions for the problem by exploring the vicinity of previous ones, called food sources. The proposed method exploits useful problem knowledge in this neighbourhood exploration by considering the partial destruction and heuristic reconstruction of selected solutions. The performance of the method, with respect to other models from the literature that can be adapted to face this problem, such as sequential centrality-based attacks, module-based attacks, a genetic algorithm, a simulated annealing approach, and a variable neighbourhood search, is empirically shown. E-mail addresses: [email protected] (M. Lozano), [email protected] (C. GarćıaMart́ınez), [email protected] (F.J. Rodŕıguez), [email protected] (H.M. Trujillo). Preprint submitted to Information Sciences August 17, 2016 *Manuscript (including abstract) Click here to view linked References",
"title": ""
},
{
"docid": "b06844c98f1b46e6d3bd583aacd76015",
"text": "The task of network management and monitoring relies on an accurate characterization of network traffic generated by different applications and network protocols. We employ three supervisedmachine learning (ML) algorithms, BayesianNetworks, Decision Trees and Multilayer Perceptrons for the flow-based classification of six different types of Internet traffic including peer-to-peer (P2P) and content delivery (Akamai) traffic. The dependency of the traffic classification performance on the amount and composition of training data is investigated followed by experiments that show that ML algorithms such as Bayesian Networks and Decision Trees are suitable for Internet traffic flow classification at a high speed, and prove to be robust with respect to applications that dynamically change their source ports. Finally, the importance of correctly classified training instances is highlighted by an experiment that is conducted with wrongly labeled training data. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e2d83db54bc0eacfb3b562c38125fc28",
"text": "Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.",
"title": ""
},
{
"docid": "eab052e8172c62fec9b532400fe5eeb6",
"text": "An overview on state of the art automotive radar usage is presented and the changing requirements from detection and ranging towards radar based environmental understanding for highly automated and autonomous driving deduced. The traditional segmentation in driving, manoeuvering and parking tasks vanishes at the driver less stage. Situation assessment and trajectory/manoeuver planning need to operate in a more thorough way. Hence, fast situational up-date, motion prediction of all kind of dynamic objects, object dimension, ego-motion estimation, (self)-localisation and more semantic/classification information, which allows to put static and dynamic world into correlation/context with each other is mandatory. All these are new areas for radar signal processing and needs revolutionary new solutions. The article outlines the benefits that make radar essential for autonomous driving and presents recent approaches in radar based environmental perception.",
"title": ""
},
{
"docid": "0dc1bf3422e69283a93d0dd87caeb84f",
"text": "Organizations are increasingly recognizing that user satisfaction with information systems is one of the most important determinants of the success of those systems. However, current satisfaction measures involve an intrusion into the users' worlds, and are frequently deemed to be too cumbersome to be justi®ed ®nancially and practically. This paper describes a methodology designed to solve this contemporary problem. Based on theory which suggests that behavioral observations can be used to measure satisfaction, system usage statistics from an information system were captured around the clock for 6 months to determine users' satisfaction with the system. A traditional satisfaction evaluation instrument, a validated survey, was applied in parallel, to verify that the analysis of the behavioral data yielded similar results. The ®nal results were analyzed statistically to demonstrate that behavioral analysis is a viable alternative to the survey in satisfaction measurement. # 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "83cc283967bf6bc7f04729a5e08660e2",
"text": "Logicians have, by and large, engaged in the convenient fiction that sentences of natural languages (at least declarative sentences) are either true or false or, at worst, lack a truth value, or have a third value often interpreted as 'nonsense'. And most contemporary linguists who have thought seriously about semantics, especially formal semantics, have largely shared this fiction, primarily for lack of a sensible alternative. Yet students o f language, especially psychologists and linguistic philosophers, have long been attuned to the fact that natural language concepts have vague boundaries and fuzzy edges and that, consequently, natural language sentences will very often be neither true, nor false, nor nonsensical, but rather true to a certain extent and false to a certain extent, true in certain respects and false in other respects. It is common for logicians to give truth conditions for predicates in terms of classical set theory. 'John is tall' (or 'TALL(j) ' ) is defined to be true just in case the individual denoted by 'John' (or ' j ') is in the set of tall men. Putting aside the problem that tallness is really a relative concept (tallness for a pygmy and tallness for a basketball player are obviously different) 1, suppose we fix a population relative to which we want to define tallness. In contemporary America, how tall do you have to be to be tall? 5'8\"? 5'9\"? 5'10\"? 5'11\"? 6'? 6'2\"? Obviously there is no single fixed answer. How old do you have to be to be middle-aged? 35? 37? 39? 40? 42? 45? 50? Again the concept is fuzzy. Clearly any attempt to limit truth conditions for natural language sentences to true, false and \"nonsense' will distort the natural language concepts by portraying them as having sharply defined rather than fuzzily defined boundaries. Work dealing with such questions has been done in psychology. To take a recent example, Eleanor Rosch Heider (1971) took up the question of whether people perceive category membership as a clearcut issue or a matter of degree. For example, do people think of members of a given",
"title": ""
},
{
"docid": "46bee248655c79a0364fee437bc43eaf",
"text": "Parkinson disease (PD) is a universal public health problem of massive measurement. Machine learning based method is used to classify between healthy people and people with Parkinson’s disease (PD). This paper presents a comprehensive review for the prediction of Parkinson disease buy using machine learning based approaches. The brief introduction of various computational intelligence techniques based approaches used for the prediction of Parkinson diseases are presented .This paper also presents the summary of results obtained by various researchers available in literature to predict the Parkinson diseases. Keywords— Parkinson’s disease, classification, random forest, support vector machine, machine learning, signal processing, artificial neural network.",
"title": ""
},
{
"docid": "140d6d345aa6d486a30e596dde25a8ef",
"text": "This research uses the absorptive capacity (ACAP) concept as a theoretical lens to study the effect of e-business upon the competitive performance of SMEs, addressing the following research issue: To what extent are manufacturing SMEs successful in developing their potential and realized ACAP in line with their entrepreneurial orientation? A survey study of 588 manufacturing SMEs found that their e-business capabilities, considered as knowledge acquisition and assimilation capabilities have an indirect effect on their competitive performance that is mediated by their knowledge transformation and exploitation capabilities, and insofar as these capabilities are developed as a result of a more entrepreneurial orientation on their part. Finally, the effect of this orientation on the SMEs' competitive performance appears to be totally mediated by their ACAP.",
"title": ""
},
{
"docid": "0fc0816d62a8d13c3e415b5a1ae7e1d4",
"text": "The rapid pace of business process change, partially fueled by information technology, is placing increasingly difficult demands on the organization. In many industries, organizations are required to evaluate and assess new information technologies and their organization-specific strategic potential, in order to remain competitive. The scanning, adoption and diffusion of this information technology must be carefully guided by strong strategic and technological leadership in order to infuse the organization and its members with strategic and technological visions, and to coordinate their diverse and decentralized expertise. This view of technological diffusion requires us to look beyond individuals and individual adoption, toward other levels of analysis and social theoretical viewpoints to promote the appropriate and heedful diffusion of often organization-wide information technologies. Particularly important is an examination of the diffusion champions and how a feasible and shared vision of the business and information technology can be created and communicated across organizational communities in order to unify, motivate and mobilize technology change process. The feasibility of this shared vision depends on its strategic fit and whether the shared vision is properly aligned with organizational objectives in order to filter and shape technological choice and diffusion. Shared vision is viewed as an organizational barometer for assessing the appropriateness of future technologies amidst a sea of overwhelming possibilities. We present a theoretical model to address an extended program of research focusing on important phases during diffusion, shared vision, change management and social alignment. We also make a call for further research into these theoretical linkages and into the development of feasible shared visions.",
"title": ""
},
{
"docid": "b91f54fd70da385625d9df127834d8c7",
"text": "This commentary was stimulated by Yeping Li’s first editorial (2014) citing one of the journal’s goals as adding multidisciplinary perspectives to current studies of single disciplines comprising the focus of other journals. In this commentary, I argue for a greater focus on STEM integration, with a more equitable representation of the four disciplines in studies purporting to advance STEM learning. The STEM acronym is often used in reference to just one of the disciplines, commonly science. Although the integration of STEM disciplines is increasingly advocated in the literature, studies that address multiple disciplines appear scant with mixed findings and inadequate directions for STEM advancement. Perspectives on how discipline integration can be achieved are varied, with reference to multidisciplinary, interdisciplinary, and transdisciplinary approaches adding to the debates. Such approaches include core concepts and skills being taught separately in each discipline but housed within a common theme; the introduction of closely linked concepts and skills from two or more disciplines with the aim of deepening understanding and skills; and the adoption of a transdisciplinary approach, where knowledge and skills from two or more disciplines are applied to real-world problems and projects with the aim of shaping the total learning experience. Research that targets STEM integration is an embryonic field with respect to advancing curriculum development and various student outcomes. For example, we still need more studies on how student learning outcomes arise not only from different forms of STEM integration but also from the particular disciplines that are being integrated. As noted in this commentary, it seems that mathematics learning benefits less than the other disciplines in programs claiming to focus on STEM integration. Factors contributing to this finding warrant more scrutiny. Likewise, learning outcomes for engineering within K-12 integrated STEM programs appear under-researched. This commentary advocates a greater focus on these two disciplines within integrated STEM education research. Drawing on recommendations from the literature, suggestions are offered for addressing the challenges of integrating multiple disciplines faced by the STEM community.",
"title": ""
},
{
"docid": "31305b698f82e902a5829abc2f272d5f",
"text": "It is now recognized that the Consensus problem is a fundamental problem when one has to design and implement reliable asynchronous distributed systems. This chapter is on the Consensus problem. It studies Consensus in two failure models, namely, the Crash/no Recovery model and the Crash/Recovery model. The assumptions related to the detection of failures that are required to solve Consensus in a given model are particularly emphasized.",
"title": ""
},
{
"docid": "2a9d399edc3c2dcc153d966760f38d80",
"text": "Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is over a computer network and the other is on a shared memory system. We establish an ergodic convergence rate O(1/ √ K) for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by √ K (K is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.",
"title": ""
},
{
"docid": "ed0465dc58b0f9c62e729fed4054bb58",
"text": "In this study, an instructional design model was employed for restructuring a teacher education course with technology. The model was applied in a science education method course, which was offered in two different but consecutive semesters with a total enrollment of 111 students in the fall semester and 116 students in the spring semester. Using tools, such as multimedia authoring tools in the fall semester and modeling software in the spring semester, teacher educators designed high quality technology-infused lessons for science and, thereafter, modeled them in classroom for preservice teachers. An assessment instrument was constructed to assess preservice teachers technology competency, which was measured in terms of four aspects, namely, (a) selection of appropriate science topics to be taught with technology, (b) use of appropriate technology-supported representations and transformations for science content, (c) use of technology to support teaching strategies, and (d) integration of computer activities with appropriate inquiry-based pedagogy in the science classroom. The results of a MANOVA showed that preservice teachers in the Modeling group outperformed preservice teachers overall performance in the Multimedia group, F = 21.534, p = 0.000. More specifically, the Modeling group outperformed the Multimedia group on only two of the four aspects of technology competency, namely, use of technology to support teaching strategies and integration of computer activities with appropriate pedagogy in the classroom, F = 59.893, p = 0.000, and F = 10.943, p = 0.001 respectively. The results indicate that the task of preparing preservice teachers to become technology competent is difficult and requires many efforts for providing them with ample of 0360-1315/$ see front matter 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.compedu.2004.06.002 * Tel.: +357 22 753772; fax: +357 22 377950. E-mail address: [email protected]. 384 C. Angeli / Computers & Education 45 (2005) 383–398 opportunities during their education to develop the competencies needed to be able to teach with technology. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2bdc4df73912f4f2be4436e1fdd16d69",
"text": "Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological data set to a feature-based multiclass classification. In order to collect a physiological data set from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate laboratory setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, and positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. An improved recognition accuracy of 95 percent and 70 percent for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.",
"title": ""
},
{
"docid": "2b4a2165cebff8326f97cab3063e1a62",
"text": "Pneumatic artificial muscles (PAMs) are becoming more commonly used as actuators in modern robotics. The most made and common type of these artificial muscles in use is the McKibben artificial muscle that was developed in 1950’s. This paper presents the geometric model of PAM and different Matlab models for pneumatic artificial muscles. The aim of our models is to relate the pressure and length of the pneumatic artificial muscles to the force it exerts along its entire exists.",
"title": ""
},
{
"docid": "f9806d3542f575d53ef27620e4aa493b",
"text": "Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area this is so clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new live to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, they should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.",
"title": ""
},
{
"docid": "68b25c8eefc5e2045065b0cf24652245",
"text": "A backscatter-based microwave imaging technique that compensates for frequency-dependent propagation effects is proposed for detecting early-stage breast cancer. An array of antennas is located near the surface of the breast and an ultrawideband pulse is transmitted sequentially from each antenna. The received backscattered signals are passed through a space-time beamformer that is designed to image backscattered signal energy as a function of location. As a consequence of the significant dielectric-properties contrast between normal and malignant tissue, locations corresponding to malignant tumors are associated with large energy levels in the image. The effectiveness of these algorithms is demonstrated using simulated backscattered signals obtained from an anatomically realistic MRI-derived computational electromagnetic breast model. Very small (2 mm) malignant tumors embedded within the complex fibroglandular structure of the breast are easily detected above the background clutter.",
"title": ""
}
] | scidocsrr |
acd8faabc06c35cdb649f81af5b27a45 | Review : friction stir welding tools | [
{
"docid": "0efe115e60fb3b3d1152ee4e88b60e8e",
"text": "Friction stir welding (FSW) is a relatively new solid-state joining process. This joining technique is energy efficient, environment friendly, and versatile. In particular, it can be used to join high-strength aerospace aluminum alloys and other metallic alloys that are hard to weld by conventional fusion welding. FSW is considered to be the most significant development in metal joining in a decade. Recently, friction stir processing (FSP) was developed for microstructural modification of metallic materials. In this review article, the current state of understanding and development of the FSW and FSP are addressed. Particular emphasis has been given to: (a) mechanisms responsible for the formation of welds and microstructural refinement, and (b) effects of FSW/FSP parameters on resultant microstructure and final mechanical properties. While the bulk of the information is related to aluminum alloys, important results are now available for other metals and alloys. At this stage, the technology diffusion has significantly outpaced the fundamental understanding of microstructural evolution and microstructure–property relationships. # 2005 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "bba81ac392b87a123a1e2f025bffd30c",
"text": "This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework that includes both single-policy and multi-policy strategies. The experimental results on two benchmark problems including the two-objective deep sea treasure environment and the three-objective mountain car problem indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, which allows implementation of different deep reinforcement learning algorithms in different complex environments. This therefore overcomes many difficulties involved with standard multi-objective reinforcement learning (MORL) methods existing in the current literature. The framework creates a platform as a testbed environment to develop methods for solving various problems associated with the current MORL. Details of the framework implementation can be referred to http://www.deakin.edu.au/~thanhthi/drl.htm.",
"title": ""
},
{
"docid": "1156e19011c722404e077ae64f6e526f",
"text": "Malwares are malignant softwares. It is designed t o amage computer systems without the knowledge of the owner using the system. Softwares from reputabl e vendors also contain malicious code that affects the system or leaks informations to remote servers. Malwares incl udes computer viruses, Worms, spyware, dishonest ad -ware, rootkits, Trojans, dialers etc. Malware is one of t he most serious security threats on the Internet to day. In fact, most Internet problems such as spam e-mails and denial o f service attacks have malwareas their underlying c ause. Computers that are compromised with malware are oft en networked together to form botnets and many atta cks re launched using these malicious, attacker controlled n tworks. The paper focuses on various Malware det ction and removal methods. KeywordsMalware, Intruders, Checksum, Digital Immune System , Behavior blocker",
"title": ""
},
{
"docid": "c9c0308d532f216b847400b8f188013c",
"text": "Customer interface quality, perceived security, and customer loyalty are critical factors for success of an e-commerce website; however, the relationships among them are not fully understood. We proposed a model for testing the relationships among them and the important outcomes of the site: switching costs and customer loyalty. Data was collected to test the model using a web-based survey, and empirical analyses were performed using SEM. The analytical results demonstrated that customer interface quality and perceived security positively affected customer satisfaction and switching costs, and thus customer loyalty to an e-commerce website. Specifically, our study showed that switching costs positively moderated the effect of customer satisfaction on customer loyalty; this moderating effect is discussed. Crown Copyright 2009 Published by Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +886 6 2757575x53326; fax: +886 6 2757575x53326. E-mail addresses: [email protected] (H.H. Chang), [email protected] (S.W. Chen).",
"title": ""
},
{
"docid": "2466ac1ce3d54436f74b5bb024f89662",
"text": "In this paper we discuss our work on applying media theory to the creation of narrative augmented reality (AR) experiences. We summarize the concepts of remediation and media forms as they relate to our work, argue for their importance to the development of a new medium such as AR, and present two example AR experiences we have designed using these conceptual tools. In particular, we focus on leveraging the interaction between the physical and virtual world, remediating existing media (film, stage and interactive CD-ROM), and building on the cultural expectations of our users.",
"title": ""
},
{
"docid": "82180726cc1aaaada69f3b6cb0e89acc",
"text": "The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors proposed a method to avoid falling down of a wheeled inverted pendulum type robotic wheelchair for climbing stairs. The problem of this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in the movement of wheels. Therefore, the wheels slide down the stairs or collide with the side of the stairs, and finally the wheelchair falls down. To avoid falling down, the authors proposed a slider control strategy based on skyhook model in order to decrease the movement of wheels, and a rotary link control strategy based on the staircase dimensions in order to avoid collision or slide down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and the prototype wheelchair. Keywords—EPW, fall avoidance control, skyhook, wheeled inverted pendulum.",
"title": ""
},
{
"docid": "29c6cba747a2ad280d2185bfcd5866e2",
"text": "A millimeter-wave shaped-beam substrate integrated conformal array antenna is demonstrated in this paper. After discussing the influence of conformal shape on the characteristics of a substrate integrated waveguide (SIW) and a radiating slot, an array mounted on a cylindrical surface with a radius of 20 mm, i.e., 2.3 λ, is synthesized at the center frequency of 35 GHz. All components, including a 1-to-8 divider, a phase compensated network and an 8 × 8 slot array are fabricated in a single dielectric substrate together. In measurement, it has a - 27.4 dB sidelobe level (SLL) beam in H-plane and a flat-topped fan beam with -38° ~ 37° 3 dB beamwidth in E-plane at the center frequency of 35 GHz. The cross polarization is lower than -41.7 dB at the beam direction. Experimental results agree well with simulations, thus validating our design. This SIW scheme is able to solve the difficulty of integration between conformal array elements and a feed network in millimeter-wave frequency band, while avoid radiation leakage and element-to-element parasitic cross-coupling from the feed network.",
"title": ""
},
{
"docid": "d6bcf73a0237416318896154dfb0a764",
"text": "Singular Value Decomposition (SVD) is a popular approach in various network applications, such as link prediction and network parameter characterization. Incremental SVD approaches are proposed to process newly changed nodes and edges in dynamic networks. However, incremental SVD approaches suffer from serious error accumulation inevitably due to approximation on incremental updates. SVD restart is an effective approach to reset the aggregated error, but when to restart SVD for dynamic networks is not addressed in literature. In this paper, we propose TIMERS, Theoretically Instructed Maximum-Error-bounded Restart of SVD, a novel approach which optimally sets the restart time in order to reduce error accumulation in time. Specifically, we monitor the margin between reconstruction loss of incremental updates and the minimum loss in SVD model. To reduce the complexity of monitoring, we theoretically develop a lower bound of SVD minimum loss for dynamic networks and use the bound to replace the minimum loss in monitoring. By setting a maximum tolerated error as a threshold, we can trigger SVD restart automatically when the margin exceeds this threshold. We prove that the time complexity of our method is linear with respect to the number of local dynamic changes, and our method is general across different types of dynamic networks. We conduct extensive experiments on several synthetic and real dynamic networks. The experimental results demonstrate that our proposed method significantly outperforms the existing methods by reducing 27% to 42% in terms of the maximum error for dynamic network reconstruction when fixing the number of restarts. Our method reduces the number of restarts by 25% to 50% when fixing the maximum error tolerated.",
"title": ""
},
{
"docid": "5214faa5f3906819ad56394cf45adc55",
"text": "For most applications, the pulse width modulation (PWM) of the primary side switch varies with input line. The switch on-time regulates the output. For a constant output voltage, the volt-seconds applied to the primary will be constant but reset time varies, being relatively long at high line and short at low line. The switch voltage is minimized when the switch off-time is long and fully used for reset",
"title": ""
},
{
"docid": "995fca88b7813c5cfed1c92522cc8d29",
"text": "Diode rectifiers with large dc-bus capacitors, used in the front ends of variable-frequency drives (VFDs) and other ac-to-dc converters, draw discontinuous current from the power system, resulting in current distortion and, hence, voltage distortion. Typically, the power system can handle current distortion without showing signs of voltage distortion. However, when the majority of the load on a distribution feeder is made up of VFDs, current distortion becomes an important issue since it can cause voltage distortion. Multipulse techniques to reduce input current harmonics are popular because they do not interfere with the existing power system either from higher conducted electromagnetic interference, when active techniques are used, or from possible resonance, when capacitor-based filters are employed. In this paper, a new 18-pulse topology is proposed that has two six-pulse rectifiers powered via a phase-shifting isolation transformer, while the third six-pulse rectifier is fed directly from the ac source via a matching inductor. This idea relies on harmonic current cancellation strategy rather than flux cancellation method and results in lower overall harmonics. It is also seen to be smaller in size and weight and lower in cost compared to an isolation transformer. Experimental results are given to validate the concept.",
"title": ""
},
{
"docid": "5cc458548f26619b4cc632f25ea2e9f8",
"text": "As a consequence of the popularity of big data, many users with a variety of backgrounds seek to extract high level information from datasets collected from various sources and combined using data integration techniques. A major challenge for research in data management is to develop tools to assist users in explaining observed query outputs. In this paper we introduce a principled approach to provide explanations for answers to SQL queries based on intervention: removal of tuples from the database that significantly affect the query answers. We provide a formal definition of intervention in the presence of multiple relations which can interact with each other through foreign keys. First we give a set of recursive rules to compute the intervention for any given explanation in polynomial time (data complexity). Then we give simple and efficient algorithms based on SQL queries that can compute the top-K explanations by using standard database management systems under certain conditions. We evaluate the quality and performance of our approach by experiments on real datasets.",
"title": ""
},
{
"docid": "5325778a57d0807e9b149108ea9e57d8",
"text": "This paper presents a comparison study between 10 automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the \"MICCAI 2007 Grand Challenge\" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured a better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides an insight in performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.",
"title": ""
},
{
"docid": "5da9811fb60b5f6334e05ba71902ddfd",
"text": "In this paper, a numerical TRL calibration technique is used to accurately extract the equivalent circuit parameters of post-wall iris and input/output coupling structure which are used for the design of directly-coupled substrate integrated waveguide (SIW) filter with the first/last SIW cavities directly excited by 50 Ω microstrip line. On the basis of this dimensional design process, the entire procedure of filter design can meet all of the design specifications without resort to any time-consuming tuning and optimization. A K-band 5th-degree SIW filter with relative bandwidth of 6% was designed and fabricated by low-cost PCB process on Rogers RT/duroid 5880. Measured results which agree well with simulated results validate the accurate dimensional synthesis procedure.",
"title": ""
},
{
"docid": "5706b4955db81d04398fd6a64eb70c7c",
"text": "The number of applications (or apps) in the Android Market exceeded 450,000 in 2012 with more than 11 billion total downloads. The necessity to fix bugs and add new features leads to frequent app updates. For each update, a full new version of the app is downloaded to the user's smart phone; this generates significant traffic in the network. We propose to use delta encoding algorithms and to download only the difference between two versions of an app. We implement delta encoding for Android using the bsdiff and bspatch tools and evaluate its performance. We show that app update traffic can be reduced by about 50%, this can lead to significant cost and energy savings.",
"title": ""
},
{
"docid": "1ff97f1fc404ae8a8882687b5d507857",
"text": "Music listening has changed greatly with the emergence of music streaming services, such as Spotify and YouTube. In this paper, we discuss an artistic practice that organizes streaming videos to perform a real-time improvisation via live coding. A live coder uses any available video from YouTube, a video streaming service, as source material to perform an improvised audiovisual piece. The challenge is to manipulate the emerging media that are streamed from a cloud server. The musical gesture can be limited due to the constrained functionalities of the YouTube API. However, the potential sonic and visual space that a musician can explore is practically infinite. The practice embraces the juxtaposition of manipulating emerging media in old-fashioned ways similar to experimental musicians in the 60’s physically manipulating tape loops or scratching vinyl records on a phonograph while exploring the expressiveness enabled by the gigantic repository of all kinds of videos. In this paper, we discuss the challenges of using streaming videos from the platform as musical materials in live music performance and introduce a live coding environment that we developed for real-time improvisation.",
"title": ""
},
{
"docid": "4a0756bffc50e11a0bcc2ab88502e1a2",
"text": "The interest in attribute weighting for soft subspace clustering have been increasing in the last years. However, most of the proposed approaches are designed for dealing only with numeric data. In this paper, our focus is on soft subspace clustering for categorical data. In soft subspace clustering, the attribute weighting approach plays a crucial role. Due to this, we propose an entropy-based approach for measuring the relevance of each categorical attribute in each cluster. Besides that, we propose the EBK-modes (entropy-based k-modes), an extension of the basic k-modes that uses our approach for attribute weighting. We performed experiments on five real-world datasets, comparing the performance of our algorithms with four state-of-the-art algorithms, using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. According to the experiments, the EBK-modes outperforms the algorithms that were considered in the evaluation, regarding the considered metrics.",
"title": ""
},
{
"docid": "f65c027ab5baa981667955cc300d2f34",
"text": "In-band full-duplex (FD) wireless communication, i.e. simultaneous transmission and reception at the same frequency, in the same channel, promises up to 2x spectral efficiency, along with advantages in higher network layers [1]. the main challenge is dealing with strong in-band leakage from the transmitter to the receiver (i.e. self-interference (SI)), as TX powers are typically >100dB stronger than the weakest signal to be received, necessitating TX-RX isolation and SI cancellation. Performing this SI-cancellation solely in the digital domain, if at all possible, would require extremely clean (low-EVM) transmission and a huge dynamic range in the RX and ADC, which is currently not feasible [2]. Cancelling SI entirely in analog is not feasible either, since the SI contains delayed TX components reflected by the environment. Cancelling these requires impractically large amounts of tunable analog delay. Hence, FD-solutions proposed thus far combine SI-rejection at RF, analog BB, digital BB and cross-domain.",
"title": ""
},
{
"docid": "fa50a600b1c7e8eb87c4751ce704e19f",
"text": "Underwater soundscapes have probably played an important role in the adaptation of ears and auditory systems of fishes throughout evolutionary time, and for all species. These sounds probably contain important information about the environment and about most objects and events that confront the receiving fish so that appropriate behavior is possible. For example, the sounds from reefs appear to be used by at least some fishes for their orientation and migration. These sorts of environmental sounds should be considered much like \"acoustic daylight,\" that continuously bathes all environments and contain information that all organisms can potentially use to form a sort of image of the environment. At present, however, we are generally ignorant of the nature of ambient sound fields impinging on fishes, and the adaptive value of processing these fields to resolve the multiple sources of sound. Our field has focused almost exclusively on the adaptive value of processing species-specific communication sounds, and has not considered the informational value of ambient \"noise.\" Since all fishes can detect and process acoustic particle motion, including the directional characteristics of this motion, underwater sound fields are potentially more complex and information-rich than terrestrial acoustic environments. The capacities of one fish species (goldfish) to receive and make use of such sound source information have been demonstrated (sound source segregation and auditory scene analysis), and it is suggested that all vertebrate species have this capacity. A call is made to better understand underwater soundscapes, and the associated behaviors they determine in fishes.",
"title": ""
},
{
"docid": "32faa5a14922d44101281c783cf6defb",
"text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.",
"title": ""
},
{
"docid": "8a1ba6ce6500f96afdb2c2294bf28a44",
"text": "This paper identifies the relative importance of key enabling factors for implementing industry 4.0 from a technological readiness perspective. The research involves the identification of enabling factors, their categorization into technological readiness dimensions, followed by the determination of the relative importance of both technological readiness dimensions and key objective measures. The results show a strong relationship between technological readiness and design principles of Industry 4.0. The findings suggest that process-related objectives are more important than economic-related and environmental-related objectives when implementing industry 4.0. The results also show that “the knowledge of humans in technology and how to leverage it” and “improving the ability of machines and devices in connecting to the internet” are the most important factors for achieving all objective measures. Practitioners can use the apparent relationship between process related objectives and key technological dimensions for setting appropriate strategies and policies when moving towards Industry 4.0.",
"title": ""
},
{
"docid": "03eabf03f8ac967c728ff35b77f3dd84",
"text": "In this paper, we tackle the problem of associating combinations of colors to abstract categories (e.g. capricious, classic, cool, delicate, etc.). It is evident that such concepts would be difficult to distinguish using single colors, therefore we consider combinations of colors or color palettes. We leverage two novel databases for color palettes and we learn categorization models using low and high level descriptors. Preliminary results show that Fisher representation based on GMMs is the most rewarding strategy in terms of classification performance over a baseline model. We also suggest a process for cleaning weakly annotated data, whilst preserving the visual coherence of categories. Finally, we demonstrate how learning abstract categories on color palettes can be used in the application of color transfer, personalization and image re-ranking.",
"title": ""
}
] | scidocsrr |
599f53ac10eca1914af585c33115fe09 | UJIIndoorLoc: A new multi-building and multi-floor database for WLAN fingerprint-based indoor localization problems | [
{
"docid": "7f6edf82ddbe5b63ba5d36a7d8691dda",
"text": "This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.",
"title": ""
},
{
"docid": "7fd682de56cc80974b6d51fc86ff9dca",
"text": "We present an indoor localization application leveraging the sensing capabilities of current state of the art smart phones. To the best of our knowledge, our application is the first one to be implemented in smart phones and integrating both offline and online phases of fingerprinting, delivering an accuracy of up to 1.5 meters. In particular, we have studied the possibilities offered by WiFi radio, cellular communications radio, accelerometer and magnetometer, already embedded in smart phones, with the intention to build a multimodal solution for localization. We have also implemented a new approach for the statistical processing of radio signal strengths, showing that it can outperform existing deterministic techniques.",
"title": ""
}
] | [
{
"docid": "82e866d42fed897b66e49c92209ad805",
"text": "A fingerprinting design extracts discriminating features, called fingerprints. The extracted features are unique and specific to each image/video. The visual hash is usually a global fingerprinting technique with crypto-system constraints. In this paper, we propose an innovative video content identification process which combines a visual hash function and a local fingerprinting. Thanks to a visual hash function, we observe the video content variation and we detect key frames. A local image fingerprint technique characterizes the detected key frames. The set of local fingerprints for the whole video summarizes the video or fragments of the video. The video fingerprinting algorithm identifies an unknown video or a fragment of video within a video fingerprint database. It compares the local fingerprints of the candidate video with all local fingerprints of a database even if strong distortions are applied to an original content.",
"title": ""
},
{
"docid": "dd7115cfecdcae82b4870e4241668fe9",
"text": "Graphs are commonly used to model the topological structure of internetworks, to study problems ranging from routing to resource reservation. A variety of graphs are found in the literature, including xed topologies such as rings or stars, \\well-known\" topologies such as the ARPAnet, and randomly generated topologies. While many researchers rely upon graphs for analytic and simulation studies, there has been little analysis of the implications of using a particular model, or how the graph generation method may a ect the results of such studies. Further, the selection of one generation method over another is often arbitrary, since the di erences and similarities between methods are not well",
"title": ""
},
{
"docid": "f34e6c34a499b7b88c18049eec221d36",
"text": "The double-gimbal mechanism (DGM) is a multibody mechanical device composed of three rigid bodies, namely, a base, an inner gimbal, and an outer gimbal, interconnected by two revolute joints. A typical DGM, where the cylindrical base is connected to the outer gimbal by a revolute joint, and the inner gimbal, which is the disk-shaped payload, is connected to the outer gimbal by a revolute joint. The DGM is an integral component of an inertially stabilized platform, which provides motion to maintain line of sight between a target and a platform payload sensor. Modern, commercially available gimbals use two direct-drive or gear-driven motors on orthogonal axes to actuate the joints. Many of these mechanisms are constrained to a reduced operational region, while moresophisticated models use a slip ring to allow continuous rotation about an axis. Angle measurements for each axis are obtained from either a rotary encoder or a resolver. The DGM is a fundamental component of pointing and tracking applications that include missile guidance systems, ground-based telescopes, antenna assemblies, laser communication systems, and close-in weapon systems (CIWSs) such as the Phalanx 1B.",
"title": ""
},
{
"docid": "760edd83045a80dbb2231c0ffbef2ea7",
"text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.",
"title": ""
},
{
"docid": "a7cb2b6f96fae3e6cd71eb8c12d546e5",
"text": "Compared to other areas, artwork recommendation has received lile aention, despite the continuous growth of the artwork market. Previous research has relied on ratings and metadata to make artwork recommendations, as well as visual features extracted with deep neural networks (DNN). However, these features have no direct interpretation to explicit visual features (e.g. brightness, texture) which might hinder explainability and user-acceptance. In this work, we study the impact of artwork metadata as well as visual features (DNN-based and aractiveness-based) for physical artwork recommendation, using images and transaction data from the UGallery online artwork store. Our results indicate that: (i) visual features perform beer than manually curated data, (ii) DNN-based visual features perform beer than aractiveness-based ones, and (iii) a hybrid approach improves the performance further. Our research can inform the development of new artwork recommenders relying on diverse content data.",
"title": ""
},
{
"docid": "cafc77957e24dd361b5020e954593a75",
"text": "We introduce Ternary Weight Networks (TWNs) neural networks with weights constrained to +1, 0 and -1. The L2 distance between the full (float or double) precision weights and the ternary weights along with a scaling factor is minimized. With the optimization, the TWNs own high capacity of model expression that is good enough to approximate the Full Precision Weight Networks (FPWNs) counterpart. Besides, the TWNs achieve up to 16x or 32x model compression rate and own much fewer multiplications compared with the FPWNs. Compared with recently proposed Binary Precision Weight Networks (BPWNs), the TWNs own nearly 38x more power of expression in a 3×3 size filter, which is commonly used in most of the state-of-the-art CNN models like residual networks or VGG. Besides, the TWNs eliminate the singularity at zero and converge faster and more stablely at training time. Benchmarks on MNIST, CIFAR-10, and the large scale ImageNet dataset show that TWNs achieve state-of-the-art performance which is only slightly worse than the FPWNs counterpart but outperforms the analogous BPWNs.",
"title": ""
},
{
"docid": "3633f55c10b3975e212e6452ad999624",
"text": "We propose a method for semantic structure analysis of noun phrases using Abstract Meaning Representation (AMR). AMR is a graph representation for the meaning of a sentence, in which noun phrases (NPs) are manually annotated with internal structure and semantic relations. We extract NPs from the AMR corpus and construct a data set of NP semantic structures. We also propose a transition-based algorithm which jointly identifies both the nodes in a semantic structure tree and semantic relations between them. Compared to the baseline, our method improves the performance of NP semantic structure analysis by 2.7 points, while further incorporating external dictionary boosts the performance by 7.1 points.",
"title": ""
},
{
"docid": "0fc9ddf3920ff193de16067e523d832c",
"text": "In this paper, various existing simulation environments for general purpose and specific purpose WSNs are discussed. The features of number of different sensor network simulators and operating systems are compared. We have presented an overview of the most commonly used operating systems that can be used in different approaches to address the common problems of WSNs. For different simulation environments there are different layer, components and protocols implemented so that it is difficult to compare them. When same protocol is simulated using two different simulators still each protocol implementation differs, since their functionality is exactly not the same. Selection of simulator is purely based on the application, since each simulator has a varied range of performance depending on application.",
"title": ""
},
{
"docid": "c61b210036484009cf8077a803824695",
"text": "Synthetic Aperture Radar (SAR) image is disturbed by multiplicative noise known as speckle. In this paper, based on the power of deep fully convolutional network, an encoding-decoding framework is introduced for multisource SAR image despeckling. The network contains a series of convolution and deconvolution layers, forming an end-to-end non-linear mapping between noise and clean SAR images. With addition of skip connection, the network can keep image details and accomplish the strategy for residual learning which solves the notorious problem of vanishing gradients and accelerates convergence. The experimental results on simulated and real SAR images show that the introduced approach achieves improvements in both despeckling performance and time efficiency over the state-of-the-art despeckling methods.",
"title": ""
},
{
"docid": "196868f85571b16815127d2bd87b98ff",
"text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "d382ffcf441df13699378368a629c08c",
"text": "This paper addresses the problem of retail product recognition on grocery shelf images. We present a technique for accomplishing this task with a low time complexity. We decompose the problem into detection and recognition. The former is achieved by a generic product detection module which is trained on a specific class of products (e.g. tobacco packages). Cascade object detection framework of Viola and Jones [1] is used for this purpose. We further make use of Support Vector Machines (SVMs) to recognize the brand inside each detected region. We extract both shape and color information; and apply feature-level fusion from two separate descriptors computed with the bag of words approach. Furthermore, we introduce a dataset (available on request) that we have collected for similar research purposes. Results are presented on this dataset of more than 5,000 images consisting of 10 tobacco brands. We show that satisfactory detection and classification can be achieved on devices with cheap computational power. Potential applications of the proposed approach include planogram compliance control, inventory management and assisting visually impaired people during shopping.",
"title": ""
},
{
"docid": "7db08db3dc8ea195b2c2e3b48d358367",
"text": "Relationships between authors based on characteristics of published literature have been studied for decades. Author cocitation analysis using mapping techniques has been most frequently used to study how closely two authors are thought to be in intellectual space based on how members of the research community co-cite their works. Other approaches exist to study author relatedness based more directly on the text of their published works. In this study we present static and dynamic word-based approaches using vector space modeling, as well as a topic-based approach based on Latent Dirichlet Allocation for mapping author research relatedness. Vector space modeling is used to define an author space consisting of works by a given author. Outcomes for the two word-based approaches and a topic-based approach for 50 prolific authors in library and information science are compared with more traditional author cocitation analysis using multidimensional scaling and hierarchical cluster analysis. The two word-based approaches produced similar outcomes except where two authors were frequent co-authors for the majority of their articles. The topic-based approach produced the most distinctive map.",
"title": ""
},
{
"docid": "b94d146408340ce2a89b95f1b47e91f6",
"text": "In order to improve the life quality of amputees, providing approximate manipulation ability of a human hand to that of a prosthetic hand is considered by many researchers. In this study, a biomechanical model of the index finger of the human hand is developed based on the human anatomy. Since the activation of finger bones are carried out by tendons, a tendon configuration of the index finger is introduced and used in the model to imitate the human hand characteristics and functionality. Then, fuzzy sliding mode control where the slope of the sliding surface is tuned by a fuzzy logic unit is proposed and applied to have the finger model to follow a certain trajectory. The trajectory of the finger model, which mimics the motion characteristics of the human hand, is pre-determined from the camera images of a real hand during closing and opening motion. Also, in order to check the robust behaviour of the controller, an unexpected joint friction is induced on the prosthetic finger on its way. Finally, the resultant prosthetic finger motion and the tendon forces produced are given and results are discussed.",
"title": ""
},
{
"docid": "afa32d9f7160e8efac5bb8a976c0cfaf",
"text": "Gesture recognition is mainly apprehensive on analyzing the functionality of human wits. The main goal of gesture recognition is to create a system which can recognize specific human gestures and use them to convey information or for device control. Hand gestures provide a separate complementary modality to speech for expressing ones ideas. Information associated with hand gestures in a conversation is degree, discourse structure, spatial and temporal structure. The approaches present can be mainly divided into Data-Glove Based and Vision Based approaches. An important face feature point is the nose tip. Since nose is the highest protruding point from the face. Besides that, it is not affected by facial expressions. Another important function of the nose is that it is able to indicate the head pose. Knowledge of the nose location will enable us to align an unknown 3D face with those in a face database. Eye detection is divided into eye position detection and eye contour detection. Existing works in eye detection can be classified into two major categories: traditional image-based passive approaches and the active IR based approaches. The former uses intensity and shape of eyes for detection and the latter works on the assumption that eyes have a reflection under near IR illumination and produce bright/dark pupil effect. The traditional methods can be broadly classified into three categories: template based methods, appearance based methods and feature based methods. The purpose of this paper is to compare various human Gesture recognition systems for interfacing machines directly to human wits without any corporeal media in an ambient environment.",
"title": ""
},
{
"docid": "5cbc93a9844fcd026a1705ee031c6530",
"text": "Accompanying the rapid urbanization, many developing countries are suffering from serious air pollution problem. The demand for predicting future air quality is becoming increasingly more important to government's policy-making and people's decision making. In this paper, we predict the air quality of next 48 hours for each monitoring station, considering air quality data, meteorology data, and weather forecast data. Based on the domain knowledge about air pollution, we propose a deep neural network (DNN)-based approach (entitled DeepAir), which consists of a spatial transformation component and a deep distributed fusion network. Considering air pollutants' spatial correlations, the former component converts the spatial sparse air quality data into a consistent input to simulate the pollutant sources. The latter network adopts a neural distributed architecture to fuse heterogeneous urban data for simultaneously capturing the factors affecting air quality, e.g. meteorological conditions. We deployed DeepAir in our AirPollutionPrediction system, providing fine-grained air quality forecasts for 300+ Chinese cities every hour. The experimental results on the data from three-year nine Chinese-city demonstrate the advantages of DeepAir beyond 10 baseline methods. Comparing with the previous online approach in AirPollutionPrediction system, we have 2.4%, 12.2%, 63.2% relative accuracy improvements on short-term, long-term and sudden changes prediction, respectively.",
"title": ""
},
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
},
{
"docid": "19700a52f05178ea1c95d576f050f57d",
"text": "With the progress of mobile devices and wireless broadband, a new eMarket platform, termed spatial crowdsourcing is emerging, which enables workers (aka crowd) to perform a set of spatial tasks (i.e., tasks related to a geographical location and time) posted by a requester. In this paper, we study a version of the spatial crowd-sourcing problem in which the workers autonomously select their tasks, called the worker selected tasks (WST) mode. Towards this end, given a worker, and a set of tasks each of which is associated with a location and an expiration time, we aim to find a schedule for the worker that maximizes the number of performed tasks. We first prove that this problem is NP-hard. Subsequently, for small number of tasks, we propose two exact algorithms based on dynamic programming and branch-and-bound strategies. Since the exact algorithms cannot scale for large number of tasks and/or limited amount of resources on mobile platforms, we also propose approximation and progressive algorithms. We conducted a thorough experimental evaluation on both real-world and synthetic data to compare the performance and accuracy of our proposed approaches.",
"title": ""
},
{
"docid": "b1dd830adf87c283ff58630eade75b3c",
"text": "Self-control is a central function of the self and an important key to success in life. The exertion of self-control appears to depend on a limited resource. Just as a muscle gets tired from exertion, acts of self-control cause short-term impairments (ego depletion) in subsequent self-control, even on unrelated tasks. Research has supported the strength model in the domains of eating, drinking, spending, sexuality, intelligent thought, making choices, and interpersonal behavior. Motivational or framing factors can temporarily block the deleterious effects of being in a state of ego depletion. Blood glucose is an important component of the energy. KEYWORDS—self-control; ego depletion; willpower; impulse; strength Every day, people resist impulses to go back to sleep, to eat fattening or forbidden foods, to say or do hurtful things to their relationship partners, to play instead of work, to engage in inappropriate sexual or violent acts, and to do countless other sorts of problematic behaviors—that is, ones that might feel good immediately or be easy but that carry long-term costs or violate the rules and guidelines of proper behavior. What enables the human animal to follow rules and norms prescribed by society and to resist doing what it selfishly wants? Self-control refers to the capacity for altering one’s own responses, especially to bring them into line with standards such as ideals, values, morals, and social expectations, and to support the pursuit of long-term goals. Many writers use the terms selfcontrol and self-regulation interchangeably, but those whomake a distinction typically consider self-control to be the deliberate, conscious, effortful subset of self-regulation. In contrast, homeostatic processes such as maintaining a constant body temperature may be called self-regulation but not self-control. Self-control enables a person to restrain or override one response, thereby making a different response possible. Self-control has attracted increasing attention from psychologists for two main reasons. At the theoretical level, self-control holds important keys to understanding the nature and functions of the self. Meanwhile, the practical applications of self-control have attracted study in many contexts. Inadequate self-control has been linked to behavioral and impulse-control problems, including overeating, alcohol and drug abuse, crime and violence, overspending, sexually impulsive behavior, unwanted pregnancy, and smoking (e.g., Baumeister, Heatherton, & Tice, 1994; Gottfredson & Hirschi, 1990; Tangney, Baumeister, & Boone, 2004; Vohs & Faber, 2007). It may also be linked to emotional problems, school underachievement, lack of persistence, various failures at task performance, relationship problems and dissolution, and more.",
"title": ""
},
{
"docid": "cdb0e65f89f94e436e8c798cd0188d66",
"text": "Visual storytelling aims to generate human-level narrative language (i.e., a natural paragraph with multiple sentences) from a photo streams. A typical photo story consists of a global timeline with multi-thread local storylines, where each storyline occurs in one different scene. Such complex structure leads to large content gaps at scene transitions between consecutive photos. Most existing image/video captioning methods can only achieve limited performance, because the units in traditional recurrent neural networks (RNN) tend to “forget” the previous state when the visual sequence is inconsistent. In this paper, we propose a novel visual storytelling approach with Bidirectional Multi-thread Recurrent Neural Network (BMRNN). First, based on the mined local storylines, a skip gated recurrent unit (sGRU) with delay control is proposed to maintain longer range visual information. Second, by using sGRU as basic units, the BMRNN is trained to align the local storylines into the global sequential timeline. Third, a new training scheme with a storyline-constrained objective function is proposed by jointly considering both global and local matches. Experiments on three standard storytelling datasets show that the BMRNN model outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "98be2f8b10c618f9d2fc8183f289c739",
"text": "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"title": ""
}
] | scidocsrr |
4a2ed308123c49183244387daa393c3b | A Study on Resolution Skills in Phishing Detection | [
{
"docid": "7c050db718a21009908655cc99705d35",
"text": "a Department of Communication, Management Science and Systems, 333 Lord Christopher Baldy Hall, State University of New York at Buffalo, Buffalo, NY 14260, United States b Department of Finance, Operations and Information Systems, Brock University, Canada c Department of Information Systems and Operations Management, Ball State University, United States d Department of Information Systems and Operations Management, University of Texas at Arlington, United States e Management Science and Systems, State University of New York at Buffalo, United States",
"title": ""
}
] | [
{
"docid": "59eaa9f4967abdc1c863f8fb256ae966",
"text": "CONTEXT\nThe projected expansion in the next several decades of the elderly population at highest risk for Parkinson disease (PD) makes identification of factors that promote or prevent the disease an important goal.\n\n\nOBJECTIVE\nTo explore the association of coffee and dietary caffeine intake with risk of PD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nData were analyzed from 30 years of follow-up of 8004 Japanese-American men (aged 45-68 years) enrolled in the prospective longitudinal Honolulu Heart Program between 1965 and 1968.\n\n\nMAIN OUTCOME MEASURE\nIncident PD, by amount of coffee intake (measured at study enrollment and 6-year follow-up) and by total dietary caffeine intake (measured at enrollment).\n\n\nRESULTS\nDuring follow-up, 102 men were identified as having PD. Age-adjusted incidence of PD declined consistently with increased amounts of coffee intake, from 10.4 per 10,000 person-years in men who drank no coffee to 1.9 per 10,000 person-years in men who drank at least 28 oz/d (P<.001 for trend). Similar relationships were observed with total caffeine intake (P<.001 for trend) and caffeine from non-coffee sources (P=.03 for trend). Consumption of increasing amounts of coffee was also associated with lower risk of PD in men who were never, past, and current smokers at baseline (P=.049, P=.22, and P=.02, respectively, for trend). Other nutrients in coffee, including niacin, were unrelated to PD incidence. The relationship between caffeine and PD was unaltered by intake of milk and sugar.\n\n\nCONCLUSIONS\nOur findings indicate that higher coffee and caffeine intake is associated with a significantly lower incidence of PD. This effect appears to be independent of smoking. The data suggest that the mechanism is related to caffeine intake and not to other nutrients contained in coffee. JAMA. 2000;283:2674-2679.",
"title": ""
},
{
"docid": "fd7b4fb86b650c18cbc1d720679d94d5",
"text": "Thermal sensors are used in modern microprocessors to provide information for: 1) throttling at the maximum temperature of operation, and 2) fan regulation at temperatures down to 50°C. Today's microprocessors are thermally limited in many applications, so accurate temperature readings are essential in order to maximize performance. There are fairly large thermal gradients across the core, which vary for different instructions, so it is necessary to position thermal sensors near hot-spots. In addition, the locations of the hot-spots may not be predictable during the design phase. Thus it is necessary for hot-spot sensors to be small enough to be moved late in the design cycle or even after first Silicon.",
"title": ""
},
{
"docid": "1fc965670f71d9870a4eea93d129e285",
"text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "44665a3d2979031aca85010b9ad1ec90",
"text": "Studies in humans and non-human primates have provided evidence for storage of working memory contents in multiple regions ranging from sensory to parietal and prefrontal cortex. We discuss potential explanations for these distributed representations: (i) features in sensory regions versus prefrontal cortex differ in the level of abstractness and generalizability; and (ii) features in prefrontal cortex reflect representations that are transformed for guidance of upcoming behavioral actions. We propose that the propensity to produce persistent activity is a general feature of cortical networks. Future studies may have to shift focus from asking where working memory can be observed in the brain to how a range of specialized brain areas together transform sensory information into a delayed behavioral response.",
"title": ""
},
{
"docid": "cb4518f95b82e553b698ae136362bd59",
"text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is su¢ ciently broad and yet su¢ ciently detailed when it comes to key concepts. The text is not tailored to the
eld of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. The chapter is organized in the following sections:",
"title": ""
},
{
"docid": "4d7e876d61060061ba6419869d00675e",
"text": "Context-aware recommender systems (CARS) take context into consideration when modeling user preferences. There are two general ways to integrate context with recommendation: contextual filtering and contextual modeling. Currently, the most effective context-aware recommendation algorithms are based on a contextual modeling approach that estimate deviations in ratings across different contexts. In this paper, we propose context similarity as an alternative contextual modeling approach and examine different ways to represent context similarity and incorporate it into recommendation. More specifically, we show how context similarity can be integrated into the sparse linear method and matrix factorization algorithms. Our experimental results demonstrate that learning context similarity is a more effective approach to contextaware recommendation than modeling contextual rating deviations.",
"title": ""
},
{
"docid": "c30a60cdcdc894594692bd730cd09846",
"text": "Healthcare sector is totally different from other industry. It is on high priority sector and people expect highest level of care and services regardless of cost. It did not achieve social expectation even though it consume huge percentage of budget. Mostly the interpretations of medical data is being done by medical expert. In terms of image interpretation by human expert, it is quite limited due to its subjectivity, complexity of the image, extensive variations exist across different interpreters, and fatigue. After the success of deep learning in other real world application, it is also providing exciting solutions with good accuracy for medical imaging and is seen as a key method for future applications in health secotr. In this chapter, we discussed state of the art deep learning architecture and its optimization used for medical image segmentation and classification. In the last section, we have discussed the challenges deep learning based methods for medical imaging and open research issue.",
"title": ""
},
{
"docid": "f103277dbbcab26d8e5c176520666db9",
"text": "Air pollution in urban environments has risen steadily in the last several decades. Such cities as Beijing and Delhi have experienced rises to dangerous levels for citizens. As a growing and urgent public health concern, cities and environmental agencies have been exploring methods to forecast future air pollution, hoping to enact policies and provide incentives and services to benefit their citizenry. Much research is being conducted in environmental science to generate deterministic models of air pollutant behavior; however, this is both complex, as the underlying molecular interactions in the atmosphere need to be simulated, and often inaccurate. As a result, with greater computing power in the twenty-first century, using machine learning methods for forecasting air pollution has become more popular. This paper investigates the use of the LSTM recurrent neural network (RNN) as a framework for forecasting in the future, based on time series data of pollution and meteorological information in Beijing. Due to the sequence dependencies associated with large-scale and longer time series datasets, RNNs, and in particular LSTM models, are well-suited. Our results show that the LSTM framework produces equivalent accuracy when predicting future timesteps compared to the baseline support vector regression for a single timestep. Using our LSTM framework, we can now extend the prediction from a single timestep out to 5 to 10 hours into the future. This is promising in the quest for forecasting urban air quality and leveraging that insight to enact beneficial policy.",
"title": ""
},
{
"docid": "74ea9bde4e265dba15cf9911fce51ece",
"text": "We consider a system aimed at improving the resolution of a conventional airborne radar, looking in the forward direction, by forming an end-fire synthetic array along the airplane line of flight. The system is designed to operate even in slant (non-horizontal) flight trajectories, and it allows imaging along the line of flight. By using the array theory, we analyze system geometry and ambiguity problems, and analytically evaluate the achievable resolution and the required pulse repetition frequency. Processing computational burden is also analyzed, and finally some simulation results are provided.",
"title": ""
},
{
"docid": "f69d669235d54858eb318b53cdadcb47",
"text": "We present a complete vision guided robot system for model based 3D pose estimation and picking of singulated 3D objects. Our system employs a novel vision sensor consisting of a video camera surrounded by eight flashes (light emitting diodes). By capturing images under different flashes and observing the shadows, depth edges or silhouettes in the scene are obtained. The silhouettes are segmented into different objects and each silhouette is matched across a database of object silhouettes in different poses to find the coarse 3D pose. The database is pre-computed using a Computer Aided Design (CAD) model of the object. The pose is refined using a fully projective formulation [ACB98] of Lowe’s model based pose estimation algorithm [Low91, Low87]. The estimated pose is transferred to robot coordinate system utilizing the handeye and camera calibration parameters, which allows the robot to pick the object. Our system outperforms conventional systems using 2D sensors with intensity-based features as well as 3D sensors. We handle complex ambient illumination conditions, challenging specular backgrounds, diffuse as well as specular objects, and texture-less objects, on which traditional systems usually fail. Our vision sensor is capable of computing depth edges in real time and is low cost. Our approach is simple and fast for practical implementation. We present real experimental results using our custom designed sensor mounted on a robot arm to demonstrate the effectiveness of our technique. International Journal of Robotics Research This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2009 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "8dcb0f20c000a30c0d3330f6ac6b373b",
"text": "Although social networking sites (SNSs) have attracted increased attention and members in recent years, there has been little research on it: particularly on how a users’ extroversion or introversion can affect their intention to pay for these services and what other factors might influence them. We therefore proposed and tested a model that measured the users’ value and satisfaction perspectives by examining the influence of these factors in an empirical survey of 288 SNS members. At the same time, the differences due to their psychological state were explored. The causal model was validated using PLSGraph 3.0; six out of eight study hypotheses were supported. The results indicated that perceived value significantly influenced the intention to pay SNS subscription fees while satisfaction did not. Moreover, extroverts thought more highly of the social value of the SNS, while introverts placed more importance on its emotional and price value. The implications of these findings are discussed. Crown Copyright 2010 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "78d88298e0b0e197f44939ee96210778",
"text": "Cyber-security research and development for SCADA is being inhibited by the lack of available SCADA attack datasets. This paper presents a modular dataset generation framework for SCADA cyber-attacks, to aid the development of attack datasets. The presented framework is based on requirements derived from related prior research, and is applicable to any standardised or proprietary SCADA protocol. We instantiate our framework and validate the requirements using a Python implementation. This paper provides experiments of the framework's usage on a state-of-the-art DNP3 critical infrastructure test-bed, thus proving framework's ability to generate SCADA cyber-attack datasets.",
"title": ""
},
{
"docid": "6762134c344053fb167ea286e21995f3",
"text": "Image processing techniques are widely used in the domain of medical sciences for detecting various diseases, infections, tumors, cell abnormalities and various cancers. Detecting and curing a dise ase on time is very important in the field of medicine for protecting and saving human life. Mostly in case of high severity diseases where the mortality rates are more, the waiting time of patients for their reports such as blood test, MRI is more. The time taken for generation of any of the test is from 1-10 days. In high risk diseases like Hepatitis B, it is recommended that the patient’s waiting time should be as less as possible and the treatment should be started immediately. The current system used by the pathologists for identification of blood parameters is costly and the time involved in generation of the reports is also more sometimes leading to loss of patient’s life. Also the pathological tests are expensive, which are sometimes not affordable by the patient. This paper deals with an image processing technique used for detecting the abnormalities of blood cells in less time. The proposed technique also helps in segregating the blood cells in different categories based on the form factor.",
"title": ""
},
{
"docid": "9dc80bb779837f615a7f379ab2bbec99",
"text": "Twitter, as a social media is a very popular way of expressing opinions and interacting with other people in the online world. When taken in aggregation tweets can provide a reflection of public sentiment towards events. In this paper, we provide a positive or negative sentiment on Twitter posts using a well-known machine learning method for text categorization. In addition, we use manually labeled (positive/negative) tweets to build a trained method to accomplish a task. The task is looking for a correlation between twitter sentiment and events that have occurred. The trained model is based on the Bayesian Logistic Regression (BLR) classification method. We used external lexicons to detect subjective or objective tweets, added Unigram and Bigram features and used TF-IDF (Term Frequency-Inverse Document Frequency) to filter out the features. Using the FIFA World Cup 2014 as our case study, we used Twitter Streaming API and some of the official world cup hashtags to mine, filter and process tweets, in order to analyze the reflection of public sentiment towards unexpected events. The same approach, can be used as a basis for predicting future events.",
"title": ""
},
{
"docid": "2c832dea09e5fc622a5c1bbfdb53f8b2",
"text": "A recent meta-analysis (S. Vazire & D. C. Funder, 2006) suggested that narcissism and impulsivity are related and that impulsivity partially accounts for the relation between narcissism and self-defeating behaviors (SDB). This research examines these hypotheses in two studies and tests a competing hypothesis that Extraversion and Agreeableness account for this relation. In Study 1, we examined the relations among narcissism, impulsivity, and aggression. Both narcissism and impulsivity predicted aggression, but impulsivity did not mediate the narcissism-aggression relation. In Study 2, narcissism was related to a measure of SDB and manifested divergent relations with a range of impulsivity traits from three measures. None of the impulsivity models accounted for the narcissism-SDB relation, although there were unique mediating paths for traits related to sensation and fun seeking. The domains of Extraversion and low Agreeableness successfully mediated the entire narcissism-SDB relation. We address the discrepancy between the current and meta-analytic findings.",
"title": ""
},
{
"docid": "c0d3c14e792a02a9ad57745b31b84be6",
"text": "INTRODUCTION\nCritically ill patients are characterized by increased loss of muscle mass, partially attributed to sepsis and multiple organ failure, as well as immobilization. Recent studies have shown that electrical muscle stimulation (EMS) may be an alternative to active exercise in chronic obstructive pulmonary disease (COPD) and chronic heart failure (CHF) patients with myopathy. The aim of our study was to investigate the EMS effects on muscle mass preservation of critically ill patients with the use of ultrasonography (US).\n\n\nMETHODS\nForty-nine critically ill patients (age: 59 +/- 21 years) with an APACHE II admission score >or=13 were randomly assigned after stratification upon admission to receive daily EMS sessions of both lower extremities (EMS-group) or to the control group (control group). Muscle mass was evaluated with US, by measuring the cross sectional diameter (CSD) of the vastus intermedius and the rectus femoris of the quadriceps muscle.\n\n\nRESULTS\nTwenty-six patients were finally evaluated. Right rectus femoris and right vastus intermedius CSD decreased in both groups (EMS group: from 1.42 +/- 0.48 to 1.31 +/- 0.45 cm, P = 0.001 control group: from 1.59 +/- 0.53 to 1.37 +/- 0.5 cm, P = 0.002; EMS group: from 0.91 +/- 0.39 to 0.81 +/- 0.38 cm, P = 0.001 control group: from 1.40 +/- 0.64 to 1.11 +/- 0.56 cm, P = 0.004, respectively). However, the CSD of the right rectus femoris decreased significantly less in the EMS group (-0.11 +/- 0.06 cm, -8 +/- 3.9%) as compared to the control group (-0.21 +/- 0.10 cm, -13.9 +/- 6.4%; P < 0.05) and the CSD of the right vastus intermedius decreased significantly less in the EMS group (-0.10 +/- 0.05 cm, -12.5 +/- 7.4%) as compared to the control group (-0.29 +/- 0.28 cm, -21.5 +/- 15.3%; P < 0.05).\n\n\nCONCLUSIONS\nEMS is well tolerated and seems to preserve the muscle mass of critically ill patients. The potential use of EMS as a preventive and rehabilitation tool in ICU patients with polyneuromyopathy needs to be further investigated.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov: NCT00882830.",
"title": ""
},
{
"docid": "0b6ce2e4f3ef7f747f38068adef3da54",
"text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.",
"title": ""
},
{
"docid": "c6674729e565fa357f7eda74d48c71b8",
"text": "Rating scales are employed as a means of extracting more information out of an item than would be obtained from a mere \"yes/no\", \"right/wrong\" or other dichotomy. But does this additional information increase measurement accuracy and precision? Eight guidelines are suggested to aid the analyst in optimizing the manner in which rating scales categories cooperate in order to improve the utility of the resultant measures. Though these guidelines are presented within the context of Rasch analysis, they reflect aspects of rating scale functioning which impact all methods of analysis. The guidelines feature rating-scale-based data such as category frequency, ordering, rating-to-measure inferential coherence, and the quality of the scale from measurement and statistical perspectives. The manner in which the guidelines prompt recategorization or reconceptualization of the rating scale is indicated. Utilization of the guidelines is illustrated through their application to two published data sets.",
"title": ""
},
{
"docid": "60eeb0468dff5a3eeb9c9d133a81759f",
"text": "To evaluate cone and cone-driven retinal function in patients with Smith-Lemli-Opitz syndrome (SLOS), a condition characterized by low cholesterol. Rod and rod-driven function in patients with SLOS are known to be abnormal. Electroretinographic (ERG) responses to full-field stimuli presented on a steady, rod suppressing background were recorded in 13 patients who had received long-term cholesterol supplementation. Cone photoresponse sensitivity (S CONE) and saturated amplitude (R CONE) parameters were estimated using a model of the activation of phototransduction, and post-receptor b-wave and 30 Hz flicker responses were analyzed. The responses of the patients were compared to those of control subjects (N = 13). Although average values of both S CONE and R CONE were lower than in controls, the differences were not statistically significant. Post-receptor b-wave amplitude and implicit time and flicker responses were normal. The normal cone function contrasts with the significant abnormalities in rod function that were found previously in these same patients. Possibly, cholesterol supplementation has a greater protective effect on cones than on rods as has been demonstrated in the rat model of SLOS.",
"title": ""
},
{
"docid": "be0f836ec6431b74342b670921ac41f7",
"text": "This paper addresses the issue of expert finding in a social network. The task of expert finding, as one of the most important research issues in social networks, is aimed at identifying persons with relevant expertise or experience for a given topic. In this paper, we propose a propagation-based approach that takes into consideration of both person local information and network information (e.g. relationships between persons). Experimental results show that our approach can outperform the baseline approach.",
"title": ""
}
] | scidocsrr |
2a09dc621edbba94f97580e33efd53e9 | Safety climate in OHSAS 18001-certified organisations: antecedents and consequences of safety behaviour. | [
{
"docid": "d9de6a277eec1156e680ee6f656cea10",
"text": "Research in the areas of organizational climate and work performance was used to develop a framework for measuring perceptions of safety at work. The framework distinguished perceptions of the work environment from perceptions of performance related to safety. Two studies supported application of the framework to employee perceptions of safety in the workplace. Safety compliance and safety participation were distinguished as separate components of safety-related performance. Perceptions of knowledge about safety and motivation to perform safely influenced individual reports of safety performance and also mediated the link between safety climate and safety performance. Specific dimensions of safety climate were identified and constituted a higher order safety climate factor. The results support conceptualizing safety climate as an antecedent to safety performance in organizations.",
"title": ""
},
{
"docid": "bf1a2bf0efb47627f07288251f5baf5b",
"text": "Industrial safety is an important issue for operations managers — it has implications for cost, delivery, quality, and social responsibility. Minor accidents can interfere with production in a variety of ways, and a serious accident can shut down an entire operation. In this context, questions about the causes of workplace accidents are highly relevant. There is a popular notion that employees’ unsafe acts are the primary causes of workplace accidents, but a number of authors suggest a perspective that highlights influences from operating and social systems. The study described herein addresses this subject by assessing steelworkers’ responses to a survey about social, technical, and personal factors related to safe work behaviors. Results provide evidence that a chain reaction of technical and social constructs operate through employees to influence safe behaviors. These results demonstrate that safety hazards, safety culture, and production pressures can influence safety efficacy and cavalier attitudes, on a path leading to safe or unsafe work behaviors. Based on these results, we conclude with prescriptions for operations managers and others who play roles in the causal sequence. q 2000 Elsevier Science B.V. All",
"title": ""
}
] | [
{
"docid": "17b2adeaa934fe769ae3f3460e87b5cc",
"text": "We aim to improve on the design of procedurally generated game levels. We propose a method which empowers game designers to author and control level generators, by expressing gameplay-related design constraints. Following a survey conducted on recent procedural level generation methods, we argue that gameplay-based control is currently the most natural control mechanism available for generative methods. Our method uses graph grammars, the result of the designer-expressed constraints, to generate sequences of desired player actions. These action graphs are used as the basis for the spatial structure and content of game levels; they guide the layout process and indicate the required content related to such actions. We showcase our approach with a case study on a 3D dungeon crawler game. Results allow us to conclude that our control mechanisms are both expressive and powerful, effectively supporting designers to procedurally generate game",
"title": ""
},
{
"docid": "43f200b97e2b6cb9c62bbbe71bed72e3",
"text": "We compare nonreturn-to-zero (NRZ) with return-to-zero (RZ) modulation format for wavelength-division-multiplexed systems operating at data rates up to 40 Gb/s. We find that in 10-40-Gb/s dispersion-managed systems (single-mode fiber alternating with dispersion compensating fiber), NRZ is more adversely affected by nonlinearities, whereas RZ is more affected by dispersion. In this dispersion map, 10- and 20-Gb/s systems operate better using RZ modulation format because nonlinearity dominates. However, 40-Gb/s systems favor the usage of NRZ because dispersion becomes the key limiting factor at 40 Gb/s.",
"title": ""
},
{
"docid": "be8b89fc46c919ab53abf86642bb8f8a",
"text": "us to rethink our whole value frame concerning means and ends, and the place of technology within this frame. The ambit of HCI has expanded enormously since the field’s emergence in the early 1980s. Computing has changed significantly; mobile and ubiquitous communication networks span the globe, and technology has been integrated into all aspects of our daily lives. Computing is not simply for calculating, but rather is a medium through which we collaborate and interact with other people. The focus of HCI is not so much on human-computer interaction as it is on human activities mediated by computing [1]. Just as the original meaning of ACM (Association for Computing Machinery) has become dated, perhaps so too has the original meaning of HCI (humancomputer interaction). It is time for us to rethink how we approach issues of people and technology. In this article I explore how we might develop a more humancentered approach to computing. for the 21st century, centered on the exploration of new forms of living with and through technologies that give primacy to human actors, their values, and their activities. The area of concern is much broader than the simple “fit” between people and technology to improve productivity (as in the classic human factors mold); it encompasses a much more challenging territory that includes the goals and activities of people, their values, and the tools and environments that help shape their everyday lives. We have evermore sophisticated and complex technologies available to us in the home, at work, and on the go, yet in many cases, rather than augmenting our choices and capabilities, this plethora of new widgets and systems seems to confuse us—or even worse, disable us. (Surely there is something out of control when a term such as “IT disability” can be taken seriously in national research programs.) Solutions do not reside simply in ergonomic corrections to the interface, but instead require Some years ago, HCI researcher Panu Korhonen of Nokia outlined to me how HCI is changing, as follows: In the early days the Nokia HCI people were told “Please evaluate our user interface, and make it easy to use.” That gave way to “Please help us design this user interface so that it is easy to use.” That, in turn, led to a request: “Please help us find what the users really need so that we know how to design this user interface.” And now, the engineers are pleading with us: “Look at this area of",
"title": ""
},
{
"docid": "d0205fc884821d10d6939748012bcfcb",
"text": "We establish a data-dependent notion of algorithmic stability for Stochastic Gradient Descent (SGD), and employ it to develop novel generalization bounds. This is in contrast to previous distribution-free algorithmic stability results for SGD which depend on the worst-case constants. By virtue of the data-dependent argument, our bounds provide new insights into learning with SGD on convex and non-convex problems. In the convex case, we show that the bound on the generalization error depends on the risk at the initialization point. In the non-convex case, we prove that the expected curvature of the objective function around the initialization point has crucial influence on the generalization error. In both cases, our results suggest a simple data-driven strategy to stabilize SGD by pre-screening its initialization. As a corollary, our results allow us to show optimistic generalization bounds that exhibit fast convergence rates for SGD subject to a vanishing empirical risk and low noise of stochastic gradient.",
"title": ""
},
{
"docid": "4d9ad24707702e70747143ad477ed831",
"text": "The paper presents a high-speed (500 f/s) large-format 1 K/spl times/1 K 8 bit 3.3 V CMOS active pixel sensor (APS) with 1024 ADCs integrated on chip. The sensor achieves an extremely high output data rate of over 500 Mbytes per second and a low power dissipation of 350 mW at the 66 MHz master clock rate. Principal architecture and circuit solutions allowing such a high throughput are discussed along with preliminary results of the chip characterization.",
"title": ""
},
{
"docid": "87eab42827061426dfc9b335530e7037",
"text": "OBJECTIVES\nHealth behavior theories focus on the role of conscious, reflective factors (e.g., behavioral intentions, risk perceptions) in predicting and changing behavior. Dual-process models, on the other hand, propose that health actions are guided not only by a conscious, reflective, rule-based system but also by a nonconscious, impulsive, associative system. This article argues that research on health decisions, actions, and outcomes will be enriched by greater consideration of nonconscious processes.\n\n\nMETHODS\nA narrative review is presented that delineates research on implicit cognition, implicit affect, and implicit motivation. In each case, we describe the key ideas, how they have been taken up in health psychology, and the possibilities for behavior change interventions, before outlining directions that might profitably be taken in future research.\n\n\nRESULTS\nCorrelational research on implicit cognitive and affective processes (attentional bias and implicit attitudes) has recently been supplemented by intervention studies using implementation intentions and practice-based training that show promising effects. Studies of implicit motivation (health goal priming) have also observed encouraging findings. There is considerable scope for further investigations of implicit affect control, unconscious thought, and the automatization of striving for health goals.\n\n\nCONCLUSION\nResearch on nonconscious processes holds significant potential that can and should be developed by health psychologists. Consideration of impulsive as well as reflective processes will engender new targets for intervention and should ultimately enhance the effectiveness of behavior change efforts.",
"title": ""
},
{
"docid": "68f3b26e27184a10a085aa1762c984ed",
"text": "Automation has gained importance in every field of human life. But there are still some fields where more traditional methods are being employed. One such field is the ordering system in restaurants. Generally, in restaurants menu ordering system will be available in paper format from that the customer has to select the menu items and then the order is handed over to waiter who takes the corresponding order, which is a very time consuming process. In this paper we propose a fully automated ordering system in which the conventional paper based menu is replaced by a more user friendly touchscreen based menu card. The system consists of microcontroller which is interfaced with the input and output modules. The input module is the touchscreen sensor which is placed on GLCD to have graphical image display, which takes the input from the user and provides the same to the microcontroller. The output module is a RF module which is used for communication between system at table and system at ordering department. Microcontroller also displays the menu items on the GLCD. At the receiving end the selected items will be displayed on the LCD and the ordering department will note down the order received.",
"title": ""
},
{
"docid": "749785a3973c6d2d760cbfbe6f1dbdac",
"text": "Research on spreadsheet errors is substantial, compelling, and unanimous. It has three simple conclusions. The first is that spreadsheet errors are rare on a per-cell basis, but in large programs, at least one incorrect bottom-line value is very likely to be present. The second is that errors are extremely difficult to detect and correct. The third is that spreadsheet developers and corporations are highly overconfident in the accuracy of their spreadsheets. The disconnect between the first two conclusions and the third appears to be due to the way human cognition works. Most importantly, we are aware of very few of the errors we make. In addition, while we are proudly aware of errors that we fix, we have no idea of how many remain, but like Little Jack Horner we are impressed with our ability to ferret out errors. This paper reviews human cognition processes and shows first that humans cannot be error free no matter how hard they try, and second that our intuition about errors and how we can reduce them is based on appallingly bad knowledge. This paper argues that we should reject any prescription for reducing errors that has not been rigorously proven safe and effective. This paper also argues that our biggest need, based on empirical data, is to do massively more testing than we do now. It suggests that the code inspection methodology developed in software development is likely to apply very well to spreadsheet inspection.",
"title": ""
},
{
"docid": "eff17ece2368b925f0db8e18ea0fc897",
"text": "Blockchain, as the backbone technology of the current popular Bitcoin digital currency, has become a promising decentralized data management framework. Although blockchain has been widely adopted in many applications (e.g., finance, healthcare, and logistics), its application in mobile services is still limited. This is due to the fact that blockchain users need to solve preset proof-of-work puzzles to add new data (i.e., a block) to the blockchain. Solving the proof of work, however, consumes substantial resources in terms of CPU time and energy, which is not suitable for resource-limited mobile devices. To facilitate blockchain applications in future mobile Internet of Things systems, multiple access mobile edge computing appears to be an auspicious solution to solve the proof-of-work puzzles for mobile users. We first introduce a novel concept of edge computing for mobile blockchain. Then we introduce an economic approach for edge computing resource management. Moreover, a prototype of mobile edge computing enabled blockchain systems is presented with experimental results to justify the proposed concept.",
"title": ""
},
{
"docid": "233cb91d9d3b6aefbeb065f6ad6d8e80",
"text": "This thesis addresses the problem of verifying the geographic locations of Internet clients. First, we demonstrate how current state-of-the-art delay-based geolocation techniques are susceptible to evasion through delay manipulations, which involve both increasing and decreasing the Internet delays that are observed between a client and a remote measuring party. We find that delay-based techniques generally lack appropriate mechanisms to measure delays in an integrity-preserving manner. We then discuss different strategies enabling an adversary to benefit from being able to manipulate the delays. Upon analyzing the effect of these strategies on three representative delay-based techniques, we found that the strategies combined with the ability of full delay manipulation can allow an adversary to (fraudulently) control the location returned by those geolocation techniques accurately. We then propose Client Presence Verification (CPV) as a delay-based technique to verify an assertion about a client’s physical presence in a prescribed geographic region. Three verifiers geographically encapsulating a client’s asserted location are used to corroborate that assertion by measuring the delays between themselves and the client. CPV infers geographic distances from these delays and thus, using the smaller of the forward and reverse one-way delay between each verifier and the client is expected to result in a more accurate distance inference than using the conventional round-trip times. Accordingly, we devise a novel protocol for accurate one-way delay measurements between the client and the three verifiers to be used by CPV, taking into account that the client could manipulate the measurements to defeat the verification process. We evaluate CPV through extensive real-world experiments with legitimate clients (those truly present at where they asserted to be) modeled to use both wired and wireless access networks. Wired evaluation is done using the PlanetLab testbed, during which we examine various factors affecting CPV’s efficacy, such as the client’s geographical nearness to the verifiers. For wireless evaluation, we leverage the Internet delay information collected for wired clients from PlanetLab, and model additional delays representing the last-mile wireless link. The additional delays were generated following wireless delay distribution models studied in the literature. Again, we examine various factors that affect CPV’s efficacy, including the number of devices actively competing for the wireless media in the vicinity of a wireless legitimate CPV client. Finally, we reinforce CPV against a (hypothetical) middlebox that an adversary specifically customizes to defeat CPV (i.e., assuming an adversary that is aware of how CPV operates). We postulate that public middlebox service providers (e.g., in the form of Virtual Private Networks) would be motivated to defeat CPV if it is to be widely adopted in practice. To that end, we propose to use a Proof-ofWork mechanism that allows CPV to impose constraints, which effectively limit the number of clients (now adversaries) simultaneously colluding with that middlebox; beyond that number, CPV detects the middlebox.",
"title": ""
},
{
"docid": "2643c7960df0aed773aeca6e04fde67e",
"text": "Many studies utilizing dogs, cats, birds, fish, and robotic simulations of animals have tried to ascertain the health benefits of pet ownership or animal-assisted therapy in the elderly. Several small unblinded investigations outlined improvements in behavior in demented persons given treatment in the presence of animals. Studies piloting the use of animals in the treatment of depression and schizophrenia have yielded mixed results. Animals may provide intangible benefits to the mental health of older persons, such as relief social isolation and boredom, but these have not been formally studied. Several investigations of the effect of pets on physical health suggest animals can lower blood pressure, and dog walkers partake in more physical activity. Dog walking, in epidemiological studies and few preliminary trials, is associated with lower complication risk among patients with cardiovascular disease. Pets may also have harms: they may be expensive to care for, and their owners are more likely to fall. Theoretically, zoonotic infections and bites can occur, but how often this occurs in the context of pet ownership or animal-assisted therapy is unknown. Despite the poor methodological quality of pet research after decades of study, pet ownership and animal-assisted therapy are likely to continue due to positive subjective feelings many people have toward animals.",
"title": ""
},
{
"docid": "a0e66a56d9fd7c5591487a3aaa5d0851",
"text": "Link prediction on knowledge graphs is useful in numerous application areas such as semantic search, question answering, entity disambiguation, enterprise decision support, recommender systems and so on. While many of these applications require a reasonably quick response and may operate on data that is constantly changing, existing methods often lack speed and adaptability to cope with these requirements. This is aggravated by the fact that knowledge graphs are often extremely large and may easily contain millions of entities rendering many of these methods impractical. In this paper, we address the weaknesses of current methods by proposing Random Semantic Tensor Ensemble (RSTE), a scalable ensemble-enabled framework based on tensor factorization. Our proposed approach samples a knowledge graph tensor in its graph representation and performs link prediction via ensembles of tensor factorization. Our experiments on both publicly available datasets and real world enterprise/sales knowledge bases have shown that our approach is not only highly scalable, parallelizable and memory efficient, but also able to increase the prediction accuracy significantly across all datasets.",
"title": ""
},
{
"docid": "247534c6b5416e4330a84e10daf2bc0c",
"text": "The aim of the present study was to determine metabolic responses, movement patterns and distance covered at running speeds corresponding to fixed blood lactate concentrations (FBLs) in young soccer players during a match play. A further aim of the study was to evaluate the relationships between FBLs, maximal oxygen consumption (VO2max) and distance covered during a game. A multistage field test was administered to 32 players to determine FBLs and VO2max. Blood lactate (LA), heart rate (HR) and rate of perceived exertion (RPE) responses were obtained from 36 players during tournament matches filmed using six fixed cameras. Images were transferred to a computer, for calibration and synchronization. In all players, values for LA and HR were higher and RPE lower during the 1(st) half compared to the 2(nd) half of the matches (p < 0.01). Players in forward positions had higher LA levels than defenders, but HR and RPE values were similar between playing positions. Total distance and distance covered in jogging, low-moderate-high intensity running and low intensity sprint were higher during the 1(st) half (p < 0.01). In the 1(st) half, players also ran longer distances at FBLs [p<0.01; average running speed at 2mmol·L(-1) (FBL2): 3.32 ± 0.31m·s(-1) and average running speed at 4mmol·L(-1) (FBL4): 3.91 ± 0.25m·s(-1)]. There was a significant difference between playing positions in distance covered at different running speeds (p < 0.05). However, when distance covered was expressed as FBLs, the players ran similar distances. In addition, relationships between FBLs and total distance covered were significant (r = 0.482 to 0.570; p < 0.01). In conclusion, these findings demonstrated that young soccer players experienced higher internal load during the 1(st) half of a game compared to the 2(nd) half. Furthermore, although movement patterns of players differed between playing positions, all players experienced a similar physiological stress throughout the game. Finally, total distance covered was associated to fixed blood lactate concentrations during play. Key pointsBased on LA, HR and RPE responses, young top soccer players experienced a higher physiological stress during the 1(st) half of the matches compared to the 2(nd) half.Movement patterns differed in accordance with the players' positions but that all players experienced a similar physiological stress during match play.Approximately one quarter of total distance was covered at speeds that exceeded the 4 mmol·L(-1) fixed LA threshold.Total distance covered was influenced by running speeds at fixed lactate concentrations in young soccer players during match play.",
"title": ""
},
{
"docid": "6ec3f783ec49c0b3e51a704bc3bd03ec",
"text": "Abstract: It has been suggested by many supply chain practitioners that in certain cases inventory can have a stimulating effect on the demand. In mathematical terms this amounts to the demand being a function of the inventory level alone. In this work we propose a logistic growth model for the inventory dependent demand rate and solve first the continuous time deterministic optimal control problem of maximising the present value of the total net profit over an infinite horizon. It is shown that under a strict condition there is a unique optimal stock level which the inventory planner should maintain in order to satisfy demand. The stochastic version of the optimal control problem is considered next. A bang-bang type of optimal control problem is formulated and the associated Hamilton-Jacobi-Bellman equation is solved. The inventory level that signifies a switch in the ordering strategy is worked out in the stochastic case.",
"title": ""
},
{
"docid": "a8920f6ba4500587cf2a160b8d91331a",
"text": "In this paper, we present an approach that can handle Z-numbers in the context of multi-criteria decision-making problems. The concept of Z-number as an ordered pair Z=(A, B) of fuzzy numbers A and B is used, where A is a linguistic value of a variable of interest and B is a linguistic value of the probability measure of A. As human beings, we communicate with each other by means of natural language using sentences like “the journey from home to university most likely takes about half an hour.” The Z-numbers are converted to fuzzy numbers. Then the Z-TODIM and Z-TOPSIS are presented as a direct extension of the fuzzy TODIM and fuzzy TOPSIS, respectively. The proposed methods are applied to two case studies and compared with the standard approach using crisp values. The results obtained show the feasibility of the approach.",
"title": ""
},
{
"docid": "5f4235a8f9095afe6697c9fdb00e0a43",
"text": "Typically, firms decide whether or not to develop a new product based on their resources, capabilities and the return on investment that the product is estimated to generate. We propose that firms adopt a broader heuristic for making new product development choices. Our heuristic approach requires moving beyond traditional finance-based thinking, and suggests that firms concentrate on technological trajectories by combining technology roadmapping, information technology (IT) and supply chain management to make more sustainable new product development decisions. Using the proposed holistic heuristic methods, versus relying on traditional finance-based decision-making tools (e.g., emphasizing net present value or internal rate of return projections), enables firms to plan beyond the short-term and immediate set of technologies at hand. Our proposed heuristic approach enables firms to forecast technologies and markets, and hence, new product priorities in the longer term. Investments in new products should, as a result, generate returns over a longer period than traditionally expected, giving firms more sustainable investments. New products are costly and need to have a 0040-1625/$ – see front matter D 2003 Elsevier Inc. All rights reserved. doi:10.1016/S0040-1625(03)00064-7 * Corresponding author. Tel.: +1-814-863-7133. E-mail addresses: [email protected] (I.J. Petrick), [email protected] (A.E. Echols). 1 Tel.: +1-814-863-0642. I.J. Petrick, A.E. Echols / Technological Forecasting & Social Change 71 (2004) 81–100 82 durable presence in the market. Transaction costs and resources will be saved, as firms make new product development decisions less frequently. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1f247e127866e62029310218c380bc31",
"text": "Human Resource is the most important asset for any organization and it is the resource of achieving competitive advantage. Managing human resources is very challenging as compared to managing technology or capital and for its effective management, organization requires effective HRM system. HRM system should be backed up by strong HRM practices. HRM practices refer to organizational activities directed at managing the group of human resources and ensuring that the resources are employed towards the fulfillment of organizational goals. The purpose of this study is to explore contribution of Human Resource Management (HRM) practices including selection, training, career planning, compensation, performance appraisal, job definition and employee participation on perceived employee performance. This research describe why human resource management (HRM) decisions are likely to have an important and unique influence on organizational performance. This research forum will help advance research on the link between HRM and organizational performance. Unresolved questions is trying to identify in need of future study and make several suggestions intended to help researchers studying these questions build a more cumulative body of knowledge that will have key implications for body theory and practice. This study comprehensively evaluated the links between systems of High Performance Work Practices and firm performance. Results based on a national sample of firms indicate that these practices have an economically and statistically significant impact on employee performance. Support for predictions that the impact of High Performance Work Practices on firm performance is in part contingent on their interrelationships and links with competitive strategy was limited.",
"title": ""
},
{
"docid": "97691304930a85066a15086877473857",
"text": "In the context of modern cryptosystems, a common theme is the creation of distributed trust networks. In most of these designs, permanent storage of a contract is required. However, permanent storage can become a major performance and cost bottleneck. As a result, good code compression schemes are a key factor in scaling these contract based cryptosystems. For this project, we formalize and implement a data structure called the Merkelized Abstract Syntax Tree (MAST) to address both data integrity and compression. MASTs can be used to compactly represent contractual programs that will be executed remotely, and by using some of the properties of Merkle trees, they can also be used to verify the integrity of the code being executed. A concept by the same name has been discussed in the Bitcoin community for a while, the terminology originates from the work of Russel O’Connor and Pieter Wuille, however this discussion was limited to private correspondences. We present a formalization of it and provide an implementation.The project idea was developed with Bitcoin applications in mind, and the experiment we set up uses MASTs in a crypto currency network simulator. Using MASTs in the Bitcoin protocol [2] would increase the complexity (length) of contracts permitted on the network, while simultaneously maintaining the security of broadcasted data. Additionally, contracts may contain privileged, secret branches of execution.",
"title": ""
},
{
"docid": "524ed6f753bb059130a6076323e8aa63",
"text": "Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of dependency structures, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets.",
"title": ""
}
] | scidocsrr |
87995e40fe92f97da59567bd39c02d9b | The Impact of Computer Self Efficacy and Technology Acceptance Model on Behavioral Intention in Internet Banking System | [
{
"docid": "5ed955ddaaf09fc61c214adba6b18449",
"text": "This study investigates how customers perceive and adopt Internet Banking (IB) in Hong Kong. We developed a theoretical model based on the Technology Acceptance Model (TAM) with an added construct Perceived Web Security, and empirically tested its ability in predicting customers’ behavioral intention of adopting IB. We designed a questionnaire and used it to survey a randomly selected sample of customers of IB from the Yellow Pages, and obtained 203 usable responses. We analyzed the data using Structured Equation Modeling (SEM) to evaluate the strength of the hypothesized relationships, if any, among the constructs, which include Perceived Ease of Use and Perceived Web Security as independent variables, Perceived Usefulness and Attitude as intervening variables, and Intention to Use as the dependent variable. The results provide support of the extended TAM model and confirm its robustness in predicting customers’ intention of adoption of IB. This study contributes to the literature by formulating and validating TAM to predict IB adoption, and its findings provide useful information for bank management in formulating IB marketing strategies.",
"title": ""
},
{
"docid": "25ce68e2b2d9e9d8ff741e4e9ad1e378",
"text": "Advances in electronic banking technology have created novel ways of handling daily banking affairs, especially via the online banking channel. The acceptance of online banking services has been rapid in many parts of the world, and in the leading ebanking countries the number of e-banking contracts has exceeded 50 percent. Investigates online banking acceptance in the light of the traditional technology acceptance model (TAM), which is leveraged into the online environment. On the basis of a focus group interview with banking professionals, TAM literature and e-banking studies, we develop a model indicating onlinebanking acceptance among private banking customers in Finland. The model was tested with a survey sample (n 1⁄4 268). The findings of the study indicate that perceived usefulness and information on online banking on the Web site were the main factors influencing online-banking acceptance.",
"title": ""
}
] | [
{
"docid": "a6c9ff64c9c007e71192eb7023c8617f",
"text": "Elderly individuals can access online 3D virtual stores from their homes to make purchases. However, most virtual environments (VEs) often elicit physical responses to certain types of movements in the VEs. Some users exhibit symptoms that parallel those of classical motion sickness, called cybersickness, both during and after the VE experience. This study investigated the factors that contribute to cybersickness among the elderly when immersed in a 3D virtual store. The results of the first experiment show that the simulator sickness questionnaire (SSQ) scores increased significantly by the reasons of navigational rotating speed and duration of exposure. Based on these results, a warning system with fuzzy control for combating cybersickness was developed. The results of the second and third experiments show that the proposed system can efficiently determine the level of cybersickness based on the fuzzy sets analysis of operating signals from scene rotating speed and exposure duration, and subsequently combat cybersickness. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "559a4175347e5fea57911d9b8c5080e6",
"text": "Online social networks offering various services have become ubiquitous in our daily life. Meanwhile, users nowadays are usually involved in multiple online social networks simultaneously to enjoy specific services provided by different networks. Formally, social networks that share some common users are named as partially aligned networks. In this paper, we want to predict the formation of social links in multiple partially aligned social networks at the same time, which is formally defined as the multi-network link (formation) prediction problem. In multiple partially aligned social networks, users can be extensively correlated with each other by various connections. To categorize these diverse connections among users, 7 \"intra-network social meta paths\" and 4 categories of \"inter-network social meta paths\" are proposed in this paper. These \"social meta paths\" can cover a wide variety of connection information in the network, some of which can be helpful for solving the multi-network link prediction problem but some can be not. To utilize useful connection, a subset of the most informative \"social meta paths\" are picked, the process of which is formally defined as \"social meta path selection\" in this paper. An effective general link formation prediction framework, Mli (Multi-network Link Identifier), is proposed in this paper to solve the multi-network link (formation) prediction problem. Built with heterogenous topological features extracted based on the selected \"social meta paths\" in the multiple partially aligned social networks, Mli can help refine and disambiguate the prediction results reciprocally in all aligned networks. Extensive experiments conducted on real-world partially aligned heterogeneous networks, Foursquare and Twitter, demonstrate that Mli can solve the multi-network link prediction problem very well.",
"title": ""
},
{
"docid": "2417402cb45e5c96c8cb808afe38a4e3",
"text": "The problems of finding a longest common subsequence of two sequencesA andB and a shortest edit script for transformingA intoB have long been known to be dual problems. In this paper, they are shown to be equivalent to finding a shortest/longest path in an edit graph. Using this perspective, a simpleO(ND) time and space algorithm is developed whereN is the sum of the lengths ofA andB andD is the size of the minimum edit script forA andB. The algorithm performs well when differences are small (sequences are similar) and is consequently fast in typical applications. The algorithm is shown to haveO(N+D 2) expected-time performance under a basic stochastic model. A refinement of the algorithm requires onlyO(N) space, and the use of suffix trees leads to anO(N logN+D 2) time variation.",
"title": ""
},
{
"docid": "7a2d4032d79659a70ed2f8a6b75c4e71",
"text": "In recent years, transition-based parsers have shown promise in terms of efficiency and accuracy. Though these parsers have been extensively explored for multiple Indian languages, there is still considerable scope for improvement by properly incorporating syntactically relevant information. In this article, we enhance transition-based parsing of Hindi and Urdu by redefining the features and feature extraction procedures that have been previously proposed in the parsing literature of Indian languages. We propose and empirically show that properly incorporating syntactically relevant information like case marking, complex predication and grammatical agreement in an arc-eager parsing model can significantly improve parsing accuracy. Our experiments show an absolute improvement of ∼2% LAS for parsing of both Hindi and Urdu over a competitive baseline which uses rich features like part-of-speech (POS) tags, chunk tags, cluster ids and lemmas. We also propose some heuristics to identify ezafe constructions in Urdu texts which show promising results in parsing these constructions.",
"title": ""
},
{
"docid": "f8984d660f39c66b3bd484ec766fa509",
"text": "The present paper focuses on Cyber Security Awareness Campaigns, and aims to identify key factors regarding security which may lead them to failing to appropriately change people’s behaviour. Past and current efforts to improve information-security practices and promote a sustainable society have not had the desired impact. It is important therefore to critically reflect on the challenges involved in improving information-security behaviours for citizens, consumers and employees. In particular, our work considers these challenges from a Psychology perspective, as we believe that understanding how people perceive risks is critical to creating effective awareness campaigns. Changing behaviour requires more than providing information about risks and reactive behaviours – firstly, people must be able to understand and apply the advice, and secondly, they must be motivated and willing to do so – and the latter requires changes to attitudes and intentions. These antecedents of behaviour change are identified in several psychological models of behaviour. We review the suitability of persuasion techniques, including the widely used ‘fear appeals’. From this range of literature, we extract essential components for an awareness campaign as well as factors which can lead to a campaign’s success or failure. Finally, we present examples of existing awareness campaigns in different cultures (the UK and Africa) and reflect on these.",
"title": ""
},
{
"docid": "dcfc6f3c1eba7238bd6c6aa18dcff6df",
"text": "With the evaluation and simulation of long-term evolution/4G cellular network and hot discussion about new technologies or network architecture for 5G, the appearance of simulation and evaluation guidelines for 5G is in urgent need. This paper analyzes the challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures. Based on the overview of evaluation methodologies issued for 4G candidates, challenges in 5G evaluation are formulated. Additionally, a cloud-based two-level framework of system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.",
"title": ""
},
{
"docid": "f5817d371dd3e8bd93d99a41210aed48",
"text": "Early works on human action recognition focused on tracking and classifying articulated body motions. Such methods required accurate localisation of body parts, which is a difficult task, particularly under realistic imaging conditions. As such, recent trends have shifted towards the use of more abstract, low-level appearance features such as spatio-temporal interest points. Motivated by the recent progress in pose estimation, we feel that pose-based action recognition systems warrant a second look. In this paper, we address the question of whether pose estimation is useful for action recognition or if it is better to train a classifier only on low-level appearance features drawn from video data. We compare pose-based, appearance-based and combined pose and appearance features for action recognition in a home-monitoring scenario. Our experiments show that posebased features outperform low-level appearance features, even when heavily corrupted by noise, suggesting that pose estimation is beneficial for the action recognition task.",
"title": ""
},
{
"docid": "e8880b633c3f4b9646a7f6e9c9273f6f",
"text": "A) CTMC states. Since we assume that c, d and Xmax are integers, while the premiums that the customers pay are worth 1, every integer between 0 and Xmax is achievable. Accordingly, given our assumptions every cash flow consists of an integer-valued amount of money. Thus, the CTMC cannot reach any non-integer state. We are obviously assuming that the initial amount of cash X(0) is also an integer. Consequently, the state space of the CTMC consists of every nonnegative integer number between 0 and Xmax.",
"title": ""
},
{
"docid": "735fe41fe73d527b3cbeb03926530344",
"text": "Premalignant lesions of the lower female genital tract encompassing the cervix, vagina and vulva are variably common and many, but by no means all, are related to infection by human papillomavirus (HPV). In this review, pathological aspects of the various premalignant lesions are discussed, mainly concentrating on new developments. The value of ancillary studies, mainly immunohistochemical, is discussed at the appropriate points. In the cervix, the terminology and morphological features of premalignant glandular lesions is covered, as is the distinction between adenocarcinoma in situ (AIS) and early invasive adenocarcinoma, which may be very problematic. A spectrum of benign, premalignant and malignant cervical glandular lesions exhibiting gastric differentiation is emerging with lobular endocervical glandular hyperplasia (LEGH), including so-called atypical LEGH, representing a possible precursor of non HPV-related cervical adenocarcinomas exhibiting gastric differentiation; these include the cytologically bland adenoma malignum and the morphologically malignant gastric type adenocarcinoma. Stratified mucin producing intraepithelial lesion (SMILE) is a premalignant cervical lesion with morphological overlap between cervical intraepithelial neoplasia (CIN) and AIS and which is variably regarded as a form of reserve cell dysplasia or stratified AIS. It is now firmly established that there are two distinct types of vulval intraepithelial neoplasia (VIN) with a different pathogenesis, molecular events, morphological features and risk of progression to squamous carcinoma. These comprise a more common HPV-related usual type VIN (also referred to as classic, undifferentiated, basaloid, warty, Bowenoid type) and a more uncommon differentiated (simplex) type which is non-HPV related and which is sometimes associated with lichen sclerosus. The former has a relatively low risk of progression to HPV-related vulval squamous carcinoma and the latter a high risk of progression to non-HPV related vulval squamous carcinoma. Various aspects of vulval Paget's disease are also discussed.",
"title": ""
},
{
"docid": "6c5b72121519e40934ac3ffe6a05c1c7",
"text": "Learner modeling is a basis of personalized, adaptive learning. The research literature provides a wide range of modeling approaches, but it does not provide guidance for choosing a model suitable for a particular situation. We provide a systematic and up-to-date overview of current approaches to tracing learners’ knowledge and skill across interaction with multiple items, focusing in particular on the widely used Bayesian knowledge tracing and logistic models. We discuss factors that influence the choice of a model and highlight the importance of the learner modeling context: models are used for different purposes and deal with different types of learning processes. We also consider methodological issues in the evaluation of learner models and their relation to the modeling context. Overall, the overview provides basic guidelines for both researchers and practitioners and identifies areas that require further clarification in future research.",
"title": ""
},
{
"docid": "db70302a3d7e7e7e5974dd013e587b12",
"text": "In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture---a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called \\emph{wormholes} distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware.",
"title": ""
},
{
"docid": "2bbc3d5c5b20249a2674b4d495f662d9",
"text": "The effective work function of a reactively sputtered TiN metal gate is shown to be tunable from 4.30 to 4.65 eV. The effective work function decreases with nitrogen flow during reactive sputter deposition. Nitrogen annealing increases the effective work function and reduces Dit. Thinner TiN improves the variation in effective work function and reduces gate dielectric charge. Doping of the polysilicon above the TiN metal gate with B or P has negligible effect on the effective work function. The work-function-tuned TiN is integrated into ultralow-power fully depleted silicon-on-insulator CMOS transistors optimized for subthreshold operation at 0.3 V. The following performance metrics are achieved: 64-80-mV/dec subthreshold swing, PMOS/NMOS on-current ratio near 1, 71% reduction in Cgd, and 55% reduction in Vt variation when compared with conventional transistors, although significant short-channel effects are observed.",
"title": ""
},
{
"docid": "9c519c7040192b1f726614513fbdbb11",
"text": "We propose a novel recurrent encoder-decoder network model for real-time video-based face alignment. Our proposed model predicts 2D facial point maps regularized by a regression loss, while uniquely exploiting recurrent learning at both spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, in order to enable iterative coarse-to-fine face alignment using a single network model. At the temporal level, we first decouple the features in the bottleneck of the network into temporalvariant factors, such as pose and expression, and temporalinvariant factors, such as identity information. Temporal recurrent learning is then applied to the decoupled temporalvariant features, yielding better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, showing the importance of each component of our proposed model, as well as superior results over the state-of-the-art in standard datasets.",
"title": ""
},
{
"docid": "3cc5648cab5d732d3d30bd95d9d06c00",
"text": "We are concerned with the utility of social laws in a computational environment laws which guarantee the successful coexistence of multi ple programs and programmers In this paper we are interested in the o line design of social laws where we as designers must decide ahead of time on useful social laws In the rst part of this paper we sug gest the use of social laws in the domain of mobile robots and prove analytic results about the usefulness of this approach in that setting In the second part of this paper we present a general model of social law in a computational system and investigate some of its proper ties This includes a de nition of the basic computational problem involved with the design of multi agent systems and an investigation of the automatic synthesis of useful social laws in the framework of a model which refers explicitly to social laws This work was supported in part by a grant from the US Israel Binational Science Foundation",
"title": ""
},
{
"docid": "c2fb88df12e97e8475bb923063c8a46e",
"text": "This paper addresses the job shop scheduling problem in the presence of machine breakdowns. In this work, we propose to exploit the advantages of data mining techniques to resolve the problem. We proposed an approach to discover a set of classification rules by using historic scheduling data. Intelligent decisions are then made in real time based on this constructed rules to assign the corresponding dispatching rule in a dynamic job shop scheduling environment. A simulation study is conducted at last with the constructed rules and four other dispatching rules from literature. The experimental results verify the performance of classification rule for minimizing mean tardiness.",
"title": ""
},
{
"docid": "9d50be44155665f5fa2fb213c23d51f2",
"text": "A number of proposals have been put forth in recent years for the solution of Markov decision processes (MDPs) whose state (and sometimes action) spaces are factored. One recent class of methods involves linear value function approximation, where the optimal value function is assumed to be a linear combination of some set of basis functions, with the aim of finding suitable weights. While sophisticated techniques have been developed for finding the best approximation within this constrained space, few methods have been proposed for choosing a suitable basis set, or modifying it if solution quality is found wanting. We propose a general framework, and specific proposals, that address both of these questions. In particular, we examine <i>weakly coupled MDPs</i> where a number of subtasks can be viewed independently modulo resource constraints. We then describe methods for constructing a piecewise linear combination of the subtask value functions, using greedy decision tree techniques. We argue that this architecture is suitable for many types of MDPs whose combinatorics are determined largely by the existence multiple conflicting objectives.",
"title": ""
},
{
"docid": "bf241075beac4fedfb0ad9f8551c652d",
"text": "This paper discloses a new very broadband compact transition between double-ridge waveguide and coaxial line. The transition includes an original waveguide to coaxial mode converter and modified impedance transformer. Very good performance is predicted theoretically and confirmed experimentally over a 3:1 bandwidth.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "7c4cb5f52509ad5a3795e9ce59980fec",
"text": "Line-of-sight stabilization against various disturbances is an essential property of gimbaled imaging systems mounted on mobile platforms. In recent years, the importance of target detection from higher distances has increased. This has raised the need for better stabilization performance. For that reason, stabilization loops are designed such that they have higher gains and larger bandwidths. As these are required for good disturbance attenuation, sufficient loop stability is also needed. However, model uncertainties around structural resonances impose strict restrictions on sufficient loop stability. Therefore, to satisfy high stabilization performance in the presence of model uncertainties, robust control methods are required. In this paper, a robust controller design in LQG/LTR, H∞ , and μ -synthesis framework is described for a two-axis gimbal. First, the performance criteria and weights are determined to minimize the stabilization error with moderate control effort under known platform disturbance profile. Second, model uncertainties are determined by considering locally linearized models at different operating points. Next, robust LQG/LTR, H∞ , and μ controllers are designed. Robust stability and performance of the three designs are investigated and compared. The paper finishes with the experimental performances to validate the designed robust controllers.",
"title": ""
}
] | scidocsrr |
5ddac02311b8e3bda5b0039980d0ca71 | SCREENING OF PLANT ESSENTIAL OILS FOR ANTIFUNGAL ACTIVITY AGAINST MALASSEZIA FURFUR | [
{
"docid": "39db226d1f8980b3f0bc008c42248f2f",
"text": "In vitro studies have demonstrated antibacterial activity of essential oils (EOs) against Listeria monocytogenes, Salmonella typhimurium, Escherichia coli O157:H7, Shigella dysenteria, Bacillus cereus and Staphylococcus aureus at levels between 0.2 and 10 microl ml(-1). Gram-negative organisms are slightly less susceptible than gram-positive bacteria. A number of EO components has been identified as effective antibacterials, e.g. carvacrol, thymol, eugenol, perillaldehyde, cinnamaldehyde and cinnamic acid, having minimum inhibitory concentrations (MICs) of 0.05-5 microl ml(-1) in vitro. A higher concentration is needed to achieve the same effect in foods. Studies with fresh meat, meat products, fish, milk, dairy products, vegetables, fruit and cooked rice have shown that the concentration needed to achieve a significant antibacterial effect is around 0.5-20 microl g(-1) in foods and about 0.1-10 microl ml(-1) in solutions for washing fruit and vegetables. EOs comprise a large number of components and it is likely that their mode of action involves several targets in the bacterial cell. The hydrophobicity of EOs enables them to partition in the lipids of the cell membrane and mitochondria, rendering them permeable and leading to leakage of cell contents. Physical conditions that improve the action of EOs are low pH, low temperature and low oxygen levels. Synergism has been observed between carvacrol and its precursor p-cymene and between cinnamaldehyde and eugenol. Synergy between EO components and mild preservation methods has also been observed. Some EO components are legally registered flavourings in the EU and the USA. Undesirable organoleptic effects can be limited by careful selection of EOs according to the type of food.",
"title": ""
}
] | [
{
"docid": "a6c9ff64c9c007e71192eb7023c8617f",
"text": "Elderly individuals can access online 3D virtual stores from their homes to make purchases. However, most virtual environments (VEs) often elicit physical responses to certain types of movements in the VEs. Some users exhibit symptoms that parallel those of classical motion sickness, called cybersickness, both during and after the VE experience. This study investigated the factors that contribute to cybersickness among the elderly when immersed in a 3D virtual store. The results of the first experiment show that the simulator sickness questionnaire (SSQ) scores increased significantly by the reasons of navigational rotating speed and duration of exposure. Based on these results, a warning system with fuzzy control for combating cybersickness was developed. The results of the second and third experiments show that the proposed system can efficiently determine the level of cybersickness based on the fuzzy sets analysis of operating signals from scene rotating speed and exposure duration, and subsequently combat cybersickness. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "32a45d3c08e24d29ad5f9693253c0e9e",
"text": "This paper presents comparative study of high-speed, low-power and low voltage full adder circuits. Our approach is based on XOR-XNOR design full adder circuits in a single unit. A low power and high performance 9T full adder cell using a design style called “XOR (3T)” is discussed. The designed circuit commands a high degree of regularity and symmetric higher density than the conventional CMOS design style as well as it lowers power consumption by using XOR (3T) logic circuits. Gate Diffusion Input (GDI) technique of low-power digital combinatorial circuit design is also described. This technique helps in reducing the power consumption and the area of digital circuits while maintaining low complexity of logic design. This paper analyses, evaluates and compares the performance of various adder circuits. Several simulations conducted using different voltage supplies, load capacitors and temperature variation demonstrate the superiority of the XOR (3T) based full adder designs in term of delay, power and power delay product (PDP) compared to the other full adder circuits. Simulation results illustrate the superiority of the designed adder circuits against the conventional CMOS, TG and Hybrid full adder circuits in terms of power, delay and power delay product (PDP). .",
"title": ""
},
{
"docid": "d10c17324f8f6d4523964f10bc689d8e",
"text": "This article studied a novel Log-Periodic Dipole Antenna (LPDA) with distributed inductive load for size reduction. By adding a short circuit stub at top of the each element, the dimensions of the LPDA are reduced by nearly 50% compared to the conventional one. The impedance bandwidth of the presented antenna is nearly 122% (54~223MHz) (S11<;10dB), and this antenna is very suited for BROADCAST and TV applications.",
"title": ""
},
{
"docid": "0551e9faef769350102a404fa0b61dc1",
"text": "Lignocellulosic biomass is a complex biopolymer that is primary composed of cellulose, hemicellulose, and lignin. The presence of cellulose in biomass is able to depolymerise into nanodimension biomaterial, with exceptional mechanical properties for biocomposites, pharmaceutical carriers, and electronic substrate's application. However, the entangled biomass ultrastructure consists of inherent properties, such as strong lignin layers, low cellulose accessibility to chemicals, and high cellulose crystallinity, which inhibit the digestibility of the biomass for cellulose extraction. This situation offers both challenges and promises for the biomass biorefinery development to utilize the cellulose from lignocellulosic biomass. Thus, multistep biorefinery processes are necessary to ensure the deconstruction of noncellulosic content in lignocellulosic biomass, while maintaining cellulose product for further hydrolysis into nanocellulose material. In this review, we discuss the molecular structure basis for biomass recalcitrance, reengineering process of lignocellulosic biomass into nanocellulose via chemical, and novel catalytic approaches. Furthermore, review on catalyst design to overcome key barriers regarding the natural resistance of biomass will be presented herein.",
"title": ""
},
{
"docid": "6951f051c3fe9ab24259dcc6f812fc68",
"text": "User Generated Content has become very popular since the birth of web services such as YouTube allowing the distribution of such user-produced media content in an easy manner. YouTube-like services are different from existing traditional VoD services because the service provider has only limited control over the creation of new content. We analyze how the content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. The analysis of the traffic shows that: (1) No strong correlation is observed between global and local popularity; (2) neither time scale nor user population has an impact on the local popularity distribution; (3) video clips of local interest have a high local popularity. Using our measurement data to drive trace-driven simulations, we also demonstrate the implications of alternative distribution infrastructures on the performance of a YouTube-like VoD service. The results of these simulations show that client-based local caching, P2P-based distribution, and proxy caching can reduce network traffic significantly and allow faster access to video clips.",
"title": ""
},
{
"docid": "d3fc62a9858ddef692626b1766898c9f",
"text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.",
"title": ""
},
{
"docid": "0f25f9bc31f4913e8ad8e5015186c0d4",
"text": "Fractures of the scaphoid bone mainly occur in young adults and constitute 2-7% of all fractures. The specific blood supply in combination with the demanding functional requirements can easily lead to disturbed fracture healing. Displaced scaphoid fractures are seen on radiographs. The diagnostic strategy of suspected scaphoid fractures, however, is surrounded by controversy. Bone scintigraphy, magnetic resonance imaging and computed tomography have their shortcomings. Early treatment leads to a better outcome. Scaphoid fractures can be treated conservatively and operatively. Proximal scaphoid fractures and displaced scaphoid fractures have a worse outcome and might be better off with an open or closed reduction and internal fixation. The incidence of scaphoid non-unions has been reported to be between 5 and 15%. Non-unions are mostly treated operatively by restoring the anatomy to avoid degenerative wrist arthritis.",
"title": ""
},
{
"docid": "6cc8164c14c6a95617590e66817c0db7",
"text": "nor fazila k & ku Halim kH. 2012. Effects of soaking on yield and quality of agarwood oil. The aims of this study were to investigate vaporisation temperature of agarwood oil, determine enlargement of wood pore size, analyse chemical components in soaking solvents and examine the chemical composition of agarwood oil extracted from soaked and unsoaked agarwood. Agarwood chips were soaked in two different acids, namely, sulphuric and lactic acids for 168 hours at room temperature (25 °C). Effects of soaking were determined using thermogravimetric analysis (TGA), scanning electron microscope (SEM) and gas chromatography-mass spectrum analysis. With regard to TGA curve, a small portion of weight loss was observed between 110 and 200 °C for agarwood soaked in lactic acid. SEM micrograph showed that the lactic acid-soaked agarwood demonstrated larger pore size. High quality agarwood oil was obtained from soaked agarwood. In conclusion, agarwood soaked in lactic acid with concentration of 0.1 M had the potential to reduce the vaporisation temperature of agarwood oil and enlarge the pore size of wood, hence, improving the yield and quality of agarwood oil.",
"title": ""
},
{
"docid": "fcd5bdd4e7e4d240638c84f7d61f8f4b",
"text": "We investigate the performance of hysteresis-free short-channel negative-capacitance FETs (NCFETs) by combining quantum-mechanical calculations with the Landau–Khalatnikov equation. When the subthreshold swing (SS) becomes smaller than 60 mV/dec, a negative value of drain-induced barrier lowering is obtained. This behavior, drain-induced barrier rising (DIBR), causes negative differential resistance in the output characteristics of the NCFETs. We also examine the performance of an inverter composed of hysteresis-free NCFETs to assess the effects of DIBR at the circuit level. Contrary to our expectation, although hysteresis-free NCFETs are used, hysteresis behavior is observed in the transfer properties of the inverter. Furthermore, it is expected that the NCFET inverter with hysteresis behavior can be used as a Schmitt trigger inverter.",
"title": ""
},
{
"docid": "4a86a0707e6ac99766f89e81cccc5847",
"text": "Magnetic core loss is an emerging concern for integrated POL converters. As switching frequency increases, core loss is comparable to or even higher than winding loss. Accurate measurement of core loss is important for magnetic design and converter loss estimation. And exploring new high frequency magnetic materials need a reliable method to evaluate their losses. However, conventional method is limited to low frequency due to sensitivity to phase discrepancy. In this paper, a new method is proposed for high frequency (1MHz∼50MHz) core loss measurement. The new method reduces the phase induced error from over 100% to <5%. So with the proposed methods, the core loss can be accurately measured.",
"title": ""
},
{
"docid": "643e97c3bc0cdde54bf95720fe52f776",
"text": "Ego-motion estimation based on images from a stereo camera has become a common function for autonomous mobile systems and is gaining increasing importance in the automotive sector. Unlike general robotic platforms, vehicles have a suspension adding degrees of freedom and thus complexity to their dynamics model. Some parameters of the model, such as the vehicle mass, are non-static as they depend on e.g. the specific load conditions and thus need to be estimated online to guarantee a concise and safe autonomous maneuvering of the vehicle. In this paper, a novel visual odometry based approach to simultaneously estimate ego-motion and selected vehicle parameters using a dual Ensemble Kalman Filter and a non-linear single-track model with pitch dynamics is presented. The algorithm has been validated using simulated data and showed a good performance for both the estimation of the ego-motion and of the relevant vehicle parameters.",
"title": ""
},
{
"docid": "e7f91b90eab54dfd7f115a3a0225b673",
"text": "The recent trend of outsourcing network functions, aka. middleboxes, raises confidentiality and integrity concern on redirected packet, runtime state, and processing result. The outsourced middleboxes must be protected against cyber attacks and malicious service provider. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and most notably, lowoverhead stateful processing. LightBox protects middlebox from powerful adversary, and it allows stateful network function to run at nearly native speed: it adds only 3μs packet processing delay even when tracking 1.5M concurrent flows.",
"title": ""
},
{
"docid": "6816bb15dba873244306f22207525bee",
"text": "Imbalance suggests a feeling of dynamism and movement in static objects. It is therefore not surprising that many 3D models stand in impossibly balanced configurations. As long as the models remain in a computer this is of no consequence: the laws of physics do not apply. However, fabrication through 3D printing breaks the illusion: printed models topple instead of standing as initially intended. We propose to assist users in producing novel, properly balanced designs by interactively deforming an existing model. We formulate balance optimization as an energy minimization, improving stability by modifying the volume of the object, while preserving its surface details. This takes place during interactive editing: the user cooperates with our optimizer towards the end result. We demonstrate our method on a variety of models. With our technique, users can produce fabricated objects that stand in one or more surprising poses without requiring glue or heavy pedestals.",
"title": ""
},
{
"docid": "46a47931c51a3b5580580d27a9a6d132",
"text": "In airline service industry, it is difficult to collect data about customers' feedback by questionnaires, but Twitter provides a sound data source for them to do customer sentiment analysis. However, little research has been done in the domain of Twitter sentiment classification about airline services. In this paper, an ensemble sentiment classification strategy was applied based on Majority Vote principle of multiple classification methods, including Naive Bayes, SVM, Bayesian Network, C4.5 Decision Tree and Random Forest algorithms. In our experiments, six individual classification approaches, and the proposed ensemble approach were all trained and tested using the same dataset of 12864 tweets, in which 10 fold evaluation is used to validate the classifiers. The results show that the proposed ensemble approach outperforms these individual classifiers in this airline service Twitter dataset. Based on our observations, the ensemble approach could improve the overall accuracy in twitter sentiment classification for other services as well.",
"title": ""
},
{
"docid": "ba34f6120b08c57cec8794ec2b9256d2",
"text": "Principles of reconstruction dictate a number of critical points for successful repair. To achieve aesthetic and functional goals, the dermatologic surgeon should avoid deviation of anatomical landmarks and free margins, maintain shape and symmetry, and repair with skin of similar characteristics. Reconstruction of the ear presents a number of unique challenges based on the limited amount of adjacent lax tissue within the cosmetic unit and the structure of the auricle, which consists of a relatively thin skin surface and flexible cartilaginous framework.",
"title": ""
},
{
"docid": "e96cf46cc99b3eff60d32f3feb8afc47",
"text": "We present an field programmable gate arrays (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping. 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "0e2fdb9fc054e47a3f0b817f68de68b1",
"text": "Recent regulatory guidance suggests that drug metabolites identified in human plasma should be present at equal or greater levels in at least one of the animal species used in safety assessments (MIST). Often synthetic standards for the metabolites do not exist, thus this has introduced multiple challenges regarding the quantitative comparison of metabolites between human and animals. Various bioanalytical approaches are described to evaluate the exposure of metabolites in animal vs. human. A simple LC/MS/MS peak area ratio comparison approach is the most facile and applicable approach to make a first assessment of whether metabolite exposures in animals exceed that in humans. In most cases, this measurement is sufficient to demonstrate that an animal toxicology study of the parent drug has covered the safety of the human metabolites. Methods whereby quantitation of metabolites can be done in the absence of chemically synthesized authentic standards are also described. Only in rare cases, where an actual exposure measurement of a metabolite is needed, will a validated or qualified method requiring a synthetic standard be needed. The rigor of the bioanalysis is increased accordingly based on the results of animal:human ratio measurements. This data driven bioanalysis strategy to address MIST issues within standard drug development processes is described.",
"title": ""
},
{
"docid": "18defc8666f7fea7ae89ff3d5d833e0a",
"text": "[1] We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, since the coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.",
"title": ""
},
{
"docid": "be1b9731df45408571e75d1add5dfe9c",
"text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.",
"title": ""
},
{
"docid": "93e5ed1d67fe3d20c7b0177539e509c4",
"text": "Business models that rely on social media and user-generated content have shifted from the more traditional business model, where value for the organization is derived from the one-way delivery of products and/or services, to the provision of intangible value based on user engagement. This research builds a model that hypothesizes that the user experiences from social interactions among users, operationalized as personalization, transparency, access to social resources, critical mass of social acquaintances, and risk, as well as with the technical features of the social media platform, operationalized as the completeness, flexibility, integration, and evolvability, influence user engagement and subsequent usage behavior. Using survey responses from 408 social media users, findings suggest that both social and technical factors impact user engagement and ultimately usage with additional direct impacts on usage by perceptions of the critical mass of social acquaintances and risk. KEywORdS Social Interactions, Social Media, Social Networking, Technical Features, Use, User Engagement, User Experience",
"title": ""
}
] | scidocsrr |
8296a5113d06eaa5ce94aecce8cdaf91 | Manufacturing Strategy , Capabilities and Performance | [
{
"docid": "cddca3a23ea0568988243c8f005e0edc",
"text": "This paper investigates the mechanisms through which organizations develop dynamic capabilities, defined as routinized activities directed to the development and adaptation of operating routines, and reflects upon the role of (1) experience accumulation, (2) knowledge articulation and (3) knowledge codification processes in the evolution of dynamic, as well as operational, routines. The argument is made that dynamic capabilities are shaped by the co-evolution of these learning mechanisms. At any point in time, firms adopt a mix of learning behaviors constituted by a semi-automatic accumulation of experience and by deliberate investments in knowledge articulation and codification activities. The relative effectiveness of these capability-building mechanisms is analyzed here as contingent upon selected features of the task to be learned, such as its frequency, homogeneity and degree of causal ambiguity, and testable hypotheses are derived. Somewhat counterintuitive implications of the analysis include the relatively superior effectiveness of highly deliberate learning processes, such as knowledge codification, at lower levels of frequency and homogeneity of the organizational task, in contrast with common managerial practice.",
"title": ""
},
{
"docid": "cf2e23cddb72b02d1cca83b4c3bf17a8",
"text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. (Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiencyand flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were unD ow nl oa de d fr om in fo rm s. or g by [ 12 8. 32 .7 5. 11 8] o n 28 A pr il 20 14 , a t 1 0: 21 . Fo r pe rs on al u se o nl y, a ll ri gh ts r es er ve d. PAUL S. ADLER, BARBARA GOLDOFTAS AND DAVID I. LEVINE Flexibility Versus Efficiency? 44 ORGANIZATION SCIENCE/Vol. 10, No. 1, January–February 1999 der Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. 
The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice. A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs. adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). 
Strategy researchers such as Ghemawat and Costa (1993) argue that firms must choose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather than continuous processes. On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible effect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly. Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2-digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4-digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr",
"title": ""
}
] | [
{
"docid": "a38d0e0d032c3e4074f9ac0f09719737",
"text": "A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to \"straightforward\" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straight-forward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.",
"title": ""
},
{
"docid": "18827ff3d37d293846f37cbed65f7a09",
"text": "The growth of Instagram continues, with the majority of its users being young women. This study investigates the impact of Instagram upon source credibility, consumer buying intention and social identification with different types of celebrities. In-depth interviews were conducted with 18 female Instagram users aged 18-30 to determine the extent to which Instagram influences their buying behaviour. The research findings show that celebrities on Instagram are influential in the purchase behaviour of young female users. However, non-traditional celebrities such as bloggers, YouTube personalities and ‘Instafamous’ profiles are more powerful, as participants regard them as more credible and are able to relate to these, rather than more traditional, celebrities. Female users are perceptively aware and prefer to follow Instagram profiles that intentionally portray positive images and provide encouraging reviews.",
"title": ""
},
{
"docid": "8b947250873921478dd7798c47314979",
"text": "In this letter, an ultra-wideband (UWB) bandpass filter (BPF) using stepped-impedance stub-loaded resonator (SISLR) is presented. Characterized by theoretical analysis, the proposed SISLR is found to have the advantage of providing more degrees of freedom to adjust the resonant frequencies. Besides, two transmission zeros can be created at both lower and upper sides of the passband. Benefiting from these features, a UWB BPF is then investigated by incorporating this SISLR and two aperture-backed interdigital coupled-lines. Finally, this filter is built and tested. The simulated and measured results are in good agreement with each other, showing good wideband filtering performance with sharp rejection skirts outside the passband.",
"title": ""
},
{
"docid": "2a5f555c00d98a87fe8dd6d10e27dc38",
"text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.",
"title": ""
},
{
"docid": "5cdb945589f528d28fe6d0dce360a0e1",
"text": "Bankruptcy prediction has been a subject of interests for almost a century and it still ranks high among hottest topics in economics. The aim of predicting financial distress is to develop a predictive model that combines various econometric measures and allows to foresee a financial condition of a firm. In this domain various methods were proposed that were based on statistical hypothesis testing, statistical modelling (e.g., generalized linear models), and recently artificial intelligence (e.g., neural networks, Support Vector Machines, decision tress). In this paper, we propose a novel approach for bankruptcy prediction that utilizes Extreme Gradient Boosting for learning an ensemble of decision trees. Additionally, in order to reflect higher-order statistics in data and impose a prior knowledge about data representation, we introduce a new concept that we refer as to synthetic features. A synthetic feature is a combination of the econometric measures using arithmetic operations (addition, subtraction, multiplication, division). Each synthetic feature can be seen as a single regression model that is developed in an evolutionary manner. We evaluate our solution using the collected data about Polish companies in five tasks corresponding to the bankruptcy prediction in the 1st, 2nd, 3rd, 4th, and 5th year. We compare our approach with the reference methods. ∗Corresponding author, Tel.: (+48) 71 320 44 53. Email addresses: [email protected] (Maciej Zięba ), [email protected] (Sebastian K. Tomczak), [email protected] (Jakub M. Tomczak) Preprint submitted to Expert Systems with Applications April 4, 2016",
"title": ""
},
{
"docid": "a936b6d3b0f4a99042260abea0f39032",
"text": "In this paper, a new type of 3D bin packing problem (BPP) is proposed, in which a number of cuboidshaped items must be put into a bin one by one orthogonally. The objective is to find a way to place these items that can minimize the surface area of the bin. This problem is based on the fact that there is no fixed-sized bin in many real business scenarios and the cost of a bin is proportional to its surface area. Our research shows that this problem is NP-hard. Based on previous research on 3D BPP, the surface area is determined by the sequence, spatial locations and orientations of items. Among these factors, the sequence of items plays a key role in minimizing the surface area. Inspired by recent achievements of deep reinforcement learning (DRL) techniques, especially Pointer Network, on combinatorial optimization problems such as TSP, a DRL-based method is applied to optimize the sequence of items to be packed into the bin. Numerical results show that the method proposed in this paper achieve about 5% improvement than heuristic method.",
"title": ""
},
{
"docid": "9a071b23eb370f053a5ecfd65f4a847d",
"text": "INTRODUCTION\nConcomitant obesity significantly impairs asthma control. Obese asthmatics show more severe symptoms and an increased use of medications.\n\n\nOBJECTIVES\nThe primary aim of the study was to identify genes that are differentially expressed in the peripheral blood of asthmatic patients with obesity, asthmatic patients with normal body mass, and obese patients without asthma. Secondly, we investigated whether the analysis of gene expression in peripheral blood may be helpful in the differential diagnosis of obese patients who present with symptoms similar to asthma.\n\n\nPATIENTS AND METHODS\nThe study group included 15 patients with asthma (9 obese and 6 normal-weight patients), while the control group-13 obese patients in whom asthma was excluded. The analysis of whole-genome expression was performed on RNA samples isolated from peripheral blood.\n\n\nRESULTS\nThe comparison of gene expression profiles between asthmatic patients with obesity and those with normal body mass revealed a significant difference in 6 genes. The comparison of the expression between controls and normal-weight patients with asthma showed a significant difference in 23 genes. The analysis of genes with a different expression revealed a group of transcripts that may be related to an increased body mass (PI3, LOC100008589, RPS6KA3, LOC441763, IFIT1, and LOC100133565). Based on gene expression results, a prediction model was constructed, which allowed to correctly classify 92% of obese controls and 89% of obese asthmatic patients, resulting in the overall accuracy of the model of 90.9%.\n\n\nCONCLUSIONS\nThe results of our study showed significant differences in gene expression between obese asthmatic patients compared with asthmatic patients with normal body mass as well as in obese patients without asthma compared with asthmatic patients with normal body mass.",
"title": ""
},
{
"docid": "075e263303b73ee5d1ed6cff026aee63",
"text": "Automatic and accurate whole-heart and great vessel segmentation from 3D cardiac magnetic resonance (MR) images plays an important role in the computer-assisted diagnosis and treatment of cardiovascular disease. However, this task is very challenging due to ambiguous cardiac borders and large anatomical variations among different subjects. In this paper, we propose a novel densely-connected volumetric convolutional neural network, referred as DenseVoxNet, to automatically segment the cardiac and vascular structures from 3D cardiac MR images. The DenseVoxNet adopts the 3D fully convolutional architecture for effective volume-to-volume prediction. From the learning perspective, our DenseVoxNet has three compelling advantages. First, it preserves the maximum information flow between layers by a densely-connected mechanism and hence eases the network training. Second, it avoids learning redundant feature maps by encouraging feature reuse and hence requires fewer parameters to achieve high performance, which is essential for medical applications with limited training data. Third, we add auxiliary side paths to strengthen the gradient propagation and stabilize the learning process. We demonstrate the effectiveness of DenseVoxNet by comparing it with the state-of-the-art approaches from HVSMR 2016 challenge in conjunction with MICCAI, and our network achieves the best dice coefficient. We also show that our network can achieve better performance than other 3D ConvNets but with fewer parameters.",
"title": ""
},
{
"docid": "cf419597981ba159ac3c1e85af683871",
"text": "Energy is a vital input for social and economic development. As a result of the generalization of agricultural, industrial and domestic activities the demand for energy has increased remarkably, especially in emergent countries. This has meant rapid grower in the level of greenhouse gas emissions and the increase in fuel prices, which are the main driving forces behind efforts to utilize renewable energy sources more effectively, i.e. energy which comes from natural resources and is also naturally replenished. Despite the obvious advantages of renewable energy, it presents important drawbacks, such as the discontinuity of ulti-criteria decision analysis",
"title": ""
},
{
"docid": "8e52cdff14dddd82a4ad8fc5b967c1b2",
"text": "Learning-based binary hashing has become a powerful paradigm for fast search and retrieval in massive databases. However, due to the requirement of discrete outputs for the hash functions, learning such functions is known to be very challenging. In addition, the objective functions adopted by existing hashing techniques are mostly chosen heuristically. In this paper, we propose a novel generative approach to learn hash functions through Minimum Description Length principle such that the learned hash codes maximally compress the dataset and can also be used to regenerate the inputs. We also develop an efficient learning algorithm based on the stochastic distributional gradient, which avoids the notorious difficulty caused by binary output constraints, to jointly optimize the parameters of the hash function and the associated generative model. Extensive experiments on a variety of large-scale datasets show that the proposed method achieves better retrieval results than the existing state-of-the-art methods.",
"title": ""
},
{
"docid": "7ea56b976524d77b7234340318f7e8dc",
"text": "Market Integration and Market Structure in the European Soft Drinks Industry: Always Coca-Cola? by Catherine Matraves* This paper focuses on the question of European integration, considering whether the geographic level at which competition takes place differs across the two major segments of the soft drinks industry: carbonated soft drinks and mineral water. Our evidence shows firms are competing at the European level in both segments. Interestingly, the European market is being integrated through corporate strategy, defined as increased multinationality, rather than increased trade flows. To interpret these results, this paper uses the new theory of market structure where the essential notion is that in endogenous sunk cost industries such as soft drinks, the traditional inverse structure-size relation may break down, due to the escalation of overhead expenditures.",
"title": ""
},
{
"docid": "df35b679204e0729266a1076685600a1",
"text": "A new innovations state space modeling framework, incorporating Box-Cox transformations, Fourier series with time varying coefficients and ARMA error correction, is introduced for forecasting complex seasonal time series that cannot be handled using existing forecasting models. Such complex time series include time series with multiple seasonal periods, high frequency seasonality, non-integer seasonality and dual-calendar effects. Our new modelling framework provides an alternative to existing exponential smoothing models, and is shown to have many advantages. The methods for initialization and estimation, including likelihood evaluation, are presented, and analytical expressions for point forecasts and interval predictions under the assumption of Gaussian errors are derived, leading to a simple, comprehensible approach to forecasting complex seasonal time series. Our trigonometric formulation is also presented as a means of decomposing complex seasonal time series, which cannot be decomposed using any of the existing decomposition methods. The approach is useful in a broad range of applications, and we illustrate its versatility in three empirical studies where it demonstrates excellent forecasting performance over a range of prediction horizons. In addition, we show that our trigonometric decomposition leads to the identification and extraction of seasonal components, which are otherwise not apparent in the time series plot itself.",
"title": ""
},
{
"docid": "89d736c68d2befba66a0b7d876e52502",
"text": "The optical properties of human skin, subcutaneous adipose tissue and human mucosa were measured in the wavelength range 400–2000 nm. The measurements were carried out using a commercially available spectrophotometer with an integrating sphere. The inverse adding–doubling method was used to determine the absorption and reduced scattering coefficients from the measurements.",
"title": ""
},
{
"docid": "b27ac6851bb576cac1c8d2f7e76fc8f1",
"text": "A novel 3-dimensional Dual Control-gate with Surrounding Floating-gate (DC-SF) NAND flash cell has been successfully developed, for the first time. The DC-SF cell consists of a surrounding floating gate with stacked dual control gate. With this structure, high coupling ratio, low voltage cell operation (program: 15V and erase: −11V), and wide P/E window (9.2V) can be obtained. Moreover, negligible FG-FG interference (12mV/V) is achieved due to the control gate shield effect. Then we propose 3D DC-SF NAND flash cell as the most promising candidate for 1Tb and beyond with stacked multi bit FG cell (2 ∼ 4bit/cell).",
"title": ""
},
{
"docid": "aa50aeb6c1c4b52ff677a313d49fd8df",
"text": "Monocular depth estimation, which plays a key role in understanding 3D scene geometry, is fundamentally an illposed problem. Existing methods based on deep convolutional neural networks (DCNNs) have examined this problem by learning convolutional networks to estimate continuous depth maps from monocular images. However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions. In this paper, we hypothesize that achieving a compromise between spatial and depth resolutions can improve network training. Based on this “compromise principle”, we propose a regression-classification cascaded network (RCCN), which consists of a regression branch predicting a low spatial resolution continuous depth map and a classification branch predicting a high spatial resolution discrete depth map. The two branches form a cascaded structure allowing the main classification branch to benefit from the auxiliary regression branch. By leveraging large-scale raw training datasets and some data augmentation strategies, our network achieves competitive or state-of-the-art results on three challenging benchmarks, including NYU Depth V2 [1], KITTI [2], and Make3D [3].",
"title": ""
},
{
"docid": "8d7f2c3d2b0a02f6ad571ae44a6f7a9f",
"text": "Synthetic Aperture Radar (SAR) satellite images have proven to be a successful tool for identifying oil slicks. Natural oil seeps can be detected as elongated, radar-dark slicks in SAR images. Use of SAR images for seep detection is enhanced by a Texture Classifying Neural Network Algorithm (TCNNA), which delineates areas where layers of floating oil suppress Bragg scattering. The effect is strongly influenced by wind strength and sea state. A multi orientation Leung-Malik filter bank [1] is used to identify slick shapes under projection of edges. By integrating ancillary data consisting of the incidence angle, descriptors of texture and environmental variables, considerable accuracy were added to the classification ability to discriminate false targets from oil slicks and look-alike pixels. The reliability of the TCNNA is measured after processing 71 images containing oil slicks.",
"title": ""
},
{
"docid": "f0659349cab12decbc4d07eb74361b79",
"text": "This article suggests that the context and process of resource selection have an important influence on firm heterogeneity and sustainable competitive advantage. It is argued that a firm’s sustainable advantage depends on its ability to manage the institutional context of its resource decisions. A firm’s institutional context includes its internal culture as well as broader influences from the state, society, and interfirm relations that define socially acceptable economic behavior. A process model of firm heterogeneity is proposed that combines the insights of a resourcebased view with the institutional perspective from organization theory. Normative rationality, institutional isolating mechanisms, and institutional sources of firm homogeneity are proposed as determinants of rent potential that complement and extend resource-based explanations of firm variation and sustainable competitive advantage. The article suggests that both resource capital and institutional capital are indispensable to sustainable competitive advantage. 1997 by John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "66133239610bb08d83fb37f2c11a8dc5",
"text": "sists of two excitation laser beams. One beam scans the volume of the brain from the side of a horizontally positioned zebrafish but is rapidly switched off when inside an elliptical exclusion region located over the eye (Fig. 1b). Simultaneously, a second beam scans from the front, to cover the forebrain and the regions between the eyes. Together, these two beams achieve nearly complete coverage of the brain without exposing the retina to direct laser excitation, which allows unimpeded presentation of visual stimuli that are projected onto a screen below the fish. To monitor intended swimming behavior, we used existing methods for recording activity from motor neuron axons in the tail of paralyzed larval zebrafish1 (Fig. 1a and Supplementary Note). This system provides imaging speeds of up to three brain volumes per second (40 planes per brain volume); increases in camera speed will allow for faster volumetric sampling. Because light-sheet imaging may still introduce some additional sensory stimulation (excitation light scattering in the brain and reflected from the glass walls of the chamber), we assessed whether fictive behavior in 5–7 d post-fertilization (d.p.f.) fish was robust to the presence of the light sheets. We tested two visuoLight-sheet functional imaging in fictively behaving zebrafish",
"title": ""
},
{
"docid": "109644763e3a5ee5f59ec8e83719cc8d",
"text": "The field of Natural Language Processing (NLP) is growing rapidly, with new research published daily along with an abundance of tutorials, codebases and other online resources. In order to learn this dynamic field or stay up-to-date on the latest research, students as well as educators and researchers must constantly sift through multiple sources to find valuable, relevant information. To address this situation, we introduce TutorialBank, a new, publicly available dataset which aims to facilitate NLP education and research. We have manually collected and categorized over 6,300 resources on NLP as well as the related fields of Artificial Intelligence (AI), Machine Learning (ML) and Information Retrieval (IR). Our dataset is notably the largest manually-picked corpus of resources intended for NLP education which does not include only academic papers. Additionally, we have created both a search engine 1 and a command-line tool for the resources and have annotated the corpus to include lists of research topics, relevant resources for each topic, prerequisite relations among topics, relevant subparts of individual resources, among other annotations. We are releasing the dataset and present several avenues for further research.",
"title": ""
},
{
"docid": "bf00f7d7cdcbdc3e9d082bf92eec075c",
"text": "Network software is a critical component of any distributed system. Because of its complexity, network software is commonly layered into a hierarchy of protocols, or more generally, into a protocol graph. Typical protocol graphs—including those standardized in the ISO and TCP/IP network architectures—share three important properties; the protocol graph is simple, the nodes of the graph (protocols) encapsulate complex functionality, and the topology of the graph is relatively static. This paper describes a new way to organize network software that differs from conventional architectures in all three of these properties. In our approach, the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic. The main contribution of this paper is to describe the ideas behind our new architecture, illustrate the advantages of using the architecture, and demonstrate that the architecture results in efficient network software.",
"title": ""
}
] | scidocsrr |
7c6962f955fef9a1cce3c32f8237b476 | The effect of organizational support on ERP implementation | [
{
"docid": "e964a46706179a92b775307166a64c8a",
"text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).",
"title": ""
}
] | [
{
"docid": "f6ba46b72139f61cfb098656d71553ed",
"text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.",
"title": ""
},
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
},
{
"docid": "30d6cab338420bc48b93aeb70d3e72c0",
"text": "This paper presents a real-time video traffic monitoring application based on object detection and tracking, for determining traffic parameters such as vehicle velocity and number of vehicles. In detection step, background modeling approach based on edge information is proposed for separating moving foreground objects from the background. An advantage of edge is more robust to lighting changes in outdoor environments and requires significantly less computing resource. In tracking step, optical flow Lucas-Kanade (Pyramid) is applied to track each segmented object. The proposed system was evaluated on six video sequences recorded in various daytime environment",
"title": ""
},
{
"docid": "26d6ffbc4ee2e0f5e3e6699fd33bdc5f",
"text": "We present a method for efficient learning of control policies for multiple related robotic motor skills. Our approach consists of two stages, joint training and specialization training. During the joint training stage, a neural network policy is trained with minimal information to disambiguate the motor skills. This forces the policy to learn a common representation of the different tasks. Then, during the specialization training stage we selectively split the weights of the policy based on a per-weight metric that measures the disagreement among the multiple tasks. By splitting part of the control policy, it can be further trained to specialize to each task. To update the control policy during learning, we use Trust Region Policy Optimization with Generalized Advantage Function (TRPOGAE). We propose a modification to the gradient update stage of TRPO to better accommodate multi-task learning scenarios. We evaluate our approach on three continuous motor skill learning problems in simulation: 1) a locomotion task where three single legged robots with considerable difference in shape and size are trained to hop forward, 2) a manipulation task where three robot manipulators with different sizes and joint types are trained to reach different locations in 3D space, and 3) locomotion of a two-legged robot, whose range of motion of one leg is constrained in different ways. We compare our training method to three baselines. The first baseline uses only jointtraining for the policy, the second trains independent policies for each task, and the last randomly selects weights to split. We show that our approach learns more efficiently than each of the baseline methods.",
"title": ""
},
{
"docid": "0f505d193991bd0e3186409290e56217",
"text": "This stamp, honoring a Mexican artist who has transcended “la frontera” and has become and icon to Hispanics, feminists, and art lovers, will be a further reminder of the continuous cultural contributions of Latinos to the United States. (Cecilia Alvear, President of National Association of Hispanic Journalists (NAHJ) on the occasion of the introduction of the Frida Kahlo U.S. postage stamp; 2001; emphasis added)",
"title": ""
},
{
"docid": "2c0a4b5c819a8fcfd5a9ab92f59c311e",
"text": "Line starting capability of Synchronous Reluctance Motors (SynRM) is a crucial challenge in their design that if solved, could lead to a valuable category of motors. In this paper, the so-called crawling effect as a potential problem in Line-Start Synchronous Reluctance Motors (LS-SynRM) is analyzed. Two interfering scenarios on LS-SynRM start-up are introduced and one of them is treated in detail by constructing the asynchronous model of the motor. In the third section, a definition of this phenomenon is given utilizing a sample cage configuration. The LS-SynRM model and characteristics are compared with that of a reference induction motor (IM) in all sections of this work to convey a better perception of successful and unsuccessful synchronization consequences to the reader. Several important post effects of crawling on motor performance are discussed in the rest of the paper to evaluate how it would influence the motor operation. All simulations have been performed using Finite Element Analysis (FEA).",
"title": ""
},
{
"docid": "7277ab3a4228a9f266549952fc668afd",
"text": "Anomaly detection in a WSN is an important aspect of data analysis in order to identify data items that significantly differ from normal data. A characteristic of the data generated by a WSN is that the data distribution may alter over the lifetime of the network due to the changing nature of the phenomenon being observed. Anomaly detection techniques must be able to adapt to a non-stationary data distribution in order to perform optimally. In this survey, we provide a comprehensive overview of approaches to anomaly detection in a WSN and their operation in a non-stationary environment.",
"title": ""
},
{
"docid": "d2d7595f04af96d7499d7b7c06ba2608",
"text": "Deep Neural Network (DNN) is a widely used deep learning technique. How to ensure the safety of DNN-based system is a critical problem for the research and application of DNN. Robustness is an important safety property of DNN. However, existing work of verifying DNN’s robustness is timeconsuming and hard to scale to large-scale DNNs. In this paper, we propose a boosting method for DNN robustness verification, aiming to find counter-examples earlier. Our observation is DNN’s different inputs have different possibilities of existing counter-examples around them, and the input with a small difference between the largest output value and the second largest output value tends to be the achilles’s heel of the DNN. We have implemented our method and applied it on Reluplex, a state-ofthe-art DNN verification tool, and four DNN attacking methods. The results of the extensive experiments on two benchmarks indicate the effectiveness of our boosting method.",
"title": ""
},
{
"docid": "01572c84840fe3449dca555a087d2551",
"text": "A printed two-multiple-input multiple-output (MIMO)-antenna system incorporating a neutralization line for antenna port decoupling for wireless USB-dongle applications is proposed. The two monopoles are located on the two opposite corners of the system PCB and spaced apart by a small ground portion, which serves as a layout area for antenna feeding network and connectors for the use of standalone antennas as an optional scheme. It was found that by removing only 1.5 mm long inwards from the top edge in the small ground portion and connecting the two antennas therein with a thin printed line, the antenna port isolation can be effectively improved. The neutralization line in this study occupies very little board space, and the design requires no conventional modification to the ground plane for mitigating mutual coupling. The behavior of the neutralization line was rigorously analyzed, and the MIMO characteristics of the proposed antennas was also studied and tested in the reverberation chamber. Details of the constructed prototype are described and discussed in this paper.",
"title": ""
},
{
"docid": "3503074668bd55868f86a99a8a171073",
"text": "Deep Neural Networks (DNNs) provide state-of-the-art solutions in several difficult machine perceptual tasks. However, their performance relies on the availability of a large set of labeled training data, which limits the breadth of their applicability. Hence, there is a need for new semisupervised learning methods for DNNs that can leverage both (a small amount of) labeled and unlabeled training data. In this paper, we develop a general loss function enabling DNNs of any topology to be trained in a semi-supervised manner without extra hyper-parameters. As opposed to current semi-supervised techniques based on topology-specific or unstable approaches, ours is both robust and general. We demonstrate that our approach reaches state-of-the-art performance on the SVHN (9.82% test error, with 500 labels and wide Resnet) and CIFAR10 (16.38% test error, with 8000 labels and sigmoid convolutional neural network) data sets.",
"title": ""
},
{
"docid": "6d2449941d27774451edde784d3521fe",
"text": "Convolutional neural networks (CNNs) have recently been applied to the optical flow estimation problem. As training the CNNs requires sufficiently large amounts of labeled data, existing approaches resort to synthetic, unrealistic datasets. On the other hand, unsupervised methods are capable of leveraging real-world videos for training where the ground truth flow fields are not available. These methods, however, rely on the fundamental assumptions of brightness constancy and spatial smoothness priors that do not hold near motion boundaries. In this paper, we propose to exploit unlabeled videos for semi-supervised learning of optical flow with a Generative Adversarial Network. Our key insight is that the adversarial loss can capture the structural patterns of flow warp errors without making explicit assumptions. Extensive experiments on benchmark datasets demonstrate that the proposed semi-supervised algorithm performs favorably against purely supervised and baseline semi-supervised learning schemes.",
"title": ""
},
{
"docid": "bd3dd79aa5ecb5815b7ca4d461578f20",
"text": "Deep Reinforcement Learning (RL) recently emerged as one of the most competitive approaches for learning in sequential decision making problems with fully observable environments, e.g., computer Go. However, very little work has been done in deep RL to handle partially observable environments. We propose a new architecture called Action-specific Deep Recurrent QNetwork (ADRQN) to enhance learning performance in partially observable domains. Actions are encoded by a fully connected layer and coupled with a convolutional observation to form an action-observation pair. The time series of actionobservation pairs are then integrated by an LSTM layer that learns latent states based on which a fully connected layer computes Q-values as in conventional Deep Q-Networks (DQNs). We demonstrate the effectiveness of our new architecture in several partially observable domains, including flickering Atari games.",
"title": ""
},
{
"docid": "1a11a6cb0795d432ecdbb5ab4a540a43",
"text": "This article offers a critical examination and reassessment of the history of CALL, and argues for three new categories—Restricted, Open and Integrated CALL. It offers definitions and description of the three approaches and argues that they allow a more detailed analysis of institutions and classrooms than earlier analyses. It is suggested that we are currently using the second approach, Open CALL, but that our aim should be to attain a state of ‘normalisation’ in which the technology is invisible and truly integrated. This state is defined and discussed. In the final section the article proposes some ways in which this normalisation can be achieved—using ethnographic assessments and action research, for example—thus setting an agenda for CALL practice in the future. # 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "375de005698ccaf54d7b82875f1f16c5",
"text": "This paper describes design, Simulation and manufacturing procedures of HIRAD - a teleoperated Tracked Surveillance UGV for military, Rescue and other civilian missions in various hazardous environments. A Double Stabilizer Flipper mechanism mounted on front pulleys enables the Robot to have good performance in travelling over uneven terrains and climbing stairs. Using this Stabilizer flipper mechanism reduces energy consumption while climbing the stairs or crossing over obstacles. The locomotion system mechanical design is also described in detail. The CAD geometry 3D-model has been produced by CATIA software. To analyze the system mobility, a virtual model was developed with ADAMS Software. This simulation included different mobility maneuvers such as stair climbing, gap crossing and travelling over steep slopes. The simulations enabled us to define motor torque requirements. We performed many experiments with manufactured prototype under various terrain conditions Such as stair climbing, gap crossing and slope elevation. In experiments, HIRAD shows good overcoming ability for the tested terrain conditions.",
"title": ""
},
{
"docid": "9e65315d4e241dc8d4ea777247f7c733",
"text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.",
"title": ""
},
{
"docid": "61aa40b1119dcc636c2f4a15a64ffce3",
"text": "While online product reviews are valuable sources of information to facilitate consumers’ purchase decisions, it is deemed meaningful and important to distinguish helpful reviews from unhelpful ones for consumers facing huge amounts of reviews nowadays. Thus, in light of review classification, this paper proposes a novel approach to identifying review helpfulness. In doing so, a Bayesian inference is introduced to estimate the probabilities of the reviews belonging to respective classes, which differs from the traditional approach that only assigns class labels in a binary manner. Furthermore, an extended fuzzy associative classifier, namely GARCfp, is developed to train review helpfulness classification models based on review class probabilities and fuzzily partitioned review feature values. Finally, data experiments conducted on the reviews from amazon.com reveal the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "a646dd3603e0204f0eccdf24c415b3be",
"text": "A new re¯ow parameter, heating factor (Q g), which is de®ned as the integral of the measured temperature over the dwell time above liquidus, has been proposed in this report. It can suitably represent the combined eect of both temperature and time in usual re¯ow process. Relationship between reliability of the micro-ball grid array (micro-BGA) package and heating factor has been discussed. The fatigue failure of micro-BGA solder joints re¯owed with dierent heating factor in nitrogen ambient has been investigated using the bending cycle test. The fatigue lifetime of the micro-BGA assemblies ®rstly increases and then decreases with increasing heating factor. The greatest lifetime happens at Q g near 500 s °C. The optimal Q g range is between 300 and 750 s °C. In this range, the lifetime of the micro-BGA assemblies is greater than 4500 cycles. SEM micrographs reveal that cracks always initiate at the point of the acute angle where the solder joint joins the PCB pad.",
"title": ""
},
{
"docid": "051d402ce90d7d326cc567e228c8411f",
"text": "CDM ESD event has become the main ESD reliability concern for integrated-circuits products using nanoscale CMOS technology. A novel CDM ESD protection design, using self-biased current trigger (SBCT) and source pumping, has been proposed and successfully verified in 0.13-lm CMOS technology to achieve 1-kV CDM ESD robustness. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dbcef163643232313207cd45402158de",
"text": "Every industry has significant data output as a product of their working process, and with the recent advent of big data mining and integrated data warehousing it is the case for a robust methodology for assessing the quality for sustainable and consistent processing. In this paper a review is conducted on Data Quality (DQ) in multiple domains in order to propose connections between their methodologies. This critical review suggests that within the process of DQ assessment of heterogeneous data sets, not often are they treated as separate types of data in need of an alternate data quality assessment framework. We discuss the need for such a directed DQ framework and the opportunities that are foreseen in this research area and propose to address it through degrees of heterogeneity.",
"title": ""
},
{
"docid": "1eea111c3efcc67fcc1bb6f358622475",
"text": "Methyl Cellosolve (the monomethyl ether of ethylene glycol) has been widely used as the organic solvent in ninhydrin reagents for amino acid analysis; it has, however, properties that are disadvantageous in a reagent for everyday employment. The solvent is toxic and it is difficult to keep the ether peroxide-free. A continuing effort to arrive at a chemically preferable and relatively nontoxic substitute for methyl Cellosolve has led to experiments with dimethyl s&oxide, which proves to be a better solvent for the reduced form of ninhydrin (hydrindantin) than is methyl Cellosolve. Dimethyl sulfoxide can replace the latter, volume for volume, in a ninhydrin reagent mixture that gives equal performance and has improved stability. The result is a ninhydrin-hydrindantin solution in 75% dimethyl sulfoxide25 % 4 M lithium acetate buffer at pH 5.2. This type of mixture, with appropriate hydrindantin concentrations, is recommended to replace methyl Cellosolve-containing reagents in the quantitative determination of amino acids by automatic analyzers and by the manual ninhydrin method.",
"title": ""
}
] | scidocsrr |
0b221b3182715e8cd03771a46fb0cf45 | The effect of social media on brand loyalty | [
{
"docid": "368f904533e17beec78d347ee8ceabb1",
"text": "A brand community from a customer-experiential perspective is a fabric of relationships in which the customer is situated. Crucial relationships include those between the customer and the brand, between the customer and the firm, between the customer and the product in use, and among fellow customers. The authors delve ethnographically into a brand community and test key findings through quantitative methods. Conceptually, the study reveals insights that differ from prior research in four important ways: First, it expands the definition of a brand community to entities and relationships neglected by previous research. Second, it treats vital characteristics of brand communities, such as geotemporal concentrations and the richness of social context, as dynamic rather than static phenomena. Third, it demonstrates that marketers can strengthen brand communities by facilitating shared customer experiences in ways that alter those dynamic characteristics. Fourth, it yields a new and richer conceptualization of customer loyalty as integration in a brand community.",
"title": ""
}
] | [
{
"docid": "5699f53cb26484b33b52df57f97a3e92",
"text": "In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure is able to capture underlying sources of variations in the temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.",
"title": ""
},
{
"docid": "1ca70e99cf3dc1957627efc68af32e0c",
"text": "In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.",
"title": ""
},
{
"docid": "0778eff54b2f48c9ed4554c617b2dcab",
"text": "The diagnosis of heart disease is a significant and tedious task in medicine. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not “mined” to determine concealed information for effective decision making by healthcare practitioners. The term Heart disease encompasses the diverse diseases that affect the heart. Cardiomyopathy and Cardiovascular disease are some categories of heart diseases. The reduction of blood and oxygen supply to the heart leads to heart disease. In this paper the data classification is based on supervised machine learning algorithms which result in accuracy, time taken to build the algorithm. Tanagra tool is used to classify the data and the data is evaluated using 10-fold cross validation and the results are compared.",
"title": ""
},
{
"docid": "b169e0e76f26db1f08cd84524aa10a53",
"text": "A very lightweight, broad-band, dual polarized antenna array with 128 elements for the frequency range from 7 GHz to 18 GHz has been designed, manufactured and measured. The total gain at the center frequency was measured to be 20 dBi excluding feeding network losses.",
"title": ""
},
{
"docid": "4702fceea318c326856cc2a7ae553e1f",
"text": "The Institute of Medicine identified “timeliness” as one of six key “aims for improvement” in its most recent report on quality. Yet patient delays remain prevalent, resulting in dissatisfaction, adverse clinical consequences, and often, higher costs. This tutorial describes several areas in which patients routinely experience significant and potentially dangerous delays and presents operations research (OR) models that have been developed to help reduce these delays, often at little or no cost. I also describe the difficulties in developing and implementing models as well as the factors that increase the likelihood of success. Finally, I discuss the opportunities, large and small, for using OR methodologies to significantly impact practices and policies that will affect timely access to healthcare.",
"title": ""
},
{
"docid": "f28008f9a90112a412d91d61916ac9a6",
"text": "Steady-State Visual Evoked Potential (SSVEP) is a visual cortical response evoked by repetitive stimuli with a light source flickering at frequencies above 4 Hz and could be classified into three ranges: low (up to 12 Hz), medium (12-30) and high frequency (> 30 Hz). SSVEP-based Brain-Computer Interfaces (BCI) are principally focused on the low and medium range of frequencies whereas there are only a few projects in the high-frequency range. However, they only evaluate the performance of different methods to extract SSVEP. This research proposed a high-frequency SSVEP-based asynchronous BCI in order to control the navigation of a mobile object on the screen through a scenario and to reach its final destination. This could help impaired people to navigate a robotic wheelchair. There were three different scenarios with different difficulty levels (easy, medium and difficult). The signal processing method is based on Fourier transform and three EEG measurement channels. The research obtained accuracies ranging in classification from 65% to 100% with Information Transfer Rate varying from 9.4 to 45 bits/min. Our proposed method allows all subjects participating in the study to control the mobile object and to reach a final target without prior training.",
"title": ""
},
{
"docid": "810a4573ca075d83e8bf2ece4fafe236",
"text": "IN this chapter we analyze four paradigms that currently are competing, or have until recently competed, for acceptance as the paradigm of choice in informing and guiding inquiry, especially qualitative inquiry: positivism, postpositivism, critical theory and related ideological positions, and constructivism. We acknowledge at once our own commitment to constructivism (which we earlier called \"naturalistic inquiry\"; Lincoln & Guba, 1985); the reader may wish to take that fact into account in judging the appropriateness and usefulness of our analysis. Although the title of this volume, Handbook of Qualitative Research, implies that the term qualitative is an umbrella term superior to the term paradigm (and, indeed, that usage is not uncommon), it is our position that it is a term that ought to be reserved for a description of types of methods. From our perspective, both qualitative and quantitative methods may be used appropriately with any research paradigm. Questions of method are secondary to questions of paradigm, which we define as the basic belief system or worldview that guides the investigator, not only in choices of method but in ontologicallyandepistemologicallyfundamentalways. It is certainly the case that interest in alternative paradigms has been stimulated by a growing dissatisfaction with the patent overemphasis on quantitative methods. But as efforts were made to build a case for a renewed interest in qualitative approaches, it became clear that the metaphysical assumptions undergirding the conventional paradigm (the \"received view\") must be seriously questioned. Thus the emphasis of this chapter is on paradigms, their assumptions, and the implications of those assumptions for a variety of research issues, not on the relative utility of qualitative versus quantitative methods. Nevertheless, as discussions of paradigms/methods over the past decade have often begun with a consideration of problems associated with overquantification, we will also begin there, shifting only later to our predominant interest.",
"title": ""
},
{
"docid": "7ab15804bd53aa8288aafc5374a12499",
"text": "We have used a modified technique in five patients to correct winging of the scapula caused by injury to the brachial plexus or the long thoracic nerve during transaxillary resection of the first rib. The procedure stabilises the scapulothoracic articulation by using strips of autogenous fascia lata wrapped around the 4th, 6th and 7th ribs at least two, and preferably three, times. The mean age of the patients at the time of operation was 38 years (26 to 47) and the mean follow-up six years and four months (three years and three months to 11 years). Satisfactory stability was achieved in all patients with considerable improvement in shoulder function. There were no complications.",
"title": ""
},
{
"docid": "405acd07ad0d1b3b82ada19e85e23ce6",
"text": "Self-driving technology is advancing rapidly — albeit with significant challenges and limitations. This progress is largely due to recent developments in deep learning algorithms. To date, however, there has been no systematic comparison of how different deep learning architectures perform at such tasks, or an attempt to determine a correlation between classification performance and performance in an actual vehicle, a potentially critical factor in developing self-driving systems. Here, we introduce the first controlled comparison of multiple deep-learning architectures in an end-to-end autonomous driving task across multiple testing conditions. We used a simple and affordable platform consisting of an off-the-shelf, remotely operated vehicle, a GPU-equipped computer, and an indoor foamrubber racetrack. We compared performance, under identical driving conditions, across seven architectures including a fully-connected network, a simple 2 layer CNN, AlexNet, VGG-16, Inception-V3, ResNet, and an LSTM by assessing the number of laps each model was able to successfully complete without crashing while traversing an indoor racetrack. We compared performance across models when the conditions exactly matched those in training as well as when the local environment and track were configured differently and objects that were not included in the training dataset were placed on the track in various positions. In addition, we considered performance using several different data types for training and testing including single grayscale and color frames, and multiple grayscale frames stacked together in sequence. With the exception of a fully-connected network, all models performed reasonably well (around or above 80%) and most very well (∼95%) on at least one input type but with considerable variation across models and inputs. Overall, AlexNet, operating on single color frames as input, achieved the best level of performance (100% success rate in phase one and 55% in phase two) while VGG-16 performed well most consistently across image types. Performance with obstacles on the track and conditions that were different than those in training was much more variable than without objects and under conditions similar to those in the training set. Analysis of the model’s driving paths found greater consistency within vs. between models. Path similarity between models did not correlate strongly with success similarity. Our novel pixelflipping method allowed us to create a heatmap for each given image to observe what features of the image were weighted most heavily by the network when making its decision. Finally, we found that the variability across models in the driving task was not fully predicted by validation performance, indicating the presence of a ‘deployment gap’ between model training and performance in a simple, real-world task. Overall, these results demonstrate the need for increased field research in self-driving. 1Center for Complex Systems and Brain Sciences, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA 2College of Computer and Information Science, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA 3Department of Ocean and Mechanical Engineering, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA † [email protected]",
"title": ""
},
{
"docid": "70c75c5456563a80276d883d8a5241b3",
"text": "One of the main problems in underwater communications is the low data rate available due to the use of low frequencies. Moreover, there are many problems inherent to the medium such as reflections, refraction, energy dispersion, etc., that greatly degrade communication between devices. In some cases, wireless sensors must be placed quite close to each other in order to take more accurate measurements from the water while having high communication bandwidth. In these cases, while most researchers focus their efforts on increasing the data rate for low frequencies, we propose the use of the 2.4 GHz ISM frequency band in these special cases. In this paper, we show our wireless sensor node deployment and its performance obtained from a real scenario and measures taken for different frequencies, modulations and data transfer rates. The performed tests show the maximum distance between sensors, the number of lost packets and the average round trip time. Based on our measurements, we provide some experimental models of underwater communication in fresh water using EM waves in the 2.4 GHz ISM frequency band. Finally, we compare our communication system proposal with the existing systems. Although our proposal provides short communication distances, it provides high data transfer rates. It can be used for precision monitoring in applications such as contaminated ecosystems or for device communicate at high depth.",
"title": ""
},
{
"docid": "0bf3c08b71fedd629bdc584c3deeaa34",
"text": "Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary.",
"title": ""
},
{
"docid": "9ea9b364e2123d8917d4a2f25e69e084",
"text": "Movement observation and imagery are increasingly propagandized for motor rehabilitation. Both observation and imagery are thought to improve motor function through repeated activation of mental motor representations. However, it is unknown what stimulation parameters or imagery conditions are optimal for rehabilitation purposes. A better understanding of the mechanisms underlying movement observation and imagery is essential for the optimization of functional outcome using these training conditions. This study systematically assessed the corticospinal excitability during rest, observation, imagery and execution of a simple and a complex finger-tapping sequence in healthy controls using transcranial magnetic stimulation (TMS). Observation was conducted passively (without prior instructions) as well as actively (in order to imitate). Imagery was performed visually and kinesthetically. A larger increase in corticospinal excitability was found during active observation in comparison with passive observation and visual or kinesthetic imagery. No significant difference between kinesthetic and visual imagery was found. Overall, the complex task led to a higher corticospinal excitability in comparison with the simple task. In conclusion, the corticospinal excitability was modulated during both movement observation and imagery. Specifically, active observation of a complex motor task resulted in increased corticospinal excitability. Active observation may be more effective than imagery for motor rehabilitation purposes. In addition, the activation of mental motor representations may be optimized by varying task-complexity.",
"title": ""
},
{
"docid": "e995adcdeb6c290eb484ad136d48e8a0",
"text": "Extended-connectivity fingerprints (ECFPs) are a novel class of topological fingerprints for molecular characterization. Historically, topological fingerprints were developed for substructure and similarity searching. ECFPs were developed specifically for structure-activity modeling. ECFPs are circular fingerprints with a number of useful qualities: they can be very rapidly calculated; they are not predefined and can represent an essentially infinite number of different molecular features (including stereochemical information); their features represent the presence of particular substructures, allowing easier interpretation of analysis results; and the ECFP algorithm can be tailored to generate different types of circular fingerprints, optimized for different uses. While the use of ECFPs has been widely adopted and validated, a description of their implementation has not previously been presented in the literature.",
"title": ""
},
{
"docid": "ce650daedc7ba277d245a2150062775f",
"text": "Amongst the large number of write-and-throw-away-spreadsheets developed for one-time use there is a rather neglected proportion of spreadsheets that are huge, periodically used, and submitted to regular update-cycles like any conventionally evolving valuable legacy application software. However, due to the very nature of spreadsheets, their evolution is particularly tricky and therefore error-prone. In our strive to develop tools and methodologies to improve spreadsheet quality, we analysed consolidation spreadsheets of an internationally operating company for the errors they contain. The paper presents the results of the field audit, involving 78 spreadsheets with 60,446 non-empty cells. As a by-product, the study performed was also to validate our analysis tools in an industrial context. The evaluated auditing tool offers the auditor a new view on the formula structure of the spreadsheet by grouping similar formulas into equivalence classes. Our auditing approach defines three similarity criteria between formulae, namely copy, logical and structural equivalence. To improve the visualization of large spreadsheets, equivalences and data dependencies are displayed in separated windows that are interlinked with the spreadsheet. The auditing approach helps to find irregularities in the geometrical pattern of similar formulas.",
"title": ""
},
{
"docid": "9cd92fa5085c1f7edec5c2ba53c549cc",
"text": "Support theory represents probability judgment in terms of the support, or strength of evidence, of the focal relative to the alternative hypothesis. It assumes that the judged probability of an event generally increases when its description is unpacked into disjoint components (implicit subadditivity). This article presents a significant extension of the theory in which the judged probability of an explicit disjunction is less than or equal to the sum of the judged probabilities of its disjoint components (explicit subadditivity). Several studies of probability and frequency judgment demonstrate both implicit and explicit subadditivity. The former is attributed to enhanced availability, whereas the latter is attributed to repacking and anchoring.",
"title": ""
},
{
"docid": "b045e59c52ff1d555f79831f96309d5c",
"text": "In this paper, we show that for several clustering problems one can extract a small set of points, so that using those core-sets enable us to perform approximate clustering efficiently. The surprising property of those core-sets is that their size is independent of the dimension.Using those, we present a (1+ ε)-approximation algorithms for the k-center clustering and k-median clustering problems in Euclidean space. The running time of the new algorithms has linear or near linear dependency on the number of points and the dimension, and exponential dependency on 1/ε and k. As such, our results are a substantial improvement over what was previously known.We also present some other clustering results including (1+ ε)-approximate 1-cylinder clustering, and k-center clustering with outliers.",
"title": ""
},
{
"docid": "38499d78ab2b66f87e8314d75ff1c72f",
"text": "We investigated large-scale systems organization of the whole human brain using functional magnetic resonance imaging (fMRI) data acquired from healthy volunteers in a no-task or 'resting' state. Images were parcellated using a prior anatomical template, yielding regional mean time series for each of 90 regions (major cortical gyri and subcortical nuclei) in each subject. Significant pairwise functional connections, defined by the group mean inter-regional partial correlation matrix, were mostly either local and intrahemispheric or symmetrically interhemispheric. Low-frequency components in the time series subtended stronger inter-regional correlations than high-frequency components. Intrahemispheric connectivity was generally related to anatomical distance by an inverse square law; many symmetrical interhemispheric connections were stronger than predicted by the anatomical distance between bilaterally homologous regions. Strong interhemispheric connectivity was notably absent in data acquired from a single patient, minimally conscious following a brainstem lesion. Multivariate analysis by hierarchical clustering and multidimensional scaling consistently defined six major systems in healthy volunteers-- corresponding approximately to four neocortical lobes, medial temporal lobe and subcortical nuclei- - that could be further decomposed into anatomically and functionally plausible subsystems, e.g. dorsal and ventral divisions of occipital cortex. An undirected graph derived by thresholding the healthy group mean partial correlation matrix demonstrated local clustering or cliquishness of connectivity and short mean path length compatible with prior data on small world characteristics of non-human cortical anatomy. Functional MRI demonstrates a neurophysiological architecture of the normal human brain that is anatomically sensible, strongly symmetrical, disrupted by acute brain injury, subtended predominantly by low frequencies and consistent with a small world network topology.",
"title": ""
},
{
"docid": "9c77080dbab62dc7a5ddafcde98d094c",
"text": "A cornucopia of dimensionality reduction techniques have emerged over the past decade, leaving data analysts with a wide variety of choices for reducing their data. Means of evaluating and comparing low-dimensional embeddings useful for visualization, however, are very limited. When proposing a new technique it is common to simply show rival embeddings side-by-side and let human judgment determine which embedding is superior. This study investigates whether such human embedding evaluations are reliable, i.e., whether humans tend to agree on the quality of an embedding. We also investigate what types of embedding structures humans appreciate a priori. Our results reveal that, although experts are reasonably consistent in their evaluation of embeddings, novices generally disagree on the quality of an embedding. We discuss the impact of this result on the way dimensionality reduction researchers should present their results, and on applicability of dimensionality reduction outside of machine learning.",
"title": ""
},
{
"docid": "48ecc1d2fd36350dc1dd2f097f05d9ba",
"text": "This paper is divided in two parts. Part one examines the relevance of Artificial Neural Networks (ANNs) for various business applications. The first section sets the stage for ANNs in the context of modern day business by discussing the evolution of businesses from Industrial Revolution to current Information Age to outline why business today are in critical need of technology that sifts through massive data. Next section introduces Artificial Neural Network technology as a favorable alternative to traditional analytics and informs the reader of the basic concept underlying the technology. Finally, third section screens through four different applications of ANNs to gain an insight into potential business opportunities that lie abound.",
"title": ""
},
{
"docid": "a1f60b03cf3a7dde3090cbf0a926a7e9",
"text": "Secondary analyses of Revised NEO Personality Inventory data from 26 cultures (N = 23,031) suggest that gender differences are small relative to individual variation within genders; differences are replicated across cultures for both college-age and adult samples, and differences are broadly consistent with gender stereotypes: Women reported themselves to be higher in Neuroticism, Agreeableness, Warmth, and Openness to Feelings, whereas men were higher in Assertiveness and Openness to Ideas. Contrary to predictions from evolutionary theory, the magnitude of gender differences varied across cultures. Contrary to predictions from the social role model, gender differences were most pronounced in European and American cultures in which traditional sex roles are minimized. Possible explanations for this surprising finding are discussed, including the attribution of masculine and feminine behaviors to roles rather than traits in traditional cultures.",
"title": ""
}
] | scidocsrr |
1253bb1881b37652b762b6d5799d3457 | MoonGen: A Scriptable High-Speed Packet Generator | [
{
"docid": "a7d5ba182deefef418e03725f664d68e",
"text": "Network stacks currently implemented in operating systems can no longer cope with the packet rates offered by 10 Gbit Ethernet. Thus, frameworks were developed claiming to offer a faster alternative for this demand. These frameworks enable arbitrary packet processing systems to be built from commodity hardware handling a traffic rate of several 10 Gbit interfaces, entering a domain previously only available to custom-built hardware. In this paper, we survey various frameworks for high-performance packet IO. We analyze the performance of the most prominent frameworks based on representative measurements in packet forwarding scenarios. Therefore, we quantify the effects of caching and look at the tradeoff between throughput and latency. Moreover, we introduce a model to estimate and assess the performance of these packet processing frameworks.",
"title": ""
}
] | [
{
"docid": "59f3c511765c52702b9047a688256532",
"text": "Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots. Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult— especially when using inexpensive (and therefore preferable) sensors. This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described. In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data. We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.",
"title": ""
},
{
"docid": "9be415bd0789f77029fc99a6ac52a614",
"text": "Image classification problem is one of most important research directions in image processing and has become the focus of research in many years due to its diversity and complexity of image information. In view of the existing image classification models’ failure to fully utilize the information of images, this paper proposes a novel image classification method of combining the Convolutional Neural Network (CNN) and eXtreme Gradient Boosting (XGBoost), which are two outstanding classifiers. The presented CNNXGBoost model provides more precise output by integrating CNN as a trainable feature extractor to automatically obtain features from input and XGBoost as a recognizer in the top level of the network to produce results. Experiments are implemented on the well-known MNIST and CIFAR-10 databases. The results prove that the new method performs better compared with other methods on the same databases, which verify the effectiveness of the proposed method in image classification problem.",
"title": ""
},
{
"docid": "908e2a94523743a90a57f9419fef8d28",
"text": "Heart rate variability (HRV) is generated by the interaction of multiple regulatory mechanisms that operate on different time scales. This article examines the regulation of the heart, the meaning of HRV, Thayer and Lane’s neurovisceral integration model, the sources of HRV, HRV frequency and time domain measurements, Porges’s polyvagal theory, and resonance frequency breathing. The medical implications of HRV biofeedback for cardiovascular rehabilitation and inflammatory disorders are considered.",
"title": ""
},
{
"docid": "b281f1244dbf31c492d34f0314f8b3e2",
"text": "CONTEXT\nThe National Consensus Project for Quality Palliative Care includes spiritual care as one of the eight clinical practice domains. There are very few standardized spirituality history tools.\n\n\nOBJECTIVES\nThe purpose of this pilot study was to test the feasibility for the Faith, Importance and Influence, Community, and Address (FICA) Spiritual History Tool in clinical settings. Correlates between the FICA qualitative data and quality of life (QOL) quantitative data also were examined to provide additional insight into spiritual concerns.\n\n\nMETHODS\nThe framework of the FICA tool includes Faith or belief, Importance of spirituality, individual's spiritual Community, and interventions to Address spiritual needs. Patients with solid tumors were recruited from ambulatory clinics of a comprehensive cancer center. Items assessing aspects of spirituality within the Functional Assessment of Cancer Therapy QOL tools were used, and all patients were assessed using the FICA. The sample (n=76) had a mean age of 57, and almost half were of diverse religions.\n\n\nRESULTS\nMost patients rated faith or belief as very important in their lives (mean 8.4; 0-10 scale). FICA quantitative ratings and qualitative comments were closely correlated with items from the QOL tools assessing aspects of spirituality.\n\n\nCONCLUSION\nFindings suggest that the FICA tool is a feasible tool for clinical assessment of spirituality. Addressing spiritual needs and concerns in clinical settings is critical in enhancing QOL. Additional use and evaluation by clinicians of the FICA Spiritual Assessment Tool in usual practice settings are needed.",
"title": ""
},
{
"docid": "4480840e6dbab77e4f032268ea69bff1",
"text": "This chapter provides a critical survey of emergence definitions both from a conceptual and formal standpoint. The notions of downward / backward causation and weak / strong emergence are specially discussed, for application to complex social system with cognitive agents. Particular attention is devoted to the formal definitions introduced by (Müller 2004) and (Bonabeau & Dessalles, 1997), which are operative in multi-agent frameworks and make sense from both cognitive and social point of view. A diagrammatic 4-Quadrant approach, allow us to understanding of complex phenomena along both interior/exterior and individual/collective dimension.",
"title": ""
},
{
"docid": "db6904a5aa2196dedf37b279e04b3ea8",
"text": "The use of animation and multimedia for learning is now further extended by the provision of entire Virtual Reality Learning Environments (VRLE). This highlights a shift in Web-based learning from a conventional multimedia to a more immersive, interactive, intuitive and exciting VR learning environment. VRLEs simulate the real world through the application of 3D models that initiates interaction, immersion and trigger the imagination of the learner. The question of good pedagogy and use of technology innovations comes into focus once again. Educators attempt to find theoretical guidelines or instructional principles that could assist them in developing and applying a novel VR learning environment intelligently. This paper introduces the educational use of Web-based 3D technologies and highlights in particular VR features. It then identifies constructivist learning as the pedagogical engine driving the construction of VRLE and discusses five constructivist learning approaches. Furthermore, the authors provide two case studies to investigate VRLEs for learning purposes. The authors conclude with formulating some guidelines for the effective use of VRLEs, including discussion of the limitations and implications for the future study of VRLEs. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8e7cfad4f1709101e5790343200d1e16",
"text": "Although electronic commerce experts often cite privacy concerns as barriers to consumer electronic commerce, there is a lack of understanding about how these privacy concerns impact consumers' willingness to conduct transactions online. Therefore, the goal of this study is to extend previous models of e-commerce adoption by specifically assessing the impact that consumers' concerns for information privacy (CFIP) have on their willingness to engage in online transactions. To investigate this, we conducted surveys focusing on consumers’ willingness to transact with a well-known and less well-known Web merchant. Results of the study indicate that concern for information privacy affects risk perceptions, trust, and willingness to transact for a wellknown merchant, but not for a less well-known merchant. In addition, the results indicate that merchant familiarity does not moderate the relationship between CFIP and risk perceptions or CFIP and trust. Implications for researchers and practitioners are discussed. 1 Elena Karahanna was the accepting senior editor. Kathy Stewart Schwaig and David Gefen were the reviewers. This paper was submitted on October 12, 2004, and went through 4 revisions. Information Privacy and Online Consumer Purchasing/Van Slyke et al. Journal of the Association for Information Systems Vol. 7 No. 6, pp. 415-444/June 2006 416 Introduction Although information privacy concerns have long been cited as barriers to consumer adoption of business-to-consumer (B2C) e-commerce (Hoffman et al., 1999, Sullivan, 2005), the results of studies focusing on privacy concerns have been equivocal. Some studies find that mechanisms intended to communicate information about privacy protection such as privacy seals and policies increase intentions to engage in online transactions (Miyazaki and Krishnamurthy, 2002). In contrast, others find that these mechanisms have no effect on consumer willingness to engage in online transactions (Kimery and McCord, 2002). Understanding how consumers’ concerns for information privacy (CFIP), or their concerns about how organizations use and protect personal information (Smith et al., 1996), impact consumers’ willingness to engage in online transactions is important to our knowledge of consumer-oriented e-commerce. For example, if CFIP has a strong direct impact on willingness to engage in online transactions, both researchers and practitioners may want to direct efforts at understanding how to allay some of these concerns. In contrast, if CFIP only impacts willingness to transact through other factors, then efforts may be directed at influencing these factors through both CFIP as well as through their additional antecedents. Prior research on B2C e-commerce examining consumer willingness to transact has focused primarily on the role of trust and trustworthiness either using trust theory or using acceptance, and adoption-based theories as frameworks from which to study trust. The research based on trust theories tends to focus on the structure of trust or on antecedents to trust (Bhattacherjee, 2002; Gefen, 2000; Jarvenpaa et al., 2000; McKnight et al., 2002a). Adoptionand acceptance-based research includes studies using the Technology Acceptance Model (Gefen et al., 2003) and diffusion theory (Van Slyke et al., 2004) to examine the effects of trust within well-established models. To our knowledge, studies of the effects of trust in the context of e-commerce transactions have not included CFIP as an antecedent in their models. 
The current research addresses this by examining the effect of CFIP on willingness to transact within a nomological network of additional antecedents (i.e., trust and risk) that we expect will be influenced by CFIP. In addition, familiarity with the Web merchant may moderate the relationship between CFIP and both trust and risk perceptions. As an individual becomes more familiar with the Web merchant and how it collects and protects personal information, perceptions may be driven more by knowledge of the merchant than by information concerns. This differential relationship between factors for more familiar (e.g. experienced) and less familiar merchants is similar to findings of previous research on user acceptance for potential and repeat users of technology (Karahanna et al., 1999) and e-commerce customers (Gefen et al., 2003). Thus, this research has two goals. The first goal is to better understand the role that consumers’ concerns for information privacy (CFIP) have on their willingness to engage in online transactions. The second goal is to investigate whether familiarity moderates the effects of CFIP on key constructs in our nomological network. Specifically, the following research questions are investigated: How do consumers’ concerns for information privacy affect their willingness to engage in online transactions? Does consumers' familiarity with a Web merchant moderate the impact of concern for information privacy on risk and on trust? Information Privacy and Online Consumer Purchasing/Van Slyke et al. Journal of the Association for Information Systems Vol. 7 No. 6, pp. 415-444/June 2006 417 This paper is organized as follows. First, we provide background information regarding the existing literature and the constructs of interest. Next, we present our research model and develop the hypotheses arising from the model. We then describe the method by which we investigated the hypotheses. This is followed by a discussion of the results of our analysis. We conclude the paper by discussing the implications and limitations of our work, along with suggestions for future research. Research Model and Hypotheses Figure 1 presents this study's research model. Given that concern for information privacy is the central focus of the study, we embed the construct within a nomological network of willingness to transact in prior research. Specifically, we include risk, familiarity with the merchant, and trust (Bhattacherjee, 2002; Gefen et al., 2003; Jarvenpaa and Tractinsky, 1999; Van Slyke et al., 2004) constructs that CFIP is posited to influence and that have been found to influence. We first discuss CFIP and then present the theoretical rationale that underlies the relationships presented in the research model. We begin our discussion of the research model by providing an overview of CFIP, focusing on this construct in the context of e-commerce.",
"title": ""
},
{
"docid": "b6fa1ee8c2f07b34768a78591c33bbbe",
"text": "We prove that there are arbitrarily long arithmetic progressions of primes. There are three major ingredients. [. . . ] [. . . ] for all x ∈ ZN (here (m0, t0, L0) = (3, 2, 1)) and E ( ν((x− y)/2)ν((x− y + h2)/2)ν(−y)ν(−y − h1)× × ν((x− y′)/2)ν((x− y′ + h2)/2)ν(−y)ν(−y − h1)× × ν(x)ν(x + h1)ν(x + h2)ν(x + h1 + h2) ∣∣∣∣ x, h1, h2, y, y′ ∈ ZN) = 1 + o(1) (0.1) (here (m0, t0, L0) = (12, 5, 2)). [. . . ] Proposition 0.1 (Generalised von Neumann). Suppose that ν is k-pseudorandom. Let f0, . . . , fk−1 ∈ L(ZN) be functions which are pointwise bounded by ν+νconst, or in other words |fj(x)| 6 ν(x) + 1 for all x ∈ ZN , 0 6 j 6 k − 1. (0.2) Let c0, . . . , ck−1 be a permutation of {0, 1, . . . , k − 1} (in practice we will take cj := j). Then E ( k−1 ∏ j=0 fj(x + cjr) ∣∣∣∣ x, r ∈ ZN) = O( inf 06j6k−1 ‖fj‖Uk−1) + o(1).",
"title": ""
},
{
"docid": "c4e6d52d87bbf910196ddc955fd161d3",
"text": "Virtual reality (VR) has recently emerged as a potentially effective way to provide general and specialty health care services, and appears poised to enter mainstream psychotherapy delivery. Because VR could be part of the future of clinical psychology, it is critical to all psychotherapists that it be defined broadly. To ensure appropriate development of VR applications, clinicians must have a clear understanding of the opportunities and challenges it will provide in professional practice. This review outlines the current state of clinical research relevant to the development of virtual environments for use in psychotherapy. In particular, the paper focuses its analysis on both actual applications of VR in clinical psychology and how different clinical perspectives can use this approach to improve the process of therapeutic change.",
"title": ""
},
{
"docid": "b2c265eb287b95bf87ecf38a5a4aa97b",
"text": "Photographs of hazy scenes typically suffer having low contrast and offer a limited visibility of the scene. This article describes a new method for single-image dehazing that relies on a generic regularity in natural images where pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines. We derive a local formation model that explains the color-lines in the context of hazy scenes and use it for recovering the scene transmission based on the lines' offset from the origin. The lack of a dominant color-line inside a patch or its lack of consistency with the formation model allows us to identify and avoid false predictions. Thus, unlike existing approaches that follow their assumptions across the entire image, our algorithm validates its hypotheses and obtains more reliable estimates where possible.\n In addition, we describe a Markov random field model dedicated to producing complete and regularized transmission maps given noisy and scattered estimates. Unlike traditional field models that consist of local coupling, the new model is augmented with long-range connections between pixels of similar attributes. These connections allow our algorithm to properly resolve the transmission in isolated regions where nearby pixels do not offer relevant information.\n An extensive evaluation of our method over different types of images and its comparison to state-of-the-art methods over established benchmark images show a consistent improvement in the accuracy of the estimated scene transmission and recovered haze-free radiances.",
"title": ""
},
{
"docid": "559be3dd29ae8f6f9a9c99951c82a8d3",
"text": "This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. A special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.",
"title": ""
},
{
"docid": "5c872c3538d2f70c63bd3b39d696c2f4",
"text": "Massive pulmonary embolism (PE) is characterized by systemic hypotension (defined as a systolic arterial pressure < 90 mm Hg or a drop in systolic arterial pressure of at least 40 mm Hg for at least 15 min which is not caused by new onset arrhythmias) or shock (manifested by evidence of tissue hypoperfusion and hypoxia, including an altered level of consciousness, oliguria, or cool, clammy extremities). Massive pulmonary embolism has a high mortality rate despite advances in diagnosis and therapy. A subgroup of patients with nonmassive PE who are hemodynamically stable but with right ventricular (RV) dysfunction or hypokinesis confirmed by echocardiography is classified as submassive PE. Their prognosis is different from that of others with non-massive PE and normal RV function. This article attempts to review the evidence-based risk stratification, diagnosis, initial stabilization, and management of massive and nonmassive pulmonary embolism.",
"title": ""
},
{
"docid": "72ee3bf58497eddeda11f19488fc8e55",
"text": "People can benefit from disclosing negative emotions or stigmatized facets of their identities, and psychologists have noted that imagery can be an effective medium for expressing difficult emotions. Social network sites like Instagram offer unprecedented opportunity for image-based sharing. In this paper, we investigate sensitive self-disclosures on Instagram and the responses they attract. We use visual and textual qualitative content analysis and statistical methods to analyze self-disclosures, associated comments, and relationships between them. We find that people use Instagram to engage in social exchange and story-telling about difficult experiences. We find considerable evidence of social support, a sense of community, and little aggression or support for harmful or pro-disease behaviors. Finally, we report on factors that influence engagement and the type of comments these disclosures attract. Personal narratives, food and beverage, references to illness, and self-appearance concerns are more likely to attract positive social support. Posts seeking support attract significantly more comments. CAUTION: This paper includes some detailed examples of content about eating disorders and self-injury illnesses.",
"title": ""
},
{
"docid": "91365154a173be8be29ef14a3a76b08e",
"text": "Fraud is a criminal practice for illegitimate gain of wealth or tampering information. Fraudulent activities are of critical concern because of their severe impact on organizations, communities as well as individuals. Over the last few years, various techniques from different areas such as data mining, machine learning, and statistics have been proposed to deal with fraudulent activities. Unfortunately, the conventional approaches display several limitations, which were addressed largely by advanced solutions proposed in the advent of Big Data. In this paper, we present fraud analysis approaches in the context of Big Data. Then, we study the approaches rigorously and identify their limits by exploiting Big Data analytics.",
"title": ""
},
{
"docid": "f02587ac75edc7a7880131a4db077bb2",
"text": "Single-unit recordings in monkeys have revealed neurons in the lateral prefrontal cortex that increase their firing during a delay between the presentation of information and its later use in behavior. Based on monkey lesion and neurophysiology studies, it has been proposed that a dorsal region of lateral prefrontal cortex is necessary for temporary storage of spatial information whereas a more ventral region is necessary for the maintenance of nonspatial information. Functional neuroimaging studies, however, have not clearly demonstrated such a division in humans. We present here an analysis of all reported human functional neuroimaging studies plotted onto a standardized brain. This analysis did not find evidence for a dorsal/ventral subdivision of prefrontal cortex depending on the type of material held in working memory, but a hemispheric organization was suggested (i.e., left-nonspatial; right-spatial). We also performed functional MRI studies in 16 normal subjects during two tasks designed to probe either nonspatial or spatial working memory, respectively. A group and subgroup analysis revealed similarly located activation in right middle frontal gyrus (Brodmann's area 46) in both spatial and nonspatial [working memory-control] subtractions. Based on another model of prefrontal organization [M. Petrides, Frontal lobes and behavior, Cur. Opin. Neurobiol., 4 (1994) 207-211], a reconsideration of the previous imaging literature data suggested that a dorsal/ventral subdivision of prefrontal cortex may depend upon the type of processing performed upon the information held in working memory.",
"title": ""
},
{
"docid": "d5d44a76ddb04a34f395239341fe6081",
"text": "Commercial eye-gaze trackers have the potential to be an important tool for quantifying the benefits of new visualization techniques. The expense of such trackers has made their use relatively infrequent in visualization studies. As such, it is difficult for researchers to compare multiple devices – obtaining several demonstration models is impractical in cost and time, and quantitative measures from real-world use are not readily available. In this paper, we present a sample protocol to determine the accuracy of a gaze-tacking device.",
"title": ""
},
{
"docid": "a1fe2227bc9d6ddeda58ff8d137d660b",
"text": "Vulnerability exploits remain an important mechanism for malware delivery, despite efforts to speed up the creation of patches and improvements in software updating mechanisms. Vulnerabilities in client applications (e.g., Browsers, multimedia players, document readers and editors) are often exploited in spear phishing attacks and are difficult to characterize using network vulnerability scanners. Analyzing their lifecycle requires observing the deployment of patches on hosts around the world. Using data collected over 5 years on 8.4 million hosts, available through Symantec's WINE platform, we present the first systematic study of patch deployment in client-side vulnerabilities. We analyze the patch deployment process of 1,593 vulnerabilities from 10 popular client applications, and we identify several new threats presented by multiple installations of the same program and by shared libraries distributed with several applications. For the 80 vulnerabilities in our dataset that affect code shared by two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). Furthermore, as the patching rates differ considerably among applications, many hosts patch the vulnerability in one application but not in the other one. We demonstrate two novel attacks that enable exploitation by invoking old versions of applications that are used infrequently, but remain installed. We also find that the median fraction of vulnerable hosts patched when exploits are released is at most 14%. Finally, we show that the patching rate is affected by user-specific and application-specific factors, for example, hosts belonging to security analysts and applications with an automated updating mechanism have significantly lower median times to patch.",
"title": ""
},
{
"docid": "64ff9d7e0671f869b109a3426fbc4d2c",
"text": "Weight stigma is pervasive, and a number of scholars argue that this profound stigma contributes to the negative effects of weight on psychological and physical health. Some lay individuals and health professionals assume that stigmatizing weight can actually motivate healthier behaviors and promote weight loss. However, as we review, weight stigma is consistently associated with poorer mental and physical health outcomes. In this article we propose a social identity threat model elucidating how weight stigma contributes to weight gain and poorer mental and physical health among overweight individuals. We propose that weight-based social identity threat increases physiological stress, undermines self-regulation, compromises psychological health, and increases the motivation to avoid stigmatizing domains (e.g., the gym) and escape the stigma by engaging in unhealthy weight loss behaviors. Given the prevalence of overweight and obesity in the US, weight stigma thus has the potential to undermine the health and wellbeing of millions of Americans.",
"title": ""
},
{
"docid": "b7789464ca4cfd39672187935d95e2fa",
"text": "MATLAB Toolbox functions and communication tools are developed, interfaced, and tested for the motion control of KUKA KR6-R900-SIXX.. This KUKA manipulator has a new controller version that uses KUKA.RobotSensorInterface s KUKA.RobotSensorInterface package to connect the KUKA controller with a remote PC via UDP/IP Ethernet connection. This toolbox includes many functions for initialization, networking, forward kinematics, inverse kinematics and homogeneous transformation.",
"title": ""
}
] | scidocsrr |
be2cbdf4964cedad69a2a6dda6439db3 | The doing of doing stuff: understanding the coordination of social group-activities | [
{
"docid": "ce3d81c74ef3918222ad7d2e2408bdb0",
"text": "This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.\nA key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies.\nSection 3 summarizes ways of applying a coordination perspective in three different domains:(1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.",
"title": ""
},
{
"docid": "ff933c57886cfb4ab74b9cbd9e4f3a58",
"text": "Many systems, applications, and features that support cooperative work share two characteristics: A significant investment has been made in their development, and their successes have consistently fallen far short of expectations. Examination of several application areas reveals a common dynamic: 1) A factor contributing to the application’s failure is the disparity between those who will benefit from an application and those who must do additional work to support it. 2) A factor contributing to the decision-making failure that leads to ill-fated development efforts is the unique lack of management intuition for CSCW applications. 3) A factor contributing to the failure to learn from experience is the extreme difficulty of evaluating these applications. These three problem areas escape adequate notice due to two natural but ultimately misleading analogies: the analogy between multi-user application programs and multi-user computer systems, and the analogy between multi-user applications and single-user applications. These analogies influence the way we think about cooperative work applications and designers and decision-makers fail to recognize their limits. Several CSCW application areas are examined in some detail. Introduction. An illustrative example: automatic meeting",
"title": ""
},
{
"docid": "22bf1c80bb833a7cdf6dd70936b40cb7",
"text": "Text messaging has become a popular form of communication with mobile phones worldwide. We present findings from a large scale text messaging study of 70 university students in the United States. We collected almost 60, 000 text messages over a period of 4 months using a custom logging tool on our participants' phones. Our re- sults suggest that students communicate with a large number of contacts for extended periods of time, engage in simultaneous conversations with as many as 9 contacts, and often use text messaging as a method to switch between a variety of communication mediums. We also explore the content of text messages, and ways text message habits have changed over the last decade as it has become more popular. Finally, we offer design suggestions for future mobile communication tools.",
"title": ""
}
] | [
{
"docid": "61c4e955604011a9b9a50ccbd2858070",
"text": "This paper presents a second-order pulsewidth modulation (PWM) feedback loop to improve power supply rejection (PSR) of any open-loop PWM class-D amplifiers (CDAs). PSR of the audio amplifier has always been a key parameter in mobile phone applications. In contrast to class-AB amplifiers, the poor PSR performance has always been the major drawback for CDAs with a half-bridge connected power stage. The proposed PWM feedback loop is fabricated using GLOBALFOUNDRIES' 0.18-μm CMOS process technology. The measured PSR is more than 80 dB and the measured total harmonic distortion is less than 0.04% with a 1-kHz input sinusoidal test tone.",
"title": ""
},
{
"docid": "1a4119ab58993a64719065f9c3aa416c",
"text": "In the OPPS we have been studying the effects of marihuana used during pregnancy since 1978. The subjects are primarily middle-class, low risk women who entered the study early in their pregnancy. Extensive demographic and life-style information was gathered several times during pregnancy and postnatally. The offspring have been assessed repeatedly during the neonatal period, at least annually until the age of 6 and less frequently thereafter. The outcome measures include a variety of age appropriate standardized global measures as well as a large battery of neuropsychological tests attempting to assess discrete functioning within particular domains including language development, memory, visual/perceptual functioning, components of reading and sustained attention. The results suggest that in the neonate, state alterations and altered visual responsiveness may be associated with in utero exposure to marihuana. Global measures, particularly between the ages of 1 and 3 years, did not reveal an association with prenatal marihuana exposure. However, this initial, apparent absence of effect during early childhood should not be interpreted as in utero marihuana exposure having only transient effects for, as the children became older, aspects of neuropsychological functioning did discriminate between marihuana and control children. Domains associated with prenatal marihuana exposure at four years of age and older included increased behavioral problems and decreased performance on visual perceptual tasks, language comprehension, sustained attention and memory. The nature and the timing of the appearance of these deficits is congruent with the notion of prenatal marihuana exposure affecting 'executive functioning'--goal directed behavior that includes planning, organized search, and impulse control. Such an interpretation would be consistent with the extant literature with animals and non-pregnant adult users suggesting that chronic marihuana use may impact upon prefrontal lobe functioning.",
"title": ""
},
{
"docid": "5c9b0687acc2975c78e58f0c72e03b55",
"text": "OBJECTIVE\nBrain-gut-microbiota interactions may play an important role in human health and behavior. Although rodent models have demonstrated effects of the gut microbiota on emotional, nociceptive, and social behaviors, there is little translational human evidence to date. In this study, we identify brain and behavioral characteristics of healthy women clustered by gut microbiota profiles.\n\n\nMETHODS\nForty women supplied fecal samples for 16S rRNA profiling. Microbial clusters were identified using Partitioning Around Medoids. Functional magnetic resonance imaging was acquired. Microbiota-based group differences were analyzed in response to affective images. Structural and diffusion tensor imaging provided gray matter metrics (volume, cortical thickness, mean curvature, surface area) as well as fiber density between regions. A sparse Partial Least Square-Discrimination Analysis was applied to discriminate microbiota clusters using white and gray matter metrics.\n\n\nRESULTS\nTwo bacterial genus-based clusters were identified, one with greater Bacteroides abundance (n = 33) and one with greater Prevotella abundance (n = 7). The Prevotella group showed less hippocampal activity viewing negative valences images. White and gray matter imaging discriminated the two clusters, with accuracy of 66.7% and 87.2%, respectively. The Prevotella cluster was associated with differences in emotional, attentional, and sensory processing regions. For gray matter, the Bacteroides cluster showed greater prominence in the cerebellum, frontal regions, and the hippocampus.\n\n\nCONCLUSIONS\nThese results support the concept of brain-gut-microbiota interactions in healthy humans. Further examination of the interaction between gut microbes, brain, and affect in humans is needed to inform preclinical reports that microbial modulation may affect mood and behavior.",
"title": ""
},
{
"docid": "17833f9cf4eec06dbc4d7954b6cc6f3f",
"text": "Automated vehicles rely on the accurate and robust detection of the drivable area, often classified into free space, road area and lane information. Most current approaches use monocular or stereo cameras to detect these. However, LiDAR sensors are becoming more common and offer unique properties for road area detection such as precision and robustness to weather conditions. We therefore propose two approaches for a pixel-wise semantic binary segmentation of the road area based on a modified U-Net Fully Convolutional Network (FCN) architecture. The first approach UView-Cam employs a single camera image, whereas the second approach UGrid-Fused incorporates a early fusion of LiDAR and camera data into a multi-dimensional occupation grid representation as FCN input. The fusion of camera and LiDAR allows for efficient and robust leverage of individual sensor properties in a single FCN. For the training of UView-Cam, multiple publicly available datasets of street environments are used, while the UGrid-Fused is trained with the KITTI dataset. In the KITTI Road/Lane Detection benchmark, the proposed networks reach a MaxF score of 94.23% and 93.81% respectively. Both approaches achieve realtime performance with a detection rate of about 10 Hz.",
"title": ""
},
{
"docid": "cff3d568c05d2164a6bb598d7ffd916f",
"text": "Text normalization is an important component in text-to-speech system and the difficulty in text normalization is to disambiguate the non-standard words (NSWs). This paper develops a taxonomy of NSWs on the basis of a large scale Chinese corpus, and proposes a two-stage NSWs disambiguation strategy, finite state automata (FSA) for initial classification and maximum entropy (ME) classifiers for subclass disambiguation. Based on the above NSWs taxonomy, the two-stage approach achieves an F-score of 98.53% in open test, 5.23% higher than that of FSA based approach. Experiments show that the NSWs taxonomy ensures FSA a high baseline performance and ME classifiers make considerable improvement, and the two-stage approach adapts well to new domains.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman [email protected] 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "d513e7f66de64e90b93dcf02ae2ccfb3",
"text": "The first aim of this investigation was to assemble a group of photographs of 30 male and 30 female faces representing a standardized spectrum of facial attractiveness, against which orthognathic treatment outcomes could be compared. The second aim was to investigate the influence of the relationship between ANB differences and anterior lower face height (ALFH) percentages on facial attractiveness. The initial sample comprised standardized photographs of 41 female and 35 male Caucasian subjects. From these, the photographs of two groups of 30 male and 30 female subjects were compiled. A panel of six clinicians and six non-clinicians ranked the photographs. The results showed there to be a good level of reliability for each assessor when ranking the photographs on two occasions, particularly for the clinicians (female subjects r = 0.76-0.97, male subjects r = 0.72-0.94). Agreement among individuals within each group was also high, particularly when ranking facial attractiveness in male subjects (female subjects r = 0.57-0.84, male subjects r = 0.91-0.94). Antero-posterior (AP) discrepancies, as measured by soft tissue ANB, showed minimal correlation with facial attractiveness. However, a trend emerged that would suggest that in faces where the ANB varies widely from 5 degrees, the face is considered less attractive. The ALFH percentage also showed minimal correlation with facial attractiveness. However, there was a trend that suggested that greater ALFH percentages are considered less attractive in female faces, while in males the opposite trend was seen. Either of the two series of ranked photographs as judged by clinicians and non-clinicians could be used as a standard against which facial attractiveness could be assessed, as both were in total agreement about the most attractive faces. However, to judge the outcome of orthognathic treatment, the series of ranked photographs produced by the non-clinician group should be used as the 'standard' to reflect lay opinion.",
"title": ""
},
{
"docid": "9489ca5b460842d5a8a65504965f0bd5",
"text": "This article, based on a tutorial the author presented at ITC 2008, is an overview and introduction to mixed-signal production test. The article focuses on the fundamental techniques and procedures in production test and explores key issues confronting the industry.",
"title": ""
},
{
"docid": "aeb4de700406fb1cc90d6f61dc17b93b",
"text": "This paper presents text mining using SAS® Text Miner and Megaputer PolyAnalyst® specifically applied for hotel customer survey data, and its data management. The paper reviews current literature of text mining, and discusses features of these two text mining software packages in analyzing unstructured qualitative data in the following key steps: data preparation, data analysis, and result reporting. Some screen shots are presented for web-based hotel customer survey data as well as comparisons and conclusions.",
"title": ""
},
{
"docid": "9c01e0dff555a29cc3ffdcab1e861994",
"text": "To determine whether there is any new anatomical structure present within the labia majora. A case serial study was executed on eleven consecutive fresh human female cadavers. Stratum-by-stratum dissections of the labia majora were performed. Twenty-two anatomic dissections of labia majora were completed. Eosin and Hematoxylin agents were used to stain newly discovered adipose sac’s tissues of the labia majora and the cylinder-like structures, which cover condensed adipose tissues. The histology of these two structures was compared. All dissected labia majora demonstrated the presence of the anatomic existence of the adipose sac structure. Just under the dermis of the labia majora, the adipose sac was located, which was filled with lobules containing condensed fatty tissues in the form of cylinders. The histological investigation established that the well-organized fibro-connective-adipose tissues represented the adipose sac. The absence of descriptions of the adipose sac within the labia majora in traditional anatomic and gynecologic textbooks was noted. In this study group, the newly discovered adipose sac is consistently present within the anatomical structure of the labia majora. The well-organized fibro-connective-adipose tissue represents microscopic characteristic features of the adipose sac.",
"title": ""
},
{
"docid": "2314d6d1c294c9d3753404ebe123edd3",
"text": "The magnification of mobile devices in everyday life prompts the idea that these devices will increasingly have evidential value in criminal cases. While this may have been assumed in digital forensics communities, there has been no empirical evidence to support this idea. This research investigates the extent to which mobile phones are being used in criminal proceedings in the United Kingdom thorough the examination of appeal judgments retrieved from the Westlaw, Lexis Nexis and British and Irish Legal Information Institute (BAILII) legal databases. The research identified 537 relevant appeal cases from a dataset of 12,763 criminal cases referring to mobile phones for a period ranging from 1st of January, 2006 to 31st of July, 2011. The empirical analysis indicates that mobile phone evidence is rising over time with some correlations to particular crimes.",
"title": ""
},
{
"docid": "bd8cdb4b89f2a0e4c91798da71621c75",
"text": "Anthocyanins are one of the most widespread families of natural pigments in the plant kingdom. Their health beneficial effects have been documented in many in vivo and in vitro studies. This review summarizes the most recent literature regarding the health benefits of anthocyanins and their molecular mechanisms. It appears that several signaling pathways, including mitogen-activated protein kinase, nuclear factor κB, AMP-activated protein kinase, and Wnt/β-catenin, as well as some crucial cellular processes, such as cell cycle, apoptosis, autophagy, and biochemical metabolism, are involved in these beneficial effects and may provide potential therapeutic targets and strategies for the improvement of a wide range of diseases in future. In addition, specific anthocyanin metabolites contributing to the observed in vivo biological activities, structure-activity relationships as well as additive and synergistic efficacy of anthocyanins are also discussed.",
"title": ""
},
{
"docid": "345328749b90f990e2f67415a067957a",
"text": "Astrocyte swelling represents the major factor responsible for the brain edema associated with fulminant hepatic failure (FHF). The edema may be of such magnitude as to increase intracranial pressure leading to brain herniation and death. Of the various agents implicated in the generation of astrocyte swelling, ammonia has had the greatest amount of experimental support. This article reviews mechanisms of ammonia neurotoxicity that contribute to astrocyte swelling. These include oxidative stress and the mitochondrial permeability transition (MPT). The involvement of glutamine in the production of cell swelling will be highlighted. Evidence will be provided that glutamine induces oxidative stress as well as the MPT, and that these events are critical in the development of astrocyte swelling in hyperammonemia.",
"title": ""
},
{
"docid": "6c8e7fc9f7c21ad1ae529c3033bfe02b",
"text": "In this paper, we establish a formula expressing explicitly the general term of a linear recurrent sequence, allowing us to generalize the original result of J. McLaughlin [7] concerning powers of a matrix of size 2, to the case of a square matrix of size m ≥ 2. Identities concerning Fibonacci and Stirling numbers and various combinatorial relations are derived.",
"title": ""
},
{
"docid": "eb6f04484b832750187e7d97334f4c5f",
"text": "There are in-numerous applications that deal with real scenarios where data are captured over time making them potential candidates for time series analysis. Time series contain temporal dependencies that divide different points in time into different classes. This paper aims at reviewing marriage of a concept i.e. time series modeling with an approach i.e. Machine learning in tackling real life problems. Like time series is ubiquitous and has found extensive usage in our daily life, machine learning approaches have found its applicability in dealing with complex real world scenarios where approximation, uncertainty, chaotic data are prime characteristics.",
"title": ""
},
{
"docid": "0dd558f3094d82f55806d1170218efce",
"text": "As the key supporting system of telecommunication enterprises, OSS/BSS needs to support the service steadily in the long-term running and maintenance process. The system architecture must remain steady and consistent in order to accomplish its goal, which is quite difficult when both the technique and business requirements are changing so rapidly. The framework method raised in this article can guarantee the system architecture’s steadiness and business processing’s consistence by means of describing business requirements, application and information abstractly, becoming more specific and formalized in the planning, developing and maintaining process, and getting the results needed. This article introduces firstly the concepts of framework method, then recommends its applications and superiority in OSS/BSS systems, and lastly gives the prospect of its application.",
"title": ""
},
{
"docid": "1f0a926ac8e9d897f42c9a217e8556a9",
"text": "We have previously shown that motor areas are engaged when subjects experience illusory limb movements elicited by tendon vibration. However, traditionally cytoarchitectonic area 2 is held responsible for kinesthesia. Here we use functional magnetic resonance imaging and cytoarchitectural mapping to examine whether area 2 is engaged in kinesthesia, whether it is engaged bilaterally because area 2 in non-human primates has strong callosal connections, which other areas are active members of the network for kinesthesia, and if there is a dominance for the right hemisphere in kinesthesia as has been suggested. Ten right-handed blindfolded healthy subjects participated. The tendon of the extensor carpi ulnaris muscles of the right or left hand was vibrated at 80 Hz, which elicited illusory palmar flexion in an immobile hand (illusion). As control we applied identical stimuli to the skin over the processus styloideus ulnae, which did not elicit any illusions (vibration). We found robust activations in cortical motor areas [areas 4a, 4p, 6; dorsal premotor cortex (PMD) and bilateral supplementary motor area (SMA)] and ipsilateral cerebellum during kinesthetic illusions (illusion-vibration). The illusions also activated contralateral area 2 and right area 2 was active in common irrespective of illusions of right or left hand. Right areas 44, 45, anterior part of intraparietal region (IP1) and caudo-lateral part of parietal opercular region (OP1), cortex rostral to PMD, anterior insula and superior temporal gyrus were also activated in common during illusions of right or left hand. These right-sided areas were significantly more activated than the corresponding areas in the left hemisphere. The present data, together with our previous results, suggest that human kinesthesia is associated with a network of active brain areas that consists of motor areas, cerebellum, and the right fronto-parietal areas including high-order somatosensory areas. Furthermore, our results provide evidence for a right hemisphere dominance for perception of limb movement.",
"title": ""
},
{
"docid": "21dd193ec6849fa78ba03333708aebea",
"text": "Since the inception of Bitcoin technology, its underlying data structureâĂŞ-the blockchainâĂŞ-has garnered much attention due to properties such as decentralization, transparency, and immutability. These properties make blockchains suitable for apps that require disintermediation through trustless exchange, consistent and incorruptible transaction records, and operational models beyond cryptocurrency. In particular, blockchain and its programmable smart contracts have the potential to address healthcare interoperability issues, such as enabling effective interactions between users and medical applications, delivering patient data securely to a variety of organizations and devices, and improving the overall efficiency of medical practice workflow. Despite the interest in using blockchain technology for healthcare interoperability, however, little information is available on the concrete architectural styles and recommendations for designing blockchain-based apps targeting healthcare. This paper provides an initial step in filling this gap by showing: (1) the features and implementation challenges in healthcare interoperability, (2) an end-to-end case study of a blockchain-based healthcare app that we are developing, and (3) how designing blockchain-based apps using familiar software patterns can help address healthcare specific challenges.",
"title": ""
},
{
"docid": "9e55c0db2a56139b65205bec4ba5ec5d",
"text": "Single image super-resolution (SR) algorithms based on joint dictionaries and sparse representations of image patches have received significant attention in the literature and deliver the state-of-the-art results. Recently, Gaussian mixture models (GMMs) have emerged as favored prior for natural image patches in various image restoration problems. In this paper, we approach the single image SR problem by using a joint GMM learnt from concatenated vectors of high and low resolution patches sampled from a large database of pairs of high resolution and the corresponding low resolution images. Covariance matrices of the learnt Gaussian models capture the inherent correlations between high and low resolution patches, which are utilized for inferring high resolution patches from given low resolution patches. The proposed joint GMM method can be interpreted as the GMM analogue of joint dictionary-based algorithms for single image SR. We study the performance of the proposed joint GMM method by comparing with various competing algorithms for single image SR. Our experiments on various natural images demonstrate the competitive performance obtained by the proposed method at low computational cost.",
"title": ""
},
{
"docid": "760133d80110b5fa42c4f29291b67949",
"text": "In this work, we propose a game theoretic framework to analyze the behavior of cognitive radios for distributed adaptive channel allocation. We define two different objective functions for the spectrum sharing games, which capture the utility of selfish users and cooperative users, respectively. Based on the utility definition for cooperative users, we show that the channel allocation problem can be formulated as a potential game, and thus converges to a deterministic channel allocation Nash equilibrium point. Alternatively, a no-regret learning implementation is proposed for both scenarios and it is shown to have similar performance with the potential game when cooperation is enforced, but with a higher variability across users. The no-regret learning formulation is particularly useful to accommodate selfish users. Non-cooperative learning games have the advantage of a very low overhead for information exchange in the network. We show that cooperation based spectrum sharing etiquette improves the overall network performance at the expense of an increased overhead required for information exchange",
"title": ""
}
] | scidocsrr |
a46d0a29c078e13ab90409bbff71c217 | Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors | [
{
"docid": "0ee97a3afcc2471a05924a1171ac82cf",
"text": "A number of researchers around the world have built machines that recognize, express, model, communicate, and respond to emotional information, instances of ‘‘affective computing.’’ This article raises and responds to several criticisms of affective computing, articulating state-of-the art research challenges, especially with respect to affect in humancomputer interaction. r 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5e428061a28f7fa08656590984e6e12a",
"text": "Will consumer wearable technology ever be adopted or accepted by the medical community? Patients and practitioners regularly use digital technology (e.g., thermometers and glucose monitors) to identify and discuss symptoms. In addition, a third of general practitioners in the United Kingdom report that patients arrive with suggestions for treatment based on online search results [1]. However, consumer health wearables are predicted to become the next “Dr Google.”One in six (15%) consumers in the United States currently uses wearable technology, including smartwatches or fitness bands. While 19 million fitness devices are likely to be sold this year, that number is predicted to grow to 110 million in 2018 [2]. As the line between consumer health wearables and medical devices begins to blur, it is now possible for a single wearable device to monitor a range of medical risk factors (Fig 1). Potentially, these devices could give patients direct access to personal analytics that can contribute to their health, facilitate preventive care, and aid in the management of ongoing illness. However, how this new wearable technology might best serve medicine remains unclear.",
"title": ""
}
] | [
{
"docid": "4eead577c1b3acee6c93a62aee8a6bb5",
"text": "The present study examined teacher attitudes toward dyslexia and the effects of these attitudes on teacher expectations and the academic achievement of students with dyslexia compared to students without learning disabilities. The attitudes of 30 regular education teachers toward dyslexia were determined using both an implicit measure and an explicit, self-report measure. Achievement scores for 307 students were also obtained. Implicit teacher attitudes toward dyslexia related to teacher ratings of student achievement on a writing task and also to student achievement on standardized tests of spelling but not math for those students with dyslexia. Self-reported attitudes of the teachers toward dyslexia did not relate to any of the outcome measures. Neither the implicit nor the explicit measures of teacher attitudes related to teacher expectations. The results show implicit attitude measures to be a more valuable predictor of the achievement of students with dyslexia than explicit, self-report attitude measures.",
"title": ""
},
{
"docid": "b267bf90b86542e3032eaddcc2c3350f",
"text": "Many modalities of treatment for acquired skin hyperpigmentation are available including chemical agents or physical therapies, but none are completely satisfactory. Depigmenting compounds should act selectively on hyperactivated melanocytes, without short- or long-term side-effects, and induce a permanent removal of undesired pigment. Since 1961 hydroquinone, a tyrosinase inhibitor, has been introduced and its therapeutic efficacy demonstrated, and other whitening agents specifically acting on tyrosinase by different mechanisms have been proposed. Compounds with depigmenting activity are now numerous and the classification of molecules, based on their mechanism of action, has become difficult. Systematic studies to assess both the efficacy and the safety of such molecules are necessary. Moreover, the evidence that bleaching compounds are fairly ineffective on dermal accumulation of melanin has prompted investigations on the effectiveness of physical therapies, such as lasers. This review which describes the different approaches to obtain depigmentation, suggests a classification of whitening molecules on the basis of the mechanism by which they interfere with melanogenesis, and confirms the necessity to apply standardized protocols to evaluate depigmenting treatments.",
"title": ""
},
{
"docid": "0b587770a13ba76572a1e51df52d95a3",
"text": "Current approaches to supervised learning of metaphor tend to use sophisticated features and restrict their attention to constructions and contexts where these features apply. In this paper, we describe the development of a supervised learning system to classify all content words in a running text as either being used metaphorically or not. We start by examining the performance of a simple unigram baseline that achieves surprisingly good results for some of the datasets. We then show how the recall of the system can be improved over this strong baseline.",
"title": ""
},
{
"docid": "c72a42af9b6c69bc780c93997c6c2c5f",
"text": "Water strider can slide agilely on water surface at high speed. To study its locomotion characters, movements of water strider are recorded by a high speed camera. The trajectories and angle variations of water strider leg are obtained from the photo series, and provide basic information for bionic robot design. Thus a water strider robot based on surface tension is proposed. The driving mechanism is designed to replicate the trajectory of water strider's middle leg.",
"title": ""
},
{
"docid": "5cfaec0f198065bb925a1fb4ffb53f60",
"text": "In the emerging inter-disciplinary field of art and image processing, algorithms have been developed to assist the analysis of art work. In most applications, especially brush stroke analysis, high resolution digital images of paintings are required to capture subtle patterns and details in the high frequency range of the spectrum. Algorithms have been developed to learn styles of painters from their digitized paintings to help identify authenticity of controversial paintings. However, high quality testing datasets containing both original and forgery are limited to confidential image files provided by museums, which is not publicly available, and a small sets of original/copy paintings painted by the same artist, where copies were deferred to two weeks after the originals were finished. Up to date, no synthesized painting by computers from a real painting has been used as a negative test case, mainly due to the limitation of prevailing style transfer algorithms. There are two main types of style transfer algorithms, either transferring the tone (color, contrast, saturation, etc.) of an image, preserving its patterns and details, or distorting the texture uniformly of an image to create “style”. In this paper, we are interested in a higher level of style transfer, particularly, transferring a source natural image (e.g. a photo) to a high resolution painting given a reference painting of similar object. The transferred natural image would have a similar presentation of the original object to that of the reference painting. In general, an object is painted in a different style of brush strokes than that of the background, hence the desired style transferring algorithm should be able to recognize the object in the source natural image and transfer brush stroke styles in the reference painting in a content-aware way such that the styles of the foreground and the background, and moreover different parts of the foreground in the transferred image, are consistent to that in the reference painting. Recently, an algorithm based on deep convolutional neural network has been developed to transfer artistic style from an art painting to a photo [2]. Successful as it is in transferring styles from impressionist paintings of artists such as Vincent van Gogh to photos of various scenes, the algorithm is prone to distorting the structure of the content in the source image and introducing artifacts/new",
"title": ""
},
{
"docid": "7fb967f01038fb24c7d2b0c98df68b51",
"text": "Any modern organization that is serious about security deploys a network intrusion detection system (NIDS) to monitor network traffic for signs of malicious activity. The most widely deployed NIDS system is Snort, an open source system originally released in 1998. Snort is a single threaded system that uses a set of clear text rules to instruct a base engine how to react when particular traffic patterns are detected. In 2009, the US Department of Homeland Security and a consortium of private companies provided substantial grant funding to a newly created organization known as the Open Information Security Foundation (OISF), to build a multi-threaded alternative to Snort, called Suricata. Despite many similarities between Snort and Suricata, the OISF stated it was essential to replace the older single-threaded Snort engine with a multi-threaded system that could deliver higher performance and better scalability. Key Snort developers argued that Suricata’s multi-threaded architecture would actually slow the detection process. Given these competing claims, an objective head-to-head comparison of the performance of Snort and Suricata is needed. In this paper, we present a comprehensive quantitative comparison of the two systems. We have developed a rigorous testing framework that examines the performance of both systems as we scale system resources. Our results show that a single instance of Suricata is able to deliver substantially higher performance than a corresponding single instance of Snort, but has problems scaling with a higher number of cores. We find that while Suricata needs tuning for a higher number of cores, it is still able to surpass Snort even at 1 core where we would have expected Snort to shine.",
"title": ""
},
{
"docid": "1e27234f694ac4ac307e9088804a7444",
"text": "Anomaly detection in social media refers to the detection of users’ abnormal opinions, sentiment patterns, or special temporal aspects of such patterns. Social media platforms, such as Sina Weibo or Twitter, provide a Big-data platform for information retrieval, which include user feedbacks, opinions, and information on most issues. This paper proposes a hybrid neural network model called Convolutional Neural Network-Long-Short Term Memory(CNN-LSTM), we successfully applies the model to sentiment analysis on a microblog Big-data platform and obtains significant improvements that enhance the generalization ability. Based on the sentiment of a single post in Weibo, this study also adopted the multivariate Gaussian model and the power law distribution to analyze the users’ emotion and detect abnormal emotion on microblog, the multivariate Gaussian method automatically captures the correlation between different features of the emotions and saves a certain amount of time through the batch calculation of the joint probability density of data sets. Through the measure of a joint probability density value and validation of the corpus from social network, anomaly detection accuracy of an individual user is 83.49% and that for a different month is 87.84%. The results of the distribution test show that individual user’s neutral, happy, and sad emotions obey the normal distribution but the surprised and angry emotions do not. In addition, the group-based emotions on microblogs obey the power law distribution but individual emotions do not.",
"title": ""
},
{
"docid": "53edb03722153d091fb2e78c811d4aa5",
"text": "One of the main reasons for failure in Software Process Improvement (SPI) initiatives is the lack of motivation of the professionals involved. Therefore, motivation should be encouraged throughout the software process. Gamification allows us to define mechanisms that motivate people to develop specific tasks. A gamification framework was adapted to the particularities of an organization and software professionals to encourage motivation. Thus, it permitted to facilitate the adoption of SPI improvements and a higher success rate. The objective of this research was to validate the framework presented and increase the actual implementation of gamification in organizations. To achieve this goal, a qualitative research methodology was employed through interviews that involved a total of 29 experts in gamification and SPI. The results of this study confirm the validity of the framework presented, its relevance in the field of SPI and its alignment with the standard practices of gamification implementation within organizations.",
"title": ""
},
{
"docid": "aebdcd5b31d26ec1b4147efe842053e4",
"text": "We describe a novel camera calibration algorithm for square, circle, and ring planar calibration patterns. An iterative refinement approach is proposed that utilizes the parameters obtained from traditional calibration algorithms as initialization to perform undistortion and unprojection of calibration images to a canonical fronto-parallel plane. This canonical plane is then used to localize the calibration pattern control points and recompute the camera parameters in an iterative refinement until convergence. Undistorting and unprojecting the calibration pattern to the canonical plane increases the accuracy of control point localization and consequently of camera calibration. We have conducted an extensive set of experiments with real and synthetic images for the square, circle and ring pattern, and the pixel reprojection errors obtained by our method are about 50% lower than those of the OpenCV Camera Calibration Toolbox. Increased accuracy of camera calibration directly leads to improvements in other applications; we demonstrate recovery of fine object structure for visual hull reconstruction, and recovery of precise epipolar geometry for stereo camera calibration.",
"title": ""
},
{
"docid": "db8cd5dad5c3d3bda0f10f3369351bbd",
"text": "The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.",
"title": ""
},
{
"docid": "055a7be9623e794168b858e41bceaabd",
"text": "Lexical Pragmatics is a research field that tries to give a systematic and explanatory account of pragmatic phenomena that are connected with the semantic underspecification of lexical items. Cases in point are the pragmatics of adjectives, systematic polysemy, the distribution of lexical and productive causatives, blocking phenomena, the interpretation of compounds, and many phenomena presently discussed within the framework of Cognitive Semantics. The approach combines a constrained-based semantics with a general mechanism of conversational implicature. The basic pragmatic mechanism rests on conditions of updating the common ground and allows to give a precise explication of notions as generalized conversational implicature and pragmatic anomaly. The fruitfulness of the basic account is established by its application to a variety of recalcitrant phenomena among which its precise treatment of Atlas & Levinson's Qand I-principles and the formalization of the balance between informativeness and efficiency in natural language processing (Horn's division of pragmatic labor) deserve particular mention. The basic mechanism is subsequently extended by an abductive reasoning system which is guided by subjective probability. The extended mechanism turned out to be capable of giving a principled account of lexical blocking, the pragmatics of adjectives, and systematic polysemy.",
"title": ""
},
{
"docid": "04ed69959c28c3c4185d3af55521d864",
"text": "A new differential-fed broadband antenna element with unidirectional radiation is proposed. This antenna is composed of a folded bowtie, a center-fed loop, and a box-shaped reflector. A pair of differential feeds is developed to excite the antenna and provide an ultrawideband (UWB) impedance matching. The box-shaped reflector is used for the reduction of the gain fluctuation across the operating frequency band. An antenna prototype for UWB applications is fabricated and measured, exhibiting an impedance bandwidth of 132% with standing wave ratio ≤ 2 from 2.48 to 12.12 GHz, over which the gain varies between 7.2 and 14.1 dBi at boresight. The proposed antenna radiates unidirectionally with low cross polarization and low back radiation. Furthermore, the time-domain characteristic of the proposed antenna is evaluated. In addition, a 2 × 2 element array using the proposed element is also investigated in this communication.",
"title": ""
},
{
"docid": "5ca5cfcd0ed34d9b0033977e9cde2c74",
"text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of o¤-patent products. First, we construct a vertical di¤erentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several o¤-patent molecules before and after the policy reform. O¤-patent drugs not subject to RP serve as our control group. We
nd that RP signi
cantly reduces both brand-name and generic prices, and results in signi
cantly lower brand-name market shares. Finally, we show that RP has a strong negative e¤ect on average molecule prices, suggesting signi
cant cost-savings, and that patients copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classi
cations: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for
nancial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: [email protected]. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: [email protected]. Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. E-mail: [email protected].",
"title": ""
},
{
"docid": "3e2012134aa2e88b230f95518c11994d",
"text": "Echo Chamber is a game that persuades players to re-examine their argumentation style and adopt new rhetorical techniques procedurally delivered through gameplay. Several games have been made addressing the environmental impacts of climate change; none have examined the gap between scientific and public discourse over climate change, and our goal was to teach players more effective communication techniques for conveying climate change in public venues. Our game provides other developers insight into persuasion through game mechanics with good design practices for similar persuasive games.",
"title": ""
},
{
"docid": "fdcf6e60ad11b10fba077a62f7f1812d",
"text": "Delivering web software as a service has grown into a powerful paradigm for deploying a wide range of Internetscale applications. However for end-users, accessing software as a service is fundamentally at odds with free software, because of the associated cost of maintaining server infrastructure. Users end up paying for the service in one way or another, often indirectly through ads or the sale of their private data. In this paper, we aim to enable a new generation of portable and free web apps by proposing an alternative model to the existing client-server web architecture. freedom.js is a platform for developing and deploying rich multi-user web apps, where application logic is pushed out from the cloud and run entirely on client-side browsers. By shifting the responsibility of where code runs, we can explore a novel incentive structure where users power applications with their own resources, gain the ability to control application behavior and manage privacy of data. For developers, we lower the barrier of writing popular web apps by removing much of the deployment cost and making applications simpler to write. We provide a set of novel abstractions that allow developers to automatically scale their application with low complexity and overhead. freedom.js apps are inherently sandboxed, multi-threaded, and composed of reusable modules. We demonstrate the flexibility of freedom.js through a number of applications that we have built on top of the platform, including a messaging application, a social file synchronization tool, and a peer-to-peer (P2P) content delivery network (CDN). Our experience shows that we can implement a P2P-CDN with 50% fewer lines of application-specific code in the freedom.js framework when compared to a standalone version. In turn, we incur an additional startup latency of 50-60ms (about 6% of the page load time) with the freedom.js version, without any noticeable impact on system throughput.",
"title": ""
},
{
"docid": "9b75357d49ece914e02b04a6eaa927a0",
"text": "Feminist criticism of health care and ofbioethics has become increasingly rich andsophisticated in the last years of thetwentieth century. Nonetheless, this body ofwork remains quite marginalized. I believe thatthere are (at least) two reasons for this.First, many people are still confused aboutfeminism. Second, many people are unconvincedthat significant sexism still exists and aretherefore unreceptive to arguments that itshould be remedied if there is no largerbenefit. In this essay I argue for a thin,``core'' conception of feminism that is easy tounderstand and difficult to reject. Corefeminism would render debate within feminismmore fruitful, clear the way for appropriaterecognition of differences among women andtheir circumstances, provide intellectuallycompelling reasons for current non-feminists toadopt a feminist outlook, and facilitatemutually beneficial cooperation betweenfeminism and other progressive socialmovements. This conception of feminism alsomakes it clear that feminism is part of alarger egalitarian moral and political agenda,and adopting it would help bioethics focus onthe most urgent moral priorities. In addition,integrating core feminism into bioethics wouldopen a gateway to the more speculative parts offeminist work where a wealth of creativethinking is occurring. Engaging with thisfeminist work would challenge and strengthenmainstream approaches; it should also motivatemainstream bioethicists to explore othercurrently marginalized parts of bioethics.",
"title": ""
},
{
"docid": "facf85be0ae23eacb7e7b65dd5c45b33",
"text": "We review evidence for partially segregated networks of brain areas that carry out different attentional functions. One system, which includes parts of the intraparietal cortex and superior frontal cortex, is involved in preparing and applying goal-directed (top-down) selection for stimuli and responses. This system is also modulated by the detection of stimuli. The other system, which includes the temporoparietal cortex and inferior frontal cortex, and is largely lateralized to the right hemisphere, is not involved in top-down selection. Instead, this system is specialized for the detection of behaviourally relevant stimuli, particularly when they are salient or unexpected. This ventral frontoparietal network works as a 'circuit breaker' for the dorsal system, directing attention to salient events. Both attentional systems interact during normal vision, and both are disrupted in unilateral spatial neglect.",
"title": ""
},
{
"docid": "14ca9dfee206612e36cd6c3b3e0ca61e",
"text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.",
"title": ""
},
{
"docid": "59b10765f9125e9c38858af901a39cc7",
"text": "--------__------------------------------------__---------------",
"title": ""
},
{
"docid": "dba3434c600ed7ddbb944f0a3adb1ba0",
"text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.",
"title": ""
}
] | scidocsrr |
92a6ff6616ba7c6622b8b1510ef7f142 | Interactive whiteboards: Interactive or just whiteboards? | [
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
}
] | [
{
"docid": "8a3dba8aa5aa8cf69da21079f7e36de6",
"text": "This letter presents a novel technique for synthesis of coupled-resonator filters with inter-resonator couplings varying linearly with frequency. The values of non-zero elements of the coupling matrix are found by solving a nonlinear least squares problem involving eigenvalues of matrix pencils derived from the coupling matrix and reference zeros and poles of scattering parameters. The proposed method was verified by numerical tests carried out for various coupling schemes including triplets and quadruplets for which the frequency-dependent coupling was found to produce an extra zero.",
"title": ""
},
{
"docid": "e55ad28c68a422ec959e8b247aade1b9",
"text": "Developing reliable methods for representing and managing information uncertainty remains a persistent and relevant challenge to GIScience. Information uncertainty is an intricate idea, and recent examinations of this concept have generated many perspectives on its representation and visualization, with perspectives emerging from a wide range of disciplines and application contexts. In this paper, we review and assess progress toward visual tools and methods to help analysts manage and understand information uncertainty. Specifically, we report on efforts to conceptualize uncertainty, decision making with uncertainty, frameworks for representing uncertainty, visual representation and user control of displays of information uncertainty, and evaluative efforts to assess the use and usability of visual methods of uncertainty. We conclude by identifying seven key research challenges in visualizing information uncertainty, particularly as it applies to decision making and analysis.",
"title": ""
},
{
"docid": "c8911f38bfd68baa54b49b9126c2ad22",
"text": "This document presents a performance comparison of three 2D SLAM techniques available in ROS: Gmapping, Hec-torSLAM and CRSM SLAM. These algorithms were evaluated using a Roomba 645 robotic platform with differential drive and a RGB-D Kinect sensor as an emulator of a scanner lasser. All tests were realized in static indoor environments. To improve the quality of the maps, some rosbag files were generated and used to build the maps in an off-line way.",
"title": ""
},
{
"docid": "baa0bf8fe429c4fe8bfb7ebf78a1ed94",
"text": "The weakly supervised object localization (WSOL) is to locate the objects in an image while only image-level labels are available during the training procedure. In this work, the Selective Feature Category Mapping (SFCM) method is proposed, which introduces the Feature Category Mapping (FCM) and the widely-used selective search method to solve the WSOL task. Our FCM replaces layers after the specific layer in the state-of-the-art CNNs with a set of kernels and learns the weighted pooling for previous feature maps. It is trained with only image-level labels and then map the feature maps to their corresponding categories in the test phase. Together with selective search method, the location of each object is finally obtained. Extensive experimental evaluation on ILSVRC2012 and PASCAL VOC2007 benchmarks shows that SFCM is simple but very effective, and it is able to achieve outstanding classification performance and outperform the state-of-the-art methods in the WSOL task.",
"title": ""
},
{
"docid": "b305e3504e3a99a5cd026e7845d98dab",
"text": "This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST and the backwards-smoothing extended Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A twostep approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, Associate Professor, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Associate Fellow AIAA. Aerospace Engineer, Guidance, Navigation and Control Systems Engineering Branch. Email: [email protected]. Fellow AIAA. Postdoctoral Research Fellow, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Member AIAA.",
"title": ""
},
{
"docid": "fc79bfdb7fbbfa42d2e1614964113101",
"text": "Probability Theory, 2nd ed. Princeton, N. J.: 960. Van Nostrand, 1 121 T. T. Kadota, “Optimum reception of binary gaussian signals,” Bell Sys. Tech. J., vol. 43, pp. 2767-2810, November 1964. 131 T. T. Kadota. “Ootrmum recention of binarv sure and Gaussian signals,” Bell Sys. ?‘ech: J., vol. 44;~~. 1621-1658, October 1965. 141 U. Grenander, ‘Stochastic processes and statistical inference,” Arkiv fiir Matematik, vol. 17, pp. 195-277, 1950. 151 L. A. Zadeh and J. R. Ragazzini, “Optimum filters for the detection of signals in noise,” Proc. IRE, vol. 40, pp. 1223-1231, O,+nhm 1 a.63 161 J. H. Laning and R. H. Battin, Random Processes in Automatic Control. New York: McGraw-Hill. 1956. nn. 269-358. 171 C.. W. Helstrom, “ Solution of the dete&on integral equation for stationary filtered white noise,” IEEE Trans. on Information Theory, vol. IT-II, pp. 335-339, July 1965. 181 T. Kailath, “The detection of known signals in colored Gaussian noise,” Stanford Electronics Labs., Stanford Univ., Stanford, Calif. Tech. Rept. 7050-4, July 1965. 191 T. T. Kadota, “Optimum reception of nf-ary Gaussian signals in Gaussian noise,” Bell. Sys. Tech. J., vol. 44, pp. 2187-2197, November 1965. [lOI T. T. Kadota, “Term-by-term differentiability of Mercer’s expansion,” Proc. of Am. Math. Sot., vol. 18, pp. 69-72, February 1967.",
"title": ""
},
{
"docid": "ecea888d3b2d6b9ce0a26a4af6382db8",
"text": "Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provides an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications, and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.",
"title": ""
},
{
"docid": "03ba329de93f763ff6f0a8c4c6e18056",
"text": "Nowadays, with the availability of massive amount of trade data collected, the dynamics of the financial markets pose both a challenge and an opportunity for high frequency traders. In order to take advantage of the rapid, subtle movement of assets in High Frequency Trading (HFT), an automatic algorithm to analyze and detect patterns of price change based on transaction records must be available. The multichannel, time-series representation of financial data naturally suggests tensor-based learning algorithms. In this work, we investigate the effectiveness of two multilinear methods for the mid-price prediction problem against other existing methods. The experiments in a large scale dataset which contains more than 4 millions limit orders show that by utilizing tensor representation, multilinear models outperform vector-based approaches and other competing ones.",
"title": ""
},
{
"docid": "4f631769d8267c81ea568c9eed71ac09",
"text": "To study a phenomenon scientifically, it must be appropriately described and measured. How mindfulness is conceptualized and assessed has considerable importance for mindfulness science, and perhaps in part because of this, these two issues have been among the most contentious in the field. In recognition of the growing scientific and clinical interest in",
"title": ""
},
{
"docid": "f1f72a6d5d2ab8862b514983ac63480b",
"text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Frechet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi20141 route dataset, which consists of over 6,000 routes. This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. The methods are implemented inside the Mopsi2 route module.",
"title": ""
},
{
"docid": "68865e653e94d3366961434cc012363f",
"text": "Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC) a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so called \"contrastive analysis\"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.",
"title": ""
},
{
"docid": "c224cc83b4c58001dbbd3e0ea44a768a",
"text": "We review the current status of research in dorsal-ventral (D-V) patterning in vertebrates. Emphasis is placed on recent work on Xenopus, which provides a paradigm for vertebrate development based on a rich heritage of experimental embryology. D-V patterning starts much earlier than previously thought, under the influence of a dorsal nuclear -Catenin signal. At mid-blastula two signaling centers are present on the dorsal side: The prospective neuroectoderm expresses bone morphogenetic protein (BMP) antagonists, and the future dorsal endoderm secretes Nodal-related mesoderm-inducing factors. When dorsal mesoderm is formed at gastrula, a cocktail of growth factor antagonists is secreted by the Spemann organizer and further patterns the embryo. A ventral gastrula signaling center opposes the actions of the dorsal organizer, and another set of secreted antagonists is produced ventrally under the control of BMP4. The early dorsal -Catenin signal inhibits BMP expression at the transcriptional level and promotes expression of secreted BMP antagonists in the prospective central nervous system (CNS). In the absence of mesoderm, expression of Chordin and Noggin in ectoderm is required for anterior CNS formation. FGF (fibroblast growth factor) and IGF (insulin-like growth factor) signals are also potent neural inducers. Neural induction by anti-BMPs such as Chordin requires mitogen-activated protein kinase (MAPK) activation mediated by FGF and IGF. These multiple signals can be integrated at the level of Smad1. Phosphorylation by BMP receptor stimulates Smad1 transcriptional activity, whereas phosphorylation by MAPK has the opposite effect. Neural tissue is formed only at very low levels of activity of BMP-transducing Smads, which require the combination of both low BMP levels and high MAPK signals. Many of the molecular players that regulate D-V patterning via regulation of BMP signaling have been conserved between Drosophila and the vertebrates.",
"title": ""
},
{
"docid": "aad34b3e8acc311d0d32964c6607a6e1",
"text": "This paper looks at the performance of photovoltaic modules in nonideal conditions and proposes topologies to minimize the degradation of performance caused by these conditions. It is found that the peak power point of a module is significantly decreased due to only the slightest shading of the module, and that this effect is propagated through other nonshaded modules connected in series with the shaded one. Based on this result, two topologies for parallel module connections have been outlined. In addition, dc/dc converter technologies, which are necessary to the design, are compared by way of their dynamic models, frequency characteristics, and component cost. Out of this comparison, a recommendation has been made",
"title": ""
},
{
"docid": "1ad06e5eee4d4f29dd2f0e8f0dd62370",
"text": "Recent research on map matching algorithms for land vehicle navigation has been based on either a conventional topological analysis or a probabilistic approach. The input to these algorithms normally comes from the global positioning system and digital map data. Although the performance of some of these algorithms is good in relatively sparse road networks, they are not always reliable for complex roundabouts, merging or diverging sections of motorways and complex urban road networks. In high road density areas where the average distance between roads is less than 100 metres, there may be many road patterns matching the trajectory of the vehicle reported by the positioning system at any given moment. Consequently, it may be difficult to precisely identify the road on which the vehicle is travelling. Therefore, techniques for dealing with qualitative terms such as likeliness are essential for map matching algorithms to identify a correct link. Fuzzy logic is one technique that is an effective way to deal with qualitative terms, linguistic vagueness, and human intervention. This paper develops a map matching algorithm based on fuzzy logic theory. The inputs to the proposed algorithm are from the global positioning system augmented with data from deduced reckoning sensors to provide continuous navigation. The algorithm is tested on different road networks of varying complexity. The validation of this algorithm is carried out using high precision positioning data obtained from GPS carrier phase observables. The performance of the developed map matching algorithm is evaluated against the performance of several well-accepted existing map matching algorithms. The results show that the fuzzy logic-based map matching algorithm provides a significant improvement over existing map matching algorithms both in terms of identifying correct links and estimating the vehicle position on the links.",
"title": ""
},
{
"docid": "39eaf3ad7373d36404e903a822a3d416",
"text": "We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.",
"title": ""
},
{
"docid": "b0c5c8e88e9988b6548acb1c8ebb5edd",
"text": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “ a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms.",
"title": ""
},
{
"docid": "c1b5b1dcbb3e7ff17ea6ad125bbc4b4b",
"text": "This article focuses on a new type of wireless devices in the domain between RFIDs and sensor networks—Energy-Harvesting Active Networked Tags (EnHANTs). Future EnHANTs will be small, flexible, and self-powered devices that can be attached to objects that are traditionally not networked (e.g., books, furniture, toys, produce, and clothing). Therefore, they will provide the infrastructure for various tracking applications and can serve as one of the enablers for the Internet of Things. We present the design considerations for the EnHANT prototypes, developed over the past 4 years. The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communications and networking patterns to the energy harvesting and battery states. We describe a small-scale testbed that uniquely allows evaluating different algorithms with trace-based light energy inputs. Then, we experimentally evaluate the performance of different energy-harvesting adaptive policies with organic solar cells and UWB-IR transceivers. Finally, we discuss the lessons learned during the prototype and testbed design process.",
"title": ""
},
{
"docid": "c44ef4f4242147affdbe613c70ec4a85",
"text": "The physical and generalized sensor models are two widely used imaging geometry models in the photogrammetry and remote sensing. Utilizing the rational function model (RFM) to replace physical sensor models in photogrammetric mapping is becoming a standard way for economical and fast mapping from high-resolution images. The RFM is accepted for imagery exploitation since high accuracies have been achieved in all stages of the photogrammetric process just as performed by rigorous sensor models. Thus it is likely to become a passkey in complex sensor modeling. Nowadays, commercial off-the-shelf (COTS) digital photogrammetric workstations have incorporated the RFM and related techniques. Following the increasing number of RFM related publications in recent years, this paper reviews the methods and key applications reported mainly over the past five years, and summarizes the essential progresses and address the future research directions in this field. These methods include the RFM solution, the terrainindependent and terrain-dependent computational scenarios, the direct and indirect RFM refinement methods, the photogrammetric exploitation techniques, and photogrammetric interoperability for cross sensor/platform imagery integration. Finally, several open questions regarding some aspects worth of further study are addressed.",
"title": ""
},
{
"docid": "d5b004af32bd747c2b5ad175975f8c06",
"text": "This paper presents a design of a quasi-millimeter wave wideband antenna array consisting of a leaf-shaped bowtie antenna (LSBA) and series-parallel feed networks in which parallel strip and microstrip lines are employed. A 16-element LSBA array is designed such that the antenna array operates over the frequency band of 22-30GHz. In order to demonstrate the effective performance of the presented configuration, characteristics of the designed LSBA array are evaluated by the finite-difference time domain (FDTD) analysis and measurements. Over the frequency range from 22GHz to 30GHz, the simulated reflection coefficient is observed to be less than -8dB, and the actual gain of 12.3-19.4dBi is obtained.",
"title": ""
},
{
"docid": "c117a5fc0118f3ea6c576bb334759d59",
"text": "While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error.",
"title": ""
}
] | scidocsrr |
8f987fc6af07b2a7a901591d1269a9d0 | Eye Movement-Based Human-Computer Interaction Techniques : Toward Non-Command Interfaces | [
{
"docid": "074fb5576ea24d6ffb44924fd2b50cff",
"text": "I treat three related subjects: virtual-worlds research—the construction of real-time 3-D illusions by computer graphics; some observations about interfaces to virtual worlds; and the coming application of virtual-worlds techniques to the enhancement of scientific computing.\nWe need to design generalized interfaces for visualizing, exploring, and steering scientific computations. Our interfaces must be direct-manipulation, not command-string; interactive, not batch; 3-D, not 2-D; multisensory, not just visual.\nWe need generalized research results for 3-D interactive interfaces. More is known than gets reported, because of a reluctance to share “unproven” results. I propose a shells-of-certainty model for such knowledge.",
"title": ""
}
] | [
{
"docid": "8d3e93e59a802535e9d5ef7ca7ace362",
"text": "Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model.",
"title": ""
},
{
"docid": "cbe1dc1b56716f57fca0977383e35482",
"text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.",
"title": ""
},
{
"docid": "089808010a2925a7eaca71736fbabcaf",
"text": "In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of images, the global motion can be described by independent motion models. On the other hand, in a sequence there exist as many as \u000e pairwise relative motion constraints that can be solve for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fitting a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (ie. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.",
"title": ""
},
{
"docid": "8aa305f217314d60ed6c9f66d20a7abf",
"text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.",
"title": ""
},
{
"docid": "7d23d8d233a3fc7ff75edf361acbe642",
"text": "The diagnosis and treatment of chronic patellar instability caused by trochlear dysplasia can be challenging. A dysplastic trochlea leads to biomechanical and kinematic changes that often require surgical correction when symptomatic. In the past, trochlear dysplasia was classified using the 4-part Dejour classification system. More recently, new classification systems have been proposed. Future studies are needed to investigate long-term outcomes after trochleoplasty.",
"title": ""
},
{
"docid": "a4741a4d6e01902f252b5a6fb59eb64b",
"text": "Scan and segmented scan algorithms are crucial building blocks for a great many data-parallel algorithms. Segmented scan and related primitives also provide the necessary support for the flattening transform, which allows for nested data-parallel programs to be compiled into flat data-parallel languages. In this paper, we describe the design of efficient scan and segmented scan parallel primitives in CUDA for execution on GPUs. Our algorithms are designed using a divide-and-conquer approach that builds all scan primitives on top of a set of primitive intra-warp scan routines. We demonstrate that this design methodology results in routines that are simple, highly efficient, and free of irregular access patterns that lead to memory bank conflicts. These algorithms form the basis for current and upcoming releases of the widely used CUDPP library.",
"title": ""
},
{
"docid": "8f1d27581e7a83e378129e4287c64bd9",
"text": "Online social media plays an increasingly significant role in shaping the political discourse during elections worldwide. In the 2016 U.S. presidential election, political campaigns strategically designed candidacy announcements on Twitter to produce a significant increase in online social media attention. We use large-scale online social media communications to study the factors of party, personality, and policy in the Twitter discourse following six major presidential campaign announcements for the 2016 U.S. presidential election. We observe that all campaign announcements result in an instant bump in attention, with up to several orders of magnitude increase in tweets. However, we find that Twitter discourse as a result of this bump in attention has overwhelmingly negative sentiment. The bruising criticism, driven by crosstalk from Twitter users of opposite party affiliations, is organized by hashtags such as #NoMoreBushes and #WhyImNotVotingForHillary. We analyze how people take to Twitter to criticize specific personality traits and policy positions of presidential candidates.",
"title": ""
},
{
"docid": "2717779fa409f10f3a509e398dc24233",
"text": "Hallyu refers to the phenomenon of Korean popular culture which came into vogue in Southeast Asia and mainland China in late 1990s. Especially, hallyu is very popular among young people enchanted with Korean music (K-pop), dramas (K-drama), movies, fashion, food, and beauty in China, Taiwan, Hong Kong, and Vietnam, etc. This cultural phenomenon has been closely connected with multi-layered transnational movements of people, information and capital flows in East Asia. Since the 15 century, East and West have been the two subjects of cultural phenomena. Such East–West dichotomy was articulated by Westerners in the scholarly tradition known as “Orientalism.”During the Age of Exploration (1400–1600), West didn’t only take control of East by military force, but also created a new concept of East/Orient, as Edward Said analyzed it expertly in his masterpiece Orientalism in 1978. Throughout the history of imperialism for nearly 4-5 centuries, west was a cognitive subject, but East was an object being recognized by the former. Accordingly, “civilization and modernization” became the exclusive properties of which West had copyright (?!), whereas East was a “sub-subject” to borrow or even plagiarize from Western standards. In this sense, (making) modern history in East Asia was a compulsive imitation of Western civilization or a catch-up with the West in other wards. Thus, it is interesting to note that East Asian people, after gaining economic power through “compressed modernization,” are eager to be main agents of their cultural activities in and through the enjoyment of East Asian popular culture in a postmodern era. In this transition from Westerncentered into East Asian-based popular culture, they are no longer sub-subjects of modernity.",
"title": ""
},
{
"docid": "2dc23ce5b1773f12905ebace6ef221a5",
"text": "With the increasing demand for higher data rates and more reliable service capabilities for wireless devices, wireless service providers are facing an unprecedented challenge to overcome a global bandwidth shortage. Early global activities on beyond fourth-generation (B4G) and fifth-generation (5G) wireless communication systems suggest that millimeter-wave (mmWave) frequencies are very promising for future wireless communication networks due to the massive amount of raw bandwidth and potential multigigabit-per-second (Gb/s) data rates [1]?[3]. Both industry and academia have begun the exploration of the untapped mmWave frequency spectrum for future broadband mobile communication networks. In April 2014, the Brooklyn 5G Summit [4], sponsored by Nokia and the New York University (NYU) WIRELESS research center, drew global attention to mmWave communications and channel modeling. In July 2014, the IEEE 802.11 next-generation 60-GHz study group was formed to increase the data rates to over 20 Gb/s in the unlicensed 60-GHz frequency band while maintaining backward compatibility with the emerging IEEE 802.11ad wireless local area network (WLAN) standard [5].",
"title": ""
},
{
"docid": "c2791d704241b604f7f064d1b1077c36",
"text": "Stemming is an operation that splits a word into the constituent root part and affix without doing complete morphological analysis. It is used to improve the performance of spelling checkers and information retrieval applications, where morphological analysi would be too computationally expensive. For spelling checkers specifically, using stemming may drastically reduce the dictionary size, often a bottleneck for mobile and embedded devices. This paper presents a computationally inexpensive stemming algorithm for Bengali, which handles suffix removal in a domain independent way. The evaluation of the proposed algorithm in a Bengali spelling checker indicates that it can be effectively used in information retrieval applications in general.",
"title": ""
},
{
"docid": "b6a8f45bd10c30040ed476b9d11aa908",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
},
{
"docid": "219ed374347b81553d3208ad1dfb80ad",
"text": "Various metabolic disorders are associated with changes in inflammatory tone. Among the latest advances in the metabolism field, the discovery that gut microorganisms have a major role in host metabolism has revealed the possibility of a plethora of associations between gut bacteria and numerous diseases. However, to date, few mechanisms have been clearly established. Accumulating evidence indicates that the endocannabinoid system and related bioactive lipids strongly contribute to several physiological processes and are a characteristic of obesity, type 2 diabetes mellitus and inflammation. In this Review, we briefly define the gut microbiota as well as the endocannabinoid system and associated bioactive lipids. We discuss existing literature regarding interactions between gut microorganisms and the endocannabinoid system, focusing specifically on the triad of adipose tissue, gut bacteria and the endocannabinoid system in the context of obesity and the development of fat mass. We highlight gut-barrier function by discussing the role of specific factors considered to be putative 'gate keepers' or 'gate openers', and their role in the gut microbiota–endocannabinoid system axis. Finally, we briefly discuss data related to the different pharmacological strategies currently used to target the endocannabinoid system, in the context of cardiometabolic disorders and intestinal inflammation.",
"title": ""
},
{
"docid": "2172e78731ee63be5c15549e38c4babb",
"text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.",
"title": ""
},
{
"docid": "e643f7f29c2e96639a476abb1b9a38b1",
"text": "Weather forecasting has been one of the most scientifically and technologically challenging problem around the world. Weather data is one of the meteorological data that is rich with important information, which can be used for weather prediction We extract knowledge from weather historical data collected from Indian Meteorological Department (IMD) Pune. From the collected weather data comprising of 36 attributes, only 7 attributes are most relevant to rainfall prediction. We made data preprocessing and data transformation on raw weather data set, so that it shall be possible to work on Bayesian, the data mining, prediction model used for rainfall prediction. The model is trained using the training data set and has been tested for accuracy on available test data. The meteorological centers uses high performance computing and supercomputing power to run weather prediction model. To address the issue of compute intensive rainfall prediction model, we proposed and implemented data intensive model using data mining technique. Our model works with good accuracy and takes moderate compute resources to predict the rainfall. We have used Bayesian approach to prove our model for rainfall prediction, and found to be working well with good accuracy.",
"title": ""
},
{
"docid": "45940a48b86645041726120fb066a1fa",
"text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"title": ""
},
{
"docid": "5ae4b1d4ef00afbde49edfaa2728934b",
"text": "A wideband, low loss inline transition from microstrip line to rectangular waveguide is presented. This transition efficiently couples energy from a microstrip line to a ridge and subsequently to a TE10 waveguide. This unique structure requires no mechanical pressure for electrical contact between the microstrip probe and the ridge because the main planar circuitry and ridge sections are placed on a single housing. The measured insertion loss for back-to-back transition is 0.5 – 0.7 dB (0.25 – 0.35 dB/transition) in the band 50 – 72 GHz.",
"title": ""
},
{
"docid": "ddf09617b266d483d5e3ab3dcb479b69",
"text": "Writing a research article can be a daunting task, and often, writers are not certain what should be included and how the information should be conveyed. Fortunately, scientific and engineering journal articles follow an accepted format. They contain an introduction which includes a statement of the problem, a literature review, and a general outline of the paper, a methods section detailing the methods used, separate or combined results, discussion and application sections, and a final summary and conclusions section. Here, each of these elements is described in detail using examples from the published literature as illustration. Guidance is also provided with respect to style, getting started, and the revision/review process.",
"title": ""
},
{
"docid": "5b03f69a2e7a21e5e1144080b604af2e",
"text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach, in comparison to other spectral domain convolutional architectures, on spectral image classification, community detection, vertex classification, and matrix completion tasks.",
"title": ""
},
{
"docid": "76753fe26a2ed69c5b7099009c9a094f",
"text": "A total of 82 strains of presumptive Aeromonas spp. were identified biochemically and genetically (16S rDNA-RFLP). The strains were isolated from 250 samples of frozen fish (Tilapia, Oreochromis niloticus niloticus) purchased in local markets in Mexico City. In the present study, we detected the presence of several genes encoding for putative virulence factors and phenotypic activities that may play an important role in bacterial infection. In addition, we studied the antimicrobial patterns of those strains. Molecular identification demonstrated that the prevalent species in frozen fish were Aeromonas salmonicida (67.5%) and Aeromonas bestiarum (20.9%), accounting for 88.3% of the isolates, while the other strains belonged to the species Aeromonas veronii (5.2%), Aeromonas encheleia (3.9%) and Aeromonas hydrophila (2.6%). Detection by polymerase chain reaction (PCR) of genes encoding putative virulence factors common in Aeromonas, such as aerolysin/hemolysin, lipases including the glycerophospholipid-cholesterol acyltransferase (GCAT), serine protease and DNases, revealed that they were all common in these strains. Our results showed that first generation quinolones and second and third generation cephalosporins were the drugs with the best antimicrobial effect against Aeromonas spp. In Mexico, there have been few studies on Aeromonas and its putative virulence factors. The present work therefore highlights an important incidence of Aeromonas spp., with virulence potential and antimicrobial resistance, isolated from frozen fish intended for human consumption in Mexico City.",
"title": ""
}
] | scidocsrr |
eacb65f10b0211b0129209075e070a3f | A serious game model for cultural heritage | [
{
"docid": "49e3c33aa788d3d075c7569c6843065a",
"text": "Cultural heritage around the globe suffers from wars, natural disasters and human negligence. The importance of cultural heritage documentation is well recognized and there is an increasing pressure to document our heritage both nationally and internationally. This has alerted international organizations to the need for issuing guidelines describing the standards for documentation. Charters, resolutions and declarations by international organisations underline the importance of documentation of cultural heritage for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research. Important ones include the International Council on Monuments and Sites, ICOMOS (ICOMOS, 2005) and UNESCO, including the famous Venice Charter, The International Charter for the Conservation and Restoration of Monuments and Sites, 1964, (UNESCO, 2005).",
"title": ""
},
{
"docid": "c1e12a4feec78d480c8f0c02cdb9cb7d",
"text": "Although the Parthenon has stood on the Athenian Acropolis for nearly 2,500 years, its sculptural decorations have been scattered to museums around the world. Many of its sculptures have been damaged or lost. Fortunately, most of the decoration survives through drawings, descriptions, and casts. A component of our Parthenon Project has been to assemble digital models of the sculptures and virtually reunite them with the Parthenon. This sketch details our effort to digitally record the Parthenon sculpture collection in the Basel Skulpturhalle museum, which exhibits plaster casts of almost all of the existing pediments, metopes, and frieze. Our techniques have been designed to work as quickly as possible and at low cost.",
"title": ""
}
] | [
{
"docid": "5d8bc135f10c1a9b741cc60ad7aae04f",
"text": "In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation (Bahdanau et al. (2014)). Our experiments show that the proposed architecture significantly outperforms the state-of-the art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose additional extensions to the standard architecture, which we show contribute to further improvement in performance.",
"title": ""
},
{
"docid": "75e1e8e65bd5dcf426bf9f3ee7c666a5",
"text": "This paper offers a new, nonlinear model of informationseeking behavior, which contrasts with earlier stage models of information behavior and represents a potential cornerstone for a shift toward a new perspective for understanding user information behavior. The model is based on the findings of a study on interdisciplinary information-seeking behavior. The study followed a naturalistic inquiry approach using interviews of 45 academics. The interview results were inductively analyzed and an alternative framework for understanding information-seeking behavior was developed. This model illustrates three core processes and three levels of contextual interaction, each composed of several individual activities and attributes. These interact dynamically through time in a nonlinear manner. The behavioral patterns are analogous to an artist’s palette, in which activities remain available throughout the course of information-seeking. In viewing the processes in this way, neither start nor finish points are fixed, and each process may be repeated or lead to any other until either the query or context determine that information-seeking can end. The interactivity and shifts described by the model show information-seeking to be nonlinear, dynamic, holistic, and flowing. The paper offers four main implications of the model as it applies to existing theory and models, requirements for future research, and the development of information literacy curricula. Central to these implications is the creation of a new nonlinear perspective from which user information-seeking can be interpreted.",
"title": ""
},
{
"docid": "d3b24655e01cbb4f5d64006222825361",
"text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close function connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. & 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "808115043786372af3e3fb726cc3e191",
"text": "Scapy is a free and open source packet manipulation environment written in Python language. In this paper we present a Modbus extension to Scapy, and show how this environment can be used to build tools for security analysis of industrial network protocols. Our implementation can be extended to other industrial network protocols and can help security analysts to understand how these protocols work under attacks or adverse conditions.",
"title": ""
},
{
"docid": "a3345ad4a18be52b478d3e75cf05a371",
"text": "In the course of the routine use of NMR as an aid for organic chemistry, a day-to-day problem is the identification of signals deriving from common contaminants (water, solvents, stabilizers, oils) in less-than-analytically-pure samples. This data may be available in the literature, but the time involved in searching for it may be considerable. Another issue is the concentration dependence of chemical shifts (especially 1H); results obtained two or three decades ago usually refer to much more concentrated samples, and run at lower magnetic fields, than today’s practice. We therefore decided to collect 1H and 13C chemical shifts of what are, in our experience, the most popular “extra peaks” in a variety of commonly used NMR solvents, in the hope that this will be of assistance to the practicing chemist.",
"title": ""
},
{
"docid": "12f6f7e9350d436cc167e00d72b6e1b1",
"text": "This paper reviews the state of the art of a polyphase complex filter for RF front-end low-IF transceivers applications. We then propose a multi-stage polyphase filter design to generate a quadrature I/Q signal to achieve a wideband precision quadrature phase shift with a constant 90 ° phase difference for self-interference cancellation circuit for full duplex radio. The number of the stages determines the bandwidth requirement of the channel. An increase of 87% in bandwidth is attained when our design is implemented in multi-stage from 2 to an extended 6 stages. A 4-stage polyphase filter achieves 2.3 GHz bandwidth.",
"title": ""
},
{
"docid": "671eb73ad86525cb183e2b8dbfe09947",
"text": "We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent’s experience. Because this loss is highly flexible in its ability to take into account the agent’s history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG’s learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.",
"title": ""
},
{
"docid": "6021968dc39e13620e90c30d9c008d19",
"text": "In recent years, Deep Reinforcement Learning has made impressive advances in solving several important benchmark problems for sequential decision making. Many control applications use a generic multilayer perceptron (MLP) for non-vision parts of the policy network. In this work, we propose a new neural network architecture for the policy network representation that is simple yet effective. The proposed Structured Control Net (SCN) splits the generic MLP into two separate sub-modules: a nonlinear control module and a linear control module. Intuitively, the nonlinear control is for forward-looking and global control, while the linear control stabilizes the local dynamics around the residual of global control. We hypothesize that this will bring together the benefits of both linear and nonlinear policies: improve training sample efficiency, final episodic reward, and generalization of learned policy, while requiring a smaller network and being generally applicable to different training methods. We validated our hypothesis with competitive results on simulations from OpenAI MuJoCo, Roboschool, Atari, and a custom 2D urban driving environment, with various ablation and generalization tests, trained with multiple black-box and policy gradient training methods. The proposed architecture has the potential to improve upon broader control tasks by incorporating problem specific priors into the architecture. As a case study, we demonstrate much improved performance for locomotion tasks by emulating the biological central pattern generators (CPGs) as the nonlinear part of the architecture.",
"title": ""
},
{
"docid": "cd3bbec4c7f83c9fb553056b1b593bec",
"text": "We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning. Recurrent Neural Networks and Music Many researchers are familiar with feedforward neural networks consisting of 2 or more layers of processing units, each with weighted connections to the next layer. Each unit passes the sum of its weighted inputs through a nonlinear sigmoid function. Each layer’s outputs are fed forward through the network to the next layer, until the output layer is reached. Weights are initialized to small initial random values. Via the back-propagation algorithm (Rumelhart et al. 1986), outputs are compared to targets, and the errors are propagated back through the connection weights. Weights are updated by gradient descent. Through an iterative training procedure, examples (inputs) and targets are presented repeatedly; the network learns a nonlinear function of the inputs. It can then generalize and produce outputs for new examples. These networks have been explored by the computer music community for classifying chords (Laden and Keefe 1991) and other musical tasks (Todd and Loy 1991, Griffith and Todd 1999). A recurrent network uses feedback from one or more of its units as input in choosing the next output. This means that values generated by units at time step t-1, say y(t-1), are part of the inputs x(t) used in selecting the next set of outputs y(t). A network may be fully recurrent; that is all units are connected back to each other and to themselves. Or part of the network may be fed back in recurrent links. Todd (Todd 1991) uses a Jordan recurrent network (Jordan 1986) to reproduce classical songs and then to produce new songs. The outputs are recurrently fed back as inputs as shown in Figure 1. In addition, self-recurrence on the inputs provides a decaying history of these inputs. The weight update algorithm is back-propagation, using teacher forcing (Williams and Zipser 1988). With teacher forcing, the target outputs are presented to the recurrent inputs from the output units (instead of the actual outputs, which are not correct yet during training). Pitches (on output or input) are represented in a localized binary representation, with one bit for each of the 12 chromatic notes. More bits can be added for more octaves. C is represented as 100000000000. C# is 010000000000, D is 001000000000. Time is divided into 16th note increments. Note durations are determined by how many increments a pitch’s output unit is on (one). E.g. an eighth note lasts for two time increments. Rests occur when all outputs are off (zero). Figure 1. Jordan network, with outputs fed back to inputs. (Mozer 1994)’s CONCERT uses a backpropagationthrough-time (BPTT) recurrent network to learn various musical tasks and to learn melodies with harmonic accompaniment. Then, CONCERT can run in generation mode to compose new music. The BPTT algorithm (Williams and Zipser 1992, Werbos 1988, Campolucci 1998) can be used with a fully recurrent network where the outputs of all units are connected to the inputs of all units, including themselves. 
The network can include external inputs and optionally, may include a regular feedforward output network (see Figure 2). The BPTT weight updates are proportional to the gradient of the sum of errors over every time step in the interval between start time t0 and end time t1, assuming the error at time step t is affected by the outputs at all previous time steps, starting with t0. BPTT requires saving all inputs, states, and errors for all time steps, and updating the weights in a batch operation at the end, time t1. One sequence (each example) requires one batch weight update. Figure 2. A fully self-recurrent network with external inputs, and optional feedforward output attachment. If there is no output attachment, one or more recurrent units are designated as output units. CONCERT is a combination of BPTT with a layer of output units that are probabilistically interpreted, and a maximum likelihood training criterion (rather than a squared error criterion). There are two sets of outputs (and two sets of inputs), one set for pitch and the other for duration. One pass through the network corresponds to a note, rather than a slice of time. We present only the pitch representation here since that is our focus. Mozer uses a psychologically based representation of musical notes. Figure 3 shows the chromatic circle (CC) and the circle of fifths (CF), used with a linear octave value for CONCERT’s pitch representation. Ignoring octaves, we refer to the rest of the representation as CCCF. Six digits represent the position of a pitch on CC and six more its position on CF. C is represented as 000000 000000, C# as 000001 111110, D as 000011 111111, and so on. Mozer uses -1,1 rather than 0,1 because of implementation details. Figure 3. Chromatic Circle on Left, Circle of Fifths on Right. Pitch position on each circle determines its representation. For chords, CONCERT uses the overlapping subharmonics representation of (Laden and Keefe, 1991). Each chord tone starts in Todd’s binary representation, but 5 harmonics (integer multiples of its frequency) are added. C3 is now C3, C4, G4, C5, E5 requiring a 3 octave representation. Because the 7th of the chord does not overlap with the triad harmonics, Laden and Keefe use triads only. C major triad C3, E3, G3, with harmonics, is C3, C4, G4, C5, E5, E3, E4, B4, E5, G#5, G3, G4, D4, G5, B5. The triad pitches and harmonics give an overlapping representation. Each overlapping pitch adds 1 to its corresponding input. CONCERT excludes octaves, leaving 12 highly overlapping chord inputs, plus an input that is positive when certain key-dependent chords appear, and learns waltzes over a harmonic chord structure. Eck and Schmidhuber (2002) use Long Short-term Memory (LSTM) recurrent networks to learn and compose blues music (Hochreiter and Schmidhuber 1997, and see Gers et al., 2000 for succinct pseudo-code for the algorithm). An LSTM network consists of input units, output units, and a set of memory blocks, each of which includes one or more memory cells. Blocks are connected to each other recurrently. Figure 4 shows an LSTM network on the left, and the contents of one memory block (this one with one cell) on the right. There may also be a direct connection from external inputs to the output units. This is the configuration found in Gers et al., and the one we use in our experiments. Eck and Schmidhuber also add recurrent connections from output units to memory blocks. Each block contains one or more memory cells that are self-recurrent. 
All other units in the block gate the inputs, outputs, and the memory cell itself. A memory cell can “cache” errors and release them for weight updates much later in time. The gates can learn to delay a block’s outputs, to reset the memory cells, and to inhibit inputs from reaching the cell or to allow inputs in. Figure 4. An LSTM network on the left and a one-cell memory block on the right, with input, forget, and output gates. Black squares on gate connections show that the gates can control whether information is passed to the cell, from the cell, or even within the cell. Weight updates are based on gradient descent, with multiplicative gradient calculations for gates, and approximations from the truncated BPTT (Williams and Peng 1990) and Real-Time Recurrent Learning (RTRL) (Robinson and Fallside 1987) algorithms. LSTM networks are able to perform counting tasks in time-series. Eck and Schmidhuber’s model of blues music is a 12-bar chord sequence over which music is composed/improvised. They successfully trained an LSTM network to learn a sequence of blues chords, with varying durations. Splitting time into 8th note increments, each chord’s duration is either 8 or 4 time steps (whole or half durations). Chords are sets of 3 or 4 tones (triads or triads plus sevenths), represented in a 12-bit localized binary representation with values of 1 for a chord pitch, and 0 for a non-chord pitch. Chords are inverted to fit in 1 octave. For example, C7 is represented as 100010010010 (C,E,G,B-flat), and F7 is 100101000100 (F,A,C,E-flat inverted to C,E-flat,F,A). The network has 4 memory blocks, each containing 2 cells. The outputs are considered probabilities of whether the corresponding note is on or off. The goal is to obtain an output of more than .5 for each note that should be on in a particular chord, with all other outputs below .5. Eck and Schmidhuber’s work includes learning melody and chords with two LSTM networks containing 4 blocks each. Connections are made from the chord network to the melody network, but not vice versa. The authors composed short 1-bar melodies over each of the 12 possible bars. The network is trained on concatenations of the short melodies over the 12-bar blues chord sequence. The melody network is trained until the chords network has learned according to the criterion. In music generation mode, the network can generate new melodies using this training. In a system called CHIME (Franklin 2000, 2001), we first train a Jordan recurrent network (Figure 1) to produce 3 Sonny Rollins jazz/blues melodies. The current chord and index number of the song are non-recurrent inputs to the network. Chords are represented as sets of 4 note values of 1 in a 12-note input layer, with non-chord note inputs set to 0 just as in Eck and Schmidhuber’s chord representation. Chords are also inverted to fit within one octave. 24 (2 octaves) of the outputs are notes, and the 25th is a rest. Of these 25, the unit with the largest value ",
"title": ""
},
{
"docid": "092bf4ee1626553206ee9b434cda957b",
"text": ".......................................................................................................... 3 Introduction ...................................................................................................... 4 Methods........................................................................................................... 7 Procedure ..................................................................................................... 7 Inclusion and exclusion criteria ..................................................................... 8 Data extraction and quality assessment ....................................................... 8 Results ............................................................................................................ 9 Included studies ........................................................................................... 9 Quality of included articles .......................................................................... 13 Excluded studies ........................................................................................ 15 Fig. 1 CONSORT 2010 Flow Diagram ....................................................... 16 Table 1: Primary studies ............................................................................. 17 Table2: Secondary studies ......................................................................... 18 Discussion ..................................................................................................... 19 Conclusion ..................................................................................................... 22 Acknowledgements ....................................................................................... 22 References .................................................................................................... 23 Appendix ....................................................................................................... 32",
"title": ""
},
{
"docid": "5c7678fae587ef784b4327d545a73a3e",
"text": "The vision of Future Internet based on standard communication protocols considers the merging of computer networks, Internet of Things (IoT), Internet of People (IoP), Internet of Energy (IoE), Internet of Media (IoM), and Internet of Services (IoS), into a common global IT platform of seamless networks and networked “smart things/objects”. However, with the widespread deployment of networked, intelligent sensor technologies, an Internet of Things (IoT) is steadily evolving, much like the Internet decades ago. In the future, hundreds of billions of smart sensors and devices will interact with one another without human intervention, on a Machine-to-Machine (M2M) basis. They will generate an enormous amount of data at an unprecedented scale and resolution, providing humans with information and control of events and objects even in remote physical environments. This paper will provide an overview of performance evaluation, challenges and opportunities of IOT results for machine learning presented by this new paradigm.",
"title": ""
},
{
"docid": "c26919afa32708786ae7f96b88883ed9",
"text": "A Privacy Enhancement Technology (PET) is an application or a mechanism which allows users to protect the privacy of their personally identifiable information. Early PETs were about enabling anonymous mailing and anonymous browsing, but lately there have been active research and development efforts in many other problem domains. This paper describes the first pattern language for developing privacy enhancement technologies. Currently, it contains 12 patterns. These privacy patterns are not limited to a specific problem domain; they can be applied to design anonymity systems for various types of online communication, online data sharing, location monitoring, voting and electronic cash management. The pattern language guides a developer when he or she is designing a PET for an existing problem, or innovating a solution for a new problem.",
"title": ""
},
{
"docid": "c6058966ef994d7b447f47d41d7fff33",
"text": "The advancement in computer technology has encouraged the researchers to develop software for assisting doctors in making decision without consulting the specialists directly. The software development exploits the potential of human intelligence such as reasoning, making decision, learning (by experiencing) and many others. Artificial intelligence is not a new concept, yet it has been accepted as a new technology in computer science. It has been applied in many areas such as education, business, medical and manufacturing. This paper explores the potential of artificial intelligence techniques particularly for web-based medical applications. In addition, a model for web-based medical diagnosis and prediction is",
"title": ""
},
{
"docid": "753b167933f5dd92c4b8021f6b448350",
"text": "The advent of social media and microblogging platforms has radically changed the way we consume information and form opinions. In this paper, we explore the anatomy of the information space on Facebook by characterizing on a global scale the news consumption patterns of 376 million users over a time span of 6 y (January 2010 to December 2015). We find that users tend to focus on a limited set of pages, producing a sharp community structure among news outlets. We also find that the preferences of users and news providers differ. By tracking how Facebook pages \"like\" each other and examining their geolocation, we find that news providers are more geographically confined than users. We devise a simple model of selective exposure that reproduces the observed connectivity patterns.",
"title": ""
},
{
"docid": "b0a0ad5f90d849696e3431373db6b4a5",
"text": "A comparative study of the structure of the flower in three species of Robinia L., R. pseudoacacia, R. × ambigua, and R. neomexicana, was carried out. The widely naturalized R. pseudoacacia, as compared to the two other species, has the smallest sizes of flower organs at all stages of development. Qualitative traits that describe each phase of the flower development were identified. A set of microscopic morphological traits of the flower (both quantitative and qualitative) was analyzed. Additional taxonomic traits were identified: shape of anthers, size and shape of pollen grains, and the extent of pollen fertility.",
"title": ""
},
{
"docid": "da72f2990b3e21c45a92f7b54be1d202",
"text": "A low-profile, high-gain, and wideband metasurface (MS)-based filtering antenna with high selectivity is investigated in this communication. The planar MS consists of nonuniform metallic patch cells, and it is fed by two separated microstrip-coupled slots from the bottom. The separation between the two slots together with a shorting via is used to provide good filtering performance in the lower stopband, whereas the MS is elaborately designed to provide a sharp roll-off rate at upper band edge for the filtering function. The MS also simultaneously works as a high-efficient radiator, enhancing the impedance bandwidth and antenna gain of the feeding slots. To verify the design, a prototype operating at 5 GHz has been fabricated and measured. The reflection coefficient, radiation pattern, antenna gain, and efficiency are studied, and reasonable agreement between the measured and simulated results is observed. The prototype with dimensions of 1.3 λ0 × 1.3 λ0 × 0.06 λ0 has a 10-dB impedance bandwidth of 28.4%, an average gain of 8.2 dBi within passband, and an out-of-band suppression level of more than 20 dB within a very wide stop-band.",
"title": ""
},
{
"docid": "157c084aa6622c74449f248f98314051",
"text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.",
"title": ""
},
{
"docid": "5912dda99171351acc25971d3c901624",
"text": "New cultivars with very erect leaves, which increase light capture for photosynthesis and nitrogen storage for grain filling, may have increased grain yields. Here we show that the erect leaf phenotype of a rice brassinosteroid–deficient mutant, osdwarf4-1, is associated with enhanced grain yields under conditions of dense planting, even without extra fertilizer. Molecular and biochemical studies reveal that two different cytochrome P450s, CYP90B2/OsDWARF4 and CYP724B1/D11, function redundantly in C-22 hydroxylation, the rate-limiting step of brassinosteroid biosynthesis. Therefore, despite the central role of brassinosteroids in plant growth and development, mutation of OsDWARF4 alone causes only limited defects in brassinosteroid biosynthesis and plant morphology. These results suggest that regulated genetic modulation of brassinosteroid biosynthesis can improve crops without the negative environmental effects of fertilizers.",
"title": ""
},
{
"docid": "7f3bccab6d6043d3dedc464b195df084",
"text": "This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.",
"title": ""
}
] | scidocsrr |
0677c5968c3e97d00c7b64b5465f9a0a | SDN and OpenFlow Evolution: A Standards Perspective | [
{
"docid": "e93c5395f350d44b59f549a29e65d75c",
"text": "Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.",
"title": ""
}
] | [
{
"docid": "056d7d639d91636b382860a3df08d0dd",
"text": "This paper describes a novel monolithic low voltage (1-V) CMOS RF front-end architecture with an integrated quadrature coupler (QC) and two subharmonic mixers for direct-down conversion. The LC-folded-cascode technique is adopted to achieve low-voltage operation while the subharmonic mixers in conjunction with the QC are used to eliminate LO self-mixing. In addition, the inherent bandpass characteristic of the LC tanks helps suppression of LO leakage at RF port. The circuit was fabricated in a standard 0.18-mum CMOS process for 5-6 GHz applications. At 5.4 GHz, the RF front-end exhibits a voltage gain of 26.2 dB and a noise figure of 5.2 dB while dissipating 45.5 mW from a 1.0-V supply. The achieved input-referred DC-offset due to LO self-mixing is below -110.7 dBm.",
"title": ""
},
{
"docid": "57fa4164381d9d9691b9ba5c506addbd",
"text": "The aim of this study was to evaluate the acute effects of unilateral ankle plantar flexors static-stretching (SS) on the passive range of movement (ROM) of the stretched limb, surface electromyography (sEMG) and single-leg bounce drop jump (SBDJ) performance measures of the ipsilateral stretched and contralateral non-stretched lower limbs. Seventeen young men (24 ± 5 years) performed SBDJ before and after (stretched limb: immediately post-stretch, 10 and 20 minutes and non-stretched limb: immediately post-stretch) unilateral ankle plantar flexor SS (6 sets of 45s/15s, 70-90% point of discomfort). SBDJ performance measures included jump height, impulse, time to reach peak force, contact time as well as the sEMG integral (IEMG) and pre-activation (IEMGpre-activation) of the gastrocnemius lateralis. Ankle dorsiflexion passive ROM increased in the stretched limb after the SS (pre-test: 21 ± 4° and post-test: 26.5 ± 5°, p < 0.001). Post-stretching decreases were observed with peak force (p = 0.029), IEMG (P<0.001), and IEMGpre-activation (p = 0.015) in the stretched limb; as well as impulse (p = 0.03), and jump height (p = 0.032) in the non-stretched limb. In conclusion, SS effectively increased passive ankle ROM of the stretched limb, and transiently (less than 10 minutes) decreased muscle peak force and pre-activation. The decrease of jump height and impulse for the non-stretched limb suggests a SS-induced central nervous system inhibitory effect. Key pointsWhen considering whether or not to SS prior to athletic activities, one must consider the potential positive effects of increased ankle dorsiflexion motion with the potential deleterious effects of power and muscle activity during a simple jumping task or as part of the rehabilitation process.Since decreased jump performance measures can persist for 10 minutes in the stretched leg, the timing of SS prior to performance must be taken into consideration.Athletes, fitness enthusiasts and therapists should also keep in mind that SS one limb has generalized effects upon contralateral limbs as well.",
"title": ""
},
{
"docid": "7ca863355d1fb9e4954c360c810ece53",
"text": "The detection of community structure is a widely accepted means of investigating the principles governing biological systems. Recent efforts are exploring ways in which multiple data sources can be integrated to generate a more comprehensive model of cellular interactions, leading to the detection of more biologically relevant communities. In this work, we propose a mathematical programming model to cluster multiplex biological networks, i.e. multiple network slices, each with a different interaction type, to determine a single representative partition of composite communities. Our method, known as SimMod, is evaluated through its application to yeast networks of physical, genetic and co-expression interactions. A comparative analysis involving partitions of the individual networks, partitions of aggregated networks and partitions generated by similar methods from the literature highlights the ability of SimMod to identify functionally enriched modules. It is further shown that SimMod offers enhanced results when compared to existing approaches without the need to train on known cellular interactions.",
"title": ""
},
{
"docid": "1594afac3fe296478bd2a0c5a6ca0bb4",
"text": "Executive Summary The market turmoil of 2008 highlighted the importance of risk management to investors in the UK and worldwide. Realized risk levels and risk forecasts from the Barra Europe Equity Model (EUE2L) are both currently at the highest level for the last two decades. According to portfolio theory, institutional investors can gain significant risk-reduction and return-enhancement benefits from venturing out of their domestic markets. These effects from international diversification are due to imperfect correlations among markets. In this paper, we explore the historical diversification effects of an international allocation for UK investors. We illustrate that investing only in the UK market can be considered an active deviation from a global benchmark. Although a domestic allocation to UK large-cap stocks has significant international exposure when revenue sources are taken into account, as an active deviation from a global benchmark a UK domestic strategy has high concentration, leading to high asset-specific risk, and significant style and industry tilts. We show that an international allocation resulted in higher returns and lower risk for a UK investor in the last one, three, five, and ten years. In GBP terms, the MSCI All Country World Investable Market Index (ACWI IMI) — a global index that could be viewed as a proxy for a global portfolio — achieved higher return and lower risk compared to the MSCI UK Index during these periods. A developed market minimum-variance portfolio, represented by the MSCI World Minimum Volatility Index, 1 The market turmoil of 2008 highlighted the importance of risk management to investors in the UK and worldwide. Figure 1 illustrates that the historical standard deviation of the MSCI UK Index is now near the highest level in recent history. The risk forecast for the index, obtained using the Barra Europe Equity Model, typically showed still better risk and return performance during these periods. The decreases in risk represented by allocations to MSCI ACWI IMI and the MSCI World Minimum Volatility Index were robust based on four different measures of portfolio risk. We also consider a stepwise approach to international diversification, sequentially adding small cap and international assets to a large cap UK portfolio. We show that this approach also reduced risk during the observed period, but we did not find evidence that it was more efficient for risk reduction than a passive allocation to MSCI ACWI IMI.",
"title": ""
},
{
"docid": "8147143579de86a5eeb668037c2b8c5d",
"text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.",
"title": ""
},
{
"docid": "e4d10b57a9ddb304263abd869c1a79d9",
"text": "rrrmrrxvivt This research explores the effectiveness of interactive advertising on a new medium platform. Like the presence in industry and the media themselves, the academic research stream is fairly new. Our research seeks to isolate the key feature of interactivity from confounding factors and to begin to tease apart those situations for which interactivity might be highly desirable from those situations for which traditional advertising vehicles may be sufficient or superior. We find that the traditional linear advertising format of conventional ads is actually better than interactive advertising for certain kinds of consumers and for certain kinds of ads. In particular, we find that a cognitive “matching” of the system properties (being predominately visual or verbal) and the consumer segment needs (preferring their information to be presented in a visual or verbal manner) appears to be critical. More research should be conducted before substantial expenditures are devoted to advertising on these interactive media. These new means of communicating with customers are indeed exciting, but they must be demonstrated to be effective on consumer engagement and persuasion. INTERACTIVE MARKETING SYSTEMS are enjoying explosive growth, giving firms a plethora of ways of contacting consumers (e.g., kiosks, Web pages, home computers). In these interactive systems, a customer controls the content of the interaction, requesting or giving information, at the attributelevel (e.g., a PC’s RAM and MHz) or in terms of benefits (e.g., a PC’s capability and speed). A customer can control the presentation order of the information, and unwanted options may be deleted. The consumer may request that the information sought be presented in comparative table format, in video, audio, pictorial format, or in standard text. Increasingly, customers can also order products using the interactive system. These new media are no fad, and while they are only in the infancy of their development, they are already changing the marketplace (cf. Hoffman and Novak, 1996). The hallmark of all of these new media is their irlteuactivity-the consumer and the manufacturer enter into dialogue in a way not previously possible. Interactive marketing, as defined in this paper, is: “the immediately iterative process by which customer needs and desires are uncovered, met, modified, and satisfied by the providing firm.” Interactivity iterates between the firm and the customer, eliciting information from both parties, and attempting to align interests and possibilities. The iterations occur over some duration, allowing the firm to build databases that provide subsequent purchase opportunities tailored to the consumer (Blattberg and Deighton, 1991). The consumer’s input allows subsequent information to be customized to pertinent interests and bars irrelevant communications, thereby enhancing both the consumer experience and the efficiency of the firm’s advertising and marketing dollar. As exciting as these new interactive media appear to be, little is actually known about their effect on consumers’ consideration of the advertised products. As Berthon, Pitt, and Watson (1996) state, “advertising and marketing practitioners, and academics are by now aware that more systematic research is required to reveal the true nature of commerce on the Web” or for interactive systems more generally. Our research is intended to address this need, and more specifically to focus on the effects of interactivity. 
We investigate interactive marketing in terms of its performance in persuading consumers to buy the advertised products. We wish to begin to understand whether interactive methods are truly superior to standard advertising formats as the excitement about the new media would suggest. Alternatively, perhaps there are some circumstances for which traditional advertising is more effective. Certainly it would not be desirable to channel the majority of one’s advertising resources toward interactive media until they are demonstrated to be superior persuasion vehicles. To this end we present an experimental study comparing consumer reactions to products advertised through an interactive medium with re-",
"title": ""
},
{
"docid": "5a44a37f5ae6e485a4096861f53f6245",
"text": "The goal of the paper is to show that some types of L evy processes such as the hyperbolic motion and the CGMY are particularly suitable for asset price modelling and option pricing. We wish to review some fundamental mathematic properties of L evy distributions, such as the one of infinite divisibility, and how they translate observed features of asset price returns. We explain how these processes are related to Brownian motion, the central process in finance, through stochastic time changes which can in turn be interpreted as a measure of the economic activity. Lastly, we focus on two particular classes of pure jump L evy processes, the generalized hyperbolic model and the CGMY models, and report on the goodness of fit obtained both on stock prices and option prices. 2002 Published by Elsevier Science B.V. JEL classification: G12; G13",
"title": ""
},
{
"docid": "6933e944e88307c85f0b398b5abbb48f",
"text": "The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the \"best\" model. A decision to keep a variable in the model might be based on the clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. We show that the advantage of this approach is when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worchester Heart Attack Study (WHAS) data. If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones they should consider this macro as an alternative tool.",
"title": ""
},
{
"docid": "dfcc6b34f008e4ea9d560b5da4826f4d",
"text": "The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. Keywords—Gesture recognition, Kinect, shadow play animation, VRPN.",
"title": ""
},
{
"docid": "8582c4a040e4dec8fd141b00eaa45898",
"text": "Emerging airborne networks require domainspecific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.",
"title": ""
},
{
"docid": "cde6d84d22ca9d8cd851f3067bc9b41e",
"text": "The purpose of the present study was to examine the reciprocal relationships between authenticity and measures of life satisfaction and distress using a 2-wave panel study design. Data were collected from 232 college students attending 2 public universities. Structural equation modeling was used to analyze the data. The results of the cross-lagged panel analysis indicated that after controlling for temporal stability, initial authenticity (Time 1) predicted later distress and life satisfaction (Time 2). Specifically, higher levels of authenticity at Time 1 were associated with increased life satisfaction and decreased distress at Time 2. Neither distress nor life satisfaction at Time 1 significantly predicted authenticity at Time 2. However, the relationship between Time 1 distress and Time 2 authenticity was not significantly different from the relationship between Time 1 authenticity and Time 2 distress. Results are discussed in light of humanistic-existential theories and the empirical research on well-being.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
},
{
"docid": "6c1d7a70d0fa21222a0d1046eee128c7",
"text": "A BSTRACT Background Goal-directed therapy has been used for severe sepsis and septic shock in the intensive care unit. This approach involves adjustments of cardiac preload, afterload, and contractility to balance oxygen delivery with oxygen demand. The purpose of this study was to evaluate the efficacy of early goal-directed therapy before admission to the intensive care unit. Methods We randomly assigned patients who arrived at an urban emergency department with severe sepsis or septic shock to receive either six hours of early goal-directed therapy or standard therapy (as a control) before admission to the intensive care unit. Clinicians who subsequently assumed the care of the patients were blinded to the treatment assignment. In-hospital mortality (the primary efficacy outcome), end points with respect to resuscitation, and Acute Physiology and Chronic Health Evaluation (APACHE II) scores were obtained serially for 72 hours and compared between the study groups. Results Of the 263 enrolled patients, 130 were randomly assigned to early goal-directed therapy and 133 to standard therapy; there were no significant differences between the groups with respect to base-line characteristics. In-hospital mortality was 30.5 percent in the group assigned to early goal-directed therapy, as compared with 46.5 percent in the group assigned to standard therapy (P=0.009). During the interval from 7 to 72 hours, the patients assigned to early goaldirected therapy had a significantly higher mean (±SD) central venous oxygen saturation (70.4±10.7 percent vs. 65.3±11.4 percent), a lower lactate concentration (3.0±4.4 vs. 3.9±4.4 mmol per liter), a lower base deficit (2.0±6.6 vs. 5.1±6.7 mmol per liter), and a higher pH (7.40±0.12 vs. 7.36±0.12) than the patients assigned to standard therapy (P«0.02 for all comparisons). During the same period, mean APACHE II scores were significantly lower, indicating less severe organ dysfunction, in the patients assigned to early goal-directed therapy than in those assigned to standard therapy (13.0±6.3 vs. 15.9±6.4, P<0.001). Conclusions Early goal-directed therapy provides significant benefits with respect to outcome in patients with severe sepsis and septic shock. (N Engl J Med 2001;345:1368-77.)",
"title": ""
},
{
"docid": "0c70966c4dbe41458f7ec9692c566c1f",
"text": "By 2012 the U.S. military had increased its investment in research and production of unmanned aerial vehicles (UAVs) from $2.3 billion in 2008 to $4.2 billion [1]. Currently UAVs are used for a wide range of missions such as border surveillance, reconnaissance, transportation and armed attacks. UAVs are presumed to provide their services at any time, be reliable, automated and autonomous. Based on these presumptions, governmental and military leaders expect UAVs to improve national security through surveillance or combat missions. To fulfill their missions, UAVs need to collect and process data. Therefore, UAVs may store a wide range of information from troop movements to environmental data and strategic operations. The amount and kind of information enclosed make UAVs an extremely interesting target for espionage and endangers UAVs of theft, manipulation and attacks. Events such as the loss of an RQ-170 Sentinel to Iranian military forces on 4th December 2011 [2] or the “keylogging” virus that infected an U.S. UAV fleet at Creech Air Force Base in Nevada in September 2011 [3] show that the efforts of the past to identify risks and harden UAVs are insufficient. Due to the increasing governmental and military reliance on UAVs to protect national security, the necessity of a methodical and reliable analysis of the technical vulnerabilities becomes apparent. We investigated recent attacks and developed a scheme for the risk assessment of UAVs based on the provided services and communication infrastructures. We provide a first approach to an UAV specific risk assessment and take into account the factors exposure, communication systems, storage media, sensor systems and fault handling mechanisms. We used this approach to assess the risk of some currently used UAVs: The “MQ-9 Reaper” and the “AR Drone”. A risk analysis of the “RQ-170 Sentinel” is discussed.",
"title": ""
},
{
"docid": "b1b18ffff0f9efdef25dd15099139b7e",
"text": "This paper presents a fast and accurate alignment method for polyphonic symbolic music signals. It is known that to accurately align piano performances, methods using the voice structure are needed. However, such methods typically have high computational cost and they are applicable only when prior voice information is given. It is pointed out that alignment errors are typically accompanied by performance errors in the aligned signal. This suggests the possibility of correcting (or realigning) preliminary results by a fast (but not-so-accurate) alignment method with a refined method applied to limited segments of aligned signals, to save the computational cost. To realise this, we develop a method for detecting performance errors and a realignment method that works fast and accurately in local regions around performance errors. To remove the dependence on prior voice information, voice separation is performed to the reference signal in the local regions. By applying our method to results obtained by previously proposed hidden Markov models, the highest accuracies are achieved with short computation time. Our source code is published in the accompanying web page, together with a user interface to examine and correct alignment results.",
"title": ""
},
{
"docid": "c2fb88df12e97e8475bb923063c8a46e",
"text": "This paper addresses the job shop scheduling problem in the presence of machine breakdowns. In this work, we propose to exploit the advantages of data mining techniques to resolve the problem. We proposed an approach to discover a set of classification rules by using historic scheduling data. Intelligent decisions are then made in real time based on this constructed rules to assign the corresponding dispatching rule in a dynamic job shop scheduling environment. A simulation study is conducted at last with the constructed rules and four other dispatching rules from literature. The experimental results verify the performance of classification rule for minimizing mean tardiness.",
"title": ""
},
{
"docid": "7edb8a803734f4eb9418b8c34b1bf07c",
"text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.",
"title": ""
},
{
"docid": "21f6a18e34579ae482c93c3476828729",
"text": "A low power highly sensitive Thoracic Impedance Variance (TIV) and Electrocardiogram (ECG) monitoring SoC is designed and implemented into a poultice-like plaster sensor for wearable cardiac monitoring. 0.1 Ω TIV detection is possible with a sensitivity of 3.17 V/Ω and SNR > 40 dB. This is achieved with the help of a high quality (Q-factor > 30) balanced sinusoidal current source and low noise reconfigurable readout electronics. A cm-range 13.56 MHz fabric inductor coupling is adopted to start/stop the SoC remotely. Moreover, a 5% duty-cycled Body Channel Communication (BCC) is exploited for 0.2 nJ/b 1 Mbps energy efficient external data communication. The proposed SoC occupies 5 mm × 5 mm including pads in a standard 0.18 μm 1P6M CMOS technology. It dissipates a peak power of 3.9 mW when operating in body channel receiver mode, and consumes 2.4 mW when operating in TIV and ECG detection mode. The SoC is integrated on a 15 cm × 15 cm fabric circuit board together with a flexible battery to form a compact wearable sensor. With 25 adhesive screen-printed fabric electrodes, detection of TIV and ECG at 16 different sites of the heart is possible, allowing optimal detection sites to be configured to accommodate different user dependencies.",
"title": ""
}
] | scidocsrr |
ba4cf772bab99ee296d0eb8eef7c57bd | An Experimental Evaluation of DoS Attack and Its Impact on Throughput of IEEE 802.11 Wireless Networks | [
{
"docid": "09211bc28dea118cc114b261d13f098e",
"text": "IEEE 802.11 Wireless LAN (WLAN) has gained popularity. WLANs use different security protocols like WEP, WPA and WPA2. The newly ratified WPA2 provides the highest level of security for data frames. However WPA2 does not really mention about protection of management frames. In other words IEEE 802.11 management frames are always sent in an unsecured manner. In fact the only security mechanism for management frames is CRC-32 bit algorithm. While useful for unintentional error detection, CRC-32 bit is not safe to completely verify data integrity in the face of intentional modifications. Therefore an unsecured management frame allows an attacker to start different kinds of attack. This paper proposes a new model to address these security problems in management frames. First we summarize security threats on management frames and their influences in WLANs. Then based on these security threats, we propose a new per frames security model to provide efficient security for these frames. Finally simulation methodology is presented and results are provided. Mathematical probabilities are discussed to demonstrate that the proposed security model is robust and efficient to secure management frames.",
"title": ""
},
{
"docid": "8dd540b33035904f63c67b57d4c97aa3",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "8dcb99721a06752168075e6d45ee64c7",
"text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti ble to malicious denial-of-service (DoS) attacks tar geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.",
"title": ""
}
] | [
{
"docid": "6aadba165b0eb979b32e7549b3e8da7c",
"text": "In multimedia scenario, number of saliency detection designs has been portrayed for different intelligent applications regarding the accurate saliency detection like human visual system. More challenges exist regarding complexity in natural images and lesser scale prototypes in salient objects. In lots of the prevailing methods, the competency of identifying objects’ instances in the discovered salient regions is still not up to the mark. Hence it is planned to assist a new strategy by considering certain parameters of feature evaluation under optimization algorithms which diverts the new method of capturing the optimal parameters to acquire better outcomes. The given paper proposes a new saliency detection design that encompasses 2 phases like Feature extraction and depth saliency detection. In which Gaussian kernel model is processed for extracting the STFT features (Short-Time Fourier Transform), Texture features, and Depth features; and Gabor filter is used to get the depth saliency map. Here, the color space information is progressed under STFT model for extracting the STFT features. This is the major contribution, where all the STFT feature, Texture feature and Depth features are taken out to gain the depth saliency map. Additionally, this paper proffers a new optimization prototype that optimizes 2 coefficients namely feature difference amongst image patches from feature evaluation, and fine scale, by which more precise saliency detection outcomes can be attained through the proposed model. Subsequently, the proposed GSDU (Glowworm Swarm optimization with Dragonfly Update) contrasts its performance over other conventional methodologies concerning i) ROC (Receiver Operator Curve), ii) PCC (Pearson Correlation Coefficient), iii) KLD (Kullback Leibler Divergence) as well as iv) AUC (Area Under the Curve), and the efficiency of proposed model is proven grounded on higher accuracy rate.",
"title": ""
},
{
"docid": "e48903be16ccab7bf1263e0a407e5d66",
"text": "This research applies Lotka’s Law to metadata on open source software development. Lotka’s Law predicts the proportion of authors at different levels of productivity. Open source software development harnesses the creativity of thousands of programmers worldwide, is important to the progress of the Internet and many other computing environments, and yet has not been widely researched. We examine metadata from the Linux Software Map (LSM), which documents many open source projects, and Sourceforge, one of the largest resources for open source developers. Authoring patterns found are comparable to prior studies of Lotka’s Law for scientific and scholarly publishing. Lotka’s Law was found to be effective in understanding software development productivity patterns, and offer promise in predicting aggregate behavior of open source developers.",
"title": ""
},
{
"docid": "2bb356ac7620bacc9190f73f92b04da1",
"text": "It is well known that it is possible to construct “adversarial examples” for neural networks: inputs which are misclassified by the network yet indistinguishable from true data. We propose a simple modification to standard neural network architectures, thermometer encoding, which significantly increases the robustness of the network to adversarial examples. We demonstrate this robustness with experiments on the MNIST, CIFAR-10, CIFAR-100, and SVHN datasets, and show that models with thermometer-encoded inputs consistently have higher accuracy on adversarial examples, without decreasing generalization. State-of-the-art accuracy under the strongest known white-box attack was increased from 93.20% to 94.30% on MNIST and 50.00% to 79.16% on CIFAR-10. We explore the properties of these networks, providing evidence that thermometer encodings help neural networks to find more-non-linear decision boundaries.",
"title": ""
},
{
"docid": "8adcbd916e99e63d5dcebf58f19e2e05",
"text": "Cloud computing is still a juvenile and most dynamic field characterized by a buzzing IT industry. Virtually every industry and even some parts of the public sector are taking on cloud computing today, either as a provider or as a consumer. It has now become essentially an inseparable part of everyone's life. The cloud thus has become a part of the critical global infrastructure but is unique in that it has no customary borders to safeguard it from attacks. Once weakened these web servers can serve as a launching point for conducting further attacks against users in the cloud. One such attack is the DoS or its version DDOS attack. Distributed Denial of Service (DdoS) Attacks have recently emerged as one of the most newsworthy, if not the greatest weaknesses of the Internet. DDoS attacks cause economic losses due to the unavailability of services and potentially serious security problems due to incapacitation of critical infrastructures. This paper presents a simple distance estimation based technique to detect and prevent the cloud from flooding based DDoS attack and thereby protect other servers and users from its adverse effects.",
"title": ""
},
{
"docid": "1aa7e7fe70bdcbc22b5d59b0605c34e9",
"text": "Surgical tasks are complex multi-step sequences of smaller subtasks (often called surgemes) and it is useful to segment task demonstrations into meaningful subsequences for:(a) extracting finite-state machines for automation, (b) surgical training and skill assessment, and (c) task classification. Existing supervised methods for task segmentation use segment labels from a dictionary of motions to build classifiers. However, as the datasets become voluminous, the labeling becomes arduous and further, this method doesnt́ generalize to new tasks that dont́ use the same dictionary. We propose an unsupervised semantic task segmentation framework by learning “milestones”, ellipsoidal regions of the position and feature states at which a task transitions between motion regimes modeled as locally linear. Milestone learning uses a hierarchy of Dirichlet Process Mixture Models, learned through Expectation-Maximization, to cluster the transition points and optimize the number of clusters. It leverages transition information from kinematic state as well as environment state such as visual features. We also introduce a compaction step which removes repetitive segments that correspond to a mid-demonstration failure recovery by retrying an action. We evaluate Milestones Learning on three surgical subtasks: pattern cutting, suturing, and needle passing. Initial results suggest that our milestones qualitatively match manually annotated segmentation. While one-to-one correspondence of milestones with annotated data is not meaningful, the milestones recovered from our method have exactly one annotated surgeme transition in 74% (needle passing) and 66% (suturing) of total milestones, indicating a semantic match.",
"title": ""
},
{
"docid": "2da4c992e8e2e9bfdab188bedd47a4d2",
"text": "Hybrid neural networks (hybrid-NNs) have been widely used and brought new challenges to NN processors. Thinker is an energy efficient reconfigurable hybrid-NN processor fabricated in 65-nm technology. To achieve high energy efficiency, three optimization techniques are proposed. First, each processing element (PE) supports bit-width adaptive computing to meet various bit-widths of neural layers, which raises computing throughput by 91% and improves energy efficiency by <inline-formula> <tex-math notation=\"LaTeX\">$1.93 \\times $ </tex-math></inline-formula> on average. Second, PE array supports on-demand array partitioning and reconfiguration for processing different NNs in parallel, which results in 13.7% improvement of PE utilization and improves energy efficiency by <inline-formula> <tex-math notation=\"LaTeX\">$1.11 \\times $ </tex-math></inline-formula>. Third, a fused data pattern-based multi-bank memory system is designed to exploit data reuse and guarantee parallel data access, which improves computing throughput and energy efficiency by <inline-formula> <tex-math notation=\"LaTeX\">$1.11 \\times $ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$1.17 \\times $ </tex-math></inline-formula>, respectively. Measurement results show that this processor achieves 5.09-TOPS/W energy efficiency at most.",
"title": ""
},
{
"docid": "227fb4fe1a8cba3b696708bf191be7e3",
"text": "Qualitative research has experienced broad acceptance in the IS discipline. Despite the merits for exploring new phenomena, qualitative methods are criticized for their subjectivity when it comes to interpretation. Therefore, research mostly emphasized the development of criteria and guidelines for good practice. I present an approach to counteract the issue of credibility and traceability in qualitative data analysis and expand the repertoire of approaches used in IS research. I draw on an existing approach from the information science discipline and adapt it to analyze coded qualitative data. The developed approach is designed to answer questions about the specific relevance of codes and aims to support the researcher in detecting hidden information in the coded material. For this reason, the paper contributes to the IS methodology with bringing new insights to current methods by enhancing them with an approach from another",
"title": ""
},
{
"docid": "736ee2bed70510d77b1f9bb13b3bee68",
"text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.",
"title": ""
},
{
"docid": "91771b6c50d7193e5612d9552913dec8",
"text": "The expected diffusion of EVehicles (EVs) to limit the impact of fossil fuel on mobility is going to cause severe issues to the management of electric grid. A large number of charging stations is going to be installed on the power grid to support EVs. Each of the charging station could require more than 100 kW from the grid. The grid consumption is unpredictable and it depends from the need of EVs in the neighborhood. The impact of the EV on the power grid can be limited by the proper exploitation of Vehicle to Grid communication (V2G). The advent of Low Power Wide Area Network (LPWAN) promoted by Internet Of Things applications offers new opportunity for wireless communications. In this work, an example of such a technology (the LoRaWAN solution) is tested in a real-world scenario as a candidate for EV to grid communications. The experimental results highlight as LoRaWAN technology can be used to cover an area with a radius under 2 km, in an urban environment. At this distance, the Received Signal Strength Indicator (RSSI) is about −117 dBm. Such a result demonstrates the feasibility of the proposed approach.",
"title": ""
},
{
"docid": "f51583c6eb5a0d6e27823e0714d40ef5",
"text": "Studies of emotion regulation typically contrast two or more strategies (e.g., reappraisal vs. suppression) and ignore variation within each strategy. To address such variation, we focused on cognitive reappraisal and considered the effects of goals (i.e., what people are trying to achieve) and tactics (i.e., what people actually do) on outcomes (i.e., how affective responses change). To examine goals, we randomly assigned participants to either increase positive emotion or decrease negative emotion to a negative stimulus. To examine tactics, we categorized participants' reports of how they reappraised. To examine reappraisal outcomes, we measured experience and electrodermal responding. Findings indicated that (a) the goal of increasing positive emotion led to greater increases in positive affect and smaller decreases in skin conductance than the goal of decreasing negative emotion, and (b) use of the reality challenge tactic was associated with smaller increases in positive affect during reappraisal. These findings suggest that reappraisal can be implemented in the service of different emotion goals, using different tactics. Such differences are associated with different outcomes, and they should be considered in future research and applied attempts to maximize reappraisal success.",
"title": ""
},
{
"docid": "718f874dfe34403d70433582065d1737",
"text": "We aim to better exploit the limited amounts of parallel text available in low-resource settings by introducing a differentiable reconstruction loss for neural machine translation (NMT). We reconstruct the input from sampled translations and leverage differentiable sampling and bi-directional NMT to build a compact model that can be trained end-to-end. This approach achieves small but consistent BLEU improvements on four language pairs in both translation directions, and outperforms an alternative differentiable reconstruction strategy based on hidden states.",
"title": ""
},
{
"docid": "ba8886a9e251492ec0dca0512d6994be",
"text": "In this paper, we consider various moment inequalities for sums of random matrices—which are well–studied in the functional analysis and probability theory literature—and demonstrate how they can be used to obtain the best known performance guarantees for several problems in optimization. First, we show that the validity of a recent conjecture of Nemirovski is actually a direct consequence of the so–called non–commutative Khintchine’s inequality in functional analysis. Using this result, we show that an SDP–based algorithm of Nemirovski, which is developed for solving a class of quadratic optimization problems with orthogonality constraints, has a logarithmic approximation guarantee. This improves upon the polynomial approximation guarantee established earlier by Nemirovski. Furthermore, we obtain improved safe tractable approximations of a certain class of chance constrained linear matrix inequalities. Secondly, we consider a recent result of Delage and Ye on the so–called data–driven distributionally robust stochastic programming problem. One of the assumptions in the Delage–Ye result is that the underlying probability distribution has bounded support. However, using a suitable moment inequality, we show that the result in fact holds for a much larger class of probability distributions. Given the close connection between the behavior of sums of random matrices and the theoretical properties of various optimization problems, we expect that the moment inequalities discussed in this paper will find further applications in optimization.",
"title": ""
},
{
"docid": "ced911b92e427c1d58be739e20f47fcd",
"text": "Software defined networking and network function virtualization are widely deemed two critical pillars of the future service provider network. The expectation for significant operations cost savings from network programmability, open APIs, and operations automation is frequently mentioned as one of the primary benefits of the NFV/SDN vision. Intuitively, the flexibility and simplification values attributed to NFV/SDN lead the industry to the conclusion that operating expenses will decrease. This article provides a view into the operational costs of a typical service provider and discusses how the NFV/SDN attributes can be expected to influence the business equation. The drivers of OPEX change, the directionality of the change, and the categories of OPEX most affected based on our analysis from interactions with a number of service providers worldwide are presented in a structured analysis.",
"title": ""
},
{
"docid": "0368fdfe05918134e62e0f7b106130ee",
"text": "Scientific charts are an effective tool to visualize numerical data trends. They appear in a wide range of contexts, from experimental results in scientific papers to statistical analyses in business reports. The abundance of scientific charts in the web has made it inevitable for search engines to include them as indexed content. However, the queries based on only the textual data used to tag the images can limit query results. Many studies exist to address the extraction of data from scientific diagrams in order to improve search results. In our approach to achieving this goal, we attempt to enhance the semantic labeling of the charts by using the original data values that these charts were designed to represent. In this paper, we describe a method to extract data values from a specific class of charts, bar charts. The extraction process is fully automated using image processing and text recognition techniques combined with various heuristics derived from the graphical properties of bar charts. The extracted information can be used to enrich the indexing content for bar charts and improve search results. We evaluate the effectiveness of our method on bar charts drawn from the web as well as charts embedded in digital documents.",
"title": ""
},
{
"docid": "fada1434ec6e060eee9a2431688f82f3",
"text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.",
"title": ""
},
{
"docid": "4ddd48db66a5951b82d5b7c2d9b8345a",
"text": "In this paper we address the memory demands that come with the processing of 3-dimensional, high-resolution, multi-channeled medical images in deep learning. We exploit memory-efficient backpropagation techniques, to reduce the memory complexity of network training from being linear in the network’s depth, to being roughly constant – permitting us to elongate deep architectures with negligible memory increase. We evaluate our methodology in the paradigm of Image Quality Transfer, whilst noting its potential application to various tasks that use deep learning. We study the impact of depth on accuracy and show that deeper models have more predictive power, which may exploit larger training sets. We obtain substantially better results than the previous state-of-the-art model with a slight memory increase, reducing the rootmean-squared-error by 13%. Our code is publicly available.",
"title": ""
},
{
"docid": "d1ffeda7280999f6485f3fa747d14b11",
"text": "We show how two existing paradigms for two-party secure function evaluation (SFE) in the semi-honest model can be combined securely and efficiently – those based on additively homomorphic encryption (HE) with those based on garbled circuits (GC) and vice versa. Additionally, we propose new GC constructions for addition, subtraction, multiplication, and comparison functions. Our circuits are approximately two times smaller (in terms of garbled tables) than previous constructions. This implies corresponding computation and communication improvements in SFE of functions using the above building blocks (including the protocols for combining HE and GC). Based on these results, we present efficient constant-round protocols for secure integer comparison, and the related problems of minimum selection and minimum distance, which are crucial building blocks of many cryptographic schemes such as privacy preserving biometric authentication (e.g., face recognition, fingerprint matching, etc).",
"title": ""
},
{
"docid": "a25c24018499ae1da6d5ff50c2412fec",
"text": "In the rapid change of drug scenarios, as the powerful development in the drug market, particularly in the number and the kind of the compound available, Internet plays a dominant role to become one of the major \"drug market\". The European Commission funded the Psychonaut Web Mapping Project (carried out in the time-frame January 2008-December 2009), with the aim to start/implement an Early Warning System (through the data/information collected from the Web virtual market), to identify and categorise novel recreational drugs/psychoactive compounds (synthetical/herbal drugs), and new trends in drug use to provide information for immediate and prevention intervention. The Psychonaut is a multi-site research project involving 8 research centres (De Sleutel, Belgium; University of Hertfordshire School of Pharmacy, St George's University of London, England; A-klinikkasäätiö, Finlandia; Klinik für Psychiatrie und Psychotherapie, Germany; Assessorato Salute Regione Marche, Italy; Drug Abuse Unit, Spain; Centre of Competence Bergen Clinics Foundation, Norway) based in 7 European Countries (England, Italy, Belgium, Finland, Germany, Spain, Norway).",
"title": ""
},
{
"docid": "13ffc17fe344471e96ada190493354d8",
"text": "The role of inflammation in the pathogenesis of type 2 diabetes and associated complications is now well established. Several conditions that are driven by inflammatory processes are also associated with diabetes, including rheumatoid arthritis, gout, psoriasis and Crohn's disease, and various anti-inflammatory drugs have been approved or are in late stages of development for the treatment of these conditions. This Review discusses the rationale for the use of some of these anti-inflammatory treatments in patients with diabetes and what we could expect from their use. Future immunomodulatory treatments may not target a specific disease, but could instead act on a dysfunctional pathway that causes several conditions associated with the metabolic syndrome.",
"title": ""
},
{
"docid": "e303eddacfdce272b8e71dc30a507020",
"text": "As new media are becoming daily fare, Internet addiction appears as a potential problem in adolescents. From the reported negative consequences, it appears that Internet addiction can have a variety of detrimental outcomes for young people that may require professional intervention. Researchers have now identified a number of activities and personality traits associated with Internet addiction. This study aimed to synthesise previous findings by (i) assessing the prevalence of potential Internet addiction in a large sample of adolescents, and (ii) investigating the interactions between personality traits and the usage of particular Internet applications as risk factors for Internet addiction. A total of 3,105 adolescents in the Netherlands filled out a self-report questionnaire including the Compulsive Internet Use Scale and the Quick Big Five Scale. Results indicate that 3.7% of the sample were classified as potentially being addicted to the Internet. The use of online gaming and social applications (online social networking sites and Twitter) increased the risk for Internet addiction, whereas agreeableness and resourcefulness appeared as protective factors in high frequency online gamers. The findings support the inclusion of ‘Internet addiction’ in the DSM-V. Vulnerability and resilience appear as significant aspects that require consideration in",
"title": ""
}
] | scidocsrr |
f2360d10268383bcfbe6fea7f9cdb2bc | I would DiYSE for it!: a manifesto for do-it-yourself internet-of-things creation | [
{
"docid": "b06dcdb662a8d55219c9ae1c7e507987",
"text": "Most programs today are written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support. For example, a teacher might write a grading spreadsheet to save time grading, or an interaction designer might use an interface builder to test some user interface design ideas. Although these end-user programmers may not have the same goals as professional developers, they do face many of the same software engineering challenges, including understanding their requirements, as well as making decisions about design, reuse, integration, testing, and debugging. This article summarizes and classifies research on these activities, defining the area of End-User Software Engineering (EUSE) and related terminology. The article then discusses empirical research about end-user software engineering activities and the technologies designed to support them. The article also addresses several crosscutting issues in the design of EUSE tools, including the roles of risk, reward, and domain complexity, and self-efficacy in the design of EUSE tools and the potential of educating users about software engineering principles.",
"title": ""
},
{
"docid": "da2a8e74a56fbcc8c98a74eabaaec59b",
"text": "NodeBox is a free application for producing generative art. This paper gives an overview of the nature-inspired functionality in NodeBox and the artworks we created using it. We demonstrate how it can be used for evolutionary computation in the context of computer games and art, and discuss some of our recent research with the aim to simulate (artistic) brainstorming using language processing techniques and semantic networks.",
"title": ""
}
] | [
{
"docid": "dd911eff60469b32330c5627c288f19f",
"text": "Routing Algorithms are driving the growth of the data transmission in wireless sensor networks. Contextually, many algorithms considered the data gathering and data aggregation. This paper uses the scenario of clustering and its impact over the SPIN protocol and also finds out the effect over the energy consumption in SPIN after uses of clustering. The proposed scheme is implemented using TCL/C++ programming language and evaluated using Ns2.34 simulator and compare with LEACH. Simulation shows proposed protocol exhibits significant performance gains over the LEACH for lifetime of network and guaranteed data transmission.",
"title": ""
},
{
"docid": "1d4b1612f9e3d3205ced6ba07af21467",
"text": "A precision control system that enables a center pivot irrigation system (CP) to precisely supply water in optimal rates relative to the needs of individual areas within fields was developed through a collaboration between the Farmscan group (Perth, Western Australia) and the University of Georgia Precision Farming team at the National Environmentally Sound Production Agriculture Laboratory (NESPAL) in Tifton, GA. The control system, referred to as Variable-Rate Irrigation (VRI), varies application rate by cycling sprinklers on and off and by varying the CP travel speed. Desktop PC software is used to define application maps which are loaded into the VRI controller. The VRI system uses GPS to determine pivot position/angle of the CP mainline. Results from VRI system performance testing indicate good correlation between target and actual application rates and also shows that sprinkler cycling on/off does not alter the CP uniformity. By applying irrigation water in this precise manner, water application to the field is optimized. In many cases, substantial water savings can be realized.",
"title": ""
},
{
"docid": "97d1f0c14edeedd8348058b50fae653b",
"text": "A high-efficiency self-shielded microstrip-fed Yagi-Uda antenna has been developed for 60 GHz communications. The antenna is built on a Teflon substrate (εr = 2.2) with a thickness of 10 mils (0.254 mm). A 7-element design results in a measured S11 of <; -10 dB at 56.0 - 66.4 GHz with a gain >; 9.5 dBi at 58 - 63 GHz. The antenna shows excellent performance in free space and in the presence of metal-planes used for shielding purposes. A parametric study is done with metal plane heights from 2 mm to 11 mm, and the Yagi-Uda antenna results in a gain >; 12 dBi at 58 - 63 GHz for h = 5 - 8 mm. A 60 GHz four-element switched-beam Yagi-Uda array is also presented with top and bottom shielding planes, and allows for 180° angular coverage with <; 3 dB amplitude variations. This antenna is ideal for inclusion in complex platforms, such as laptops, for point-to-point communication systems, either as a single element or a switched-beam system.",
"title": ""
},
{
"docid": "77e6593b3078a5d8b23fcb282f90596b",
"text": "A graph database is a database where the data structures for the schema and/or instances are modeled as a (labeled)(directed) graph or generalizations of it, and where querying is expressed by graphoriented operations and type constructors. In this article we present the basic notions of graph databases, give an historical overview of its main development, and study the main current systems that implement them.",
"title": ""
},
{
"docid": "f9f92d3b2ea0a4bf769c63b7f1fc884a",
"text": "The current taxonomy of probiotic lactic acid bacteria is reviewed with special focus on the genera Lactobacillus, Bifidobacterium and Enterococcus. The physiology and taxonomic position of species and strains of these genera were investigated by phenotypic and genomic methods. In total, 176 strains, including the type strains, have been included. Phenotypic methods applied were based on biochemical, enzymatical and physiological characteristics, including growth temperatures, cell wall analysis and analysis of the total soluble cytoplasmatic proteins. Genomic methods used were pulsed field gel electrophoresis (PFGE), randomly amplified polymorphic DNA-PCR (RAPD-PCR) and DNA-DNA hybridization for bifidobacteria. In the genus Lactobacillus the following species of importance as probiotics were investigated: L. acidophilus group, L. casei group and L. reuteri/L. fermentum group. Most strains referred to as L. acidophilus in probiotic products could be identified either as L. gasseri or as L. johnsonii, both members of the L. acidophilus group. A similar situation could be shown in the L. casei group, where most of the strains named L. casei belonged to L. paracasei subspp. A recent proposal to reject the species L. paracasei and to include this species in the restored species L. casei with a neotype strain was supported by protein analysis. Bifidobacterium spp. strains have been reported to be used for production of fermented dairy and recently of probiotic products. According to phenotypic features and confirmed by DNA-DNA hybridization most of the bifidobacteria strains from dairy origin belonged to B. animalis, although they were often declared as B. longum by the manufacturer. From the genus Enterococcus, probiotic Ec. faecium strains were investigated with regard to the vanA-mediated resistance against glycopeptides. These unwanted resistances could be ruled out by analysis of the 39 kDa resistance protein. In conclusion, the taxonomy and physiology of probiotic lactic acid bacteria can only be understood by using polyphasic taxonomy combining morphological, biochemical and physiological characteristics with molecular-based phenotypic and genomic techniques.",
"title": ""
},
{
"docid": "ff619ce19b787d32aa78a6ac295d1f1d",
"text": "Mullerian duct anomalies (MDAs) are rare, affecting approximately 1% of all women and about 3% of women with poor reproductive outcomes. These congenital anomalies usually result from one of the following categories of abnormalities of the mullerian ducts: failure of formation (no development or underdevelopment) or failure of fusion of the mullerian ducts. The American Fertility Society (AFS) classification of uterine anomalies is widely accepted and includes seven distinct categories. MR imaging has consolidated its role as the imaging modality of choice in the evaluation of MDA. MRI is capable of demonstrating the anatomy of the female genital tract remarkably well and is able to provide detailed images of the intra-uterine zonal anatomy, delineate the external fundal contour of the uterus, and comprehensively image the entire female pelvis in multiple imaging planes in a single examination. The purpose of this pictorial essay is to show the value of MRI in the diagnosis of MDA and to review the key imaging features of anomalies of formation and fusion, emphasizing the relevance of accurate diagnosis before therapeutic intervention.",
"title": ""
},
{
"docid": "ffb03136c1f8d690be696f65f832ab11",
"text": "This paper aims to improve the feature learning in Convolutional Networks (Convnet) by capturing the structure of objects. A new sparsity function is imposed on the extracted featuremap to capture the structure and shape of the learned object, extracting interpretable features to improve the prediction performance. The proposed algorithm is based on organizing the activation within and across featuremap by constraining the node activities through `2 and `1 normalization in a structured form.",
"title": ""
},
{
"docid": "8234cb805d080a13fb9aeab9373f75c8",
"text": "Essentially a software system’s utility is determined by both its functionality and its non-functional characteristics, such as usability, flexibility, performance, interoperability and security. Nonetheless, there has been a lop-sided emphasis in the functionality of the software, even though the functionality is not useful or usable without the necessary non-functional characteristics. In this chapter, we review the state of the art on the treatment of non-functional requirements (hereafter, NFRs), while providing some prospects for future",
"title": ""
},
{
"docid": "c819096800cc1d758cd3bcf4949f2690",
"text": "Recent years have witnessed the trend of leveraging cloud-based services for large scale content storage, processing, and distribution. Security and privacy are among top concerns for the public cloud environments. Towards these security challenges, we propose and implement, on OpenStack Swift, a new client-side deduplication scheme for securely storing and sharing outsourced data via the public cloud. The originality of our proposal is twofold. First, it ensures better confidentiality towards unauthorized users. That is, every client computes a per data key to encrypt the data that he intends to store in the cloud. As such, the data access is managed by the data owner. Second, by integrating access rights in metadata file, an authorized user can decipher an encrypted file only with his private key.",
"title": ""
},
{
"docid": "a2cf369a67507d38ac1a645e84525497",
"text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.",
"title": ""
},
{
"docid": "ee55a72568868837e11da7fabca169fe",
"text": "Tying suture knots is a time-consuming task performed frequently during minimally invasive surgery (MIS). Automating this task could greatly reduce total surgery time for patients. Current solutions to this problem replay manually programmed trajectories, but a more general and robust approach is to use supervised machine learning to smooth surgeon-given training trajectories and generalize from them. Since knottying generally requires a controller with internal memory to distinguish between identical inputs that require different actions at different points along a trajectory, it would be impossible to teach the system using traditional feedforward neural nets or support vector machines. Instead we exploit more powerful, recurrent neural networks (RNNs) with adaptive internal states. Results obtained using LSTM RNNs trained by the recent Evolino algorithm show that this approach can significantly increase the efficiency of suture knot tying in MIS over preprogrammed control",
"title": ""
},
{
"docid": "23f0e46a17cd5833b4f8d344041314a3",
"text": "Fingerprint recognition and verification are often based on local fingerprint features, usually ridge endings or terminations, also called minutiae. By exploiting the structural uniqueness of the image region around a minutia, the fingerprint recognition performance can be significantly enhanced. However, for most fingerprint images the number of minutia image regions (MIR’s) becomes dramatically large, which imposes especially for embedded systems an enormous memory requirement. Therefore, we are investigating different algorithms for compression of minutia regions. The requirement for these algorithms is to achieve a high compression rate (about 20) with minimum loss in the matching performance of minutia image region matching. In this paper we investigate the matching performance for compression algorithms based on the Principal Component and the wavelet transformation. The matching results are presented in form of normalized ROC curves and interpreted in terms of compression rates and the MIR dimension.",
"title": ""
},
{
"docid": "c4b08fc102c5b28f865f3452286470c6",
"text": "Data compression is a method of improving the efficiency of transmission and storage of images. Dithering, as a method of data compression, can be used to convert an 8-bit gray level image into a 1-bit / binary image. Undithering is the process of reconstruction of gray image from binary image obtained from dithering of gray image. In the present paper, I propose a method of undithering using linear filtering followed by anisotropic diffusion which brings the advantage of smoothing and edge enhancement. First-order statistical parameters, second-order statistical parameters, mean-squared error (MSE) between reconstructed image and the original image before dithering, and peak signal to noise ratio (PSNR) are evaluated at each step of diffusion. Results of the experiments show that the reconstructed image is not as sharp as the image before dithering but a large number of gray values are reproduced with reference to those of the original image prior to dithering.",
"title": ""
},
{
"docid": "d903f684b2dacaec1d4b524aa8fe44c1",
"text": "The emerging die-stacked DRAM technology allows computer architects to design a last-level cache (LLC) with high memory bandwidth and large capacity. There are four key requirements for DRAM cache design: minimizing on-chip tag storage overhead, optimizing access latency, improving hit rate, and reducing off-chip traffic. These requirements seem mutually incompatible. For example, to reduce the tag storage overhead, the recent proposed LH-cache co-locates tags and data in the same DRAM cache row, and the Alloy Cache proposed to alloy data and tags in the same cache line in a direct-mapped design. However, these ideas either require significant tag lookup latency or sacrifice hit rate for hit latency. To optimize all four key requirements, we propose the Buffered Way Predictor (BWP). The BWP predicts the way ID of a DRAM cache request with high accuracy and coverage, allowing data and tag to be fetched back to back. Thus, the read latency for the data can be completely hidden so that DRAM cache hitting requests have low access latency. The BWP technique is designed for highly associative block-based DRAM caches and achieves a low miss rate and low off-chip traffic. Our evaluation with multi-programmed workloads and a 128MB DRAM cache shows that a 128KB BWP achieves a 76.2% hit rate. The BWP improves performance by 8.8% and 12.3% compared to LH-cache and Alloy Cache, respectively.",
"title": ""
},
{
"docid": "d7c44247c9ac5f686200b9eca9d8d4f0",
"text": "The computer game industry requires a skilled workforce and this combined with the complexity of modern games, means that production costs are extremely high. One of the most time consuming aspects is the creation of game geometry, the virtual world which the players inhabit. Procedural techniques have been used within computer graphics to create natural textures, simulate special effects and generate complex natural models including trees and waterfalls. It is these procedural techniques that we intend to harness to generate geometry and textures suitable for a game situated in an urban environment. Procedural techniques can provide many benefits for computer graphics applications when the correct algorithm is used. An overview of several commonly used procedural techniques including fractals, L-systems, Perlin noise, tiling systems and cellular basis is provided. The function of each technique and the resulting output they create are discussed to better understand their characteristics, benefits and relevance to the city generation problem. City generation is the creation of an urban area which necessitates the creation of buildings, situated along streets and arranged in appropriate patterns. Some research has already taken place into recreating road network patterns and generating buildings that can vary in function and architectural style. We will study the main body of existing research into procedural city generation and provide an overview of their implementations and a critique of their functionality and results. Finally we present areas in which further research into the generation of cities is required and outline our research goals for city generation.",
"title": ""
},
{
"docid": "2ff3eff32bc1a53527185d6d1a87b2d3",
"text": "This document presents a summary of the latest developments in satellite antennas at MDA. It covers high-performance multibeam antennas used in geostationary missions, low-cost dual-band gimballed Ka-band antennas used in non-geostationary constellations and reconfigurable antennas.",
"title": ""
},
{
"docid": "543a0cdc8101c6f253431c8a4d697be6",
"text": "While significant progress has been made in the image captioning task, video description is still comparatively in its infancy, due to the complex nature of video data. Generating multi-sentence descriptions for long videos is even more challenging. Among the main issues are the fluency and coherence of the generated descriptions, and their relevance to the video. Recently, reinforcement and adversarial learning based methods have been explored to improve the image captioning models; however, both types of methods suffer from a number of issues, e.g. poor readability and high redundancy for RL and stability issues for GANs. In this work, we instead propose to apply adversarial techniques during inference, designing a discriminator which encourages better multi-sentence video description. In addition, we find that a multi-discriminator “hybrid” design, where each discriminator targets one aspect of a description, leads to the best results. Specifically, we decouple the discriminator to evaluate on three criteria: 1) visual relevance to the video, 2) language diversity and fluency, and 3) coherence across sentences. Our approach results in more accurate, diverse and coherent multi-sentence video descriptions, as shown by automatic as well as human evaluation on the popular ActivityNet Captions dataset.",
"title": ""
},
{
"docid": "02e3f296f7c0c30cc8320abb7456bc9c",
"text": "Purpose – This research aims to examine the relationship between information security strategy and organization performance, with organizational capabilities as important factors influencing successful implementation of information security strategy and organization performance. Design/methodology/approach – Based on existing literature in strategic management and information security, a theoretical model was proposed and validated. A self-administered survey instrument was developed to collect empirical data. Structural equation modeling was used to test hypotheses and to fit the theoretical model. Findings – Evidence suggests that organizational capabilities, encompassing the ability to develop high-quality situational awareness of the current and future threat environment, the ability to possess appropriate means, and the ability to orchestrate the means to respond to information security threats, are positively associated with effective implementation of information security strategy, which in turn positively affects organization performance. However, there is no significant relationship between decision making and information security strategy implementation success. Research limitations/implications – The study provides a starting point for further research on the role of decision-making in information security. Practical implications – Findings are expected to yield practical value for business leaders in understanding the viable predisposition of organizational capabilities in the context of information security, thus enabling firms to focus on acquiring the ones indispensable for improving organization performance. Originality/value – This study provides the body of knowledge with an empirical analysis of organization’s information security capabilities as an aggregation of sense making, decision-making, asset availability, and operations management constructs.",
"title": ""
},
{
"docid": "1177ddef815db481082feb75afd79ec5",
"text": "This paper explores three main areas, firstly, website accessibility guidelines; secondly, website accessibility tools and finally the implication of human factors in the process of implementing successful e-Government websites. It investigates the issues that make a website accessible and explores the importance placed on web usability and accessibility with respect to e-Government websites. It briefly examines accessibility guidelines, evaluation methods and analysis tools. It then evaluates the web accessibility of e-Government websites of Saudi Arabia and Oman by adapting the ‘W3C Web Content Accessibility Guidelines’. Finally, it presents recommendations for improvement of e-Government website accessibility.",
"title": ""
},
{
"docid": "7aae72c04e4b1a230c47d1d481f9e34d",
"text": "We present an algorithm for automatic detection of a large number of anthropometric landmarks on 3D faces. Our approach does not use texture and is completely shape based in order to detect landmarks that are morphologically significant. The proposed algorithm evolves level set curves with adaptive geometric speed functions to automatically extract effective seed points for dense correspondence. Correspondences are established by minimizing the bending energy between patches around seed points of given faces to those of a reference face. Given its hierarchical structure, our algorithm is capable of establishing thousands of correspondences between a large number of faces. Finally, a morphable model based on the dense corresponding points is fitted to an unseen query face for transfer of correspondences and hence automatic detection of landmarks. The proposed algorithm can detect any number of pre-defined landmarks including subtle landmarks that are even difficult to detect manually. Extensive experimental comparison on two benchmark databases containing 6, 507 scans shows that our algorithm outperforms six state of the art algorithms.",
"title": ""
}
] | scidocsrr |
8273a14719375c386589fbfadf432e2a | An Analytical Study of Routing Attacks in Vehicular Ad-hoc Networks ( VANETs ) | [
{
"docid": "fd61461d5033bca2fd5a2be9bfc917b7",
"text": "Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.",
"title": ""
}
] | [
{
"docid": "c504800ce08654fb5bf49356d2f7fce3",
"text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.",
"title": ""
},
{
"docid": "d5509e4d4165872122609deddb440d40",
"text": "Model selection with cross validation (CV) is very popular in machine learning. However, CV with grid and other common search strategies cannot guarantee to find the model with minimum CV error, which is often the ultimate goal of model selection. Recently, various solution path algorithms have been proposed for several important learning algorithms including support vector classification, Lasso, and so on. However, they still do not guarantee to find the model with minimum CV error. In this paper, we first show that the solution paths produced by various algorithms have the property of piecewise linearity. Then, we prove that a large class of error (or loss) functions are piecewise constant, linear, or quadratic w.r.t. the regularization parameter, based on the solution path. Finally, we propose a new generalized error path algorithm (GEP), and prove that it will find the model with minimum CV error for the entire range of the regularization parameter. The experimental results on a variety of datasets not only confirm our theoretical findings, but also show that the best model with our GEP has better generalization error on the test data, compared to the grid search, manual search, and random search.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "b5f8f310f2f4ed083b20f42446d27feb",
"text": "This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct as well as empirical evidence (from real world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "e4cea8ba1de77c94b658c83b08d4c584",
"text": "Algorithms, IEEE.it can be combined with many IP address lookup algorithms for fast update. Surveys on address lookup algorithms were given in 5 11 9. Ruiz-Sanchez, E.W. Dabbous, Survey and Taxonomy of. IP.AbstractIP address lookup is a key bottleneck for. Lookup algorithm based on a new memory organization. Survey and taxonomy of IP address lookup.IP routing requires that a router perform a longest-prefix-match address lookup for each incoming datagram in order to determine the datagrams next. A very quick survey at the time of writing indicates that.",
"title": ""
},
{
"docid": "65a4709f62c084cdd07fe54d834b8eaf",
"text": "Although in the era of third generation (3G) mobile networks technical hurdles are minor, the continuing failure of mobile payments (m-payments) withstands the endorsement by customers and service providers. A major reason is the uncommonly high interdependency of technical, human and market factors which have to be regarded and orchestrated cohesively to solve the problem. In this paper, we apply Business Model Ontology in order to develop an m-payment business model framework based on the results of a precedent multi case study analysis of 27 m-payment procedures. The framework is depicted with a system of morphological boxes and the interrelations between the associated characteristics. Representing any m-payment business model along with its market setting and influencing decisions as instantiations, the resulting framework enables researchers and practitioners for comprehensive analysis of existing and future models and provides a helpful tool for m-payment business model engineering.",
"title": ""
},
{
"docid": "d9f0f36e75c08d2c3097e85d8c2dec36",
"text": "Social software solutions in enterprises such as IBM Connections are said to have the potential to support communication and collaboration among employees. However, companies are faced to manage the adoption of such collaborative tools and therefore need to raise the employees’ acceptance and motivation. To solve these problems, developers started to implement Gamification elements in social software tools, which aim to increase users’ motivation. In this research-in-progress paper, we give first insights and critically examine the current market of leading social software solutions to find out which Gamification approaches are implementated in such collaborative tools. Our findings show, that most of the major social collaboration solutions do not offer Gamification features by default, but leave the integration to a various number of third party plug-in vendors. Furthermore we identify a trend in which Gamification solutions majorly focus on rewarding quantitative improvement of work activities, neglecting qualitative performance. Subsequently, current solutions do not match recent findings in research and ignore risks that can lower the employees’ motivation and work performance in the long run.",
"title": ""
},
{
"docid": "9d089af812c0fdd245a218362d88b62a",
"text": "Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI - how should spectators experience a performer's interaction with a computer? We classify public interfaces (including examples from art, performance and exhibition design) according to the extent to which a performer's manipulations of an interface and their resulting effects are hidden, partially revealed, fully revealed or even amplified for spectators. Our taxonomy uncovers four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they tend to be revealed enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed as the spectator takes their turn.",
"title": ""
},
{
"docid": "100c152685655ad6865f740639dd7d57",
"text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"title": ""
},
{
"docid": "e577c2827822bfe2f1fc177efeeef732",
"text": "This paper presents a control problem involving an experimental propeller setup that is called the twin rotor multi-input multi-output system (TRMS). The control objective is to make the beam of the TRMS move quickly and accurately to the desired attitudes, both the pitch angle and the azimuth angle in the condition of decoupling between two axes. It is difficult to design a suitable controller because of the influence between the two axes and nonlinear movement. For easy demonstration in the vertical and horizontal separately, the TRMS is decoupled by the main rotor and tail rotor. An intelligent control scheme which utilizes a hybrid PID controller is implemented to this problem. Simulation results show that the new approach to the TRMS control problem can improve the tracking performance and reduce control energy.",
"title": ""
},
{
"docid": "f50c735147be5112bc3c81107002d99a",
"text": "Over the years, several spatio-temporal interest point detectors have been proposed. While some detectors can only extract a sparse set of scaleinvariant features, others allow for the detection of a larger amount of features at user-defined scales. This paper presents for the first time spatio-temporal interest points that are at the same time scale-invariant (both spatially and temporally) and densely cover the video content. Moreover, as opposed to earlier work, the features can be computed efficiently. Applying scale-space theory, we show that this can be achieved by using the determinant of the Hessian as the saliency measure. Computations are speeded-up further through the use of approximative box-filter operations on an integral video structure. A quantitative evaluation and experimental results on action recognition show the strengths of the proposed detector in terms of repeatability, accuracy and speed, in comparison with previously proposed detectors.",
"title": ""
},
{
"docid": "40aa8b356983686472b3d2871add4491",
"text": "Illegal logging is in these days widespread problem. In this paper we propose the system based on principles of WSN for monitoring the forest. Acoustic signal processing and evaluation system described in this paper is dealing with the detection of chainsaw sound with autocorrelation method. This work is describing first steps in building the integrated system.",
"title": ""
},
{
"docid": "c07c69bf5e2fce6f9944838ce80b5b8c",
"text": "Many image editing applications rely on the analysis of image patches. In this paper, we present a method to analyze patches by embedding them to a vector space, in which the Euclidean distance reflects patch similarity. Inspired by Word2Vec, we term our approach Patch2Vec. However, there is a significant difference between words and patches. Words have a fairly small and well defined dictionary. Image patches, on the other hand, have no such dictionary and the number of different patch types is not well defined. The problem is aggravated by the fact that each patch might contain several objects and textures. Moreover, Patch2Vec should be universal because it must be able to map never-seen-before texture to the vector space. The mapping is learned by analyzing the distribution of all natural patches. We use Convolutional Neural Networks (CNN) to learn Patch2Vec. In particular, we train a CNN on labeled images with a triplet-loss objective function. The trained network encodes a given patch to a 128D vector. Patch2Vec is evaluated visually, qualitatively, and quantitatively. We then use several variants of an interactive single-click image segmentation algorithm to demonstrate the power of our method.",
"title": ""
},
{
"docid": "43f9cd44dee709339fe5b11eb73b15b6",
"text": "Mutual interference of radar systems has been identified as one of the major challenges for future automotive radar systems. In this work the interference of frequency (FMCW) and phase modulated continuous wave (PMCW) systems is investigated by means of simulations. All twofold combinations of the aforementioned systems are considered. The interference scenario follows a typical use-case from the well-known MOre Safety for All by Radar Interference Mitigation (MOSARIM) study. The investigated radar systems operate with similar system parameters to guarantee a certain comparability, but with different waveform durations, and chirps with different slopes and different phase code sequences, respectively. Since the effects in perfect synchrony are well understood, we focus on the cases where both systems exhibit a certain asynchrony. It is shown that the energy received from interferers can cluster in certain Doppler bins in the range-Doppler plane when systems exhibit a slight asynchrony.",
"title": ""
},
{
"docid": "3176f0a4824b2dd11d612d55b4421881",
"text": "This article reviews some of the criticisms directed towards the eclectic paradigm of international production over the past decade, and restates its main tenets. The second part of the article considers a number of possible extensions of the paradigm and concludes by asserting that it remains \"a robust general framework for explaining and analysing not only the economic rationale of economic production but many organisational nd impact issues in relation to MNE activity as well.\"",
"title": ""
},
{
"docid": "811c430ff9efd0f8a61ff40753f083d4",
"text": "The Waikato Environment for Knowledge Analysis (Weka) is a comprehensive suite of Java class libraries that implement many state-of-the-art machine learning and data mining algorithms. Weka is freely available on the World-Wide Web and accompanies a new text on data mining [1] which documents and fully explains all the algorithms it contains. Applications written using the Weka class libraries can be run on any computer with a Web browsing capability; this allows users to apply machine learning techniques to their own data regardless of computer platform.",
"title": ""
},
{
"docid": "5859379f3c4c5a7186c9dc8c85e1e384",
"text": "Purpose – Investigate the use of two imaging-based methods – coded pattern projection and laser-based triangulation – to generate 3D models as input to a rapid prototyping pipeline. Design/methodology/approach – Discusses structured lighting technologies as suitable imaging-based methods. Two approaches, coded-pattern projection and laser-based triangulation, are specifically identified and discussed in detail. Two commercial systems are used to generate experimental results. These systems include the Genex Technologies 3D FaceCam and the Integrated Vision Products Ranger System. Findings – Presents 3D reconstructions of objects from each of the commercial systems. Research limitations/implications – Provides background in imaging-based methods for 3D data collection and model generation. A practical limitation is that imaging-based systems do not currently meet accuracy requirements, but continued improvements in imaging systems will minimize this limitation. Practical implications – Imaging-based approaches to 3D model generation offer potential to increase scanning time and reduce scanning complexity. Originality/value – Introduces imaging-based concepts to the rapid prototyping pipeline.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "60d8839833d10b905729e3d672cafdd6",
"text": "In order to account for the phenomenon of virtual pitch, various theories assume implicitly or explicitly that each spectral component introduces a series of subharmonics. The spectral-compression method for pitch determination can be viewed as a direct implementation of this principle. The widespread application of this principle in pitch determination is, however, impeded by numerical problems with respect to accuracy and computational efficiency. A modified algorithm is described that solves these problems. Its performance is tested for normal speech and \"telephone\" speech, i.e., speech high-pass filtered at 300 Hz. The algorithm out-performs the harmonic-sieve method for pitch determination, while its computational requirements are about the same. The algorithm is described in terms of nonlinear system theory, i.c., subharmonic summation. It is argued that the favorable performance of the subharmonic-summation algorithm stems from its corresponding more closely with current pitch-perception theories than does the harmonic sieve.",
"title": ""
}
] | scidocsrr |
2927f32f5913f7465b8d919564467387 | Ensuring rigour and trustworthiness of qualitative research in clinical pharmacy | [
{
"docid": "fd5f48aebc8fba354137dadb445846bc",
"text": "BACKGROUND\nThe syntheses of multiple qualitative studies can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation and evaluation of health interventions. This study aims to develop a framework for reporting the synthesis of qualitative health research.\n\n\nMETHODS\nWe conducted a comprehensive search for guidance and reviews relevant to the synthesis of qualitative research, methodology papers, and published syntheses of qualitative health research in MEDLINE, Embase, CINAHL and relevant organisational websites to May 2011. Initial items were generated inductively from guides to synthesizing qualitative health research. The preliminary checklist was piloted against forty published syntheses of qualitative research, purposively selected to capture a range of year of publication, methods and methodologies, and health topics. We removed items that were duplicated, impractical to assess, and rephrased items for clarity.\n\n\nRESULTS\nThe Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement consists of 21 items grouped into five main domains: introduction, methods and methodology, literature search and selection, appraisal, and synthesis of findings.\n\n\nCONCLUSIONS\nThe ENTREQ statement can help researchers to report the stages most commonly associated with the synthesis of qualitative health research: searching and selecting qualitative research, quality appraisal, and methods for synthesising qualitative findings. The synthesis of qualitative research is an expanding and evolving methodological area and we would value feedback from all stakeholders for the continued development and extension of the ENTREQ statement.",
"title": ""
}
] | [
{
"docid": "39007b91989c42880ff96e7c5bdcf519",
"text": "Feature selection has aroused considerable research interests during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed as the joint embedding learning and sparse regression (JELSR), in which the embedding learning and sparse regression are jointly performed. Specifically, the proposed JELSR joins embedding learning with sparse regression to perform feature selection. To show the effectiveness of the proposed framework, we also provide a method using the weight via local linear approximation and adding the ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we also conduct some insightful discussion on the proposed feature selection approach, including the convergence analysis, computational complexity, and parameter determination. In all, the proposed framework not only provides a new perspective to view traditional methods but also evokes some other deep researches for feature selection. Compared with traditional unsupervised feature selection methods, our approach could integrate the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice data and biological data, have validated the effectiveness of our proposed algorithm.",
"title": ""
},
{
"docid": "9e05a37d781d8a3ee0ecca27510f1ae9",
"text": "Context: Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far was on mapping and aggregating evidence through systematic reviews. Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization to improve its automotive testing process. With this we contribute in (1) providing experiences on using evidence based processes to analyze a real world automotive test process; and (2) provide evidence of challenges and related solutions for automotive software testing processes. Methods: In this study we perform an in-depth investigation of an automotive test process using an extended EBSE process including case study research (gain an understanding of practical questions to define a research scope), systematic literature review (identify solutions through systematic literature), and value stream mapping (map out an improved automotive test process based on the current situation and improvement suggestions identified). These are followed by reflections on the EBSE process used. Results: In the first step of the EBSE process we identified 10 challenge areas with a total of 26 individual challenges. For 15 out of those 26 challenges our domain specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process. Conclusions: Overall, we found that the evidence-based process as presented in this study helps in technology transfer of research results to industry, but at the same time some challenges lie ahead (e.g. scoping systematic reviews to focus more on concrete industry problems, and understanding strategies of conducting EBSE with respect to effort and quality of the evidence).",
"title": ""
},
{
"docid": "d974b1ffafd9ad738303514f28a770b9",
"text": "We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.",
"title": ""
},
{
"docid": "f8bebcf8d9b544c82af547865672b06a",
"text": "An instance with a bad mask might make a composite image that uses it look fake. This encourages us to learn segmentation by generating realistic composite images. To achieve this, we propose a novel framework that exploits a new proposed prior called the independence prior based on Generative Adversarial Networks (GANs). The generator produces an image with multiple category-specific instance providers, a layout module and a composition module. Firstly, each provider independently outputs a category-specific instance image with a soft mask. Then the provided instances’ poses are corrected by the layout module. Lastly, the composition module combines these instances into a final image. Training with adversarial loss and penalty for mask area, each provider learns a mask that is as small as possible but enough to cover a complete category-specific instance. Weakly supervised semantic segmentation methods widely use grouping cues modeling the association between image parts, which are either artificially designed or learned with costly segmentation labels or only modeled on local pairs. Unlike them, our method automatically models the dependence between any parts and learns instance segmentation. We apply our framework in two cases: (1) Foreground segmentation on category-specific images with box-level annotation. (2) Unsupervised learning of instance appearances and masks with only one image of homogeneous object cluster (HOC). We get appealing results in both tasks, which shows the independence prior is useful for instance segmentation and it is possible to unsupervisedly learn instance masks with only one image.",
"title": ""
},
{
"docid": "070a1de608a35cddb69b84d5f081e94d",
"text": "Identifying potentially vulnerable locations in a code base is critical as a pre-step for effective vulnerability assessment; i.e., it can greatly help security experts put their time and effort to where it is needed most. Metric-based and pattern-based methods have been presented for identifying vulnerable code. The former relies on machine learning and cannot work well due to the severe imbalance between non-vulnerable and vulnerable code or lack of features to characterize vulnerabilities. The latter needs the prior knowledge of known vulnerabilities and can only identify similar but not new types of vulnerabilities. In this paper, we propose and implement a generic, lightweight and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge about known vulnerabilities. It has two steps by combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects have demonstrated that, LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable and outperform machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications like PHP, radare2 and FFmpeg, and eight of them are new vulnerabilities.",
"title": ""
},
{
"docid": "d48ea163dd0cd5d80ba95beecee5102d",
"text": "Foodborne pathogens (FBP) represent an important threat to the consumers' health as they are able to cause different foodborne diseases. In order to eliminate the potential risk of those pathogens, lactic acid bacteria (LAB) have received a great attention in the food biotechnology sector since they play an essential function to prevent bacterial growth and reduce the biogenic amines (BAs) formation. The foodborne illnesses (diarrhea, vomiting, and abdominal pain, etc.) caused by those microbial pathogens is due to various reasons, one of them is related to the decarboxylation of available amino acids that lead to BAs production. The formation of BAs by pathogens in foods can cause the deterioration of their nutritional and sensory qualities. BAs formation can also have toxicological impacts and lead to different types of intoxications. The growth of FBP and their BAs production should be monitored and prevented to avoid such problems. LAB is capable of improving food safety by preventing foods spoilage and extending their shelf-life. LAB are utilized by the food industries to produce fermented products with their antibacterial effects as bio-preservative agents to extent their storage period and preserve their nutritive and gustative characteristics. Besides their contribution to the flavor for fermented foods, LAB secretes various antimicrobial substances including organic acids, hydrogen peroxide, and bacteriocins. Consequently, in this paper, the impact of LAB on the growth of FBP and their BAs formation in food has been reviewed extensively.",
"title": ""
},
{
"docid": "9ad040dc3a1bcd498436772768903525",
"text": "Memory B and plasma cells (PCs) are generated in the germinal center (GC). Because follicular helper T cells (TFH cells) have high expression of the immunoinhibitory receptor PD-1, we investigated the role of PD-1 signaling in the humoral response. We found that the PD-1 ligands PD-L1 and PD-L2 were upregulated on GC B cells. Mice deficient in PD-L2 (Pdcd1lg2−/−), PD-L1 and PD-L2 (Cd274−/−Pdcd1lg2−/−) or PD-1 (Pdcd1−/−) had fewer long-lived PCs. The mechanism involved more GC cell death and less TFH cell cytokine production in the absence of PD-1; the effect was selective, as remaining PCs had greater affinity for antigen. PD-1 expression on T cells and PD-L2 expression on B cells controlled TFH cell and PC numbers. Thus, PD-1 regulates selection and survival in the GC, affecting the quantity and quality of long-lived PCs.",
"title": ""
},
{
"docid": "0f4ac688367d3ea43643472b7d75ffc9",
"text": "Many non-photorealistic rendering techniques exist to produce artistic ef fe ts from given images. Inspired by various artists, interesting effects can be produced b y using a minimal rendering, where the minimum refers to the number of tones as well as the nu mber and complexity of the primitives used for rendering. Our method is based on va rious computer vision techniques, and uses a combination of refined lines and blocks (po tentially simplified), as well as a small number of tones, to produce abstracted artistic re ndering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawing s, and use semantic information to improve renderings for faces. By changing some intuitive par ameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.",
"title": ""
},
{
"docid": "754fb355da63d024e3464b4656ea5e8d",
"text": "Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device.",
"title": ""
},
{
"docid": "67989a9fe9d56e27eb42ca867a919a7d",
"text": "Data remanence is the residual physical representation of data that has been erased or overwritten. In non-volatile programmable devices, such as UV EPROM, EEPROM or Flash, bits are stored as charge in the floating gate of a transistor. After each erase operation, some of this charge remains. Security protection in microcontrollers and smartcards with EEPROM/Flash memories is based on the assumption that information from the memory disappears completely after erasing. While microcontroller manufacturers successfully hardened already their designs against a range of attacks, they still have a common problem with data remanence in floating-gate transistors. Even after an erase operation, the transistor does not return fully to its initial state, thereby allowing the attacker to distinguish between previously programmed and not programmed transistors, and thus restore information from erased memory. The research in this direction is summarised here and it is shown how much information can be extracted from some microcontrollers after their memory has been ‘erased’.",
"title": ""
},
{
"docid": "00e13bca1066e54907394b75cb40d0c0",
"text": "This paper explores educational uses of virtual learning environment (VLE) concerned with issues of learning, training and entertainment. We analyze the state-of-art research of VLE based on virtual reality and augmented reality. Some examples for the purpose of education and simulation are described. These applications show that VLE can be means of enhancing, motivating and stimulating learners’ understanding of certain events, especially those for which the traditional notion of instructional learning have proven inappropriate or difficult. Furthermore, the users can learn in a quick and happy mode by playing in the virtual environments. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "08c9a049c5f22c7d83a2e6d29982a3cc",
"text": "Unsupervised image segmentation is an important component in many image understanding algorithms and practical vision systems. However, evaluation of segmentation algorithms thus far has been largely subjective, leaving a system designer to judge the effectiveness of a technique based only on intuition and results in the form of a few example segmented images. This is largely due to image segmentation being an ill-defined problem-there is no unique ground-truth segmentation of an image against which the output of an algorithm may be compared. This paper demonstrates how a recently proposed measure of similarity, the normalized probabilistic rand (NPR) index, can be used to perform a quantitative comparison between image segmentation algorithms using a hand-labeled set of ground-truth segmentations. We show that the measure allows principled comparisons between segmentations created by different algorithms, as well as segmentations on different images. We outline a procedure for algorithm evaluation through an example evaluation of some familiar algorithms - the mean-shift-based algorithm, an efficient graph-based segmentation algorithm, a hybrid algorithm that combines the strengths of both methods, and expectation maximization. Results are presented on the 300 images in the publicly available Berkeley segmentation data set",
"title": ""
},
{
"docid": "3bfb0d2304880065227c4563c6646ce1",
"text": "We propose an automatic video inpainting algorithm which relies on the optimisation of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Furthermore, we achieve this in an order of magnitude less execution time with respect to the state-of-the-art. We are also able to achieve good quality results on high definition videos. Finally, we provide specific algorithmic details to make implementation of our algorithm as easy as possible. The resulting algorithm requires no segmentation or manual input other than the definition of the inpainting mask, and can deal with a wider variety of situations than is handled by previous work.",
"title": ""
},
{
"docid": "6d44c4244064634deda30a5059acd87e",
"text": "Currently, gene sequence genealogies of the Oligotrichea Bütschli, 1889 comprise only few species. Therefore, a cladistic approach, especially to the Oligotrichida, was made, applying Hennig's method and computer programs. Twenty-three characters were selected and discussed, i.e., the morphology of the oral apparatus (five characters), the somatic ciliature (eight characters), special organelles (four characters), and ontogenetic particulars (six characters). Nine of these characters developed convergently twice. Although several new features were included into the analyses, the cladograms match other morphological trees in the monophyly of the Oligotrichea, Halteriia, Oligotrichia, Oligotrichida, and Choreotrichida. The main synapomorphies of the Oligotrichea are the enantiotropic division mode and the de novo-origin of the undulating membranes. Although the sister group relationship of the Halteriia and the Oligotrichia contradicts results obtained by gene sequence analyses, no morphologic, ontogenetic or ultrastructural features were found, which support a branching of Halteria grandinella within the Stichotrichida. The cladistic approaches suggest paraphyly of the family Strombidiidae probably due to the scarce knowledge. A revised classification of the Oligotrichea is suggested, including all sufficiently known families and genera.",
"title": ""
},
{
"docid": "4c30af9dd05b773ce881a312bcad9cb9",
"text": "This review summarized various chemical recycling methods for PVC, such as pyrolysis, catalytic dechlorination and hydrothermal treatment, with a view to solving the problem of energy crisis and the impact of environmental degradation of PVC. Emphasis was paid on the recent progress on the pyrolysis of PVC, including co-pyrolysis of PVC with biomass/coal and other plastics, catalytic dechlorination of raw PVC or Cl-containing oil and hydrothermal treatment using subcritical and supercritical water. Understanding the advantage and disadvantage of these treatment methods can be beneficial for treating PVC properly. The dehydrochlorination of PVC mainly happed at low temperature of 250-320°C. The process of PVC dehydrochlorination can catalyze and accelerate the biomass pyrolysis. The intermediates from dehydrochlorination stage of PVC can increase char yield of co-pyrolysis of PVC with PP/PE/PS. For the catalytic degradation and dechlorination of PVC, metal oxides catalysts mainly acted as adsorbents for the evolved HCl or as inhibitors of HCl formation depending on their basicity, while zeolites and noble metal catalysts can produce lighter oil, depending the total number of acid sites and the number of accessible acidic sites. For hydrothermal treatment, PVC decomposed through three stages. In the first region (T<250°C), PVC went through dehydrochlorination to form polyene; in the second region (250°C<T<350°C), polyene decomposed to low-molecular weight compounds; in the third region (350°C<T), polyene further decomposed into a large amount of low-molecular weight compounds.",
"title": ""
},
{
"docid": "0d8cb05f7ba3840e558247b4ee70dff6",
"text": "Even though information visualization (InfoVis) research has matured in recent years, it is generally acknowledged that the field still lacks supporting, encompassing theories. In this paper, we argue that the distributed cognition framework can be used to substantiate the theoretical foundation of InfoVis. We highlight fundamental assumptions and theoretical constructs of the distributed cognition approach, based on the cognitive science literature and a real life scenario. We then discuss how the distributed cognition framework can have an impact on the research directions and methodologies we take as InfoVis researchers. Our contributions are as follows. First, we highlight the view that cognition is more an emergent property of interaction than a property of the human mind. Second, we argue that a reductionist approach to study the abstract properties of isolated human minds may not be useful in informing InfoVis design. Finally we propose to make cognition an explicit research agenda, and discuss the implications on how we perform evaluation and theory building.",
"title": ""
},
{
"docid": "0ce92d47fdde4c7a12d34bffc30e3e62",
"text": "Financial technology (Fintech) service has recently become the focus of considerable attention. Although many researchers and practitioners believe that Fintech can reshape the future of the financial services industry, others are skeptical about the adoption of Fintech because of the considerable risks involved. Therefore, we need to better understand why users are willing or hesitant to adopt Fintech, wherein, positive and negative factors affect their adoption decision. Based on the net valence framework theoretically embedded in theory of reasoned action, we propose a benefit-risk framework which integrates positive and negative factors associated with its adoption. Based on the empirical data collected from 244 Fintech users, this study initially investigates whether perceived benefit and risk significantly impact Fintech adoption intention. We then examine whether the effect of perceived benefit and risk on Fintech adoption intention differs depending on the user types. Results show that legal risk has the biggest negative effect, whereas convenience has the strongest positive effect on Fintech adoption intention. The differences between early adopters and late adopters are driven by different factors.",
"title": ""
},
{
"docid": "dce1e76671789752cf5e6914e2acbf47",
"text": "Powered exoskeletons can facilitate rehabilitation of patients with upper limb disabilities. Designs using rotary motors usually result in bulky exoskeletons to reduce the problem of moving inertia. This paper presents a new linearly actuated elbow exoskeleton that consists of a slider crank mechanism and a linear motor. The linear motor is placed beside the upper arm and closer to shoulder joint. Thus better inertia properties can be achieved while lightweight and compactness are maintained. A passive joint is introduced to compensate for the exoskeleton-elbow misalignment and intersubject size variation. A linear series elastic actuator (SEA) is proposed to obtain accurate force and impedance control at the exoskeleton-elbow interface. Bidirectional actuation between exoskeleton and forearm is verified, which is required for various rehabilitation processes. We expect this exoskeleton can provide a means of robot-aided elbow rehabilitation.",
"title": ""
},
{
"docid": "1f278ddc0d643196ff584c7ea82dc89b",
"text": "We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in REd, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an error bound ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, and the latter yields constant query time. A space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε(d-1)/2) to O(1/ε(d-1)/4). Our approach is based on a very simple algorithm. Both lower bounds and upper bounds on the performance of the algorithm are presented.\n To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. We show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods.",
"title": ""
}
] | scidocsrr |
2194ef1ab674e0f341aade34f6073ca0 | Mobile cloud computing: A survey | [
{
"docid": "ca4d2862ba75bfc35d8e9ada294192e1",
"text": "This paper provides a model that realistically represents the movements in a disaster area scenario. The model is based on an analysis of tactical issues of civil protection. This analysis provides characteristics influencing network performance in public safety communication networks like heterogeneous area-based movement, obstacles, and joining/leaving of nodes. As these characteristics cannot be modelled with existing mobility models, we introduce a new disaster area mobility model. To examine the impact of our more realistic modelling, we compare it to existing ones (modelling the same scenario) using different pure movement and link based metrics. The new model shows specific characteristics like heterogeneous node density. Finally, the impact of the new model is evaluated in an exemplary simulative network performance analysis. The simulations show that the new model discloses new information and has a significant impact on performance analysis.",
"title": ""
}
] | [
{
"docid": "1f8a386867ba1157655eda86a80f4555",
"text": "Body weight, length, and vocal tract length were measured for 23 rhesus macaques (Macaca mulatta) of various sizes using radiographs and computer graphic techniques. linear predictive coding analysis of tape-recorded threat vocalizations were used to determine vocal tract resonance frequencies (\"formants\") for the same animals. A new acoustic variable is proposed, \"formant dispersion,\" which should theoretically depend upon vocal tract length. Formant dispersion is the averaged difference between successive formant frequencies, and was found to be closely tied to both vocal tract length and body size. Despite the common claim that voice fundamental frequency (F0) provides an acoustic indication of body size, repeated investigations have failed to support such a relationship in many vertebrate species including humans. Formant dispersion, unlike voice pitch, is proposed to be a reliable predictor of body size in macaques, and probably many other species.",
"title": ""
},
{
"docid": "f9b11e55be907175d969cd7e76803caf",
"text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.",
"title": ""
},
{
"docid": "b4dd6c9634e86845795bcbe32216ee44",
"text": "Several program analysis tools - such as plagiarism detection and bug finding - rely on knowing a piece of code's relative semantic importance. For example, a plagiarism detector should not bother reporting two programs that have an identical simple loop counter test, but should report programs that share more distinctive code. Traditional program analysis techniques (e.g., finding data and control dependencies) are useful, but do not say how surprising or common a line of code is. Natural language processing researchers have encountered a similar problem and addressed it using an n-gram model of text frequency, derived from statistics computed over text corpora.\n We propose and compute an n-gram model for programming languages, computed over a corpus of 2.8 million JavaScript programs we downloaded from the Web. In contrast to previous techniques, we describe a code n-gram as a subgraph of the program dependence graph that contains all nodes and edges reachable in n steps from the statement. We can count n-grams in a program and count the frequency of n-grams in the corpus, enabling us to compute tf-idf-style measures that capture the differing importance of different lines of code. We demonstrate the power of this approach by implementing a plagiarism detector with accuracy that beats previous techniques, and a bug-finding tool that discovered over a dozen previously unknown bugs in a collection of real deployed programs.",
"title": ""
},
{
"docid": "f28a91e0cdb4c3528a6d04cf549358b4",
"text": "This paper presents an algorithm for calibrating erroneous tri-axis magnetometers in the magnetic field domain. Unlike existing algorithms, no simplification is made on the nature of errors to ease the estimation. A complete error model, including instrumentation errors (scale factors, nonorthogonality, and offsets) and magnetic deviations (soft and hard iron) on the host platform, is elaborated. An adaptive least squares estimator provides a consistent solution to the ellipsoid fitting problem and the magnetometer’s calibration parameters are derived. The calibration is experimentally assessed with two artificial magnetic perturbations introduced close to the sensor on the host platform and without additional perturbation. In all configurations, the algorithm successfully converges to a good estimate of the said errors. Comparing the magnetically derived headings with a GNSS/INS reference, the results show a major improvement in terms of heading accuracy after the calibration.",
"title": ""
},
{
"docid": "50dc3186ad603ef09be8cca350ff4d77",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "78ca8024a825fc8d5539b899ad34fc18",
"text": "In this paper, we examine whether managers use optimistic and pessimistic language in earnings press releases to provide information about expected future firm performance to the market, and whether the market responds to optimistic and pessimistic language usage in earnings press releases after controlling for the earnings surprise and other factors likely to influence the market’s response to the earnings announcement. We use textual-analysis software to measure levels of optimistic and pessimistic language for a sample of approximately 24,000 earnings press releases issued between 1998 and 2003. We find a positive (negative) association between optimistic (pessimistic) language usage and future firm performance and a significant incremental market response to optimistic and pessimistic language usage in earnings press releases. Results suggest managers use optimistic and pessimistic language to provide credible information about expected future firm performance to the market, and that the market responds to managers’ language usage.",
"title": ""
},
{
"docid": "3fc74e621d0e485e1e706367d30e0bad",
"text": "Many commercial navigation aids suffer from a number of design flaws, the most important of which are related to the human interface that conveys information to the user. Aids for the visually impaired are lightweight electronic devices that are either incorporated into a long cane, hand-held or worn by the client, warning of hazards ahead. Most aids use vibrating buttons or sound alerts to warn of upcoming obstacles, a method which is only capable of conveying very crude information regarding direction and proximity to the nearest object. Some of the more sophisticated devices use a complex audio interface in order to deliver more detailed information, but this often compromises the user's hearing, a critical impairment for a blind user. The author has produced an original design and working prototype solution which is a major first step in addressing some of these faults found in current production models for the blind.",
"title": ""
},
{
"docid": "cbfdea54abb1e4c1234ca44ca6913220",
"text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.",
"title": ""
},
{
"docid": "f560be243747927a7d6873ca0f87d9c6",
"text": "Hydrophobic interaction chromatography-high performance liquid chromatography (HIC-HPLC) is a powerful analytical method used for the separation of molecular variants of therapeutic proteins. The method has been employed for monitoring various post-translational modifications, including proteolytic fragments and domain misfolding in etanercept (Enbrel®); tryptophan oxidation, aspartic acid isomerization, the formation of cyclic imide, and α amidated carboxy terminus in recombinant therapeutic monoclonal antibodies; and carboxy terminal heterogeneity and serine fucosylation in Fc and Fab fragments. HIC-HPLC is also a powerful analytical technique for the analysis of antibody-drug conjugates. Most current analytical columns, methods, and applications are described, and critical method parameters and suitability for operation in regulated environment are discussed, in this review.",
"title": ""
},
{
"docid": "3b45e971fd172b01045d8e5241514b37",
"text": "Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple imperative programming language. Based on techniques from knowledge-based neural networks, we insert these programs directly into the agent‘s utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that investigates several aspects of our approach and shows that, given good advice, a learner can achieve statistically significant gains in expected reward. A second experiment shows that advice improves the expected reward regardless of the stage of training at which it is given, while another study demonstrates that subsequent advice can result in further gains in reward. Finally, we present experimental results that indicate our method is more powerful than a naive technique for making use of advice.",
"title": ""
},
{
"docid": "81e1d86f37d88bfdc39602d2e04dfa20",
"text": "The working memory framework was used to investigate the factors determining the phenomenological vividness of images. Participants rated the vividness of visual or auditory images under control conditions or while performing tasks that differentially disrupted the visuospatial sketchpad and phonological loop subsystems of working memory. In Experiments 1, 2, and 6, participants imaged recently presented novel visual patterns and sequences of tones; ratings of vividness showed the predicted interaction between stimulus modality and concurrent task. The images in experiments 3, 4, 5, and 6 were based on long-term memory (LTM). They also showed an image modality by task interaction, with a clear effect of LTM variables (meaningfulness, activity, bizarreness, and stimulus familiarity), implicating both working memory and LTM in the experience of vividness.",
"title": ""
},
{
"docid": "e099186ceed71e03276ab168ecf79de7",
"text": "Twelve patients with deafferentation pain secondary to central nervous system lesions were subjected to chronic motor cortex stimulation. The motor cortex was mapped as carefully as possible and the electrode was placed in the region where muscle twitch of painful area can be observed with the lowest threshold. 5 of the 12 patients reported complete absence of previous pain with intermittent stimulation at 1 year following the initiation of this therapy. Improvements in hemiparesis was also observed in most of these patients. The pain of these patients was typically barbiturate-sensitive and morphine-resistant. Another 3 patients had some degree of residual pain but considerable reduction of pain was still obtained by stimulation. Thus, 8 of the 12 patients (67%) had continued effect of this therapy after 1 year. In 3 patients, revisions of the electrode placement were needed because stimulation became incapable of inducing muscle twitch even with higher stimulation intensity. The effect of stimulation on pain and capability of producing muscle twitch disappeared simultaneously in these cases and the effect reappeared after the revisions, indicating that appropriate stimulation of the motor cortex is definitely necessary for obtaining satisfactory pain control in these patients. None of the patients subjected to this therapy developed neither observable nor electroencephalographic seizure activity.",
"title": ""
},
{
"docid": "75fd1706bb96a1888dc9939dbe5359c2",
"text": "In this paper, we present a novel approach to ide ntify feature specific expressions of opinion in product reviews with different features and mixed emotions . The objective is realized by identifying a set of potential features in the review and extract ing opinion expressions about those features by exploiting their associatio ns. Capitalizing on the view that more closely associated words come togeth er to express an opinion about a certain feature, dependency parsing i used to identify relations between the opinion expressions. The syst em learns the set of significant relations to be used by dependency parsing and a threshold parameter which allows us to merge closely associated opinio n expressions. The data requirement is minimal as thi is a one time learning of the domain independent parameters . The associations are represented in the form of a graph which is partiti oned to finally retrieve the opinion expression describing the user specified feature. We show that the system achieves a high accuracy across all domains and performs at par with state-of-the-art systems despi t its data limitations.",
"title": ""
},
{
"docid": "d87e9a6c62c100142523baddc499320c",
"text": "Intelligent behaviour in the real-world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge. We propose a novel algorithm for unsupervised representation learning from piece-wise stationary visual data: Variational Autoencoder with Shared Embeddings (VASE). Based on the Minimum Description Length principle, VASE automatically detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while simultaneously protecting previously learnt representations from catastrophic forgetting. Our approach encourages the learnt representations to be disentangled, which imparts a number of desirable properties: VASE can deal sensibly with ambiguous inputs, it can enhance its own representations through imagination-based exploration, and most importantly, it exhibits semantically meaningful sharing of latents between different datasets. Compared to baselines with entangled representations, our approach is able to reason beyond surface-level statistics and perform semantically meaningful cross-domain inference.",
"title": ""
},
{
"docid": "69b1c87a06b1d83fd00d9764cdadc2e9",
"text": "Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental",
"title": ""
},
{
"docid": "f44e5926c2aa6ff311cb2505e856217a",
"text": "This paper investigates the possibility of implementing node positioning in the ZigBee wireless sensor network by using a readily available Received Signal Strength Indicator (RSSI) infrastructure provided by the physical layer of 802.15.4 networks. In this study the RSSI is converted to the distance providing the basis for using the trilateration methods for location estimation. The software written in C# is used to solve the trilateration problem and the final results of trilateration methods are mapped using Google maps. Providing node positioning capability to the ZigBee network offers an enormous benefit to the Wireless Sensor Networks applications, possibly extending the functionality of existing software solution to include node tracking and monitoring without an additional hardware investment.",
"title": ""
},
{
"docid": "32d79366936e301c44ae4ac11784e9d8",
"text": "A vast literature describes transformational leadership in terms of leader having charismatic and inspiring personality, stimulating followers, and providing them with individualized consideration. A considerable empirical support exists for transformation leadership in terms of its positive effect on followers with respect to criteria like effectiveness, extra role behaviour and organizational learning. This study aims to explore the effect of transformational leadership characteristics on followers’ job satisfaction. Survey method was utilized to collect the data from the respondents. The study reveals that individualized consideration and intellectual stimulation affect followers’ job satisfaction. However, intellectual stimulation is positively related with job satisfaction and individualized consideration is negatively related with job satisfaction. Leader’s charisma or inspiration was found to be having no affect on the job satisfaction. The three aspects of transformational leadership were tested against job satisfaction through structural equation modeling using Amos.",
"title": ""
},
{
"docid": "c1389acb62cca5cb3cfdec34bd647835",
"text": "A Chinese resume information extraction system (CRIES) based on semi-structured text is designed and implemented to obtain formatted information by extracting text content of every field from resumes in different formats and update information automatically based on the web. Firstly, ideas to classify resumes, some constraints obtained by analyzing resume features and overall extraction strategy is introduced. Then two extraction algorithms for parsing resumes in different text formats are given. Consequently, the system was implemented by java programming. Finally, use the system to resolve the resume samples, and the statistical analysis and system optimization analysis are carried out according to the accuracy rate and recall rate of the extracted results.",
"title": ""
},
{
"docid": "18f530c400498658d73aba21f0ce984e",
"text": "Anomaly and event detection has been studied widely for having many applications in fraud detection, network intrusion detection, detection of epidemic outbreaks, and so on. In this paper we propose an algorithm that operates on a time-varying network of agents with edges representing interactions between them and (1) spots \"anomalous\" points in time at which many agents \"change\" their behavior in a way it deviates from the norm; and (2) attributes the detected anomaly to those agents that contribute to the \"change\" the most. Experiments on a large mobile phone network (of 2 million anonymous customers with 50 million interactions over a period of 6 months) shows that the \"change\"-points detected by our algorithm coincide with the social events and the festivals in our data.",
"title": ""
},
{
"docid": "536e45f7130aa40625e3119523d2e1de",
"text": "We consider the problem of Simultaneous Localization and Mapping (SLAM) from a Bayesian point of view using the Rao-Blackwellised Particle Filter (RBPF). We focus on the class of indoor mobile robots equipped with only a stereo vision sensor. Our goal is to construct dense metric maps of natural 3D point landmarks for large cyclic environments in the absence of accurate landmark position measurements and reliable motion estimates. Landmark estimates are derived from stereo vision and motion estimates are based on visual odometry. We distinguish between landmarks using the Scale Invariant Feature Transform (SIFT). Our work defers from current popular approaches that rely on reliable motion models derived from odometric hardware and accurate landmark measurements obtained with laser sensors. We present results that show that our model is a successful approach for vision-based SLAM, even in large environments. We validate our approach experimentally, producing the largest and most accurate vision-based map to date, while we identify the areas where future research should focus in order to further increase its accuracy and scalability to significantly larger",
"title": ""
}
] | scidocsrr |
13deb0844c2cca96fbbec58be700238a | A non-cooperative differential game-based security model in fog computing | [
{
"docid": "2bf42d8ceff931531ca7a04d82576101",
"text": "Fog computing is new buzz word in computing world after cloud computing. This new computing paradigm could be seen as an extension to cloud computing. Main aim of fog computing is to reduce the burden on cloud by gathering workloads, services, applications and huge data to near network edge. In this survey paper, we will discuss main characteristics of the Fog that are; 1. Mobility, 2. Location awareness, 3. Low latency, 4. Huge number of nodes, 5. Extensive geographical distribution, 6. Various real time applications and we explore the advantages and motivation of Fog computing, and analyze its applications for IOT.",
"title": ""
}
] | [
{
"docid": "91e130d562a6a317d5f2885fb161354d",
"text": "In silico modeling is a crucial milestone in modern drug design and development. Although computer-aided approaches in this field are well-studied, the application of deep learning methods in this research area is at the beginning. In this work, we present an original deep neural network (DNN) architecture named RANC (Reinforced Adversarial Neural Computer) for the de novo design of novel small-molecule organic structures based on the generative adversarial network (GAN) paradigm and reinforcement learning (RL). As a generator RANC uses a differentiable neural computer (DNC), a category of neural networks, with increased generation capabilities due to the addition of an explicit memory bank, which can mitigate common problems found in adversarial settings. The comparative results have shown that RANC trained on the SMILES string representation of the molecules outperforms its first DNN-based counterpart ORGANIC by several metrics relevant to drug discovery: the number of unique structures, passing medicinal chemistry filters (MCFs), Muegge criteria, and high QED scores. RANC is able to generate structures that match the distributions of the key chemical features/descriptors (e.g., MW, logP, TPSA) and lengths of the SMILES strings in the training data set. Therefore, RANC can be reasonably regarded as a promising starting point to develop novel molecules with activity against different biological targets or pathways. In addition, this approach allows scientists to save time and covers a broad chemical space populated with novel and diverse compounds.",
"title": ""
},
{
"docid": "71fe65e31364d831214e308d6ef7814d",
"text": "As aggregators, online news portals face great challenges in continuously selecting a pool of candidate articles to be shown to their users. Typically, those candidate articles are recommended manually by platform editors from a much larger pool of articles aggregated from multiple sources. Such a hand-pick process is labor intensive and time-consuming. In this paper, we study the editor article selection behavior and propose a learning by demonstration system to automatically select a subset of articles from the large pool. Our data analysis shows that (i) editors' selection criteria are non-explicit, which are less based only on the keywords or topics, but more depend on the quality and attractiveness of the writing from the candidate article, which is hard to capture based on traditional bag-of-words article representation. And (ii) editors' article selection behaviors are dynamic: articles with different data distribution come into the pool everyday and the editors' preference varies, which are driven by some underlying periodic or occasional patterns. To address such problems, we propose a meta-attention model across multiple deep neural nets to (i) automatically catch the editors' underlying selection criteria via the automatic representation learning of each article and its interaction with the meta data and (ii) adaptively capture the change of such criteria via a hybrid attention model. The attention model strategically incorporates multiple prediction models, which are trained in previous days. The system has been deployed in a commercial article feed platform. A 9-day A/B testing has demonstrated the consistent superiority of our proposed model over several strong baselines.",
"title": ""
},
{
"docid": "6c8151eee3fcfaec7da724c2a6899e8f",
"text": "Classic work on interruptions by Zeigarnik showed that tasks that were interrupted were more likely to be recalled after a delay than tasks that were not interrupted. Much of the literature on interruptions has been devoted to examining this effect, although more recently interruptions have been used to choose between competing designs for interfaces to complex devices. However, none of this work looks at what makes some interruptions disruptive and some not. This series of experiments uses a novel computer-based adventure-game methodology to investigate the effects of the length of the interruption, the similarity of the interruption to the main task, and the complexity of processing demanded by the interruption. It is concluded that subjects make use of some form of nonarticulatory memory which is not affected by the length of the interruption. It is affected by processing similar material however, and by a complex mentalarithmetic task which makes large demands on working memory.",
"title": ""
},
{
"docid": "58ea96e65ce2f767064a32b1e9f60338",
"text": "We present an approach to the problem of real-time identification of vehicle motion models based on fitting, on a continuous basis, parametrized slip models to observed behavior. Our approach is unique in that we generate parametric models capturing the dynamics of systematic error (i.e. slip) and then predict trajectories for arbitrary inputs on arbitrary terrain. The integrated error dynamics are linearized with respect to the unknown parameters to produce an observer relating errors in predicted slip to errors in the parameters. An Extended Kalman filter is used to identify this model on-line. The filter forms innovations based on residual differences between the motion originally predicted using the present model and the motion ultimately experienced by the vehicle. Our results show that the models converge in a few seconds and they reduce prediction error for even benign maneuvers where errors might be expected to be small already. Results are presented for both a skid-steered and an Ackerman steer vehicle.",
"title": ""
},
{
"docid": "ecd8393f05d2e30b488a5828c9a6944a",
"text": "Understanding the changes in the brain which occur in the transition from normal to addictive behavior has major implications in public health. Here we postulate that while reward circuits (nucleus accumbens, amygdala), which have been central to theories of drug addiction, may be crucial to initiate drug self-administration, the addictive state also involves disruption of circuits involved with compulsive behaviors and with drive. We postulate that intermittent dopaminergic activation of reward circuits secondary to drug self-administration leads to dysfunction of the orbitofrontal cortex via the striato-thalamo-orbitofrontal circuit. This is supported by imaging studies showing that in drug abusers studied during protracted withdrawal, the orbitofrontal cortex is hypoactive in proportion to the levels of dopamine D2 receptors in the striatum. In contrast, when drug abusers are tested shortly after last cocaine use or during drug-induced craving, the orbitofrontal cortex is hypermetabolic in proportion to the intensity of the craving. Because the orbitofrontal cortex is involved with drive and with compulsive repetitive behaviors, its abnormal activation in the addicted subject could explain why compulsive drug self-administration occurs even with tolerance to the pleasurable drug effects and in the presence of adverse reactions. This model implies that pleasure per se is not enough to maintain compulsive drug administration in the drugaddicted subject and that drugs that could interfere with the activation of the striato-thalamo-orbitofrontal circuit could be beneficial in the treatment of drug addiction.",
"title": ""
},
{
"docid": "0800bfff6569d6d4f3eb00fae0ea1c11",
"text": "An 8-layer, 75 nm half-pitch, 3D stacked vertical-gate (VG) TFT BE-SONOS NAND Flash array is fabricated and characterized. We propose a buried-channel (n-type well) device to improve the read current of TFT NAND, and it also allows the junction-free structure which is particularly important for 3D stackable devices. Large self-boosting disturb-free memory window (6V) can be obtained in our device, and for the first time the “Z-interference” between adjacent vertical layers is studied. The proposed buried-channel VG NAND allows better X, Y pitch scaling and is a very attractive candidate for ultra high-density 3D stackable NAND Flash.",
"title": ""
},
{
"docid": "24625cbc472bf376b44ac6e962696d0b",
"text": "Although deep neural networks have made tremendous progress in the area of multimedia representation, training neural models requires a large amount of data and time. It is well known that utilizing trained models as initial weights often achieves lower training error than neural networks that are not pre-trained. A fine-tuning step helps to both reduce the computational cost and improve the performance. Therefore, sharing trained models has been very important for the rapid progress of research and development. In addition, trained models could be important assets for the owner(s) who trained them; hence, we regard trained models as intellectual property. In this paper, we propose a digital watermarking technology for ownership authorization of deep neural networks. First, we formulate a new problem: embedding watermarks into deep neural networks. We also define requirements, embedding situations, and attack types on watermarking in deep neural networks. Second, we propose a general framework for embedding a watermark in model parameters, using a parameter regularizer. Our approach does not impair the performance of networks into which a watermark is placed because the watermark is embedded while training the host network. Finally, we perform comprehensive experiments to reveal the potential of watermarking deep neural networks as the basis of this new research effort. We show that our framework can embed a watermark during the training of a deep neural network from scratch, and during fine-tuning and distilling, without impairing its performance. The embedded watermark does not disappear even after fine-tuning or parameter pruning; the watermark remains complete even after 65% of parameters are pruned.",
"title": ""
},
{
"docid": "a51b57427c5204cb38483baa9389091f",
"text": "Cross-laminated timber (CLT), a new generation of engineered wood product developed initially in Europe, has been gaining popularity in residential and non-residential applications in several countries. Numerous impressive lowand mid-rise buildings built around the world using CLT showcase the many advantages that this product can offer to the construction sector. This article provides basic information on the various attributes of CLT as a product and as structural system in general, and examples of buildings made of CLT panels. A road map for codes and standards implementation of CLT in North America is included, along with an indication of some of the obstacles that can be expected.",
"title": ""
},
{
"docid": "defc7f4420ad99d410fa18c24b46ab24",
"text": "To determine a reference range of fetal transverse cerebellar diameter in Brazilian population. This was a retrospective cross-sectional study with 3772 normal singleton pregnancies between 18 and 24 weeks of pregnancy. The transverse cerebellar diameter was measured on the axial plane of the fetal head at the level of the lateral ventricles, including the thalamus, cavum septum pellucidum, and third ventricle. To assess the correlation between transverse cerebellar diameter and gestational age, polynomial equations were calculated, with adjustments by the determination coefficient (R2). The mean of fetal transverse cerebellar diameter ranged from 18.49 ± 1.24 mm at 18 weeks to 25.86 ± 1.66 mm at 24 weeks of pregnancy. We observed a good correlation between transverse cerebellar diameter and gestational age, which was best represented by a linear equation: transverse cerebellar diameter: -6.21 + 1.307*gestational age (R2 = 0.707). We determined a reference range of fetal transverse cerebellar diameter for the second trimester of pregnancy in Brazilian population.",
"title": ""
},
{
"docid": "5da453a1e40f1781804045f64462ea8e",
"text": "Severe aphasia, adult left hemispherectomy, Gilles de la Tourette syndrome (GTS), and other neurological disorders have in common an increased use of swearwords. There are shared linguistic features in common across these language behaviors, as well as important differences. We explore the nature of swearing in normal human communication, and then compare the clinical presentations of selectively preserved, impaired and augmented swearing. These neurolinguistic observations, considered along with related neuroanatomical and neurochemical information, provide the basis for considering the neurobiological foundation of various types of swearing behaviors.",
"title": ""
},
{
"docid": "a9baecb9470242c305942f7bc98494ab",
"text": "This paper summaries the state-of-the-art of image quality assessment (IQA) and human visual system (HVS). IQA provides an objective index or real value to measure the quality of the specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanism of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of the above two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out in the end of the paper.",
"title": ""
},
{
"docid": "84b9601738c4df376b42d6f0f6190f53",
"text": "Cloud Computing is one of the most important trend and newest area in the field of information technology in which resources (e.g. CPU and storage) can be leased and released by customers through the Internet in an on-demand basis. The adoption of Cloud Computing in Education and developing countries is real an opportunity. Although Cloud computing has gained popularity in Pakistan especially in education and industry, but its impact in Pakistan is still unexplored especially in Higher Education Department. Already published work investigated in respect of factors influencing on adoption of cloud computing but very few investigated said analysis in developing countries. The Higher Education Institutions (HEIs) of Punjab, Pakistan are still not focused to discover cloud adoption factors. In this study, we prepared cloud adoption model for Higher Education Institutions (HEIs) of Punjab, a survey was carried out from 900 students all over Punjab. The survey was designed based upon literature and after discussion and opinions of academicians. In this paper, 34 hypothesis were developed that affect the cloud computing adoption in HEIs and tested by using powerful statistical analysis tools i.e. SPSS and SmartPLS. Statistical findings shows that 84.44% of students voted in the favor of cloud computing adoption in their colleges, while 99% supported Reduce Cost as most important factor in cloud adoption.",
"title": ""
},
{
"docid": "8df0970ccf314018874ed3f877ec607e",
"text": "In graph-based simultaneous localization and mapping, the pose graph grows over time as the robot gathers information about the environment. An ever growing pose graph, however, prevents long-term mapping with mobile robots. In this paper, we address the problem of efficient information-theoretic compression of pose graphs. Our approach estimates the mutual information between the laser measurements and the map to discard the measurements that are expected to provide only a small amount of information. Our method subsequently marginalizes out the nodes from the pose graph that correspond to the discarded laser measurements. To maintain a sparse pose graph that allows for efficient map optimization, our approach applies an approximate marginalization technique that is based on Chow-Liu trees. Our contributions allow the robot to effectively restrict the size of the pose graph.Alternatively, the robot is able to maintain a pose graph that does not grow unless the robot explores previously unobserved parts of the environment. Real-world experiments demonstrate that our approach to pose graph compression is well suited for long-term mobile robot mapping.",
"title": ""
},
{
"docid": "fa557c1eaaf516035909a8e38be3ec56",
"text": "This paper presents a new buck–boost converter. Unlike the single-switch buck–boost converter, the proposed converter has low-voltage stresses on semiconductors. Moreover, although both the conventional two-switch buck–boost (TSBB) and the proposed converters have the same number of passive and active components, and the proposed converter can reduce the conduction loss as a result of having fewer conducting components. Therefore, the proposed converter obtained a higher efficiency than the TSBB converter. A 48-V output voltage and 150-W output power prototype was fabricated to verify the effectiveness of the proposed converter.",
"title": ""
},
{
"docid": "dac4ee56923c850874f8c6199456a245",
"text": "In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360 ∘ camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.",
"title": ""
},
{
"docid": "4973ce25e2a638c3923eda62f92d98b2",
"text": "About 20 ethnic groups reside in Mongolia. On the basis of genetic and anthropological studies, it is believed that Mongolians have played a pivotal role in the peopling of Central and East Asia. However, the genetic relationships among these ethnic groups have remained obscure, as have their detailed relationships with adjacent populations. We analyzed 16 binary and 17 STR polymorphisms of human Y chromosome in 669 individuals from nine populations, including four indigenous ethnic groups in Mongolia (Khalkh, Uriankhai, Zakhchin, and Khoton). Among these four Mongolian populations, the Khalkh, Uriankhai, and Zakhchin populations showed relatively close genetic affinities to each other and to Siberian populations, while the Khoton population showed a closer relationship to Central Asian populations than to even the other Mongolian populations. These findings suggest that the major Mongolian ethnic groups have a close genetic affinity to populations in northern East Asia, although the genetic link between Mongolia and Central Asia is not negligible.",
"title": ""
},
{
"docid": "e8f7017704e943fb1ab3055a114490ef",
"text": "Class imbalance is one of the challenging problems for machine learning algorithms. When learning from highly imbalanced data, most classifiers are overwhelmed by the majority class examples, so the false negative rate is always high. Although researchers have introduced many methods to deal with this problem, including resampling techniques and cost-sensitive learning (CSL), most of them focus on either of these techniques. This study presents two empirical methods that deal with class imbalance using both resampling and CSL. The first method combines and compares several sampling techniques with CSL using support vector machines (SVM). The second method proposes using CSL by optimizing the cost ratio (cost matrix) locally. Our experimental results on 18 imbalanced datasets from the UCI repository show that the first method can reduce the misclassification costs, and the second method can improve the classifier performance.",
"title": ""
},
{
"docid": "c0484f3055d7e7db8dfea9d4483e1e06",
"text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.",
"title": ""
},
{
"docid": "2f0d6b9bee323a75eea3d15a3cabaeb6",
"text": "OBJECTIVE\nThis article reviews the mechanisms and pathophysiology of traumatic brain injury (TBI).\n\n\nMETHODS\nResearch on the pathophysiology of diffuse and focal TBI is reviewed with an emphasis on damage that occurs at the cellular level. The mechanisms of injury are discussed in detail including the factors and time course associated with mild to severe diffuse injury as well as the pathophysiology of focal injuries. Examples of electrophysiologic procedures consistent with recent theory and research evidence are presented.\n\n\nRESULTS\nAcceleration/deceleration (A/D) forces rarely cause shearing of nervous tissue, but instead, initiate a pathophysiologic process with a well defined temporal progression. The injury foci are considered to be diffuse trauma to white matter with damage occurring at the superficial layers of the brain, and extending inward as A/D forces increase. Focal injuries result in primary injuries to neurons and the surrounding cerebrovasculature, with secondary damage occurring due to ischemia and a cytotoxic cascade. A subset of electrophysiologic procedures consistent with current TBI research is briefly reviewed.\n\n\nCONCLUSIONS\nThe pathophysiology of TBI occurs over time, in a pattern consistent with the physics of injury. The development of electrophysiologic procedures designed to detect specific patterns of change related to TBI may be of most use to the neurophysiologist.\n\n\nSIGNIFICANCE\nThis article provides an up-to-date review of the mechanisms and pathophysiology of TBI and attempts to address misconceptions in the existing literature.",
"title": ""
}
] | scidocsrr |
bedd771bc6d2a805c72aa585df3d7340 | Reviewing CS1 exam question content | [
{
"docid": "05c82f9599b431baa584dd1e6d7dfc3e",
"text": "It is a common conception that CS1 is a very difficult course and that failure rates are high. However, until now there has only been anecdotal evidence for this claim. This article reports on a survey among institutions around the world regarding failure rates in introductory programming courses. The article describes the design of the survey and the results. The number of institutions answering the call for data was unfortunately rather low, so it is difficult to make firm conclusions. It is our hope that this article can be the starting point for a systematic collection of data in order to find solid proof of the actual failure and pass rates of CS1.",
"title": ""
}
] | [
{
"docid": "9e1c3d4a8bbe211b85b19b38e39db28e",
"text": "This paper presents a novel context-based scene recognition method that enables mobile robots to recognize previously observed topological places in known environments or categorize previously unseen places in new environments. We achieve this by introducing the Histogram of Oriented Uniform Patterns (HOUP), which provides strong discriminative power for place recognition, while offering a significant level of generalization for place categorization. HOUP descriptors are used for image representation within a subdivision framework, where the size and location of sub-regions are determined using an informative feature selection method based on kernel alignment. Further improvement is achieved by developing a similarity measure that accounts for perceptual aliasing to eliminate the effect of indistinctive but visually similar regions that are frequently present in outdoor and indoor scenes. An extensive set of experiments reveals the excellent performance of our method on challenging categorization and recognition tasks. Specifically, our proposed method outperforms the current state of the art on two place categorization datasets with 15 and 5 place categories, and two topological place recognition datasets, with 5 and 27 places.",
"title": ""
},
{
"docid": "853edc6c6564920d0d2b69e0e2a63ad0",
"text": "This study evaluates the environmental performance and discounted costs of the incineration and landfilling of municipal solid waste that is ready for the final disposal while accounting for existing waste diversion initiatives, using the life cycle assessment (LCA) methodology. Parameters such as changing waste generation quantities, diversion rates and waste composition were also considered. Two scenarios were assessed in this study on how to treat the waste that remains after diversion. The first scenario is the status quo, where the entire residual waste was landfilled whereas in the second scenario approximately 50% of the residual waste was incinerated while the remainder is landfilled. Electricity was produced in each scenario. Data from the City of Toronto was used to undertake this study. Results showed that the waste diversion initiatives were more effective in reducing the organic portion of the waste, in turn, reducing the net electricity production of the landfill while increasing the net electricity production of the incinerator. Therefore, the scenario that incorporated incineration performed better environmentally and contributed overall to a significant reduction in greenhouse gas emissions because of the displacement of power plant emissions; however, at a noticeably higher cost. Although landfilling proves to be the better financial option, it is for the shorter term. The landfill option would require the need of a replacement landfill much sooner. The financial and environmental effects of this expenditure have yet to be considered.",
"title": ""
},
{
"docid": "fa855a3d92bf863c33b269383ddde081",
"text": "A network supporting deep unsupervised learning is present d. The network is an autoencoder with lateral shortcut connections from the enc oder to decoder at each level of the hierarchy. The lateral shortcut connections al low the higher levels of the hierarchy to focus on abstract invariant features. Wher eas autoencoders are analogous to latent variable models with a single layer of st ochastic variables, the proposed network is analogous to hierarchical latent varia bles models. Learning combines denoising autoencoder and denoising sou rces separation frameworks. Each layer of the network contributes to the cos t function a term which measures the distance of the representations produce d by the encoder and the decoder. Since training signals originate from all leve ls of the network, all layers can learn efficiently even in deep networks. The speedup offered by cost terms from higher levels of the hi erarchy and the ability to learn invariant features are demonstrated in exp eriments.",
"title": ""
},
{
"docid": "d42aaf5c7c4f7982c1630e7b95b0377a",
"text": "In this paper we analyze our recent research on the use of document analysis techniques for metadata extraction from PDF papers. We describe a package that is designed to extract basic metadata from these documents. The package is used in combination with a digital library software suite to easily build personal digital libraries. The proposed software is based on a suitable combination of several techniques that include PDF parsing, low level document image processing, and layout analysis. In addition, we use the information gathered from a widely known citation database (DBLP) to assist the tool in the difficult task of author identification. The system is tested on some paper collections selected from recent conference proceedings.",
"title": ""
},
{
"docid": "6c81b1fe36a591b3b86a5e912a8792c1",
"text": "Mobile phones, sensors, patients, hospitals, researchers, providers and organizations are nowadays, generating huge amounts of healthcare data. The real challenge in healthcare systems is how to find, collect, analyze and manage information to make people's lives healthier and easier, by contributing not only to understand new diseases and therapies but also to predict outcomes at earlier stages and make real-time decisions. In this paper, we explain the potential benefits of big data to healthcare and explore how it improves treatment and empowers patients, providers and researchers. We also describe the ability of reality mining in collecting large amounts of data to understand people's habits, detect and predict outcomes, and illustrate the benefits of big data analytics through five effective new pathways that could be adopted to promote patients' health, enhance medicine, reduce cost and improve healthcare value and quality. We cover some big data solutions in healthcare and we shed light on implementations, such as Electronic Healthcare Record (HER) and Electronic Healthcare Predictive Analytics (e-HPA) in US hospitals. Furthermore, we complete the picture by highlighting some challenges that big data analytics faces in healthcare.",
"title": ""
},
{
"docid": "073f129a34957b19c6d9af96c869b9ab",
"text": "The stability of dc microgrids (MGs) depends on the control strategy adopted for each mode of operation. In an islanded operation mode, droop control is the basic method for bus voltage stabilization when there is no communication among the sources. In this paper, it is shown the consequences of droop implementation on the voltage stability of dc power systems, whose loads are active and nonlinear, e.g., constant power loads. The set of parallel sources and their corresponding transmission lines are modeled by an ideal voltage source in series with an equivalent resistance and inductance. This approximate model allows performing a nonlinear stability analysis to predict the system qualitative behavior due to the reduced number of differential equations. Additionally, nonlinear analysis provides analytical stability conditions as a function of the model parameters and it leads to a design guideline to build reliable (MGs) based on safe operating regions.",
"title": ""
},
{
"docid": "f086fef6b9026a67e73cd6f892aa1c37",
"text": "Shoulder girdle movement is critical for stabilizing and orientating the arm during daily activities. During robotic arm rehabilitation with stroke patients, the robot must assist movements of the shoulder girdle. Shoulder girdle movement is characterized by a highly nonlinear function of the humeral orientation, which is different for each person. Hence it is improper to use pre-calculated shoulder girdle movement. If an exoskeleton robot cannot mimic the patient's shoulder girdle movement well, the robot axes will not coincide with the patient's, which brings reduced range of motion (ROM) and discomfort to the patients. A number of exoskeleton robots have been developed to assist shoulder girdle movement. The shoulder mechanism of these robots, along with the advantages and disadvantages, are introduced. In this paper, a novel shoulder mechanism design of exoskeleton robot is proposed, which can fully mimic the patient's shoulder girdle movement in real time.",
"title": ""
},
{
"docid": "fab33f2e32f4113c87e956e31674be58",
"text": "We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, uniqueand synergistic contributions. We focus on the relationship be tween “redundant information” and the more familiar information theoretic notions of “common information.” Our main contri bution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decompositi on of the total mutual information. Interestingly, this entai ls that any reasonable measure of redundant information cannot be deri ved by optimization over a single random variable. Keywords—common and private information, synergy, redundancy, information lattice, sufficient statistic, partial information decomposition",
"title": ""
},
{
"docid": "f128c1903831e9310d0ed179838d11d1",
"text": "A partially corporate feeding waveguide located below the radiating waveguide is introduced to a waveguide slot array to enhance the bandwidth of gain. A PMC termination associated with the symmetry of the feeding waveguide as well as uniform excitation is newly proposed for realizing dense and uniform slot arrangement free of high sidelobes. To exploit the bandwidth of the feeding circuit, the 4 × 4-element subarray is also developed for wider bandwidth by using standing-wave excitation. A 16 × 16-element array with uniform excitation is fabricated in the E-band by diffusion bonding of laminated thin copper plates which has the advantages of high precision and high mass-productivity. The antenna gain of 32.4 dBi and the antenna efficiency of 83.0% are measured at the center frequency. The 1 dB-down gain bandwidth is no less than 9.0% and a wideband characteristic is achieved.",
"title": ""
},
{
"docid": "71da7722f6ce892261134bd60ca93ab7",
"text": "Semantically annotated data, using markup languages like RDFa and Microdata, has become more and more publicly available in the Web, especially in the area of e-commerce. Thus, a large amount of structured product descriptions are freely available and can be used for various applications, such as product search or recommendation. However, little efforts have been made to analyze the categories of the available product descriptions. Although some products have an explicit category assigned, the categorization schemes vary a lot, as the products originate from thousands of different sites. This heterogeneity makes the use of supervised methods, which have been proposed by most previous works, hard to apply. Therefore, in this paper, we explain how distantly supervised approaches can be used to exploit the heterogeneous category information in order to map the products to set of target categories from an existing product catalogue. Our results show that, even though this task is by far not trivial, we can reach almost 56% accuracy for classifying products into 37 categories.",
"title": ""
},
{
"docid": "6afe0360f074304e9da9c100e28e9528",
"text": "Unikernels are a promising alternative for application deployment in cloud platforms. They comprise a very small footprint, providing better deployment agility and portability among virtualization platforms. Similar to Linux containers, they are a lightweight alternative for deploying distributed applications based on microservices. However, the comparison of unikernels with other virtualization options regarding the concurrent provisioning of instances, as in the case of microservices-based applications, is still lacking. This paper provides an evaluation of KVM (Virtual Machines), Docker (Containers), and OSv (Unikernel), when provisioning multiple instances concurrently in an OpenStack cloud platform. We confirmed that OSv outperforms the other options and also identified opportunities for optimization.",
"title": ""
},
{
"docid": "6ed5198b9b0364f41675b938ec86456f",
"text": "Artificial intelligence (AI) will have many profound societal effects It promises potential benefits (and may also pose risks) in education, defense, business, law, and science In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need fol human toil We also note that some people fear the automation of work hy machines and the resulting unemployment Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension We conclude with a discussion of problems of moving toward the kind of economy that will he enahled by developments in AI ARTIFICIAL INTELLIGENCE [Al] and other developments in computer science are giving birth to a dramatically different class of machinesPmachines that can perform tasks requiring reasoning, judgment, and perception that previously could be done only by humans. Will these I am grateful for the helpful comments provided by many people Specifically I would like to acknowledge the advice teceived from Sandra Cook and Victor Walling of SRI, Wassily Leontief and Faye Duchin of the New York University Institute for Economic Analysis, Margaret Boden of The University of Sussex, Henry Levin and Charles Holloway of Stanford University, James Albus of the National Bureau of Standards, and Peter Hart of Syntelligence Herbert Simon, of CarnegieMellon Univetsity, wrote me extensive criticisms and rebuttals of my arguments Robert Solow of MIT was quite skeptical of my premises, but conceded nevertheless that my conclusions could possibly follow from them if certain other economic conditions were satisfied. Save1 Kliachko of SRI improved my composition and also referred me to a prescient article by Keynes (Keynes, 1933) who, a half-century ago, predicted an end to toil within one hundred years machines reduce the need for human toil and thus cause unemployment? There are two opposing views in response to this question Some claim that AI is not really very different from other technologies that have supported automation and increased productivitytechnologies such as mechanical engineering, ele&onics, control engineering, and operations rcsearch. Like them, AI may also lead ultimately to an expanding economy with a concomitant expansion of employment opportunities. At worst, according to this view, thcrc will be some, perhaps even substantial shifts in the types of jobs, but certainly no overall reduction in the total number of jobs. In my opinion, however, such an out,come is based on an overly conservative appraisal of the real potential of artificial intelligence. Others accept a rather strong hypothesis with regard to AI-one that sets AI far apart from previous labor-saving technologies. Quite simply, this hypothesis affirms that anything people can do, AI can do as well. Cert,ainly AI has not yet achieved human-level performance in many important functions, but many AI scientists believe that artificial intelligence inevitably will equal and surpass human mental abilities-if not in twenty years, then surely in fifty. 
The main conclusion of this view of AI is that, even if AI does create more work, this work can also be performed by AI devices without necessarily implying more jobs for humans Of course, the mcrc fact that some work can be performed automatically does not make it inevitable that it, will be. Automation depends on many factorsPeconomic, political, and social. The major economic parameter would seem to be the relative cost of having either people or machines execute a given task (at a specified rate and level of quality) In THE AI MAGAZINE Summer 1984 5 AI Magazine Volume 5 Number 2 (1984) (© AAAI)",
"title": ""
},
{
"docid": "9b628f47102a0eee67e469e223ece837",
"text": "We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual ``raw'' values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the ``syndrome'' of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.",
"title": ""
},
{
"docid": "7121d534b758bab829e1db31d0ce2e43",
"text": "With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However this is still an open research problem, and previous research in predicting malicious events only looked at binary outcomes (eg. whether an attack would happen or not), but not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias xspace, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias xspace on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias xspace are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key in performing event prediction, rendering simpler methods not up to the task.",
"title": ""
},
{
"docid": "aef76a8375b12f4c38391093640a704a",
"text": "Storytelling plays an important role in human life, from everyday communication to entertainment. Interactive storytelling (IS) offers its audience an opportunity to actively participate in the story being told, particularly in video games. Managing the narrative experience of the player is a complex process that involves choices, authorial goals and constraints of a given story setting (e.g., a fairy tale). Over the last several decades, a number of experience managers using artificial intelligence (AI) methods such as planning and constraint satisfaction have been developed. In this paper, we extend existing work and propose a new AI experience manager called player-specific automated storytelling (PAST), which uses automated planning to satisfy the story setting and authorial constraints in response to the player's actions. Out of the possible stories algorithmically generated by the planner in response, the one that is expected to suit the player's style best is selected. To do so, we employ automated player modeling. We evaluate PAST within a video-game domain with user studies and discuss the effects of combining planning and player modeling on the player's perception of agency.",
"title": ""
},
{
"docid": "9086d8f1d9a0978df0bd93cff4bce20a",
"text": "Australian government enterprises have shown a significant interest in the cloud technology-enabled enterprise transformation. Australian government suggests the whole-of-a-government strategy to cloud adoption. The challenge is how best to realise this cloud adoption strategy for the cloud technology-enabled enterprise transformation? The cloud adoption strategy realisation requires concrete guidelines and a comprehensive practical framework. This paper proposes the use of an agile enterprise architecture framework to developing and implementing the adaptive cloud technology-enabled enterprise architecture in the Australian government context. The results of this paper indicate that a holistic strategic agile enterprise architecture approach seems appropriate to support the strategic whole-of-a-government approach to cloud technology-enabled government enterprise transformation.",
"title": ""
},
{
"docid": "e0a314eb1fe221791bc08094d0c04862",
"text": "The present study was undertaken with the objective to explore the influence of the five personality dimensions on the information seeking behaviour of the students in higher educational institutions. Information seeking behaviour is defined as the sum total of all those activities that are usually undertaken by the students of higher education to collect, utilize and process any kind of information needed for their studies. Data has been collected from 600 university students of the three broad disciplines of studies from the Universities of Eastern part of India (West Bengal). The tools used for the study were General Information schedule (GIS), Information Seeking Behaviour Inventory (ISBI) and NEO-FFI Personality Inventory. Product moment correlation has been worked out between the scores in ISBI and those in NEO-FFI Personality Inventory. The findings indicated that the five personality traits are significantly correlated to all the dimensions of information seeking behaviour of the university students.",
"title": ""
},
{
"docid": "a4ed5c4f87e4faa357f0dec0f5c0e354",
"text": "In today's information age, information sharing and transfer has increased exponentially. The threat of an intruder accessing secret information has been an ever existing concern for the data communication experts. Cryptography and steganography are the most widely used techniques to overcome this threat. Cryptography involves converting a message text into an unreadable cipher. On the other hand, steganography embeds message into a cover media and hides its existence. Both these techniques provide some security of data neither of them alone is secure enough for sharing information over an unsecure communication channel and are vulnerable to intruder attacks. Although these techniques are often combined together to achieve higher levels of security but still there is a need of a highly secure system to transfer information over any communication media minimizing the threat of intrusion. In this paper we propose an advanced system of encrypting data that combines the features of cryptography, steganography along with multimedia data hiding. This system will be more secure than any other these techniques alone and also as compared to steganography and cryptography combined systems Visual steganography is one of the most secure forms of steganography available today. It is most commonly implemented in image files. However embedding data into image changes its color frequencies in a predictable way. To overcome this predictability, we propose the concept of multiple cryptography where the data will be encrypted into a cipher and the cipher will be hidden into a multimedia image file in encrypted format. We shall use traditional cryptographic techniques to achieve data encryption and visual steganography algorithms will be used to hide the encrypted data.",
"title": ""
},
{
"docid": "7ba3f13f58c4b25cc425b706022c1f2b",
"text": "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1,2] have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN [2] for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.",
"title": ""
},
{
"docid": "953d1b368a4a6fb09e6b34e3131d7804",
"text": "The activation of the Deep Convolutional Neural Networks hidden layers can be successfully used as features, often referred as Deep Features, in generic visual similarity search tasks. Recently scientists have shown that permutation-based methods offer very good performance in indexing and supporting approximate similarity search on large database of objects. Permutation-based approaches represent metric objects as sequences (permutations) of reference objects, chosen from a predefined set of data. However, associating objects with permutations might have a high cost due to the distance calculation between the data objects and the reference objects. In this work, we propose a new approach to generate permutations at a very low computational cost, when objects to be indexed are Deep Features. We show that the permutations generated using the proposed method are more effective than those obtained using pivot selection criteria specifically developed for permutation-based methods.",
"title": ""
}
] | scidocsrr |
7fc4b30a0ea6873fc03082ded61a82ed | A vision of industry 4.0 from an artificial intelligence point of view | [
{
"docid": "22fd1487e69420597c587e03f2b48f65",
"text": "Design and operation of a manufacturing enterprise involve numerous types of decision-making at various levels and domains. A complex system has a large number of design variables and decision-making requires real-time data collected from machines, processes, and business environments. Enterprise systems (ESs) are used to support data acquisition, communication, and all decision-making activities. Therefore, information technology (IT) infrastructure for data acquisition and sharing affects the performance of an ES greatly. Our objective is to investigate the impact of emerging Internet of Things (IoT) on ESs in modern manufacturing. To achieve this objective, the evolution of manufacturing system paradigms is discussed to identify the requirements of decision support systems in dynamic and distributed environments; recent advances in IT are overviewed and associated with next-generation manufacturing paradigms; and the relation of IT infrastructure and ESs is explored to identify the technological gaps in adopting IoT as an IT infrastructure of ESs. The future research directions in this area are discussed.",
"title": ""
},
{
"docid": "eead063c20e32f53ec8a5e81dbac951c",
"text": "We are currently experiencing the fourth Industrial Revolution in terms of cyber physical systems. These systems are industrial automation systems that enable many innovative functionalities through their networking and their access to the cyber world, thus changing our everyday lives significantly. In this context, new business models, work processes and development methods that are currently unimaginable will arise. These changes will also strongly influence the society and people. Family life, globalization, markets, etc. will have to be redefined. However, the Industry 4.0 simultaneously shows characteristics that represent the challenges regarding the development of cyber-physical systems, reliability, security and data protection. Following a brief introduction to Industry 4.0, this paper presents a prototypical application that demonstrates the essential aspects.",
"title": ""
}
] | [
{
"docid": "623cdf022d333ca4d6b244f54d301650",
"text": "Alveolar rhabdomyosarcoma (ARMS) are aggressive soft tissue tumors harboring specific fusion transcripts, notably PAX3-FOXO1 (P3F). Current therapy concepts result in unsatisfactory survival rates making the search for innovative approaches necessary: targeting PAX3-FOXO1 could be a promising strategy. In this study, we developed integrin receptor-targeted Lipid-Protamine-siRNA (LPR) nanoparticles using the RGD peptide and validated target specificity as well as their post-silencing effects. We demonstrate that RGD-LPRs are specific to ARMS in vitro and in vivo. Loaded with siRNA directed against the breakpoint of P3F, these particles efficiently down regulated the fusion transcript and inhibited cell proliferation, but did not induce substantial apoptosis. In a xenograft ARMS model, LPR nanoparticles targeting P3F showed statistically significant tumor growth delay as well as inhibition of tumor initiation when injected in parallel with the tumor cells. These findings suggest that RGD-LPR targeting P3F are promising to be highly effective in the setting of minimal residual disease for ARMS.",
"title": ""
},
{
"docid": "d56fb6c80cc0d48602b48f506b0601a6",
"text": "In application domains such as healthcare, we want accurate predictive models that are also causally interpretable. In pursuit of such models, we propose a causal regularizer to steer predictive models towards causally-interpretable solutions and theoretically study its properties. In a large-scale analysis of Electronic Health Records (EHR), our causally-regularized model outperforms its L1-regularized counterpart in causal accuracy and is competitive in predictive performance. We perform non-linear causality analysis by causally regularizing a special neural network architecture. We also show that the proposed causal regularizer can be used together with neural representation learning algorithms to yield up to 20% improvement over multilayer perceptron in detecting multivariate causation, a situation common in healthcare, where many causal factors should occur simultaneously to have an effect on the target variable.",
"title": ""
},
{
"docid": "6ee26f725bfb63a6ff72069e48404e68",
"text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.",
"title": ""
},
{
"docid": "0123fd04bc65b8dfca7ff5c058d087da",
"text": "The authors forward the hypothesis that social exclusion is experienced as painful because reactions to rejection are mediated by aspects of the physical pain system. The authors begin by presenting the theory that overlap between social and physical pain was an evolutionary development to aid social animals in responding to threats to inclusion. The authors then review evidence showing that humans demonstrate convergence between the 2 types of pain in thought, emotion, and behavior, and demonstrate, primarily through nonhuman animal research, that social and physical pain share common physiological mechanisms. Finally, the authors explore the implications of social pain theory for rejection-elicited aggression and physical pain disorders.",
"title": ""
},
{
"docid": "9592fc0ec54a5216562478414dc68eb4",
"text": "We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined by the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑ i 1/∆ 2 i , where the sum is on the suboptimal arms and ∆i represents the difference between the mean reward of the best arm and the one of arm i. This generalizes the well-known fact that one needs of order of 1/∆ samples to differentiate the means of two distributions with gap ∆.",
"title": ""
},
{
"docid": "cc9de768281e58749cd073d25a97d39c",
"text": "The Dynamic Adaptive Streaming over HTTP (referred as MPEG DASH) standard is designed to provide high quality of media content over the Internet delivered from conventional HTTP web servers. The visual content, divided into a sequence of segments, is made available at a number of different bitrates so that an MPEG DASH client can automatically select the next segment to download and play back based on current network conditions. The task of transcoding media content to different qualities and bitrates is computationally expensive, especially in the context of large-scale video hosting systems. Therefore, it is preferably executed in a powerful cloud environment, rather than on the source computer (which may be a mobile device with limited memory, CPU speed and battery life). In order to support the live distribution of media events and to provide a satisfactory user experience, the overall processing delay of videos should be kept to a minimum. In this paper, we propose a novel dynamic scheduling methodology on video transcoding for MPEG DASH in a cloud environment, which can be adapted to different applications. The designed scheduler monitors the workload on each processor in the cloud environment and selects the fastest processors to run high-priority jobs. It also adjusts the video transcoding mode (VTM) according to the system load. Experimental results show that the proposed scheduler performs well in terms of the video completion time, system load balance, and video playback smoothness.",
"title": ""
},
{
"docid": "af22932b48a2ea64ecf3e5ba1482564d",
"text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.",
"title": ""
},
{
"docid": "20ef5a8b6835bedd44d571952b46ca90",
"text": "This paper proposes an XYZ-flexure parallel mechanism (FPM) with large displacement and decoupled kinematics structure. The large-displacement FPM has large motion range more than 1 mm. Moreover, the decoupled XYZ-stage has small cross-axis error and small parasitic rotation. In this study, the typical prismatic joints are investigated, and a new large-displacement prismatic joint using notch hinges is designed. The conceptual design of the FPM is proposed by assembling these modular prismatic joints, and then the optimal design of the FPM is conducted. The analytical models of linear stiffness and dynamics are derived using pseudo-rigid-body (PRB) method. Finally, the numerical simulation using ANSYS is conducted for modal analysis to verify the analytical dynamics equation. Experiments are conducted to verify the proposed design for linear stiffness, cross-axis error and parasitic rotation",
"title": ""
},
{
"docid": "e21aed852a892cbede0a31ad84d50a65",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.09.010 ⇑ Corresponding author. Tel.: +1 662 915 5519. E-mail addresses: [email protected] (C. R (D. Gamboa), [email protected] (F. Glover), [email protected] (C. Osterman). Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin–Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today’s best methods that may provide additional gains in solving large and difficult TSPs. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "612423df25809938ada93f24be7d2ac5",
"text": "Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input–output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.",
"title": ""
},
{
"docid": "4c74b49b01e550cee8b49cbf3d142c15",
"text": "Neural embeddings are a popular set of methods for representing words, phrases or text as a low dimensional vector (typically 50-500 dimensions). However, it is difficult to interpret these dimensions in a meaningful manner, and creating neural embeddings requires extensive training and tuning of multiple parameters and hyperparameters. We present here a simple unsupervised method for representing words, phrases or text as a low dimensional vector, in which the meaning and relative importance of dimensions is transparent to inspection. We have created a near-comprehensive vector representation of words, and selected bigrams, trigrams and abbreviations, using the set of titles and abstracts in PubMed as a corpus. This vector is used to create several novel implicit word-word and text-text similarity metrics. The implicit word-word similarity metrics correlate well with human judgement of word pair similarity and relatedness, and outperform or equal all other reported methods on a variety of biomedical benchmarks, including several implementations of neural embeddings trained on PubMed corpora. Our implicit word-word metrics capture different aspects of word-word relatedness than word2vecbased metrics and are only partially correlated (rho = ~0.5-0.8 depending on task and corpus). The vector representations of words, bigrams, trigrams, abbreviations, and PubMed title+abstracts are all publicly available from http://arrowsmith.psych.uic.edu for release under CC-BY-NC license. Several public web query interfaces are also available at the same site, including one which allows the user to specify a given word and view its most closely related terms according to direct co-occurrence as well as different implicit similarity metrics.",
"title": ""
},
{
"docid": "7070a2d1e1c098950996d794c372cbc7",
"text": "Selecting the right audience for an advertising campaign is one of the most challenging, time-consuming and costly steps in the advertising process. To target the right audience, advertisers usually have two options: a) market research to identify user segments of interest and b) sophisticated machine learning models trained on data from past campaigns. In this paper we study how demand-side platforms (DSPs) can leverage the data they collect (demographic and behavioral) in order to learn reputation signals about end user convertibility and advertisement (ad) quality. In particular, we propose a reputation system which learns interest scores about end users, as an additional signal of ad conversion, and quality scores about ads, as a signal of campaign success. Then our model builds user segments based on a combination of demographic, behavioral and the new reputation signals and recommends transparent targeting rules that are easy for the advertiser to interpret and refine. We perform an experimental evaluation on industry data that showcases the benefits of our approach for both new and existing advertiser campaigns.",
"title": ""
},
{
"docid": "c4c482cc453884d0016c442b580e3424",
"text": "PURPOSE/OBJECTIVES\nTo better understand treatment-induced changes in sexuality from the patient perspective, to learn how women manage these changes in sexuality, and to identify what information they want from nurses about this symptom.\n\n\nRESEARCH APPROACH\nQualitative descriptive methods.\n\n\nSETTING\nAn outpatient gynecologic clinic in an urban area in the southeastern United States served as the recruitment site for patients.\n\n\nPARTICIPANTS\nEight women, ages 33-69, receiving first-line treatment for ovarian cancer participated in individual interviews. Five women, ages 40-75, participated in a focus group and their status ranged from newly diagnosed to terminally ill from ovarian cancer.\n\n\nMETHODOLOGIC APPROACH\nBoth individual interviews and a focus group were conducted. Content analysis was used to identify themes that described the experience of women as they became aware of changes in their sexuality. Triangulation of approach, the researchers, and theory allowed for a rich description of the symptom experience.\n\n\nFINDINGS\nRegardless of age, women reported that ovarian cancer treatment had a detrimental impact on their sexuality and that the changes made them feel \"no longer whole.\" Mechanical changes caused by surgery coupled with hormonal changes added to the intensity and dimension of the symptom experience. Physiologic, psychological, and social factors also impacted how this symptom was experienced.\n\n\nCONCLUSIONS\nRegardless of age or relationship status, sexuality is altered by the diagnosis and treatment of ovarian cancer.\n\n\nINTERPRETATION\nNurses have an obligation to educate women with ovarian cancer about anticipated changes in their sexuality that may come from treatment.",
"title": ""
},
{
"docid": "e6088779901bd4bfaf37a3a1784c3854",
"text": "There has been recently a great progress in the field of automatically generated knowledge bases and corresponding disambiguation systems that are capable of mapping text mentions onto canonical entities. Efforts like the before mentioned have enabled researchers and analysts from various disciplines to semantically “understand” contents. However, most of the approaches have been specifically designed for the English language and in particular support for Arabic is still in its infancy. Since the amount of Arabic Web contents (e.g. in social media) has been increasing dramatically over the last years, we see a great potential for endeavors that support an entity-level analytics of these data. To this end, we have developed a framework called AIDArabic that extends the existing AIDA system by additional components that allow the disambiguation of Arabic texts based on an automatically generated knowledge base distilled from Wikipedia. Even further, we overcome the still existing sparsity of the Arabic Wikipedia by exploiting the interwiki links between Arabic and English contents in Wikipedia, thus, enriching the entity catalog as well as disambiguation context.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "30178d1de9d0aab8c3ab0ac9be674d8c",
"text": "The immune system protects from infections primarily by detecting and eliminating the invading pathogens; however, the host organism can also protect itself from infectious diseases by reducing the negative impact of infections on host fitness. This ability to tolerate a pathogen's presence is a distinct host defense strategy, which has been largely overlooked in animal and human studies. Introduction of the notion of \"disease tolerance\" into the conceptual tool kit of immunology will expand our understanding of infectious diseases and host pathogen interactions. Analysis of disease tolerance mechanisms should provide new approaches for the treatment of infections and other diseases.",
"title": ""
},
{
"docid": "171fd68f380f445723b024f290a02d69",
"text": "Cytokines, produced at the site of entry of a pathogen, drive inflammatory signals that regulate the capacity of resident and newly arrived phagocytes to destroy the invading pathogen. They also regulate antigen presenting cells (APCs), and their migration to lymph nodes to initiate the adaptive immune response. When naive CD4+ T cells recognize a foreign antigen-derived peptide presented in the context of major histocompatibility complex class II on APCs, they undergo massive proliferation and differentiation into at least four different T-helper (Th) cell subsets (Th1, Th2, Th17, and induced T-regulatory (iTreg) cells in mammals. Each cell subset expresses a unique set of signature cytokines. The profile and magnitude of cytokines produced in response to invasion of a foreign organism or to other danger signals by activated CD4+ T cells themselves, and/or other cell types during the course of differentiation, define to a large extent whether subsequent immune responses will have beneficial or detrimental effects to the host. The major players of the cytokine network of adaptive immunity in fish are described in this review with a focus on the salmonid cytokine network. We highlight the molecular, and increasing cellular, evidence for the existence of T-helper cells in fish. Whether these cells will match exactly to the mammalian paradigm remains to be seen, but the early evidence suggests that there will be many similarities to known subsets. Alternative or additional Th populations may also exist in fish, perhaps influenced by the types of pathogen encountered by a particular species and/or fish group. These Th cells are crucial for eliciting disease resistance post-vaccination, and hopefully will help resolve some of the difficulties in producing efficacious vaccines to certain fish diseases.",
"title": ""
},
{
"docid": "49445cfa92b95045d23a54eca9f9a592",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. In the past decade, several data mining techniques have been proposed in the literature for predicting the churners using heterogeneous customer records. This paper reviews the different categories of customer data available in open datasets, predictive models and performance metrics used in the literature for churn prediction in telecom industry.",
"title": ""
},
{
"docid": "fddf6e71af23aba468989d6d09da989c",
"text": "The rapidly increasing pervasiveness and integration of computers in human society calls for a broad discipline under which this development can be studied. We argue that to design and use technology one needs to develop and use models of humans and machines in all their aspects, including cognitive and memory models, but also social influence and (artificial) emotions. We call this wider discipline Behavioural Computer Science (BCS), and argue in this paper for why BCS models should unify (models of) the behaviour of humans and machines when designing information and communication technology systems. Thus, one main point to be addressed is the incorporation of empirical evidence for actual human behaviour, instead of making inferences about behaviour based on the rational agent model. Empirical studies can be one effective way to constantly update the behavioural models. We are motivated by the future advancements in artificial intelligence which will give machines capabilities that from many perspectives will be indistinguishable from those of humans. Such machine behaviour would be studied using BCS models, looking at questions about machine trust like “Can a self driving car trust its passengers?”, or artificial influence like “Can the user interface adapt to the user’s behaviour, and thus influence this behaviour?”. We provide a few directions for approaching BCS, focusing on modelling of human and machine behaviour, as well as their interaction.",
"title": ""
}
] | scidocsrr |
6ac2d3470e329368e2894162f61d34c1 | Android Mobile Phone Controlled Bluetooth Robot Using 8051 Microcontroller | [
{
"docid": "05e4cfafcef5ad060c1f10b9c6ad2bc0",
"text": "Mobile devices have been integrated into our everyday life. Consequently, home automation and security are becoming increasingly prominent features on mobile devices. In this paper, we have developed a security system that interfaces with an Android mobile device. The mobile device and security system communicate via Bluetooth because a short-range-only communications system was desired. The mobile application can be loaded onto any compatible device, and once loaded, interface with the security system. Commands to lock, unlock, or check the status of the door to which the security system is installed can be sent quickly from the mobile device via a simple, easy to use GUI. The security system then acts on these commands, taking the appropriate action and sending a confirmation back to the mobile device. The security system can also tell the user if the door is open. The door also incorporates a traditional lock and key interface in case the user loses the mobile device.",
"title": ""
}
] | [
{
"docid": "1b5450c2f21cab5117275b787413b3ad",
"text": "The security and privacy of the data transmitted is an important aspect of the exchange of information on the Internet network. Cryptography and Steganography are two of the most commonly used digital data security techniques. In this research, we proposed the combination of the cryptographic method with Data Encryption Standard (DES) algorithm and the steganographic method with Discrete Cosine Transform (DCT) to develop a digital data security application. The application can be used to secure document data in Word, Excel, Powerpoint or PDF format. Data encrypted with DES algorithm and further hidden in image cover using DCT algorithm. The results showed that the quality of the image that has been inserted (stego-image) is still in a good category with an average PSNR value of 46.9 dB. Also, the experiment results show that the average computational time of 0.75 millisecond/byte, an average size increase of 4.79 times and a success rate of 58%. This research can help solve the problem of data and information security that will be sent through a public network like the internet.",
"title": ""
},
{
"docid": "2698718d069ca73399eb28472a1bb686",
"text": "Process mining is a research discipline that aims to discover, monitor and improve real processing using event logs. In this paper we describe a novel approach that (i) identifies partial process models by exploiting sequential pattern mining and (ii) uses the additional information about the activities matching a partial process model to train nested prediction models from event logs. Models can be used to predict the next activity and completion time of a new (running) process instance. We compare our approach with a model based on Transition Systems implemented in the ProM5 Suite and show that the attributes in the event log can improve the accuracy of the model without decreasing performances. The experimental results show how our algorithm improves of a large margin ProM5 in predicting the completion time of a process, while it presents competitive results for next activity prediction.",
"title": ""
},
{
"docid": "c2e7425f719dd51eec0d8e180577269e",
"text": "Most important way of communication among humans is language and primary medium used for the said is speech. The speech recognizers make use of a parametric form of a signal to obtain the most important distinguishable features of speech signal for recognition purpose. In this paper, Linear Prediction Cepstral Coefficient (LPCC), Mel Frequency Cepstral Coefficient (MFCC) and Bark frequency Cepstral coefficient (BFCC) feature extraction techniques for recognition of Hindi Isolated, Paired and Hybrid words have been studied and the corresponding recognition rates are compared. Artifical Neural Network is used as back end processor. The experimental results show that the better recognition rate is obtained for MFCC as compared to LPCC and BFCC for all the three types of words.",
"title": ""
},
{
"docid": "33be5718d8a60f36e5faaa0cc4f0019f",
"text": "Most of our daily activities are now moving online in the big data era, with more than 25 billion devices already connected to the Internet, to possibly over a trillion in a decade. However, big data also bears a connotation of “big brother” when personal information (such as sales transactions) is being ubiquitously collected, stored, and circulated around the Internet, often without the data owner's knowledge. Consequently, a new paradigm known as online privacy or Internet privacy is becoming a major concern regarding the privacy of personal and sensitive data.",
"title": ""
},
{
"docid": "14e92e2c9cd31db526e084669d15903c",
"text": "This paper presents three building blocks for enabling the efficient and safe design of persistent data stores for emerging non-volatile memory technologies. Taking the fullest advantage of the low latency and high bandwidths of emerging memories such as phase change memory (PCM), spin torque, and memristor necessitates a serious look at placing these persistent storage technologies on the main memory bus. Doing so, however, introduces critical challenges of not sacrificing the data reliability and consistency that users demand from storage. This paper introduces techniques for (1) robust wear-aware memory allocation, (2) preventing of erroneous writes, and (3) consistency-preserving updates that are cache-efficient. We show through our evaluation that these techniques are efficiently implementable and effective by demonstrating a B+-tree implementation modified to make full use of our toolkit.",
"title": ""
},
{
"docid": "1cbd13de915d2a4cedd736345ebb2134",
"text": "This paper deals with the design and implementation of a nonlinear control algorithm for the attitude tracking of a four-rotor helicopter known as quadrotor. This algorithm is based on the second order sliding mode technique known as Super-Twisting Algorithm (STA) which is able to ensure robustness with respect to bounded external disturbances. In order to show the effectiveness of the proposed controller, experimental tests were carried out on a real quadrotor. The obtained results show the good performance of the proposed controller in terms of stabilization, tracking and robustness with respect to external disturbances.",
"title": ""
},
{
"docid": "93810dab9ff258d6e11edaffa1e4a0ff",
"text": "Ishaq, O. 2016. Image Analysis and Deep Learning for Applications in Microscopy. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1371. 76 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9567-1. Quantitative microscopy deals with the extraction of quantitative measurements from samples observed under a microscope. Recent developments in microscopy systems, sample preparation and handling techniques have enabled high throughput biological experiments resulting in large amounts of image data, at biological scales ranging from subcellular structures such as fluorescently tagged nucleic acid sequences to whole organisms such as zebrafish embryos. Consequently, methods and algorithms for automated quantitative analysis of these images have become increasingly important. These methods range from traditional image analysis techniques to use of deep learning architectures. Many biomedical microscopy assays result in fluorescent spots. Robust detection and precise localization of these spots are two important, albeit sometimes overlapping, areas for application of quantitative image analysis. We demonstrate the use of popular deep learning architectures for spot detection and compare them against more traditional parametric model-based approaches. Moreover, we quantify the effect of pre-training and change in the size of training sets on detection performance. Thereafter, we determine the potential of training deep networks on synthetic and semi-synthetic datasets and their comparison with networks trained on manually annotated real data. In addition, we present a two-alternative forced-choice based tool for assisting in manual annotation of real image data. On a spot localization track, we parallelize a popular compressed sensing based localization method and evaluate its performance in conjunction with different optimizers, noise conditions and spot densities. We investigate its sensitivity to different point spread function estimates. Zebrafish is an important model organism, attractive for whole-organism image-based assays for drug discovery campaigns. The effect of drug-induced neuronal damage may be expressed in the form of zebrafish shape deformation. First, we present an automated method for accurate quantification of tail deformations in multi-fish micro-plate wells using image analysis techniques such as illumination correction, segmentation, generation of branch-free skeletons of partial tail-segments and their fusion to generate complete tails. Later, we demonstrate the use of a deep learning-based pipeline for classifying micro-plate wells as either drug-affected or negative controls, resulting in competitive performance, and compare the performance from deep learning against that from traditional image analysis approaches.",
"title": ""
},
{
"docid": "a05eb1631da751562fd25913b578032a",
"text": "In this paper, we examine the intergenerational gaming practices of four generations of console gamers, from ages 3 to 83 and, in particular, the roles that gamers of different generations take on when playing together in groups. Our data highlight the extent to which existing gaming technologies support interactions within collocated intergenerational groups, and our analysis reveals a more generationally flexible suite of roles in these computer-mediated interactions than have been documented by previous studies of more traditional collocated, intergenerational interactions. Finally, we offer implications for game designers who wish to make console games more accessible to intergenerational groups.",
"title": ""
},
{
"docid": "31c62f403e6d7f06ff2ab028894346ff",
"text": "Automated text summarization is important to for humans to better manage the massive information explosion. Several machine learning approaches could be successfully used to handle the problem. This paper reports the results of our study to compare the performance between neural networks and support vector machines for text summarization. Both models have the ability to discover non-linear data and are effective model when dealing with large datasets.",
"title": ""
},
{
"docid": "e0211c2c024b1eb427a8d06f3421e0ba",
"text": "Within the past ten years, methods for automating the process of monitoring the behaviour of cattle have become increasingly important. Within the UK, there has been a steady decline in the number of milk producers and increased commercial pressures have forced increasing consolidation within dairy farming. As a result the average farm size has grown from around 90 to 160 cows. A direct consequence of these trends is that the farmers have less time to observe their herd and are increasingly reliant on technology to undertake this function, most readily underlined with the growth in the use of oestrus or ‘heat’ detection collars to assist in the optimisation of fertility. There is also a desire to derive additional information for collars that have to date been utilised solely to indicate the onset of heat The paper reports on the analysis of signatures obtained from an accelerometer based collar (Silent Herdsman) to identify both eating and rumination signatures, identified through a combination of frequency and statistical analysis. A range of post processing methods have been evaluated in order to determine the most appropriate for integration within a low power processor on the collar. Trials have been carried out using a rumination sensing halter to provide verification data. Analysis of this data over a period of several days, on a minute by minute basis has shown that it is possible to recover eating and rumination with sensitivity and positive predictive value greater than 85%.",
"title": ""
},
{
"docid": "971a0e51042e949214fd75ab6203e36a",
"text": "This paper presents an automatic recognition method for color text characters extracted from scene images, which is robust to strong distortions, complex background, low resolution and non uniform lightning. Based on a specific architecture of convolutional neural networks, the proposed system automatically learns how to recognize characters without making any assumptions, without applying any preprocessing or post-processing and without using tunable parameters. For this purpose, we use a training set of scene text images extracted from the ICDAR 2003 public training database. The proposed method is compared to recent character recognition techniques for scene images based on the ICDAR 2003 public samples dataset in order to contribute to the state-of-the-art method comparison efforts initiated in ICDAR 2003. Experimental results show an encouraging average recognition rate of 84.53%, ranging from 93.47% for clear images to 67.86% for seriously distorted images.",
"title": ""
},
{
"docid": "348f9c689c579cf07085b6e263c53ff5",
"text": "Over recent years, interest has been growing in Bitcoin, an innovation which has the potential to play an important role in e-commerce and beyond. The aim of our paper is to provide a comprehensive empirical study of the payment and investment features of Bitcoin and their implications for the conduct of ecommerce. Since network externality theory suggests that the value of a network and its take-up are interlinked, we investigate both adoption and price formation. We discover that Bitcoin returns are driven primarily by its popularity, the sentiment expressed in newspaper reports on the cryptocurrency, and total number of transactions. The paper also reports on the first global survey of merchants who have adopted this technology and model the share of sales paid for with this alternative currency, using both ordinary and Tobit regressions. Our analysis examines how country, customer and company-specific characteristics interact with the proportion of sales attributed to Bitcoin. We find that company features, use of other payment methods, customers’ knowledge about Bitcoin, as well as the size of both the official and unofficial economy are significant determinants. The results presented allow a better understanding of the practical and theoretical ramifications of this innovation.",
"title": ""
},
{
"docid": "4e685637bb976716b335ac2f52f03782",
"text": "Breast Cancer is becoming a leading cause of death among women in the whole world; meanwhile, it is confirmed that the early detection and accurate diagnosis of this disease can ensure a long survival of the patients. This paper work presents a disease status prediction employing a hybrid methodology to forecast the changes and its consequence that is crucial for lethal infections. To alarm the severity of the diseases, our strategy consists of two main parts: 1. Information Treatment and Option Extraction, and 2. Decision Tree-Support Vector Machine (DT-SVM) Hybrid Model for predictions. We analyse the breast Cancer data available from the Wisconsin dataset from UCI machine learning with the aim of developing accurate prediction models for breast cancer using data mining techniques. In this experiment, we compare three classifications techniques in Weka software and comparison results show that DTSVM has higher prediction accuracy than Instance-based learning (IBL), Sequential Minimal Optimization (SMO) and Naïve based classifiers. Index Terms breast cancer; classification; Decision TreeSupport Vector Machine, Naïve Bayes, Instance-based learning, Sequential Minimal Optimization, and weka;",
"title": ""
},
{
"docid": "27447d05f1e13e487a741fabc9059fa6",
"text": "Online communication media are being used increasingly for attempts to persuade message receivers. This paper presents a theoretical model that predicts outcomes of online persuasion based on the structure of primary and secondary goals message receivers hold toward the communication.",
"title": ""
},
{
"docid": "eb3a07c2295ba09c819c7a998b2fb337",
"text": "Recent advances have demonstrated the potential of network MIMO (netMIMO), which combines a practical number of distributed antennas as a virtual netMIMO AP (nAP) to improve spatial multiplexing of an WLAN. Existing solutions, however, either simply cluster nearby antennas as static nAPs, or dynamically cluster antennas on a per-packet basis so as to maximize the sum rate of the scheduled clients. To strike the balance between the above two extremes, in this paper, we present the design, implementation and evaluation of FlexNEMO, a practical two-phase netMIMO clustering system. Unlike previous per-packet clustering approaches, FlexNEMO only clusters antennas when client distribution and traffic pattern change, as a result being more practical to be implemented. A medium access control protocol is then designed to allow the clients at the center of nAPs to have a higher probability to gain access opportunities, but still ensure long-term fairness among clients. By combining on-demand clustering and priority-based access control, FlexNEMO not only improves antenna utilization, but also optimizes the channel condition for every individual client. We evaluated our design via both testbed experiments on USRPs and trace-driven emulations. The results demonstrate that FlexNEMO can deliver 94.7% and 93.7% throughput gains over static antenna clustering in a 4-antenna testbed and 16-antenna emulation, respectively.",
"title": ""
},
{
"docid": "6fec53c8c10c2e7114a1464b2b8e3024",
"text": "This paper provides generalized analysis of active filters used as electromagnetic interference (EMI) filters and active-power filters. Insertion loss and impedance increase of various types of active-filter topologies are described with applicable requirements and limitations as well as the rationale for selecting active-filter topology according to different applications.",
"title": ""
},
{
"docid": "e30cedcb4cb99c4c3b2743c5359cf823",
"text": "This paper presents a 116nW wake-up radio complete with crystal reference, interference compensation, and baseband processing, such that a selectable 31-bit code is required to toggle a wake-up signal. The front-end operates over a broad frequency range, tuned by an off-chip band-select filter and matching network, and is demonstrated in the 402-405MHz MICS band and the 915MHz and 2.4GHz ISM bands with sensitivities of -45.5dBm, -43.4dBm, and -43.2dBm, respectively. Additionally, the baseband processor implements automatic threshold feedback to detect the presence of interferers and dynamically adjust the receiver's sensitivity, mitigating the jamming problem inherent to previous energy-detection wake-up radios. The wake-up radio has a raw OOK chip-rate of 12.5kbps, an active area of 0.35mm2 and operates using a 1.2V supply for the crystal reference and RF demodulation, and a 0.5V supply for subthreshold baseband processing.",
"title": ""
},
{
"docid": "6c2ab35deb9bb61e23e95b6510134459",
"text": "ENSURING THAT ATHLETES FUNCTION OPTIMALLY THROUGHOUT TRAINING IS AN IMPORTANT FOCUS FOR THE STRENGTH AND CONDITIONING COACH. SLEEP IS AN INFLUENTIAL FACTOR THAT AFFECTS THE QUALITY OF TRAINING, GIVEN ITS IMPLICATIONS ON THE RECOVERY PROCESS. INTENSE TRAINING MAY PREDISPOSE ATHLETES TO RISK FACTORS SURROUNDING DISTURBED SLEEP PATTERNS. THESE MAY BE DUE TO INHERENT PHYSICAL EXERTION, COMMITMENT TO EXTENSIVE TRAINING SCHEDULES, THE EFFECTS OF TRAVEL, DOMESTIC OR INTERNATIONAL, AND THE PRESSURES THAT COMPETITION EVOKES. EDUCATING ATHLETES ON THE IMPLICATIONS OF SLEEP SHOULD BE IMPLEMENTED BY STRENGTH AND CONDITIONING COACHES TO OPTIMIZE ATHLETE RECOVERY, PROMOTE CONSISTENT SLEEP ROUTINES, AND SLEEP LENGTH.",
"title": ""
},
{
"docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc",
"text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. It is the reason that despite of the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternate approach for solving classification problems, specifically when higher forecasting",
"title": ""
},
{
"docid": "ef8a61d3ff3aad461c57fe893e0b5bb6",
"text": "In this paper, we propose an underwater wireless sensor network (UWSN) named SOUNET where sensor nodes form and maintain a tree-topological network for data gathering in a self-organized manner. After network topology discovery via packet flooding, the sensor nodes consistently update their parent node to ensure the best connectivity by referring to the timevarying neighbor tables. Such a persistent and self-adaptive method leads to high network connectivity without any centralized control, even when sensor nodes are added or unexpectedly lost. Furthermore, malfunctions that frequently happen in self-organized networks such as node isolation and closed loop are resolved in a simple way. Simulation results show that SOUNET outperforms other conventional schemes in terms of network connectivity, packet delivery ratio (PDR), and energy consumption throughout the network. In addition, we performed an experiment at the Gyeongcheon Lake in Korea using commercial underwater modems to verify that SOUNET works well in a real environment.",
"title": ""
}
] | scidocsrr |
1d1c7ed520b543c6c4fd71f0e3776c9d | Teachers' pedagogical beliefs and their use of digital media in classrooms: Sharpening the focus of the 'will, skill, tool' model and integrating teachers' constructivist orientations | [
{
"docid": "48dd3e8e071e7dd580ea42b528ee9427",
"text": "Information systems (IS) implementation is costly and has a relatively low success rate. Since the seventies, IS research has contributed to a better understanding of this process and its outcomes. The early efforts concentrated on the identification of factors that facilitated IS use. This produced a long list of items that proved to be of little practical value. It became obvious that, for practical reasons, the factors had to be grouped into a model in a way that would facilitate analysis of IS use. In 1985, Fred Davis suggested the technology acceptance model (TAM). It examines the mediating role of perceived ease of use and perceived usefulness in their relation between systems characteristics (external variables) and the probability of system use (an indicator of system success). More recently, Davis proposed a new version of his model: TAM2. It includes subjective norms, and was tested with longitudinal research designs. Overall the two explain about 40% of system’s use. Analysis of empirical research using TAM shows that results are not totally consistent or clear. This suggests that significant factors are not included in the models. We conclude that TAM is a useful model, but has to be integrated into a broader one which would include variables related to both human and social change processes, and to the adoption of the innovation model. # 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "ff0644de5cd474dbd858c96bb4c76fd9",
"text": "With the growth of the Internet of Things, many insecure embedded devices are entering into our homes and businesses. Some of these web-connected devices lack even basic security protections such as secure password authentication. As a result, thousands of IoT devices have already been infected with malware and enlisted into malicious botnets and many more are left vulnerable to exploitation. In this paper we analyze the practical security level of 16 popular IoT devices from high-end and low-end manufacturers. We present several low-cost black-box techniques for reverse engineering these devices, including software and fault injection based techniques for bypassing password protection. We use these techniques to recover device rmware and passwords. We also discover several common design aws which lead to previously unknown vulnerabilities. We demonstrate the e ectiveness of our approach by modifying a laboratory version of the Mirai botnet to automatically include these devices. We also discuss how to improve the security of IoT devices without signi cantly increasing their cost.",
"title": ""
},
{
"docid": "6aed31a677c2fca976c91c67abd1e7b1",
"text": "Facebook is the most popular Social Network Site (SNS) among college students. Despite the popularity and extensive use of Facebook by students, its use has not made significant inroads into classroom usage. In this study, we seek to examine why this is the case and whether it would be worthwhile for faculty to invest the time to integrate Facebook into their teaching. To this end, we decided to undertake a study with a sample of 214 undergraduate students at the University of Huelva (Spain). We applied the structural equation model specifically designed by Mazman and Usluel (2010) to identify the factors that may motivate these students to adopt and use social network tools, specifically Facebook, for educational purposes. According to our results, Social Influence is the most important factor in predicting the adoption of Facebook; students are influenced to adopt it to establish or maintain contact with other people with whom they share interests. Regarding the purposes of Facebook usage, Social Relations is perceived as the most important factor among all of the purposes collected. Our findings also revealed that the educational use of Facebook is explained directly by its purposes of usage and indirectly by its adoption. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e2459b9991cfda1e81119e27927140c5",
"text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process the video frame, and send the current frame and the markers transformation matrices to the client phone.",
"title": ""
},
{
"docid": "e79a335fb5dc6e2169484f8ac4130b35",
"text": "We obtained expressions for TE and TM modes of the planar hyperbolic secant (HS) waveguide. We found waveguide parameters for which the fundamental mode has minimal width. By FDTD-simulation we show propagation of TE-modes and periodical reconstruction of non-modal fields in bounded HS-waveguides. We show that truncated HS-waveguide focuses plane wave into spot with diameter 0.132 of wavelength.",
"title": ""
},
{
"docid": "b15b88a31cc1762618ca976bdf895d57",
"text": "How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixels) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead. Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.",
"title": ""
},
{
"docid": "4dbbcaf264cc9beda8644fa926932d2e",
"text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.",
"title": ""
},
{
"docid": "85856deb5bf7cafef8f68ad13414d4b1",
"text": "human health and safety, serving as an early-warning system for hazardous environmental conditions, such as poor air and water quality (e.g., Glasgow et al. 2004, Normander et al. 2008), and natural disasters, such as fires (e.g., Hefeeda and Bagheri 2009), floods (e.g., Young 2002), and earthquakes (e.g., Hart and Martinez 2006). Collectively, these changes in the technological landscape are altering the way that environmental conditions are monitored, creating a platform for new scientific discoveries (Porter et al. 2009). Although sensor networks can provide many benefits, they are susceptible to malfunctions that can result in lost or poor-quality data. Some level of sensor failure is inevitable; however, steps can be taken to minimize the risk of loss and to improve the overall quality of the data. In the ecological community, it has become common practice to post streaming sensor data online with limited or no quality control. That is, these data are often delivered to end users in a raw form, without any checks or evaluations having been performed. In such cases, the data are typically released provisionally with the understanding that they could change in the future. However, when provisional data are made publically available before they have been comprehensively checked, there is the potential for erroneous or misleading results. Streaming sensor networks have advanced ecological research by providing enormous quantities of data at fine temporal and spatial resolutions in near real time (Szewczyk et al. 2004, Porter et al. 2005, Collins et al. 2006). The advent of wireless technologies has enabled connections with sensors in remote locations, making it possible to transmit data instantaneously using communication devices such as cellular phones, radios, and local area networks. Advancements in cyberinfrastructure have improved data storage capacity, processing speed, and communication bandwidth, making it possible to deliver to end users the most current observations from sensors (e.g., within minutes after their collection). Recent technological developments have resulted in a new generation of in situ sensors that provide continuous data streams on the physical, chemical, optical, acoustical, and biological properties of ecosystems. These new types of sensors provide a window into natural patterns not obtainable with discrete measurements (Benson et al. 2010). Techniques for rapidly processing and interpreting digital data, such as webcam images in investigations of tree phenology (Richardson et al. 2009) and acoustic data in wildlife research (Szewczyk et al. 2004), have also enhanced our understanding of ecological processes. Access to near-real-time data has become important for",
"title": ""
},
{
"docid": "c429bf418a4ecbd56c7b2ab6f4ca3cd6",
"text": "The Internet exhibits a gigantic measure of helpful data which is generally designed for its users, which makes it hard to extract applicable information from different sources. Accordingly, the accessibility of strong, adaptable Information Extraction framework that consequently concentrate structured data such as, entities, relationships between entities, and attributes from unstructured or semi-structured sources. But somewhere during extraction of information may lead to the loss of its meaning, which is absolutely not feasible. Semantic Web adds solution to this problem. It is about providing meaning to the data and allow the machine to understand and recognize these augmented data more accurately. The proposed system is about extracting information from research data of IT domain like journals of IEEE, Springer, etc., which aid researchers and the organizations to get the data of journals in an optimized manner so the time and hard work of surfing and reading the entire journal's papers or articles reduces. Also the accuracy of the system is taken care of using RDF, the data extracted has a specific declarative semantics so that the meaning of the research papers or articles during extraction remains unchanged. In addition, the same approach shall be applied on multiple documents, so that time factor can get saved.",
"title": ""
},
{
"docid": "bf5cedb076c779157e1c1fbd4df0adc9",
"text": "Generating novel graph structures that optimize given objectives while obeying some given underlying rules is fundamental for chemistry, biology and social science research. This is especially important in the task of molecular graph generation, whose goal is to discover novel molecules with desired properties such as drug-likeness and synthetic accessibility, while obeying physical laws such as chemical valency. However, designing models to find molecules that optimize desired properties while incorporating highly complex and non-differentiable rules remains to be a challenging task. Here we propose Graph Convolutional Policy Network (GCPN), a general graph convolutional network based model for goaldirected graph generation through reinforcement learning. The model is trained to optimize domain-specific rewards and adversarial loss through policy gradient, and acts in an environment that incorporates domain-specific rules. Experimental results show that GCPN can achieve 61% improvement on chemical property optimization over state-of-the-art baselines while resembling known molecules, and achieve 184% improvement on the constrained property optimization task.",
"title": ""
},
{
"docid": "a6a7770857964e96f98bd4021d38f59f",
"text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.",
"title": ""
},
{
"docid": "aafae4864d274540d0f80842970c7eac",
"text": "Fraud is increasing with the extensive use of internet and the increase of online transactions. More advanced solutions are desired to protect financial service companies and credit card holders from constantly evolving online fraud attacks. The main objective of this paper is to construct an efficient fraud detection system which is adaptive to the behavior changes by combining classification and clustering techniques. This is a two stage fraud detection system which compares the incoming transaction against the transaction history to identify the anomaly using BOAT algorithm in the first stage. In second stage to reduce the false alarm rate suspected anomalies are checked with the fraud history database and make sure that the detected anomalies are due to fraudulent transaction or any short term change in spending profile. In this work BOAT supports incremental update of transactional database and it handles maximum fraud coverage with high speed and less cost. Proposed model is evaluated on both synthetically generated and real life data and shows very good accuracy in detecting fraud transaction.",
"title": ""
},
{
"docid": "a19c27371c6bf366fddabc2fd3f277b7",
"text": "Simultaneous sparse coding (SSC) or nonlocal image representation has shown great potential in various low-level vision tasks, leading to several state-of-the-art image restoration techniques, including BM3D and LSSC. However, it still lacks a physically plausible explanation about why SSC is a better model than conventional sparse coding for the class of natural images. Meanwhile, the problem of sparsity optimization, especially when tangled with dictionary learning, is computationally difficult to solve. In this paper, we take a low-rank approach toward SSC and provide a conceptually simple interpretation from a bilateral variance estimation perspective, namely that singular-value decomposition of similar packed patches can be viewed as pooling both local and nonlocal information for estimating signal variances. Such perspective inspires us to develop a new class of image restoration algorithms called spatially adaptive iterative singular-value thresholding (SAIST). For noise data, SAIST generalizes the celebrated BayesShrink from local to nonlocal models; for incomplete data, SAIST extends previous deterministic annealing-based solution to sparsity optimization through incorporating the idea of dictionary learning. In addition to conceptual simplicity and computational efficiency, SAIST has achieved highly competent (often better) objective performance compared to several state-of-the-art methods in image denoising and completion experiments. Our subjective quality results compare favorably with those obtained by existing techniques, especially at high noise levels and with a large amount of missing data.",
"title": ""
},
{
"docid": "1831e2a5a75fc85299588323d68947b2",
"text": "The Transaction Processing Performance Council (TPC) is completing development of TPC-DS, a new generation industry standard decision support benchmark. The TPC-DS benchmark, first introduced in the “The Making of TPC-DS” [9] paper at the 32 International Conference on Very Large Data Bases (VLDB), has now entered the TPC’s “Formal Review” phase for new benchmarks; companies and researchers alike can now download the draft benchmark specification and tools for evaluation. The first paper [9] gave an overview of the TPC-DS data model, workload model, and execution rules. This paper details the characteristics of different phases of the workload, namely: database load, query workload and data maintenance; and also their impact to the benchmark’s performance metric. As with prior TPC benchmarks, this workload will be widely used by vendors to demonstrate their capabilities to support complex decision support systems, by customers as a key factor in purchasing servers and software, and by the database community for research and development of optimization techniques.",
"title": ""
},
{
"docid": "797166b4c68bcdc7a8860462117e2051",
"text": "In this paper we propose a novel feature descriptor Extended Co-occurrence HOG (ECoHOG) and integrate it with dense point trajectories demonstrating its usefulness in fine grained activity recognition. This feature is inspired by original Co-occurrence HOG (CoHOG) that is based on histograms of occurrences of pairs of image gradients in the image. Instead relying only on pure histograms we introduce a sum of gradient magnitudes of co-occurring pairs of image gradients in the image. This results in giving the importance to the object boundaries and straightening the difference between the moving foreground and static background. We also couple ECoHOG with dense point trajectories extracted using optical flow from video sequences and demonstrate that they are extremely well suited for fine grained activity recognition. Using our feature we outperform state of the art methods in this task and provide extensive quantitative evaluation.",
"title": ""
},
{
"docid": "6a1ade9670c8ee161209d54901318692",
"text": "The motion of a plane can be described by a homography. We study how to parameterize homographies to maximize plane estimation performance. We compare the usual 3 × 3 matrix parameterization with a parameterization that combines 4 fixed points in one of the images with 4 variable points in the other image. We empirically show that this 4pt parameterization is far superior. We also compare both parameterizations with a variety of direct parameterizations. In the case of unknown relative orientation, we compare with a direct parameterization of the plane equation, and the rotation and translation of the camera(s). We show that the direct parameteri-zation is both less accurate and far less robust than the 4-point parameterization. We explain the poor performance using a measure of independence of the Jacobian images. In the fully calibrated setting, the direct parameterization just consists of 3 parameters of the plane equation. We show that this parameterization is far more robust than the 4-point parameterization, but only approximately as accurate. In the case of a moving stereo rig we find that the direct parameterization of plane equation, camera rotation and translation performs very well, both in terms of accuracy and robustness. This is in contrast to the corresponding direct parameterization in the case of unknown relative orientation. Finally, we illustrate the use of plane estimation in 2 automotive applications.",
"title": ""
},
{
"docid": "90dc36628f9262157ea8722d82830852",
"text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of Interest, and does not present every relevant result.",
"title": ""
},
{
"docid": "0f10bb2afc1797fad603d8c571058ecb",
"text": "This paper presents findings from the All Wales Hate Crime Project. Most hate crime research has focused on discrete victim types in isolation. For the first time, internationally, this paper examines the psychological and physical impacts of hate crime across seven victim types drawing on quantitative and qualitative data. It contributes to the hate crime debate in two significant ways: (1) it provides the first look at the problem in Wales and (2) it provides the first multi-victim-type analysis of hate crime, showing that impacts are not homogenous across victim groups. The paper provides empirical credibility to the impacts felt by hate crime victims on the margins who have routinely struggled to gain support.",
"title": ""
},
{
"docid": "a4dea5e491657e1ba042219401ebcf39",
"text": "Beam scanning arrays typically suffer from scan loss; an increasing degradation in gain as the beam is scanned from broadside toward the horizon in any given scan plane. Here, a metasurface is presented that reduces the effects of scan loss for a leaky-wave antenna (LWA). The metasurface is simple, being composed of an ultrathin sheet of subwavelength split-ring resonators. The leaky-wave structure is balanced, scanning from the forward region, through broadside, and into the backward region, and designed to scan in the magnetic plane. The metasurface is effectively invisible at broadside, where balanced LWAs are most sensitive to external loading. It is shown that the introduction of the metasurface results in increased directivity, and hence, gain, as the beam is scanned off broadside, having an increasing effect as the beam is scanned to the horizon. Simulations show that the metasurface improves the effective aperture distribution at higher scan angles, resulting in a more directive main beam, while having a negligible impact on cross-polarization gain. Experimental validation results show that the scan range of the antenna is increased from $-39 {^{\\circ }} \\leq \\theta \\leq +32 {^{\\circ }}$ to $-64 {^{\\circ }} \\leq \\theta \\leq +70 {^{\\circ }}$ , when loaded with the metasurface, demonstrating a flattened gain profile over a 135° range centered about broadside. Moreover, this scan range occurs over a frequency band spanning from 9 to 15.5 GHz, demonstrating a relative bandwidth of 53% for the metasurface.",
"title": ""
},
{
"docid": "16b5c5d176f2c9292d9c9238769bab31",
"text": "We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.",
"title": ""
},
{
"docid": "2a4201c5789a546edf8944acbcf99546",
"text": "Relation extraction models based on deep learning have been attracting a lot of attention recently. Little research is carried out to reduce their need of labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half or even less training data to achieve equivalent performance as the same models without pre-training.",
"title": ""
}
] | scidocsrr |
af6b29a103dba800f2fec5f4f879c16a | Most liked, fewest friends: patterns of enterprise social media use | [
{
"docid": "d489bd0fbf14fdad30b5a59190c86078",
"text": "This research investigates two competing hypotheses from the literature: 1) the Social Enhancement (‘‘Rich Get Richer’’) hypothesis that those more popular offline augment their popularity by increasing it on Facebook , and 2) the ‘‘Social Compensation’’ (‘‘Poor Get Richer’’) hypothesis that users attempt to increase their Facebook popularity to compensate for inadequate offline popularity. Participants (n= 614) at a large, urban university in the Midwestern United States completed an online survey. Results are that a subset of users, those more extroverted and with higher self-esteem, support the Social Enhancement hypothesis, being more popular both offline and on Facebook . Another subset of users, those less popular offline, support the Social Compensation hypotheses because they are more introverted, have lower self-esteem and strive more to look popular on Facebook . Semantic network analysis of open-ended responses reveals that these two user subsets also have different meanings for offline and online popularity. Furthermore, regression explains nearly twice the variance in offline popularity as in Facebook popularity, indicating the latter is not as socially grounded or defined as offline popularity.",
"title": ""
}
] | [
{
"docid": "f3471acc1405bbd9546cc8ec42267053",
"text": "The authors examined the association between semen quality and caffeine intake among 2,554 young Danish men recruited when they were examined to determine their fitness for military service in 2001-2005. The men delivered a semen sample and answered a questionnaire including information about caffeine intake from various sources, from which total caffeine intake was calculated. Moderate caffeine and cola intakes (101-800 mg/day and < or =14 0.5-L bottles of cola/week) compared with low intake (< or =100 mg/day, no cola intake) were not associated with semen quality. High cola (>14 0.5-L bottles/week) and/or caffeine (>800 mg/day) intake was associated with reduced sperm concentration and total sperm count, although only significant for cola. High-intake cola drinkers had an adjusted sperm concentration and total sperm count of 40 mill/mL (95% confidence interval (CI): 32, 51) and 121 mill (95% CI: 92, 160), respectively, compared with 56 mill/mL (95% CI: 50, 64) and 181 mill (95% CI: 156, 210) in non-cola-drinkers, which could not be attributed to the caffeine they consumed because it was <140 mg/day. Therefore, the authors cannot exclude the possibility of a threshold above which cola, and possibly caffeine, negatively affects semen quality. Alternatively, the less healthy lifestyle of these men may explain these findings.",
"title": ""
},
{
"docid": "f68b11af8958117f75fc82c40c51c395",
"text": "Uncertainty accompanies our life processes and covers almost all fields of scientific studies. Two general categories of uncertainty, namely, aleatory uncertainty and epistemic uncertainty, exist in the world. While aleatory uncertainty refers to the inherent randomness in nature, derived from natural variability of the physical world (e.g., random show of a flipped coin), epistemic uncertainty origins from human's lack of knowledge of the physical world, as well as ability of measuring and modeling the physical world (e.g., computation of the distance between two cities). Different kinds of uncertainty call for different handling methods. Aggarwal, Yu, Sarma, and Zhang et al. have made good surveys on uncertain database management based on the probability theory. This paper reviews multidisciplinary uncertainty processing activities in diverse fields. Beyond the dominant probability theory and fuzzy theory, we also review information-gap theory and recently derived uncertainty theory. Practices of these uncertainty handling theories in the domains of economics, engineering, ecology, and information sciences are also described. It is our hope that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.",
"title": ""
},
{
"docid": "d8e1410ec6573bd1fa09091e123f53be",
"text": "In the last years the protection and safeguarding of cultural heritage has become a key issue of European cultural policy and this applies not only to tangible artefacts (monuments, sites, etc.), but also to intangible cultural expressions (singing, dancing, etc.). The i-Treasures project focuses on some Intangible Cultural Heritages (ICH) and investigates whether and to what extent new technology can play a role in the preservation and dissemination of these expressions. To this aim, the project will develop a system, based on cutting edge technology and sensors, that digitally captures the performances of living human treasures, analyses the digital information to semantically index the performances and their constituting elements, and builds an educational platform on top of the semantically indexed content. The main purpose of this paper is to describe how the user requirements of this system were defined. The requirements definition process was based on a participatory approach, where ICH experts, performers and users were actively involved through surveys and interviews, and extensively collaborated in the complex tasks of identifying specificities of rare traditional know-how, discovering existing teaching and learning practices and finally identifying the most cutting edge technologies able to support innovative teaching and learning approaches to ICH.",
"title": ""
},
{
"docid": "b7189c1b1dc625fb60a526d81c0d0a89",
"text": "This paper presents a development of an anthropomorphic robot hand, `KITECH Hand' that has 4 full-actuated fingers. Most robot hands have small size simultaneously many joints as compared with robot manipulators. Components of actuator, gear, and sensors used for building robots are not small and are expensive, and those make it difficult to build a small sized robot hand. Differently from conventional development of robot hands, KITECH hand adopts a RC servo module that is cheap, easily obtainable, and easy to handle. The RC servo module that have been already used for several small sized humanoid can be new solution of building small sized robot hand with many joints. The feasibility of KITECH hand in object manipulation is shown through various experimental results. It is verified that the modified RC servo module is one of effective solutions in the development of a robot hand.",
"title": ""
},
{
"docid": "3b2376110b0e6949379697b7ba6730b5",
"text": "............................................................................................................................... i Acknowledgments............................................................................................................... ii Table of",
"title": ""
},
{
"docid": "40fbee18e4b0eca3f2b9ad69119fec5d",
"text": "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.",
"title": ""
},
{
"docid": "75642d6a79f6b9bb8b02f6d8ded6a370",
"text": "Spectral indices as a selection tool in plant breeding could improve genetic gains for different important traits. The objectives of this study were to assess the potential of using spectral reflectance indices (SRI) to estimate genetic variation for in-season biomass production, leaf chlorophyll, and canopy temperature (CT) in wheat (Triticum aestivum L.) under irrigated conditions. Three field experiments, GHIST (15 CIMMYT globally adapted historic genotypes), RILs1 (25 recombinant inbred lines [RILs]), and RILs2 (36 RILs) were conducted under irrigated conditions at the CIMMYT research station in northwest Mexico in three different years. Five SRI were evaluated to differentiate genotypes for biomass production. In general, genotypic variation for all the indices was significant. Near infrared radiation (NIR)–based indices gave the highest levels of associationwith biomass production and the higher associations were observed at heading and grainfilling, rather than at booting. Overall, NIR-based indices were more consistent and differentiated biomass more effectively compared to the other indices. Indices based on ratio of reflection spectra correlatedwith SPADchlorophyll values, and the associationwas stronger at the generative growth stages. These SRI also successfully differentiated the SPAD values at the genotypic level. The NIR-based indices showed a strong and significant association with CT at the heading and grainfilling stages. These results demonstrate the potential of using SRI as a breeding tool to select for increased genetic gains in biomass and chlorophyll content, plus for cooler canopies. SIGNIFICANT PROGRESS in grain yield of spring wheat under irrigated conditions has been made through the classical breeding approach (Slafer et al., 1994), even though the genetic basis of yield improvement in wheat is not well established (Reynolds et al., 1999). Several authors have reported that progress in grain yield is mainly attributed to better partitioning of photosynthetic products (Waddington et al., 1986; Calderini et al., 1995; Sayre et al., 1997). The systematic increase in the partitioning of assimilates (harvest index) has a theoretical upper limit of approximately 60% (Austin et al., 1980). Further yield increases in wheat through improvement in harvest index will be limited without a further increase in total crop biomass (Austin et al., 1980; Slafer and Andrade, 1991; Reynolds et al., 1999). Though until relatively recently biomass was not commonly associated with yield gains, increases in biomass of spring wheat have been reported (Waddington et al., 1986; Sayre et al., 1997) and more recently in association with yield increases (Singh et al., 1998; Reynolds et al., 2005; Shearman et al., 2005). Thus, a breeding approach is needed that will select genotypes with higher biomass capacity, while maintaining the high partitioning rate of photosynthetic products. Direct estimation of biomass is a timeand laborintensive undertaking. Moreover, destructive in-season sampling involves large sampling errors (Whan et al., 1991) and reduces the final area for estimation of grain yield and final biomass. Regan et al. (1992) demonstrated a method to select superior genotypes of spring wheat for early vigor under rainfed conditions using a destructive sampling technique, but such sampling is impossible for breeding programs where a large number of genotypes are being screened for various desirable traits. 
Spectral reflectance indices are a potentially rapid technique that could assess biomass at the genotypic level without destructive sampling (Elliott and Regan, 1993; Smith et al., 1993; Bellairs et al., 1996; Peñuelas et al., 1997). Canopy light reflectance properties based mainly on the absorption of light at a specific wavelength are associated with specific plant characteristics. The spectral reflectance in the visible (VIS) wavelengths (400–700 nm) depends on the absorption of light by leaf chlorophyll and associated pigments such as carotenoid and anthocyanins. The reflectance of the VIS wavelengths is relatively low because of the high absorption of light energy by these pigments. In contrast, the reflectance of the NIR wavelengths (700–1300 nm) is high, since it is not absorbed by plant pigments and is scattered by plant tissue at different levels in the canopy, such that much of it is reflected back rather than being absorbed by the soil (Knipling, 1970). Spectral reflectance indices were developed on the basis of simple mathematical formula, such as ratios or differences between the reflectance at given wavelengths (Araus et al., 2001). Simple ratio (SR = NIR/VIS) and the normalized difference vegetation",
"title": ""
},
{
"docid": "d725c63647485fd77412f16e1f6485f2",
"text": "The ongoing discussions about a „digital revolution― and ―disruptive competitive advantages‖ have led to the creation of such a business vision as ―Industry 4.0‖. Yet, the term and even more its actual impact on businesses is still unclear.This paper addresses this gap and explores more specifically, the consequences and potentials of Industry 4.0 for the procurement, supply and distribution management functions. A blend of literature-based deductions and results from a qualitative study are used to explore the phenomenon.The findings indicate that technologies of Industry 4.0 legitimate the next level of maturity in procurement (Procurement &Supply Management 4.0). Empirical findings support these conceptual considerations, revealing the ambitious expectations.The sample comprises seven industries and the employed method is qualitative (telephone and face-to-face interviews). The empirical findings are only a basis for further quantitative investigation , however, they support the necessity and existence of the maturity level. The findings also reveal skepticism due to high investment costs but also very high expectations. As recent studies about digitalization are rather rare in the context of single company functions, this research work contributes to the understanding of digitalization and supply management.",
"title": ""
},
{
"docid": "bf57a5fcf6db7a9b26090bd9a4b65784",
"text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.",
"title": ""
},
{
"docid": "a9709367bc84ececd98f65ed7359f6b0",
"text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.",
"title": ""
},
{
"docid": "fc55bae802e8b82f79bbb381f7bcf30b",
"text": "In order to improve the efficiency of Apriori algorithm for mining frequent item sets, MH-Apriori algorithm was designed for big data to address the poor efficiency problem. MH-Apriori takes advantages of MapReduce and HBase together to optimize Apriori algorithm. Compared with the improved Apriori algorithm simply based on MapReduce framework, timestamp of HBase is utilized in this algorithm to avoid generating a large number of key/value pairs. It saves the pattern matching time and scans the database only once. Also, to obtain transaction marks automatically, transaction mark column is added to set list for computing support numbers. MH-Apriori was executed on Hadoop platform. The experimental results show that MH-Apriori has higher efficiency and scalability.",
"title": ""
},
{
"docid": "565efa7a51438990b3d8da6222dca407",
"text": "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.",
"title": ""
},
{
"docid": "310aa0a02f8fc8b7b6d31c987a12a576",
"text": "We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.",
"title": ""
},
{
"docid": "eb5f2e4a1b01a67516089bbfecc0ab8a",
"text": "With the fast development of digital systems and concomitant information technologies, there is certainly an incipient spirit in the extensive overall economy to put together digital Customer Relationship Management (CRM) systems. This slanting is further more palpable in the telecommunications industry, in which businesses turn out to be increasingly digitalized. Customer churn prediction is a foremost aspect of a contemporary telecom CRM system. Churn prediction model leads the customer relationship management to retain the customers who will be possible to give up. Currently scenario, a lot of outfit and monitored classifiers and data mining techniques are employed to model the churn prediction in telecom. Within this paper, Kernelized Extreme Learning Machine (KELM) algorithm is proposed to categorize customer churn patterns in telecom industry. The primary strategy of proposed work is organized the data from telecommunication mobile customer’s dataset. The data preparation is conducted by using preprocessing with Expectation Maximization (EM) clustering algorithm. After that, customer churn behavior is examined by using Naive Bayes Classifier (NBC) in accordance with the four conditions like customer dissatisfaction (H1), switching costs (H2), service usage (H3) and customer status (H4). The attributes originate from call details and customer profiles which is enhanced the precision of customer churn prediction in the telecom industry. The attributes are measured using BAT algorithm and KELM algorithm used for churn prediction. The experimental results prove that proposed model is better than AdaBoost and Hybrid Support Vector Machine (HSVM) models in terms of the performance of ROC, sensitivity, specificity, accuracy and processing time.",
"title": ""
},
{
"docid": "5d1fbf1b9f0529652af8d28383ce9a34",
"text": "Automatic License Plate Recognition (ALPR) is one of the most prominent tools in intelligent transportation system applications. In ALPR algorithm implementation, License Plate Detection (LPD) is a critical stage. Despite many state-of-the-art researches, some parameters such as low/high illumination, type of camera, or a different style of License Plate (LP) causes LPD step is still a challenging problem. In this paper, we propose a new style-free method based on the cross power spectrum. Our method has three steps; designing adaptive binarized filter, filtering using cross power spectrum and verification. Experimental results show that the recognition accuracy of the proposed approach is 98% among 2241 Iranian cars images including two styles of the LP. In addition, the process of the plate detection takes 44 milliseconds, which is suitable for real-time processing.",
"title": ""
},
{
"docid": "11c7fba6fcbf36cc1187c1cfd07c91f9",
"text": "We describe a real-time bidding algorithm for performance-based display ad allocation. A central issue in performance display advertising is matching campaigns to ad impressions, which can be formulated as a constrained optimization problem that maximizes revenue subject to constraints such as budget limits and inventory availability. The current practice is to solve the optimization problem offline at a tractable level of impression granularity (e.g., the page level), and to serve ads online based on the precomputed static delivery scheme. Although this offline approach takes a global view to achieve optimality, it fails to scale to ad allocation at the individual impression level. Therefore, we propose a real-time bidding algorithm that enables fine-grained impression valuation (e.g., targeting users with real-time conversion data), and adjusts value-based bids according to real-time constraint snapshots (e.g., budget consumption levels). Theoretically, we show that under a linear programming (LP) primal-dual formulation, the simple real-time bidding algorithm is indeed an online solver to the original primal problem by taking the optimal solution to the dual problem as input. In other words, the online algorithm guarantees the offline optimality given the same level of knowledge an offline optimization would have. Empirically, we develop and experiment with two real-time bid adjustment approaches to adapting to the non-stationary nature of the marketplace: one adjusts bids against real-time constraint satisfaction levels using control-theoretic methods, and the other adjusts bids also based on the statistically modeled historical bidding landscape. Finally, we show experimental results with real-world ad delivery data that support our theoretical conclusions.",
"title": ""
},
{
"docid": "5158b5da8a561799402cb1ef3baa3390",
"text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.",
"title": ""
},
{
"docid": "c04065ff9cbeba50c0d70e30ab2e8b53",
"text": "A linear model is suggested for the influence of covariates on the intensity function. This approach is less vulnerable than the Cox model to problems of inconsistency when covariates are deleted or the precision of covariate measurements is changed. A method of non-parametric estimation of regression functions is presented. This results in plots that may give information on the change over time in the influence of covariates. A test method and two goodness of fit plots are also given. The approach is illustrated by simulation as well as by data from a clinical trial of treatment of carcinoma of the oropharynx.",
"title": ""
},
{
"docid": "8e071cfeaf33444e9f85f6bfcb8fa51b",
"text": "BACKGROUND\nLutein is a carotenoid that may play a role in eye health. Human milk typically contains higher concentrations of lutein than infant formula. Preliminary data suggest there are differences in serum lutein concentrations between breastfed and formula-fed infants.\n\n\nAIM OF THE STUDY\nTo measure the serum lutein concentrations among infants fed human milk or formulas with and without added lutein.\n\n\nMETHODS\nA prospective, double-masked trial was conducted in healthy term formula-fed infants (n = 26) randomized between 9 and 16 days of age to study formulas containing 20 (unfortified), 45, 120, and 225 mcg/l of lutein. A breastfed reference group was studied (n = 14) and milk samples were collected from their mothers. Primary outcome was serum lutein concentration at week 12.\n\n\nRESULTS\nGeometric mean lutein concentration of human milk was 21.1 mcg/l (95% CI 14.9-30.0). At week 12, the human milk group had a sixfold higher geometric mean serum lutein (69.3 mcg/l; 95% CI 40.3-119) than the unfortified formula group (11.3 mcg/l; 95% CI 8.1-15.8). Mean serum lutein increased from baseline in each formula group except the unfortified group. Linear regression equation indicated breastfed infants had a greater increase in serum lutein (slope 3.7; P < 0.001) per unit increase in milk lutein than formula-fed infants (slope 0.9; P < 0.001).\n\n\nCONCLUSIONS\nBreastfed infants have higher mean serum lutein concentrations than infants who consume formula unfortified with lutein. These data suggest approximately 4 times more lutein is needed in infant formula than in human milk to achieve similar serum lutein concentrations among breastfed and formula fed infants.",
"title": ""
}
] | scidocsrr |
73a537f621468311eabaa37761cef16e | Self-Organizing Scheme Based on NFV and SDN Architecture for Future Heterogeneous Networks | [
{
"docid": "4d66a85651a78bfd4f7aba290c21f9a7",
"text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.",
"title": ""
}
] | [
{
"docid": "f452650f3b003e6cd35d0303823e9277",
"text": "With the cloud storage services, users can easily form a group and share data with each other. Given the fact that the cloud is not trustable, users need to compute signatures for blocks of the shared data to allow public integrity auditing. Once a user is revoked from the group, the blocks that were previously signed by this revoked user must be re-signed by an existing user, which may result in heavy communication and computation cost for the user. Proxy re-signatures can be used here to allow the cloud to do the re-signing work on behalf of the group. However, a malicious cloud is able to use the re-signing keys to arbitrarily convert signatures from one user to another deliberately. Moreover, collusions between revoked users and a malicious cloud will disclose the secret values of the existing users. In this paper, we propose a novel public auditing scheme for the integrity of shared data with efficient and collusion-resistant user revocation utilizing the concept of Shamir secret sharing. Besides, our scheme also supports secure and efficient public auditing due to our improved polynomial-based authentication tags. The numerical analysis and experimental results demonstrate that our proposed scheme is provably secure and highly efficient.",
"title": ""
},
{
"docid": "6a4638a12c87b470a93e0d373a242868",
"text": "Unfortunately, few of today’s classrooms focus on helping students develop as creative thinkers. Even students who perform well in school are often unprepared for the challenges that they encounter after graduation, in their work lives as well as their personal lives. Many students learn to solve specific types of problems, but they are unable to adapt and improvise in response to the unexpected situations that inevitably arise in today’s fast-changing world.",
"title": ""
},
{
"docid": "9648c6cbdd7a04c595b7ba3310f32980",
"text": "Increase in identity frauds, crimes, security there is growing need of fingerprint technology in civilian and law enforcement applications. Partial fingerprints are of great interest which are either found at crime scenes or resulted from improper scanning. These fingerprints are poor in quality and the number of features present depends on size of fingerprint. Due to the lack of features such as core and delta, general fingerprint matching algorithms do not perform well for partial fingerprint matching. By using combination of level1 and level 2 features accuracy of partial matching cannot be increased. Therefore, we utilize extended features in combination with other feature set. Efficacious fusion methods for coalesce of different modality systems perform better for these types of prints. In this paper, we propose a method for partial fingerprint matching using score level fusion of minutiae based radon transform and pores based LBP extraction. To deal with broken ridges and fragmentary information, radon transform is used to get local information around minutiae. Finally, we evaluate the performance by comparing Equal Error Rate (ERR) of proposed method and existing method and proposed method reduces the error rate to 1.84%.",
"title": ""
},
{
"docid": "23aa04378f4eed573d1290c6bb9d3670",
"text": "The ability to compare systems from the same domain is of central importance for their introduction into complex applications. In the domains of named entity recognition and entity linking, the large number of systems and their orthogonal evaluation w.r.t. measures and datasets has led to an unclear landscape regarding the abilities and weaknesses of the different approaches. We present GERBIL—an improved platform for repeatable, storable and citable semantic annotation experiments— and its extension since being release. GERBIL has narrowed this evaluation gap by generating concise, archivable, humanand machine-readable experiments, analytics and diagnostics. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools on multiple datasets. By these means, we aim to ensure that both tool developers and end users can derive meaningful insights into the extension, integration and use of annotation applications. In particular, GERBIL provides comparable results to tool developers, simplifying the discovery of strengths and weaknesses of their implementations with respect to the state-of-the-art. With the permanent experiment URIs provided by our framework, we ensure the reproducibility and archiving of evaluation results. Moreover, the framework generates data in a machine-processable format, allowing for the efficient querying and postprocessing of evaluation results. Additionally, the tool diagnostics provided by GERBIL provide insights into the areas where tools need further refinement, thus allowing developers to create an informed agenda for extensions and end users to detect the right tools for their purposes. Finally, we implemented additional types of experiments including entity typing. GERBIL aims to become a focal point for the state-of-the-art, driving the research agenda of the community by presenting comparable objective evaluation results. Furthermore, we tackle the central problem of the evaluation of entity linking, i.e., we answer the question of how an evaluation algorithm can compare two URIs to each other without being bound to a specific knowledge base. Our approach to this problem opens a way to address the deprecation of URIs of existing gold standards for named entity recognition and entity linking, a feature which is currently not supported by the state-of-the-art. We derived the importance of this feature from usage and dataset requirements collected from the GERBIL user community, which has already carried out more than 24.000 single evaluations using our framework. Through the resulting updates, GERBIL now supports 8 tasks, 46 datasets and 20 systems.",
"title": ""
},
{
"docid": "f3b4a9b49a34d56c32589cee14e6b900",
"text": "The paper reports on mobile robot motion estimation based on matching points from successive two-dimensional (2D) laser scans. This ego-motion approach is well suited to unstructured and dynamic environments because it directly uses raw laser points rather than extracted features. We have analyzed the application of two methods that are very different in essence: (i) A 2D version of iterative closest point (ICP), which is widely used for surface registration; (ii) a genetic algorithm (GA), which is a novel approach for this kind of problem. Their performance in terms of real-time applicability and accuracy has been compared in outdoor experiments with nonstop motion under diverse realistic navigation conditions. Based on this analysis, we propose a hybrid GA-ICP algorithm that combines the best characteristics of these pure methods. The experiments have been carried out with the tracked mobile robot Auriga-alpha and an on-board 2D laser scanner. _____________________________________________________________________________________ This document is a PREPRINT. The published version of the article is available in: Journal of Field Robotics, 23: 21–34. doi: 10.1002/rob.20104; http://dx.doi.org/10.1002/rob.20104.",
"title": ""
},
{
"docid": "15657f493da77021df3406868e6949ff",
"text": "Brushless dc motors controlled by Hall-effect sensors are used in variety of applications, wherein the Hall sensors should be placed 120 electrical degrees apart. This is difficult to achieve in practice especially in low-precision motors, which leads to unsymmetrical operation of the inverter/motor phases. To mitigate this phenomenon, an approach of filtering the Hall-sensor signals has been recently proposed. This letter extends the previous work and presents a very efficient digital implementation of such filters that can be easily included into various brushless dc motor-drive systems for restoring their operation in steady state and transients.",
"title": ""
},
{
"docid": "41b83a85c1c633785766e3f464cbd7a6",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
},
{
"docid": "fe31348bce3e6e698e26aceb8e99b2d8",
"text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.",
"title": ""
},
{
"docid": "e76a9cef74788905d3d8f5659c2bfca2",
"text": "In this paper, we present a novel configuration for realizing monolithic substrate integrated waveguide (SIW)-based phased antenna arrays using Ferrite low-temperature cofired ceramic (LTCC) technology. Unlike the current common schemes for realizing SIW phased arrays that rely on surface-mount component (p-i-n diodes, etc.) for controlling the phase of the individual antenna elements, here the phase is tuned by biasing of the ferrite filling of the SIW. This approach eliminates the need for mounting of any additional RF components and enables seamless monolithic integration of phase shifters and antennas in SIW technology. As a proof of concept, a two-element slotted SIW-based phased array is designed, fabricated, and measured. The prototype exhibits a gain of 4.9 dBi at 13.2 GHz and a maximum E-plane beam-scanning of ±28° using external windings for biasing the phase shifters. Moreover, the array can achieve a maximum beam-scanning of ±19° when biased with small windings that are embedded in the package. This demonstration marks the first time a fully monolithic SIW-based phased array is realized in Ferrite LTCC technology and paves the way for future larger size implementations.",
"title": ""
},
{
"docid": "a820e52486283ae0b1dd5c1ce07daa34",
"text": "The striatal dopaminergic system has been implicated in reinforcement learning (RL), motor performance, and incentive motivation. Various computational models have been proposed to account for each of these effects individually, but a formal analysis of their interactions is lacking. Here we present a novel algorithmic model expanding the classical actor-critic architecture to include fundamental interactive properties of neural circuit models, incorporating both incentive and learning effects into a single theoretical framework. The standard actor is replaced by a dual opponent actor system representing distinct striatal populations, which come to differentially specialize in discriminating positive and negative action values. Dopamine modulates the degree to which each actor component contributes to both learning and choice discriminations. In contrast to standard frameworks, this model simultaneously captures documented effects of dopamine on both learning and choice incentive-and their interactions-across a variety of studies, including probabilistic RL, effort-based choice, and motor skill learning.",
"title": ""
},
{
"docid": "2f045a9bfabe7adb71085ac29be39990",
"text": "Changes in functional connectivity across mental states can provide richer information about human cognition than simpler univariate approaches. Here, we applied a graph theoretical approach to analyze such changes in the lower alpha (8-10 Hz) band of EEG data from 26 subjects undergoing a mentally-demanding test of sustained attention: the Psychomotor Vigilance Test. Behavior and connectivity maps were compared between the first and last 5 min of the task. Reaction times were significantly slower in the final minutes of the task, showing a clear time-on-task effect. A significant increase was observed in weighted characteristic path length, a measure of the efficiency of information transfer within the cortical network. This increase was correlated with reaction time change. Functional connectivity patterns were also estimated on the cortical surface via source localization of cortical activities in 26 predefined regions of interest. Increased characteristic path length was revealed, providing further support for the presence of a reshaped global topology in cortical connectivity networks under fatigue state. Additional analysis showed an asymmetrical pattern of connectivity (right>left) in fronto-parietal regions associated with sustained attention, supporting the right-lateralization of this function. Interestingly, in the fatigue state, significance decreases were observed in left, but not right fronto-parietal connectivity. Our results indicate that functional network organization can change over relatively short time scales with mental fatigue, and that decreased connectivity has a meaningful relationship with individual difference in behavior and performance.",
"title": ""
},
{
"docid": "ed13193df5db458d0673ccee69700bc0",
"text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).",
"title": ""
},
{
"docid": "bf19f897047ba130afd7742a9847e08c",
"text": "Neural Machine Translation (NMT) has been shown to be more effective in translation tasks compared to the Phrase-Based Statistical Machine Translation (PBMT). However, NMT systems are limited in translating low-resource languages (LRL), due to the fact that neural methods require a large amount of parallel data to learn effective mappings between languages. In this work we show how so-called multilingual NMT can help to tackle the challenges associated with LRL translation. Multilingual NMT forces words and subwords representation in a shared semantic space across multiple languages. This allows the model to utilize a positive parameter transfer between different languages, without changing the standard attentionbased encoder-decoder architecture and training modality. We run preliminary experiments with three languages (English, Italian, Romanian) covering six translation directions and show that for all available directions the multilingual approach, i.e. just one system covering all directions is comparable or even outperforms the single bilingual systems. Finally, our approach achieve competitive results also for language pairs not seen at training time using a pivoting (x-step) translation. Italiano. La traduzione automatica con reti neurali (neural machine translation, NMT) ha dimostrato di essere più efficace in molti compiti di traduzione rispetto a quella basata su frasi (phrase-based machine translation, PBMT). Tuttavia, i sistemi NMT sono limitati nel tradurre lingue con basse risorse (LRL). Questo è dovuto al fatto che i metodi di deep learning richiedono grandi quantit di dati per imparare una mappa efficace tra le due lingue. In questo lavoro mostriamo come un modello NMT multilingua può aiutare ad affrontare i problemi legati alla traduzione di LRL. La NMT multilingua costringe la rappresentrazione delle parole e dei segmenti di parole in uno spazio semantico condiviso tra multiple lingue. Questo consente al modello di usare un trasferimento di parametri positivo tra le lingue coinvolte, senza cambiare l’architettura NMT encoder-decoder basata sull’attention e il modo di addestramento. Abbiamo eseguito esperimenti preliminari con tre lingue (inglese, italiano e rumeno), coprendo sei direzioni di traduzione e mostriamo che per tutte le direzioni disponibili l’approccio multilingua, cioè un solo sistema che copre tutte le direzioni è confrontabile o persino migliore dei singolo sistemi bilingue. Inoltre, il nostro approccio ottiene risultati competitivi anche per coppie di lingue non viste durante il trainig, facendo uso di traduzioni con pivot.",
"title": ""
},
{
"docid": "43d5236bd9e2afc2882b662e4626bfce",
"text": "Mindfulness meditation (or simply mindfulness) is an ancient method of attention training. Arguably, developed originally by the Buddha, it has been practiced by Buddhists over 2,500 years as part of their spiritual training. The popularity in mindfulness has soared recently following its adaptation as Mindfulness-Based Stress Management by Jon Kabat-Zinn (1995). Mindfulness is often compared to hypnosis but not all assertions are accurate. This article, as a primer, delineates similarities and dissimilarities between mindfulness and hypnosis in terms of 12 specific facets, including putative neuroscientific findings. It also provides a case example that illustrates clinical integration of the two methods.",
"title": ""
},
{
"docid": "9058505c04c1dc7c33603fd8347312a0",
"text": "Fear appeals are a polarizing issue, with proponents confident in their efficacy and opponents confident that they backfire. We present the results of a comprehensive meta-analysis investigating fear appeals' effectiveness for influencing attitudes, intentions, and behaviors. We tested predictions from a large number of theories, the majority of which have never been tested meta-analytically until now. Studies were included if they contained a treatment group exposed to a fear appeal, a valid comparison group, a manipulation of depicted fear, a measure of attitudes, intentions, or behaviors concerning the targeted risk or recommended solution, and adequate statistics to calculate effect sizes. The meta-analysis included 127 articles (9% unpublished) yielding 248 independent samples (NTotal = 27,372) collected from diverse populations. Results showed a positive effect of fear appeals on attitudes, intentions, and behaviors, with the average effect on a composite index being random-effects d = 0.29. Moderation analyses based on prominent fear appeal theories showed that the effectiveness of fear appeals increased when the message included efficacy statements, depicted high susceptibility and severity, recommended one-time only (vs. repeated) behaviors, and targeted audiences that included a larger percentage of female message recipients. Overall, we conclude that (a) fear appeals are effective at positively influencing attitude, intentions, and behaviors; (b) there are very few circumstances under which they are not effective; and (c) there are no identified circumstances under which they backfire and lead to undesirable outcomes.",
"title": ""
},
{
"docid": "e5ecbd3728e93badd4cfbf5eef6957f9",
"text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.",
"title": ""
},
{
"docid": "c91cbf47f1c506b4d512adc752fff039",
"text": "OBJECTIVE\nSodium benzoate, a common additive in popular beverages, has recently been linked to ADHD. This research examined the relationship between sodium benzoate-rich beverage ingestion and symptoms related to ADHD in college students.\n\n\nMETHOD\nCollege students (N = 475) completed an anonymous survey in class in fall 2010. The survey assessed recent intake of a noninclusive list of sodium benzoate-rich beverages and ADHD-related symptoms using a validated screener.\n\n\nRESULTS\nSodium benzoate-rich beverage intake was significantly associated with ADHD-related symptoms (p = .001), and significance was retained after controlling for covariates. Students scoring ≥4 on the screener (scores that may be consistent with ADHD; n = 67) reported higher intakes (34.9 ± 4.4 servings/month) than the remainder of the sample (16.7 ± 1.1 servings/month).\n\n\nCONCLUSION\nThese data suggest that a high intake of sodium benzoate-rich beverages may contribute to ADHD-related symptoms in college students and warrants further investigation.",
"title": ""
},
{
"docid": "1d1caa539215e7051c25a9f28da48651",
"text": "Physiological changes occur in pregnancy to nurture the developing foetus and prepare the mother for labour and delivery. Some of these changes influence normal biochemical values while others may mimic symptoms of medical disease. It is important to differentiate between normal physiological changes and disease pathology. This review highlights the important changes that take place during normal pregnancy.",
"title": ""
},
{
"docid": "9b71c5bd7314e793757776c6e54f03bb",
"text": "This paper evaluates the application of Bronfenbrenner’s bioecological theory as it is represented in empirical work on families and their relationships. We describe the ‘‘mature’’ form of bioecological theory of the mid-1990s and beyond, with its focus on proximal processes at the center of the Process-Person-Context-Time model. We then examine 25 papers published since 2001, all explicitly described as being based on Bronfenbrenner’s theory, and show that all but 4 rely on outmoded versions of the theory, resulting in conceptual confusion and inadequate testing of the theory.",
"title": ""
}
] | scidocsrr |
191acb49442f6505c839606b130fa5ff | A simulation as a service cloud middleware | [
{
"docid": "e740e5ff2989ce414836c422c45570a9",
"text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.",
"title": ""
},
{
"docid": "9380bb09ffc970499931f063008c935f",
"text": "Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancement in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7f06370a81e7749970cd0359c5b5f993",
"text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.",
"title": ""
}
] | [
{
"docid": "234fcc911f6d94b6bbb0af237ad5f34f",
"text": "Contamination of samples with DNA is still a major problem in microbiology laboratories, despite the wide acceptance of PCR and other amplification techniques for the detection of frequently low amounts of target DNA. This review focuses on the implications of contamination in the diagnosis and research of infectious diseases, possible sources of contaminants, strategies for prevention and destruction, and quality control. Contamination of samples in diagnostic PCR can have far-reaching consequences for patients, as illustrated by several examples in this review. Furthermore, it appears that the (sometimes very unexpected) sources of contaminants are diverse (including water, reagents, disposables, sample carry over, and amplicon), and contaminants can also be introduced by unrelated activities in neighboring laboratories. Therefore, lack of communication between researchers using the same laboratory space can be considered a risk factor. Only a very limited number of multicenter quality control studies have been published so far, but these showed false-positive rates of 9–57%. The overall conclusion is that although nucleic acid amplification assays are basically useful both in research and in the clinic, their accuracy depends on awareness of risk factors and the proper use of procedures for the prevention of nucleic acid contamination. The discussion of prevention and destruction strategies included in this review may serve as a guide to help improve laboratory practices and reduce the number of false-positive amplification results.",
"title": ""
},
{
"docid": "cff3b4f6db26e66893a9db95fb068ef1",
"text": "In this paper, we consider the task of text categorization as a graph classification problem. By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches.",
"title": ""
},
{
"docid": "417100b3384ec637b47846134bc6d1fd",
"text": "The electronic way of learning and communicating with students offers a lot of advantages that can be achieved through different solutions. Among them, the most popular approach is the use of a learning management system. Teachers and students do not have the possibility to use all of the available learning system tools and modules. Even for modules that are used it is necessary to find the most effective method of approach for any given situation. Therefore, in this paper we make a usability evaluation of standard modules in Moodle, one of the leading open source learning management systems. With this research, we obtain significant results and informationpsilas for administrators, teachers and students on how to improve effective usage of this system.",
"title": ""
},
{
"docid": "48317f6959b4a681e0ff001c7ce3e7ee",
"text": "We introduce the challenge of using machine learning effectively in space applications and motivate the domain for future researchers. Machine learning can be used to enable greater autonomy to improve the duration, reliability, cost-effectiveness, and science return of space missions. In addition to the challenges provided by the nature of space itself, the requirements of a space mission severely limit the use of many current machine learning approaches, and we encourage researchers to explore new ways to address these challenges.",
"title": ""
},
{
"docid": "6746032bbd302a8c873ac437fc79b3fe",
"text": "This article examines the development of profitor revenue-sharing contracts in the motion picture industry. Contrary to much popular belief, such contracts have been in use since the start of the studio era. However, early contracts differed from those seen today. The evolution of the current contract is traced, and evidence regarding the increased use of sharing contracts after 1948 is examined. I examine competing theories of the economic function served by these contracts. I suggest that it is unlikely that these contracts are the result of a standard principal-agent problem.",
"title": ""
},
{
"docid": "defb837e866948e5e092ab64476d33b5",
"text": "Recent multicoil polarised pads called Double D pads (DDP) and Bipolar Pads (BPP) show excellent promise when used in lumped charging due to having single sided fields and high native Q factors. However, improvements to field leakage are desired to enable higher power transfer while keeping the leakage flux within ICNIRP levels. This paper proposes a method to reduce the leakage flux which a lumped inductive power transfer (IPT) system exhibits by modifying the ferrite structure of its pads. The DDP and BPP pads ferrite structures are both modified by extending them past the ends of the coils in each pad with the intention of attracting only magnetic flux generated by the primary pad not coupled onto the secondary pad. Simulated improved ferrite structures are validated through practical measurements.",
"title": ""
},
{
"docid": "4b057d86825e346291d675e0c1285fad",
"text": "We describe theclipmap, a dynamic texture representation that efficiently caches textures of arbitrarily large size in a finite amount of physical memory for rendering at real-time rates. Further, we describe a software system for managing clipmaps that supports integration into demanding real-time applications. We show the scale and robustness of this integrated hardware/software architecture by reviewing an application virtualizing a 170 gigabyte texture at 60 Hertz. Finally, we suggest ways that other rendering systems may exploit the concepts underlying clipmaps to solve related problems. CR",
"title": ""
},
{
"docid": "6be97ac80738519792c02b033563efa7",
"text": "Title of Document: SPIN: LEXICAL SEMANTICS, TRANSITIVITY, AND THE IDENTIFICATION OF IMPLICIT SENTIMENT Stephan Charles Greene Doctor of Philosophy, 2007 Directed By: Professor Philip Resnik, Department of Linguistics and Institute for Advanced Computer Studies Current interest in automatic sentiment analysis i motivated by a variety of information requirements. The vast majority of work in sentiment analysis has been specifically targeted at detecting subjective state ments and mining opinions. This dissertation focuses on a different but related pro blem that to date has received relatively little attention in NLP research: detect ing implicit sentiment , or spin, in text. This text classification task is distinguished from ther sentiment analysis work in that there is no assumption that the documents to b e classified with respect to sentiment are necessarily overt expressions of opin ion. They rather are documents that might reveal a perspective . This dissertation describes a novel approach to t e identification of implicit sentiment, motivated by ideas drawn from the literature on lexical semantics and argument structure, supported and refined through psycholinguistic experimentation. A relationship pr edictive of sentiment is established for components of meaning that are thou g t to be drivers of verbal argument selection and linking and to be arbiters o f what is foregrounded or backgrounded in discourse. In computational experim nts employing targeted lexical selection for verbs and nouns, a set of features re flective of these components of meaning is extracted for the terms. As observable p roxies for the underlying semantic components, these features are exploited using mach ine learning methods for text classification with respect to perspective. After i nitial experimentation with manually selected lexical resources, the method is generaliz d to require no manual selection or hand tuning of any kind. The robustness of this lin gu stically motivated method is demonstrated by successfully applying it to three d istinct text domains under a number of different experimental conditions, obtain ing the best classification accuracies yet reported for several sentiment class ification tasks. A novel graph-based classifier combination method is introduced which f urther improves classification accuracy by integrating statistical classifiers wit h models of inter-document relationships. SPIN: LEXICAL SEMANTICS, TRANSITIVITY, AND THE IDENTIFICATION OF IMPLICIT SENTIMENT",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "6ce2991a68c7d4d6467ff2007badbaf0",
"text": "This paper investigates acoustic models for automatic speech recognition (ASR) using deep neural networks (DNNs) whose input is taken directly from windowed speech waveforms (WSW). After demonstrating the ability of these networks to automatically acquire internal representations that are similar to mel-scale filter-banks, an investigation into efficient DNN architectures for exploiting WSW features is performed. First, a modified bottleneck DNN architecture is investigated to capture dynamic spectrum information that is not well represented in the time domain signal. Second,the redundancies inherent in WSW based DNNs are considered. The performance of acoustic models defined over WSW features is compared to that obtained from acoustic models defined over mel frequency spectrum coefficient (MFSC) features on the Wall Street Journal (WSJ) speech corpus. It is shown that using WSW features results in a 3.0 percent increase in WER relative to that resulting from MFSC features on the WSJ corpus. However, when combined with MFSC features, a reduction in WER of 4.1 percent is obtained with respect to the best evaluated MFSC based DNN acoustic model.",
"title": ""
},
{
"docid": "e91310da7635df27b5c4056388cc6e52",
"text": "This paper presents a new metric for automated registration of multi-modal sensor data. The metric is based on the alignment of the orientation of gradients formed from the two candidate sensors. Data registration is performed by estimating the sensors’ extrinsic parameters that minimises the misalignment of the gradients. The metric can operate in a large range of applications working on both 2D and 3D sensor outputs and is suitable for both (i) single scan data registration and (ii) multi-sensor platform calibration using multiple scans. Unlike traditional calibration methods, it does not require markers or other registration aids to be placed in the scene. The effectiveness of the new method is demonstrated with experimental results on a variety of camera-lidar and camera-camera calibration problems. The novel metric is validated through comparisons with state of the art methods. Our approach is shown to give high quality registrations under all tested conditions. C © 2014 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "35b286999957396e1f5cab6e2370ed88",
"text": "Text summarization condenses a text to a shorter version while retaining the important informations. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall rouge score of state-of-the-art methods by at least 2 points.",
"title": ""
},
{
"docid": "1d60437cbd2cec5058957af291ca7cde",
"text": "e behavior of users in certain services could be a clue that can be used to infer their preferences and may be used to make recommendations for other services they have never used. However, the cross-domain relationships between items and user consumption paerns are not simple, especially when there are few or no common users and items across domains. To address this problem, we propose a content-based cross-domain recommendation method for cold-start users that does not require userand itemoverlap. We formulate recommendation as extreme multi-class classication where labels (items) corresponding to the users are predicted. With this formulation, the problem is reduced to a domain adaptation seing, in which a classier trained in the source domain is adapted to the target domain. For this, we construct a neural network that combines an architecture for domain adaptation, Domain Separation Network, with a denoising autoencoder for item representation. We assess the performance of our approach in experiments on a pair of data sets collected from movie and news services of Yahoo! JAPAN and show that our approach outperforms several baseline methods including a cross-domain collaborative ltering method.",
"title": ""
},
{
"docid": "a89c53f4fbe47e7a5e49193f0786cd6d",
"text": "Although hundreds of studies have documented the association between family poverty and children's health, achievement, and behavior, few measure the effects of the timing, depth, and duration of poverty on children, and many fail to adjust for other family characteristics (for example, female headship, mother's age, and schooling) that may account for much of the observed correlation between poverty and child outcomes. This article focuses on a recent set of studies that explore the relationship between poverty and child outcomes in depth. By and large, this research supports the conclusion that family income has selective but, in some instances, quite substantial effects on child and adolescent well-being. Family income appears to be more strongly related to children's ability and achievement than to their emotional outcomes. Children who live in extreme poverty or who live below the poverty line for multiple years appear, all other things being equal, to suffer the worst outcomes. The timing of poverty also seems to be important for certain child outcomes. Children who experience poverty during their preschool and early school years have lower rates of school completion than children and adolescents who experience poverty only in later years. Although more research is needed on the significance of the timing of poverty on child outcomes, findings to date suggest that interventions during early childhood may be most important in reducing poverty's impact on children.",
"title": ""
},
{
"docid": "5e75a46c36e663791db0f8b45f685cb6",
"text": "This study provides one of very few experimental investigations into the impact of a musical soundtrack on the video gaming experience. Participants were randomly assigned to one of three experimental conditions: game-with-music, game-without-music, or music-only. After playing each of three segments of The Lord of the Rings: The Two Towers (Electronic Arts, 2002)--or, in the music-only condition, listening to the musical score that accompanies the scene--subjects responded on 21 verbal scales. Results revealed that some, but not all, of the verbal scales exhibited a statistically significant difference due to the presence of a musical score. In addition, both gender and age level were shown to be significant factors for some, but not all, of the verbal scales. Details of the specific ways in which music affects the gaming experience are provided in the body of the paper.",
"title": ""
},
{
"docid": "8a293b95b931f4f72fe644fdfe30564a",
"text": "Today, the concept of brain connectivity plays a central role in the neuroscience. While functional connectivity is defined as the temporal coherence between the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally between cortical sites. The most used method to estimate effective connectivity in neuroscience is the structural equation modeling (SEM), typically used on data related to the brain hemodynamic behavior. However, the use of hemodynamic measures limits the temporal resolution on which the brain process can be followed. The present research proposes the use of the SEM approach on the cortical waveforms estimated from the high-resolution EEG data, which exhibits a good spatial resolution and a higher temporal resolution than hemodynamic measures. We performed a simulation study, in which different main factors were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). Such factors were the signal-to-noise ratio and the duration of the simulated cortical activity. Since SEM technique is based on the use of a model formulated on the basis of anatomical and physiological constraints, different experimental conditions were analyzed, in order to evaluate the effect of errors made in the a priori model formulation on its performances. The feasibility of the proposed approach has been shown in a human study using high-resolution EEG recordings related to finger tapping movements.",
"title": ""
},
{
"docid": "3476246809afe4e6b7cef9bbbed1926e",
"text": "The aim of this study was to investigate the efficacy of a proposed new implant mediated drug delivery system (IMDDS) in rabbits. The drug delivery system is applied through a modified titanium implant that is configured to be implanted into bone. The implant is hollow and has multiple microholes that can continuously deliver therapeutic agents into the systematic body. To examine the efficacy and feasibility of the IMDDS, we investigated the pharmacokinetic behavior of dexamethasone in plasma after a single dose was delivered via the modified implant placed in the rabbit tibia. After measuring the plasma concentration, the areas under the curve showed that the IMDDS provided a sustained release for a relatively long period. The result suggests that the IMDDS can deliver a sustained release of certain drug components with a high bioavailability. Accordingly, the IMDDS may provide the basis for a novel approach to treating patients with chronic diseases.",
"title": ""
},
{
"docid": "bd21815804115f2c413265660a78c203",
"text": "Outsourcing, internationalization, and complexity characterize today's aerospace supply chains, making aircraft manufacturers structurally dependent on each other. Despite several complexity-related supply chain issues reported in the literature, aerospace supply chain structure has not been studied due to a lack of empirical data and suitable analytical toolsets for studying system structure. In this paper, we assemble a large-scale empirical data set on the supply network of Airbus and apply the new science of networks to analyze how the industry is structured. Our results show that the system under study is a network, formed by communities connected by hub firms. Hub firms also tend to connect to each other, providing cohesiveness, yet making the network vulnerable to disruptions in them. We also show how network science can be used to identify firms that are operationally critical and that are key to disseminating information.",
"title": ""
},
{
"docid": "dc207fb8426f468dde2cb1d804b33539",
"text": "This paper presents a webcam-based spherical coordinate conversion system using OpenCL massive parallel computing for panorama video image stitching. With multi-core architecture and its high-bandwidth data transmission rate of memory accesses, modern programmable GPU makes it possible to process multiple video images in parallel for real-time interaction. To get a panorama view of 360 degrees, we use OpenCL to stitch multiple webcam video images into a panorama image and texture mapped it to a spherical object to compose a virtual reality immersive environment. The experimental results show that when we use NVIDIA 9600GT to process eight 640×480 images, OpenCL can achieve ninety times speedups.",
"title": ""
},
{
"docid": "161c79eeb01624c497446cb2c51f3893",
"text": "In this article, results of a German nationwide survey (KFN schools survey 2007/2008) are presented. The controlled sample of 44,610 male and female ninth-graders was carried out in 2007 and 2008 by the Criminological Research Institute of Lower Saxony (KFN). According to a newly developed screening instrument (KFN-CSAS-II), which was presented to every third juvenile participant (N = 15,168), 3% of the male and 0.3% of the female students are diagnosed as dependent on video games. The data indicate a clear dividing line between extensive gaming and video game dependency (VGD) as a clinically relevant phenomenon. VGD is accompanied by increased levels of psychological and social stress in the form of lower school achievement, increased truancy, reduced sleep time, limited leisure activities, and increased thoughts of committing suicide. In addition, it becomes evident that personal risk factors are crucial for VGD. The findings indicate the necessity of additional research as well as the respective measures in the field of health care policies.",
"title": ""
}
] | scidocsrr |
e712a6a8962386e24801f52412fdce61 | Quantifying the relation between performance and success in soccer | [
{
"docid": "b88ceafe9998671820291773be77cabc",
"text": "The aim of this study was to propose a set of network methods to measure the specific properties of a team. These metrics were organised at macro-analysis levels. The interactions between teammates were collected and then processed following the analysis levels herein announced. Overall, 577 offensive plays were analysed from five matches. The network density showed an ambiguous relationship among the team, mainly during the 2nd half. The mean values of density for all matches were 0.48 in the 1st half, 0.32 in the 2nd half and 0.34 for the whole match. The heterogeneity coefficient for the overall matches rounded to 0.47 and it was also observed that this increased in all matches in the 2nd half. The centralisation values showed that there was no 'star topology'. The results suggest that each node (i.e., each player) had nearly the same connectivity, mainly in the 1st half. Nevertheless, the values increased in the 2nd half, showing a decreasing participation of all players at the same level. Briefly, these metrics showed that it is possible to identify how players connect with each other and the kind and strength of the connections between them. In summary, it may be concluded that network metrics can be a powerful tool to help coaches understand team's specific properties and support decision-making to improve the sports training process based on match analysis.",
"title": ""
},
{
"docid": "6325188ee21b6baf65dbce6855c19bc2",
"text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.",
"title": ""
}
] | [
{
"docid": "c6e14529a55b0e6da44dd0966896421a",
"text": "Context-based pairing solutions increase the usability of IoT device pairing by eliminating any human involvement in the pairing process. This is possible by utilizing on-board sensors (with same sensing modalities) to capture a common physical context (e.g., ambient sound via each device's microphone). However, in a smart home scenario, it is impractical to assume that all devices will share a common sensing modality. For example, a motion detector is only equipped with an infrared sensor while Amazon Echo only has microphones. In this paper, we develop a new context-based pairing mechanism called Perceptio that uses time as the common factor across differing sensor types. By focusing on the event timing, rather than the specific event sensor data, Perceptio creates event fingerprints that can be matched across a variety of IoT devices. We propose Perceptio based on the idea that devices co-located within a physically secure boundary (e.g., single family house) can observe more events in common over time, as opposed to devices outside. Devices make use of the observed contextual information to provide entropy for Perceptio's pairing protocol. We design and implement Perceptio, and evaluate its effectiveness as an autonomous secure pairing solution. Our implementation demonstrates the ability to sufficiently distinguish between legitimate devices (placed within the boundary) and attacker devices (placed outside) by imposing a threshold on fingerprint similarity. Perceptio demonstrates an average fingerprint similarity of 94.9% between legitimate devices while even a hypothetical impossibly well-performing attacker yields only 68.9% between itself and a valid device.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "581f8909adca17194df618cc951749cd",
"text": "In this paper the problem of emotion recognition using physiological signals is presented. Firstly the problems with acquisition of physiological signals related to specific human emotions are described. It is not a trivial problem to elicit real emotions and to choose stimuli that always, and for all people, elicit the same emotion. Also different kinds of physiological signals for emotion recognition are considered. A set of the most helpful biosignals is chosen. An experiment is described that was performed in order to verify the possibility of eliciting real emotions using specially prepared multimedia presentations, as well as finding physiological signals that are most correlated with human emotions. The experiment was useful for detecting and identifying many problems and helping to find their solutions. The results of this research can be used for creation of affect-aware applications, for instance video games, that will be able to react to user's emotions.",
"title": ""
},
{
"docid": "2603c07864b92c6723b40c83d3c216b9",
"text": "Background: A study was undertaken to record exacerbations and health resource use in patients with COPD during 6 months of treatment with tiotropium, salmeterol, or matching placebos. Methods: Patients with COPD were enrolled in two 6-month randomised, placebo controlled, double blind, double dummy studies of tiotropium 18 μg once daily via HandiHaler or salmeterol 50 μg twice daily via a metered dose inhaler. The two trials were combined for analysis of heath outcomes consisting of exacerbations, health resource use, dyspnoea (assessed by the transitional dyspnoea index, TDI), health related quality of life (assessed by St George’s Respiratory Questionnaire, SGRQ), and spirometry. Results: 1207 patients participated in the study (tiotropium 402, salmeterol 405, placebo 400). Compared with placebo, tiotropium but not salmeterol was associated with a significant delay in the time to onset of the first exacerbation. Fewer COPD exacerbations/patient year occurred in the tiotropium group (1.07) than in the placebo group (1.49, p<0.05); the salmeterol group (1.23 events/year) did not differ from placebo. The tiotropium group had 0.10 hospital admissions per patient year for COPD exacerbations compared with 0.17 for salmeterol and 0.15 for placebo (not statistically different). For all causes (respiratory and non-respiratory) tiotropium, but not salmeterol, was associated with fewer hospital admissions while both groups had fewer days in hospital than the placebo group. The number of days during which patients were unable to perform their usual daily activities was lowest in the tiotropium group (tiotropium 8.3 (0.8), salmeterol 11.1 (0.8), placebo 10.9 (0.8), p<0.05). SGRQ total score improved by 4.2 (0.7), 2.8 (0.7) and 1.5 (0.7) units during the 6 month trial for the tiotropium, salmeterol and placebo groups, respectively (p<0.01 tiotropium v placebo). Compared with placebo, TDI focal score improved in both the tiotropium group (1.1 (0.3) units, p<0.001) and the salmeterol group (0.7 (0.3) units, p<0.05). Evaluation of morning pre-dose FEV1, peak FEV1 and mean FEV1 (0–3 hours) showed that tiotropium was superior to salmeterol while both active drugs were more effective than placebo. Conclusions: Exacerbations of COPD and health resource usage were positively affected by daily treatment with tiotropium. With the exception of the number of hospital days associated with all causes, salmeterol twice daily resulted in no significant changes compared with placebo. Tiotropium also improved health related quality of life, dyspnoea, and lung function in patients with COPD.",
"title": ""
},
{
"docid": "155e53e97c23498a557f848ef52da2a7",
"text": "We propose a simultaneous extraction method for 12 organs from non-contrast three-dimensional abdominal CT images. The proposed method uses an abdominal cavity standardization process and atlas guided segmentation incorporating parameter estimation with the EM algorithm to deal with the large fluctuations in the feature distribution parameters between subjects. Segmentation is then performed using multiple level sets, which minimize the energy function that considers the hierarchy and exclusiveness between organs as well as uniformity of grey values in organs. To assess the performance of the proposed method, ten non-contrast 3D CT volumes were used. The accuracy of the feature distribution parameter estimation was slightly improved using the proposed EM method, resulting in better performance of the segmentation process. Nine organs out of twelve were statistically improved compared with the results without the proposed parameter estimation process. The proposed multiple level sets also boosted the performance of the segmentation by 7.2 points on average compared with the atlas guided segmentation. Nine out of twelve organs were confirmed to be statistically improved compared with the atlas guided method. The proposed method was statistically proved to have better performance in the segmentation of 3D CT volumes.",
"title": ""
},
{
"docid": "5ea45a4376e228b3eacebb8dd8e290d2",
"text": "The sharing economy has quickly become a very prominent subject of research in the broader computing literature and the in human--computer interaction (HCI) literature more specifically. When other computing research areas have experienced similarly rapid growth (e.g. human computation, eco-feedback technology), early stage literature reviews have proved useful and influential by identifying trends and gaps in the literature of interest and by providing key directions for short- and long-term future work. In this paper, we seek to provide the same benefits with respect to computing research on the sharing economy. Specifically, following the suggested approach of prior computing literature reviews, we conducted a systematic review of sharing economy articles published in the Association for Computing Machinery Digital Library to investigate the state of sharing economy research in computing. We performed this review with two simultaneous foci: a broad focus toward the computing literature more generally and a narrow focus specifically on HCI literature. We collected a total of 112 sharing economy articles published between 2008 and 2017 and through our analysis of these papers, we make two core contributions: (1) an understanding of the computing community's contributions to our knowledge about the sharing economy, and specifically the role of the HCI community in these contributions (i.e.what has been done) and (2) a discussion of under-explored and unexplored aspects of the sharing economy that can serve as a partial research agenda moving forward (i.e.what is next to do).",
"title": ""
},
{
"docid": "8d40b29088a331578e502abb2148ea8c",
"text": "Governments are increasingly realizing the importance of utilizing Information and Communication Technologies (ICT) as a tool to better address user’s/citizen’s needs. As citizen’s expectations grow, governments need to deliver services of high quality level to motivate more users to utilize these available e-services. In spite of this, governments still fall short in their service quality level offered to citizens/users. Thus understanding and measuring service quality factors become crucial as the number of services offered is increasing while not realizing what citizens/users really look for when they utilize these services. The study presents an extensive literature review on approaches used to evaluate e-government services throughout a phase of time. The study also suggested those quality/factors indicators government’s need to invest in of high priority in order to meet current and future citizen’s expectations of service quality.",
"title": ""
},
{
"docid": "00946bbfab7cd0ab0d51875b944bca66",
"text": "We introduce RelNet: a new model for relational reasoning. RelNet is a memory augmented neural network which models entities as abstract memory slots and is equipped with an additional relational memory which models relations between all memory pairs. The model thus builds an abstract knowledge graph on the entities and relations present in a document which can then be used to answer questions about the document. It is trained end-to-end: only supervision to the model is in the form of correct answers to the questions. We test the model on the 20 bAbI question-answering tasks with 10k examples per task and find that it solves all the tasks with a mean error of 0.3%, achieving 0% error on 11 of the 20 tasks.",
"title": ""
},
{
"docid": "56e1778df9d5b6fa36cbf4caae710e67",
"text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.",
"title": ""
},
{
"docid": "735cc7f7b067175705cb605affd7f06e",
"text": "This paper presents a design, simulation, implementation and measurement of a novel microstrip meander patch antenna for the application of sensor networks. The dimension of the microstrip chip antenna is 15 mm times 15 mm times 2 mm. The meander-type radiating patch is constructed on the upper layer of the 2 mm height substrate with 0.0 5 mm height metallic conduct lines. Because of using the very high relative permittivity substrate ( epsivr=90), the proposed antenna achieves 315 MHz band operations.",
"title": ""
},
{
"docid": "43b76baccb237dd36dddfac5854414b8",
"text": "PISCES is a public server for culling sets of protein sequences from the Protein Data Bank (PDB) by sequence identity and structural quality criteria. PISCES can provide lists culled from the entire PDB or from lists of PDB entries or chains provided by the user. The sequence identities are obtained from PSI-BLAST alignments with position-specific substitution matrices derived from the non-redundant protein sequence database. PISCES therefore provides better lists than servers that use BLAST, which is unable to identify many relationships below 40% sequence identity and often overestimates sequence identity by aligning only well-conserved fragments. PDB sequences are updated weekly. PISCES can also cull non-PDB sequences provided by the user as a list of GenBank identifiers, a FASTA format file, or BLAST/PSI-BLAST output.",
"title": ""
},
{
"docid": "a692778b7f619de5ad4bc3b2d627c265",
"text": "Many students are being left behind by an educational system that some people believe is in crisis. Improving educational outcomes will require efforts on many fronts, but a central premise of this monograph is that one part of a solution involves helping students to better regulate their learning through the use of effective learning techniques. Fortunately, cognitive and educational psychologists have been developing and evaluating easy-to-use learning techniques that could help students achieve their learning goals. In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice. To offer recommendations about the relative utility of these techniques, we evaluated whether their benefits generalize across four categories of variables: learning conditions, student characteristics, materials, and criterion tasks. Learning conditions include aspects of the learning environment in which the technique is implemented, such as whether a student studies alone or with a group. Student characteristics include variables such as age, ability, and level of prior knowledge. Materials vary from simple concepts to mathematical problems to complicated science texts. Criterion tasks include different outcome measures that are relevant to student achievement, such as those tapping memory, problem solving, and comprehension. We attempted to provide thorough reviews for each technique, so this monograph is rather lengthy. However, we also wrote the monograph in a modular fashion, so it is easy to use. In particular, each review is divided into the following sections: General description of the technique and why it should work How general are the effects of this technique? 2a. Learning conditions 2b. Student characteristics 2c. Materials 2d. Criterion tasks Effects in representative educational contexts Issues for implementation Overall assessment The review for each technique can be read independently of the others, and particular variables of interest can be easily compared across techniques. To foreshadow our final recommendations, the techniques vary widely with respect to their generalizability and promise for improving student learning. Practice testing and distributed practice received high utility assessments because they benefit learners of different ages and abilities and have been shown to boost students' performance across many criterion tasks and even in educational contexts. Elaborative interrogation, self-explanation, and interleaved practice received moderate utility assessments. The benefits of these techniques do generalize across some variables, yet despite their promise, they fell short of a high utility assessment because the evidence for their efficacy is limited. 
For instance, elaborative interrogation and self-explanation have not been adequately evaluated in educational contexts, and the benefits of interleaving have just begun to be systematically explored, so the ultimate effectiveness of these techniques is currently unknown. Nevertheless, the techniques that received moderate-utility ratings show enough promise for us to recommend their use in appropriate situations, which we describe in detail within the review of each technique. Five techniques received a low utility assessment: summarization, highlighting, the keyword mnemonic, imagery use for text learning, and rereading. These techniques were rated as low utility for numerous reasons. Summarization and imagery use for text learning have been shown to help some students on some criterion tasks, yet the conditions under which these techniques produce benefits are limited, and much research is still needed to fully explore their overall effectiveness. The keyword mnemonic is difficult to implement in some contexts, and it appears to benefit students for a limited number of materials and for short retention intervals. Most students report rereading and highlighting, yet these techniques do not consistently boost students' performance, so other techniques should be used in their place (e.g., practice testing instead of rereading). Our hope is that this monograph will foster improvements in student learning, not only by showcasing which learning techniques are likely to have the most generalizable effects but also by encouraging researchers to continue investigating the most promising techniques. Accordingly, in our closing remarks, we discuss some issues for how these techniques could be implemented by teachers and students, and we highlight directions for future research.",
"title": ""
},
{
"docid": "7a12529d179d9ca6b94dbac57c54059f",
"text": "A novel design of a hand functions task training robotic system was developed for the stroke rehabilitation. It detects the intention of hand opening or hand closing from the stroke person using the electromyography (EMG) signals measured from the hemiplegic side. This training system consists of an embedded controller and a robotic hand module. Each hand robot has 5 individual finger assemblies capable to drive 2 degrees of freedom (DOFs) of each finger at the same time. Powered by the linear actuator, the finger assembly achieves 55 degree range of motion (ROM) at the metacarpophalangeal (MCP) joint and 65 degree range of motion (ROM) at the proximal interphalangeal (PIP) joint. Each finger assembly can also be adjusted to fit for different finger length. With this task training system, stroke subject can open and close their impaired hand using their own intention to carry out some of the daily living tasks.",
"title": ""
},
{
"docid": "ea277c160544fb54bef69e2a4fa85233",
"text": "This paper proposes approaches to measure linkography in protocol studies of designing. It outlines the ideas behind using clustering and Shannon’s entropy as measures of designing behaviour. Hypothetical cases are used to illustrate the methods. The paper concludes that these methods may form the basis of a new tool to assess designer behaviour in terms of chunking of design ideas and the opportunities for idea development.",
"title": ""
},
{
"docid": "a5df1d285a359c493d53d1a3bf9920c2",
"text": "In this paper, we have reported a new failure phenomenon of read-disturb in MLC NAND flash memory caused by boosting hot-carrier injection effect. 1) The read-disturb failure occurred on unselected WL (WLn+1) after the adjacent selected WL (WLn) was performed with more than 1K read cycles. 2) The read-disturb failure of WLn+1 depends on WLn cell’s Vth and its applied voltage. 3) The mechanism of this kind of failure can be explained by hot carrier injection that is generated by discharging from boosting voltage in unselected cell area (Drain of WLn) to ground (Source of WLn). Experiment A NAND Flash memory was fabricated based on 70nm technology. In order to investigate the mechanisms of readdisturb, 3 different read voltages and 4 different cell data states (S0, S1, S2 and S3) were applied on the selected WL with SGS/SGD rising time shift scheme [1]. Fig. 1 and Fig. 2 show the operation condition and waveform for readdisturb evaluation. In the evaluation, the selected WLn was performed with more than 100K read cycles. Result And Discussion Fig. 3 shows the measured results of WL2 Vth shift (i.e. read-disturb failure) during WL1 read-didturb cycles with different WL1 voltages (VWL1) and cell data states (S0~S3). From these data, a serious WL2 Vth shift can be observed in VWL1=0.5V and VWL1=1.8V after 1K read cycles. In Fig. 3(a), the magnitude of Vth shift with WL1=S2 state is larger than that with WL1=S3 state. However, obviously WL2 Vth shift can be found only when WL1 is at S3 state in Fig. 3(b). In Fig. 3(c), WL2 Vth is unchanged while VWL1 is set to 3.6V. To precisely analyze the phenomenon, further TCAD simulation and analysis were carried out to clarify the mechanism of the read-disturb failure. Based on simulation results of Fig. 4, the channel potential difference between selected WLn (e.g. WL1) and unselected WLn+1 (e.g. WL2) is related to cell data states (S0~S3) and the read voltage of the selected WL (VWL1). Fig. 4(a) exhibits that the selected WL1 channel was tuned off and the channel potential of unselected WL2~31 was boosted to high level when the WL1 cell data state is S2 or S3. Therefore, a sufficient potential difference appears between WLn and WLn+1 and provides a high transverse electric field. When VWL1 is increased to 1.8V as Fig. 4(b), a high programming cell state (S3) is required to support the potential boosting of unselected WL2~31. In addition, from Fig. 4(c) and the case of WL1=S2 in Fig. 4(b), we can find that the potential difference were depressed since the WL1 channel is turned on by high WL1 voltage. Therefore, the potential difference can be reduced by sharing effect. These simulation results are well corresponding with read disturb results of Fig. 3. Electron current density is another factor to cause the Vth shift of WLn+1. From Fig. 3(a), the current density of WL1=S2 should higher than that of WL=S3 since its Vth is lower. Consequently, the probability of impact ionization can be increased due to the high current density in case of WL1=S2. According to the model, we can clearly explain the phenomenon of serious WL2 Vth shift occurs in the condition of WL 1=S2 rather than WL1=S3. Fig. 5 shows the schematic diagram of the mechanism of Boosting Hot-carrier Injection in MLC NAND flash memories. The transverse E-field can be enhanced by the channel potential difference and consequently make a high probability of impact ionization. 
As a result, electron-hole pairs will be generated, and then electrons will inject into the adjacent cell (WL2) due to the higher vertical field of VWL2. Thus, the Vth of the adjacent cell will be changed after 1K cycles with the continual injection of the hot electrons. Table 1 shows the measured result of the cell array Vth shift for WL1 to WL4 after the read-disturb cycles on WL1 or WL2. The data concretely indicate that read cycles on WLn cause a Vth shift only on WLn+1, even though no read cycles were applied to WLn+1. The result is consistent with the measured data and also supports that the read-disturb on the adjacent cell results from boosting hot-carrier injection. Conclusion A new read-disturb failure mechanism caused by the boosting hot-carrier injection effect in MLC NAND flash memory has been reported and clarified. Simulation and measured data show that the electrostatic potential difference between the reading cell and the adjacent cell plays a significant role in enhancing the hot-carrier injection effect. Reference [1] Ken Takeuchi, “A 56nm CMOS 99mm2 8Gb Multilevel NAND Flash Memory with 10MB/s Program Throughput,” ISSCC, 2006. Table 1 (read-disturbance test results for WL0 to WL4): with WL1 read at 0.5V (Case 1), WL2 fails when WL1 is at S2 or S3; with WL1 read at 1.8V (Case 2), WL2 fails only when WL1 is at S3; with WL1 read at 3.6V (Case 3), all word lines pass; with read cycles applied to WL2, WL3 fails when WL2 is at S2 or S3.",
"title": ""
},
{
"docid": "ee0d11cbd2e723aff16af1c2f02bbc2b",
"text": "This study simplifies the complicated metric distance method [L.S. Chen, C.H. Cheng, Selecting IS personnel using ranking fuzzy number by metric distance method, Eur. J. Operational Res. 160 (3) 2005 803–820], and proposes an algorithm to modify Chen’s Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [C.T. Chen, Extensions of the TOPSIS for group decision-making under fuzzy environment, Fuzzy Sets Syst., 114 (2000) 1–9]. From experimental verification, Chen directly assigned the fuzzy numbers 1̃ and 0̃ as fuzzy positive ideal solution (PIS) and negative ideal solution (NIS). Chen’s method sometimes violates the basic concepts of traditional TOPSIS. This study thus proposes fuzzy hierarchical TOPSIS, which not only is well suited for evaluating fuzziness and uncertainty problems, but also can provide more objective and accurate criterion weights, while simultaneously avoiding the problem of Chen’s Fuzzy TOPSIS. For application and verification, this study presents a numerical example and build a practical supplier selection problem to verify our proposed method and compare it with other methods. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +886 5 5342601x5312; fax: +886 5 531 2077. E-mail addresses: [email protected] (J.-W. Wang), [email protected] (C.-H. Cheng), [email protected] (K.-C. Huang).",
"title": ""
},
{
"docid": "2ecdf4a4d7d21ca30f3204506a91c22c",
"text": "Because of the transition from analog to digital technologies, content owners are seeking technologies for the protection of copyrighted multimedia content. Encryption and watermarking are two major tools that can be used to prevent unauthorized consumption and duplication. In this paper, we generalize an idea in a recent paper that embeds a binary pattern in the form of a binary image in the LL and HH bands at the second level of Discrete Wavelet Transform (DWT) decomposition. Our generalization includes all four bands (LL, HL, LH, and HH), and a comparison of embedding a watermark at first and second level decompositions. We tested the proposed algorithm against fifteen attacks. Embedding the watermark in lower frequencies is robust to a group of attacks, and embedding the watermark in higher frequencies is robust to another set of attacks. Only for rewatermarking and collusion attacks, the watermarks extracted from all four bands are identical. Our experiments indicate that first level decomposition appear advantageous for two reasons: The area for watermark embedding is maximized, and the extracted watermarks are more textured with better visual quality.",
"title": ""
},
{
"docid": "89cc39369eeb6c12a12c61e210c437e3",
"text": "Multimodal learning with deep Boltzmann machines (DBMs) is an generative approach to fuse multimodal inputs, and can learn the shared representation via Contrastive Divergence (CD) for classification and information retrieval tasks. However, it is a 2-fan DBM model, and cannot effectively handle multiple prediction tasks. Moreover, this model cannot recover the hidden representations well by sampling from the conditional distribution when more than one modalities are missing. In this paper, we propose a Kfan deep structure model, which can handle the multi-input and muti-output learning problems effectively. In particular, the deep structure has K-branch for different inputs where each branch can be composed of a multi-layer deep model, and a shared representation is learned in an discriminative manner to tackle multimodal tasks. Given the deep structure, we propose two objective functions to handle two multi-input and multi-output tasks: joint visual restoration and labeling, and the multi-view multi-calss object recognition tasks. To estimate the model parameters, we initialize the deep model parameters with CD to maximize the joint distribution, and then we use backpropagation to update the model according to specific objective function. The experimental results demonstrate that the model can effectively leverages multi-source information and predict multiple tasks well over competitive baselines.",
"title": ""
},
{
"docid": "d4ea4a718837db4ecdfd64896661af77",
"text": "Laboratory studies have documented that women often respond less favorably to competition than men. Conditional on performance, men are often more eager to compete, and the performance of men tends to respond more positively to an increase in competition. This means that few women enter and win competitions. We review studies that examine the robustness of these differences as well the factors that may give rise to them. Both laboratory and field studies largely confirm these initial findings, showing that gender differences in competitiveness tend to result from differences in overconfidence and in attitudes toward competition. Gender differences in risk aversion, however, seem to play a smaller and less robust role. We conclude by asking what could and should be done to encourage qualified males and females to compete. 601 A nn u. R ev . E co n. 2 01 1. 3: 60 163 0. D ow nl oa de d fr om w w w .a nn ua lre vi ew s.o rg by $ {i nd iv id ua lU se r.d is pl ay N am e} o n 08 /1 6/ 11 . F or p er so na l u se o nl y.",
"title": ""
},
{
"docid": "c7d901f63f0d7ca0b23d5b8f23d92f7d",
"text": "We propose a novel approach to automatic spoken language identification (LID) based on vector space modeling (VSM). It is assumed that the overall sound characteristics of all spoken languages can be covered by a universal collection of acoustic units, which can be characterized by the acoustic segment models (ASMs). A spoken utterance is then decoded into a sequence of ASM units. The ASM framework furthers the idea of language-independent phone models for LID by introducing an unsupervised learning procedure to circumvent the need for phonetic transcription. Analogous to representing a text document as a term vector, we convert a spoken utterance into a feature vector with its attributes representing the co-occurrence statistics of the acoustic units. As such, we can build a vector space classifier for LID. The proposed VSM approach leads to a discriminative classifier backend, which is demonstrated to give superior performance over likelihood-based n-gram language modeling (LM) backend for long utterances. We evaluated the proposed VSM framework on 1996 and 2003 NIST Language Recognition Evaluation (LRE) databases, achieving an equal error rate (EER) of 2.75% and 4.02% in the 1996 and 2003 LRE 30-s tasks, respectively, which represents one of the best results reported on these popular tasks",
"title": ""
}
] | scidocsrr |
ae3dbdad428b7cd12dadceef2f3ef261 | Linguistic Reflections of Student Engagement in Massive Open Online Courses | [
{
"docid": "a7eff25c60f759f15b41c85ac5e3624f",
"text": "Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized – active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.",
"title": ""
},
{
"docid": "2fbc75f848a0a3ae8228b5c6cbe76ec4",
"text": "The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. The relationships of goal setting to other theories are described as are the theory's limitations.",
"title": ""
}
] | [
{
"docid": "4dca240e5073db9f09e6fdc3b022a29a",
"text": "We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to three-dimensional physically simulated biped locomotion.",
"title": ""
},
{
"docid": "cf0b49aabe042b93be0c382ad69e4093",
"text": "This paper shows a technique to enhance the resolution of a frequency modulated continuous wave (FMCW) radar system. The range resolution of an FMCW radar system is limited by the bandwidth of the transmitted signal. By using high resolution methods such as the Matrix Pencil Method (MPM) it is possible to enhance the resolution. In this paper a new method to obtain a better resolution for FMCW radar systems is used. This new method is based on the MPM and is enhanced to require less computing power. To evaluate this new technique, simulations and measurements are used. The result shows that this new method is able to improve the performance of FMCW radar systems.",
"title": ""
},
{
"docid": "a5b147f5b3da39fed9ed11026f5974a2",
"text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).",
"title": ""
},
{
"docid": "db7a4ab8d233119806e7edf2a34fffd1",
"text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.",
"title": ""
},
{
"docid": "d9a87325efbd29520c37ec46531c6062",
"text": "Predicting the risk of potential diseases from Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Compared with traditional machine learning models, deep learning based approaches achieve superior performance on risk prediction task. However, none of existing work explicitly takes prior medical knowledge (such as the relationships between diseases and corresponding risk factors) into account. In medical domain, knowledge is usually represented by discrete and arbitrary rules. Thus, how to integrate such medical rules into existing risk prediction models to improve the performance is a challenge. To tackle this challenge, we propose a novel and general framework called PRIME for risk prediction task, which can successfully incorporate discrete prior medical knowledge into all of the state-of-the-art predictive models using posterior regularization technique. Different from traditional posterior regularization, we do not need to manually set a bound for each piece of prior medical knowledge when modeling desired distribution of the target disease on patients. Moreover, the proposed PRIME can automatically learn the importance of different prior knowledge with a log-linear model.Experimental results on three real medical datasets demonstrate the effectiveness of the proposed framework for the task of risk prediction",
"title": ""
},
{
"docid": "719c1b6ad0d945b68b34abceb1ed8e3b",
"text": "This editorial provides a behavioral science view on gamification and health behavior change, describes its principles and mechanisms, and reviews some of the evidence for its efficacy. Furthermore, this editorial explores the relation between gamification and behavior change frameworks used in the health sciences and shows how gamification principles are closely related to principles that have been proven to work in health behavior change technology. Finally, this editorial provides criteria that can be used to assess when gamification provides a potentially promising framework for digital health interventions.",
"title": ""
},
{
"docid": "927f2c68d709c7418ad76fd9d81b18c4",
"text": "With the growing deployment of host and network intrusion detection systems, managing reports from these systems becomes critically important. We present a probabilistic approach to alert correlation, extending ideas from multisensor data fusion. Features used for alert correlation are based on alert content that anticipates evolving IETF standards. The probabilistic approach provides a unified mathematical framework for correlating alerts that match closely but not perfectly, where the minimum degree of match required to fuse alerts is controlled by a single configurable parameter. Only features in common are considered in the fusion algorithm. For each feature we define an appropriate similarity function. The overall similarity is weighted by a specifiable expectation of similarity. In addition, a minimum similarity may be specified for some or all features. Features in this set must match at least as well as the minimum similarity specification in order to combine alerts, regardless of the goodness of match on the feature set as a whole. Our approach correlates attacks over time, correlates reports from heterogeneous sensors, and correlates multiple attack steps.",
"title": ""
},
{
"docid": "121f2bfd854b79a14e8171d875ba951f",
"text": "Arising from many applications at the intersection of decision-making and machine learning, Marginal Maximum A Posteriori (Marginal MAP) problems unify the two main classes of inference, namely maximization (optimization) and marginal inference (counting), and are believed to have higher complexity than both of them. We propose XOR_MMAP, a novel approach to solve the Marginal MAP problem, which represents the intractable counting subproblem with queries to NP oracles, subject to additional parity constraints. XOR_MMAP provides a constant factor approximation to the Marginal MAP problem, by encoding it as a single optimization in a polynomial size of the original problem. We evaluate our approach in several machine learning and decision-making applications, and show that our approach outperforms several state-of-the-art Marginal MAP solvers.",
"title": ""
},
{
"docid": "3bae971fce094c3ff6c34595bac60ef2",
"text": "In this work, we present a 3D 128Gb 2bit/cell vertical-NAND (V-NAND) Flash product. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1xnm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50MB/s write throughput with 3K endurance for typical embedded applications. Also, extended endurance of 35K is achieved with 36MB/s of write throughput for data center and enterprise SSD applications. And 2nd generation of 3D V-NAND opens up a whole new world at SSD endurance, density and battery life for portables.",
"title": ""
},
{
"docid": "7a4bb28ae7c175a018b278653e32c3a1",
"text": "Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.",
"title": ""
},
{
"docid": "f2a1e5d8e99977c53de9f2a82576db69",
"text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.",
"title": ""
},
{
"docid": "d6d07f50778ba3d99f00938b69fe0081",
"text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.",
"title": ""
},
{
"docid": "f2fed9066ac945ae517aef8ec5bb5c61",
"text": "BACKGROUND\nThe aging of society is a global trend, and care of older adults with dementia is an urgent challenge. As dementia progresses, patients exhibit negative emotions, memory disorders, sleep disorders, and agitated behavior. Agitated behavior is one of the most difficult problems for family caregivers and healthcare providers to handle when caring for older adults with dementia.\n\n\nPURPOSE\nThe aim of this study was to investigate the effectiveness of white noise in improving agitated behavior, mental status, and activities of daily living in older adults with dementia.\n\n\nMETHODS\nAn experimental research design was used to study elderly participants two times (pretest and posttest). Six dementia care centers in central and southern Taiwan were targeted to recruit participants. There were 63 participants: 28 were in the experimental group, and 35 were in the comparison group. Experimental group participants received 20 minutes of white noise consisting of ocean, rain, wind, and running water sounds between 4 and 5 P.M. daily over a period of 4 weeks. The comparison group received routine care. Questionnaires were completed, and observations of agitated behaviors were collected before and after the intervention.\n\n\nRESULTS\nAgitated behavior in the experimental group improved significantly between pretest and posttest. Furthermore, posttest scores on the Mini-Mental Status Examination and Barthel Index were slightly better for this group than at pretest. However, the experimental group registered no significant difference in mental status or activities of daily living at posttest. For the comparison group, agitated behavior was unchanged between pretest and posttest.\n\n\nCONCLUSIONS\nThe results of this study support white noise as a simple, convenient, and noninvasive intervention that improves agitated behavior in older adults with dementia. These results may provide a reference for related healthcare providers, educators, and administrators who care for older adults with dementia.",
"title": ""
},
{
"docid": "e3d0a58ddcffabb26d5e059d3ae6b370",
"text": "HCI ( Human Computer Interaction ) studies the ways humans use digital or computational machines, systems or infrastructures. The study of the barriers encountered when users interact with the various interfaces is critical to improving their use, as well as their experience. Access and information processing is carried out today from multiple devices (computers, tablets, phones... ) which is essential to maintain a multichannel consistency. This complexity increases with environments in which we do not have much experience as users, where interaction with the machine is a challenge even in phases of research: virtual reality environments, augmented reality, or viewing and handling of large amounts of data, where the simplicity and ease of use are critical.",
"title": ""
},
{
"docid": "e8c9067f13c9a57be46823425deb783b",
"text": "In order to utilize the tremendous computing power of graphics hardware and to automatically adapt to the fast and frequent changes in its architecture and performance characteristics, this paper implements an automatic tuning system to generate high-performance matrix-multiplication implementation on graphics hardware. The automatic tuning system uses a parameterized code generator to generate multiple versions of matrix multiplication, whose performances are empirically evaluated by actual execution on the target platform. An ad-hoc search engine is employed to search over the implementation space for the version that yields the best performance. In contrast to similar systems on CPUs, which utilize cache blocking, register tiling, instruction scheduling tuning strategies, this paper identifies and exploits several tuning strategies that are unique for graphics hardware. These tuning strategies include optimizing for multiple-render-targets, SIMD instructions with data packing, overcoming limitations on instruction count and dynamic branch instruction. The generated implementations have comparable performance with expert manually tuned version in spite of the significant overhead incurred due to the use of the high-level BrookGPU language.",
"title": ""
},
{
"docid": "01f8616cafa72c473e33f149faff044a",
"text": "We show that the e-commerce domain can provide all the right ingredients for successful data mining and claim that it is a killer domain for data mining. We describe an integrated architecture, based on our experience at Blue Martini Software, for supporting this integration. The architecture can dramatically reduce the pre-processing, cleaning, and data understanding effort often documented to take 80% of the time in knowledge discovery projects. We emphasize the need for data collection at the application server layer (not the web server) in order to support logging of data and metadata that is essential to the discovery process. We describe the data transformation bridges required from the transaction processing systems and customer event streams (e.g., clickstreams) to the data warehouse. We detail the mining workbench, which needs to provide multiple views of the data through reporting, data mining algorithms, visualization, and OLAP. We conclude with a set of challenges.",
"title": ""
},
{
"docid": "fe41de4091692d1af643bf144ac1dcaa",
"text": "Introduction. This research addresses a primary issue that involves motivating academics to share knowledge. Adapting the theory of reasoned action, this study examines the role of motivation that consists of intrinsic motivators (commitment; enjoyment in helping others) and extrinsic motivators (reputation; organizational rewards) to determine and explain the behaviour of Malaysian academics in sharing knowledge. Method. A self-administered questionnaire was distributed using a non-probability sampling technique. A total of 373 completed responses were collected with a total response rate of 38.2%. Analysis. The partial least squares analysis was used to analyse the data. Results. The results indicated that all five of the hypotheses were supported. Analysis of data from the five higher learning institutions in Malaysia found that commitment and enjoyment in helping others (i.e., intrinsic motivators) and reputation and organizational rewards (i.e., extrinsic motivators) have a positive and significant relationship with attitude towards knowledge-sharing. In addition, the findings revealed that intrinsic motivators are more influential than extrinsic motivators. This suggests that academics are influenced more by intrinsic motivators than by extrinsic motivators. Conclusions. The findings provided an indication of the determinants in enhancing knowledgesharing intention among academics in higher education institutions through extrinsic and intrinsic motivators.",
"title": ""
},
{
"docid": "2da67ed8951caf3388ca952465d61b37",
"text": "As a significant supplier of labour migrants, Southeast Asia presents itself as an important site for the study of children in transnational families who are growing up separated from at least one migrant parent and sometimes cared for by 'other mothers'. Through the often-neglected voices of left-behind children, we investigate the impact of parental migration and the resulting reconfiguration of care arrangements on the subjective well-being of migrants' children in two Southeast Asian countries, Indonesia and the Philippines. We theorise the child's position in the transnational family nexus through the framework of the 'care triangle', representing interactions between three subject groups- 'left-behind' children, non-migrant parents/other carers; and migrant parent(s). Using both quantitative (from 1010 households) and qualitative (from 32 children) data from a study of child health and migrant parents in Southeast Asia, we examine relationships within the caring spaces both of home and of transnational spaces. The interrogation of different dimensions of care reveals the importance of contact with parents (both migrant and nonmigrant) to subjective child well-being, and the diversity of experiences and intimacies among children in the two study countries.",
"title": ""
},
{
"docid": "db0b55cd4064799b9d7c52c6f3da6aac",
"text": "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-toend to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.",
"title": ""
},
{
"docid": "4c21f108d05132ce00fe6d028c17c7ab",
"text": "In this work, a new predictive phase-locked loop (PLL) for encoderless control of a permanent-magnet synchronous generator (PMSG) in a variable-speed wind energy conversion system (WECS) is presented. The idea of the predictive PLL is derived from the direct-model predictive control (DMPC) principle. The predictive PLL uses a limited (discretized) number of rotor-angles for predicting/estimating the back-electromotive-force (BEMF) of the PMSG. subsequently, that predicted angle, which optimizes a pre-defined quality function, is chosen to become the best rotor-angle/position. Accordingly, the fixed gain proportional integral (FGPI) regulator that is normally used in PLLs is eliminated. The performance of the predictive PLL is validated experimentally and compared with that of the traditional one under various operating scenarios and under variations of the PMSG parameters.",
"title": ""
}
] | scidocsrr |
7083fcf39daecbb9e4e1ef55b25e9f16 | Big data on cloud for government agencies: benefits, challenges, and solutions | [
{
"docid": "72944a6ad81c2802d0401f9e0c2d8bb5",
"text": "Available online 10 August 2016 Big Data (BD), with their potential to ascertain valued insights for enhanced decision-making process, have recently attracted substantial interest from both academics and practitioners. Big Data Analytics (BDA) is increasingly becoming a trending practice that many organizations are adopting with the purpose of constructing valuable information from BD. The analytics process, including the deployment and use of BDA tools, is seen by organizations as a tool to improve operational efficiency though it has strategic potential, drive new revenue streams and gain competitive advantages over business rivals. However, there are different types of analytic applications to consider. Therefore, prior to hasty use and buying costly BD tools, there is a need for organizations to first understand the BDA landscape. Given the significant nature of theBDandBDA, this paper presents a state-ofthe-art review that presents a holistic view of the BD challenges and BDA methods theorized/proposed/ employed by organizations to help others understand this landscape with the objective of making robust investment decisions. In doing so, systematically analysing and synthesizing the extant research published on BD and BDA area. More specifically, the authors seek to answer the following two principal questions: Q1 –What are the different types of BD challenges theorized/proposed/confronted by organizations? and Q2 – What are the different types of BDA methods theorized/proposed/employed to overcome BD challenges?. This systematic literature review (SLR) is carried out through observing and understanding the past trends and extant patterns/themes in the BDA research area, evaluating contributions, summarizing knowledge, thereby identifying limitations, implications and potential further research avenues to support the academic community in exploring research themes/patterns. Thus, to trace the implementation of BD strategies, a profiling method is employed to analyze articles (published in English-speaking peer-reviewed journals between 1996 and 2015) extracted from the Scopus database. The analysis presented in this paper has identified relevant BD research studies that have contributed both conceptually and empirically to the expansion and accrual of intellectual wealth to the BDA in technology and organizational resource management discipline. © 2016 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "03d5c8627ec09e4332edfa6842b6fe44",
"text": "In the same way businesses use big data to pursue profits, governments use it to promote the public good.",
"title": ""
}
] | [
{
"docid": "f4d514a95cc4444dc1cbfdc04737ec75",
"text": "Ultra-high speed data links such as 400GbE continuously push transceivers to achieve better performance and lower power consumption. This paper presents a highly parallelized TRX at 56Gb/s with integrated serializer/deserializer, FFE/CTLE/DFE, CDR, and eye-monitoring circuits. It achieves BER<10−12 under 24dB loss at 14GHz while dissipating 602mW of power.",
"title": ""
},
{
"docid": "d57072f4ffa05618ebf055824e7ae058",
"text": "Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network’s members and non-members; we analyze the impact of privacy concerns on members’ behavior; we compare members’ stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual’s privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members’ misconceptions about the online community’s actual size and composition, and about the visibility of members’ profiles.",
"title": ""
},
{
"docid": "b074ba4ae329ffad0da3216dc84b22b9",
"text": "A recent research trend in Artificial Intelligence (AI) is the combination of several programs into one single, stronger, program; this is termed portfolio methods. We here investigate the application of such methods to Game Playing Programs (GPPs). In addition, we consider the case in which only one GPP is available by decomposing this single GPP into several ones through the use of parameters or even simply random seeds. These portfolio methods are trained in a learning phase. We propose two different offline approaches. The simplest one, BestArm, is a straightforward optimization of seeds or parameters; it performs quite well against the original GPP, but performs poorly against an opponent which repeats games and learns. The second one, namely Nash-portfolio, performs similarly in a “one game” test, and is much more robust against an opponent who learns. We also propose an online learning portfolio, which tests several of the GPP repeatedly and progressively switches to the best one using a bandit algorithm.",
"title": ""
},
{
"docid": "b99efb63e8016c7f5ab09e868ae894da",
"text": "The popular bag of words approach for action recognition is based on the classifying quantized local features density. This approach focuses excessively on the local features but discards all information about the interactions among them. Local features themselves may not be discriminative enough, but combined with their contexts, they can be very useful for the recognition of some actions. In this paper, we present a novel representation that captures contextual interactions between interest points, based on the density of all features observed in each interest point's mutliscale spatio-temporal contextual domain. We demonstrate that augmenting local features with our contextual feature significantly improves the recognition performance.",
"title": ""
},
{
"docid": "3bb1065dfc4e06fa35ef91e2c89d50d2",
"text": "Portable, accurate, and relatively inexpensive high-frequency vector network analyzers (VNAs) have great utility for a wide range of applications, encompassing microwave circuit characterization, reflectometry, imaging, material characterization, and nondestructive testing to name a few. To meet the rising demand for VNAs possessing the aforementioned attributes, we present a novel and simple VNA design based on a standing-wave probing device and an electronically controllable phase shifter. The phase shifter is inserted between a device under test (DUT) and a standing-wave probing device. The complex reflection coefficient of the DUT is then obtained from multiple standing-wave voltage measurements taken for several different values of the phase shift. The proposed VNA design eliminates the need for expensive heterodyne detection schemes required for tuned-receiver-based VNA designs. Compared with previously developed VNAs that operate based on performing multiple power measurements, the proposed VNA utilizes a single power detector without the need for multiport hybrid couplers. In this paper, the efficacy of the proposed VNA is demonstrated via numerical simulations and experimental measurements. For this purpose, measurements of various DUTs obtained using an X-band (8.2-12.4 GHz) prototype VNA are presented and compared with results obtained using an Agilent HP8510C VNA. The results show that the proposed VNA provides highly accurate vector measurements with typical errors on the order of 0.02 and 1° for magnitude and phase, respectively.",
"title": ""
},
{
"docid": "e37a93ff39840e1d6df589b415848a85",
"text": "In this paper we propose a stacked generalization (or stacking) model for event extraction in bio-medical text. Event extraction deals with the process of extracting detailed biological phenomenon, which is more challenging compared to the traditional binary relation extraction such as protein-protein interaction. The overall process consists of mainly three steps: event trigger detection, argument extraction by edge detection and finding correct combination of arguments. In stacking, we use Linear Support Vector Classification (Linear SVC), Logistic Regression (LR) and Stochastic Gradient Descent (SGD) as base-level learning algorithms. As meta-level learner we use Linear SVC. In edge detection step, we find out the arguments of triggers detected in trigger detection step using a SVM classifier. To find correct combination of arguments, we use rules generated by studying the properties of bio-molecular event expressions, and form an event expression consisting of event trigger, its class and arguments. The output of trigger detection is fed to edge detection for argument extraction. Experiments on benchmark datasets of BioNLP2011 show the recall, precision and Fscore of 48.96%, 66.46% and 56.38%, respectively. Comparisons with the existing systems show that our proposed model attains state-of-the-art performance.",
"title": ""
},
{
"docid": "4d585dd4d56dda31c2fb929a61aba5f8",
"text": "Growing greenhouse vegetables is one of the most exacting and intense forms of all agricultural enterprises. In combination with greenhouses, hydroponics is becoming increasingly popular, especially in the United States, Canada, western Europe, and Japan. It is high technology and capital intensive. It is highly productive, conservative of water and land and protective of the environment. For production of leafy vegetables and herbs, deep flow hydroponics is common. For growing row crops such as tomato, cucumber, and pepper, the two most popular artificial growing media are rockwool and perlite. Computers today operate hundreds of devices within a greenhouse by utilizing dozens of input parameters, to maintain the most desired growing environment. The technology of greenhouse food production is changing rapidly with systems today producing yields never before realized. The future for hydroponic/soilless cultured systems appears more positive today than any time over the last 50 years.",
"title": ""
},
{
"docid": "11c4d318abb6d2e838f74d2a6ae61415",
"text": "We propose a new framework for entity and event extraction based on generative adversarial imitation learning – an inverse reinforcement learning method using generative adversarial network (GAN). We assume that instances and labels yield to various extents of difficulty and the gains and penalties (rewards) are expected to be diverse. We utilize discriminators to estimate proper rewards according to the difference between the labels committed by ground-truth (expert) and the extractor (agent). Experiments also demonstrate that the proposed framework outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "5a3f65509a2acd678563cd495fe287de",
"text": "Auditory menus have the potential to make devices that use visual menus accessible to a wide range of users. Visually impaired users could especially benefit from the auditory feedback received during menu navigation. However, auditory menus are a relatively new concept, and there are very few guidelines that describe how to design them. This paper details how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, this set of studies examined possible ways of designing an auditory scrollbar for an auditory menu. The following different auditory scrollbar designs were evaluated: single-tone, double-tone, alphabetical grouping, and proportional grouping. Three different evaluations were conducted to determine the best design. The first two evaluations were conducted with sighted users, and the last evaluation was conducted with visually impaired users. The results suggest that pitch polarity does not matter, and proportional grouping is the best of the auditory scrollbar designs evaluated here.",
"title": ""
},
{
"docid": "4749d4153d09082d81b2b64f7954b9cd",
"text": " Background. Punctate or stippled cartilaginous calcifications are associated with many conditions, including chromosomal, infectious, endocrine, and teratogenic etiologies. Some of these conditions are clinically mild, while others are lethal. Accurate diagnosis can prove instrumental in clinical management and in genetic counseling. Objective. To describe the diagnostic radiographic features seen in Pacman dysplasia, a distinct autosomal recessive, lethal skeletal dysplasia. Materials and methods. We present the fourth reported case of Pacman dysplasia and compare the findings seen in our patient with the three previously described patients. Results. Invariable and variable radiographic findings were seen in all four cases of histologically proven Pacman dysplasia. Conclusion. Pacman dysplasia presents both constant and variable diagnostic radiographic features.",
"title": ""
},
{
"docid": "8a679c93185332398c5261ddcfe81e84",
"text": "We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain, using a function approximator involving linear combinations of fixed basis functions. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an ergodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. In addition to proving new and stronger results than those previously available, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. Finally, we prove that on-line updates, based on entire trajectories of the Markov chain, are in a certain sense necessary for convergence. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning.",
"title": ""
},
{
"docid": "4248fb006221fbb74d565705dcbc5a7a",
"text": "Shot boundary detection (SBD) is an important and fundamental step in video content analysis such as content-based video indexing, browsing, and retrieval. In this paper, a hybrid SBD method is presented by integrating a high-level fuzzy Petri net (HLFPN) model with keypoint matching. The HLFPN model with histogram difference is executed as a predetection. Next, the speeded-up robust features (SURF) algorithm that is reliably robust to image affine transformation and illumination variation is used to figure out all possible false shots and the gradual transition based on the assumption from the HLFPN model. The top-down design can effectively lower down the computational complexity of SURF algorithm. The proposed approach has increased the precision of SBD and can be applied in different types of videos.",
"title": ""
},
{
"docid": "b9717a3ce0ed7245621314ba3e1ce251",
"text": "Analog beamforming with phased arrays is a promising technique for 5G wireless communication at millimeter wave frequencies. Using a discrete codebook consisting of multiple analog beams, each beam focuses on a certain range of angles of arrival or departure and corresponds to a set of fixed phase shifts across frequency due to practical hardware considerations. However, for sufficiently large bandwidth, the gain provided by the phased array is actually frequency dependent, which is an effect called beam squint, and this effect occurs even if the radiation pattern of the antenna elements is frequency independent. This paper examines the nature of beam squint for a uniform linear array (ULA) and analyzes its impact on codebook design as a function of the number of antennas and system bandwidth normalized by the carrier frequency. The criterion for codebook design is to guarantee that each beam's minimum gain for a range of angles and for all frequencies in the wideband system exceeds a target threshold, for example 3 dB below the array's maximum gain. Analysis and numerical examples suggest that a denser codebook is required to compensate for beam squint. For example, 54% more beams are needed compared to a codebook design that ignores beam squint for a ULA with 32 antennas operating at a carrier frequency of 73 GHz and bandwidth of 2.5 GHz. Furthermore, beam squint with this design criterion limits the bandwidth or the number of antennas of the array if the other one is fixed.",
"title": ""
},
{
"docid": "2d6d5c8b1ac843687db99ccf50a0baff",
"text": "This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. When compared to six other terrain modelling techniques, this approach has minimal error between the sensed data and the representation; and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% sharper compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier’s behaviour.",
"title": ""
},
{
"docid": "61ba52f205c8b497062995498816b60f",
"text": "The past century experienced a proliferation of retail formats in the marketplace. However, as a new century begins, these retail formats are being threatened by the emergence of a new kind of store, the online or Internet store. From being almost a novelty in 1995, online retailing sales were expected to reach $7 billion by 2000 [9]. In this increasngly timeconstrained world, Internet stores allow consumers to shop from the convenience of remote locations. Yet most of these Internet stores are losing money [6]. Why is such counterintuitive phenomena prevailing? The explanation may lie in the risks associated with Internet shopping. These risks may arise because consumers are concerned about the security of transmitting credit card information over the Internet. Consumers may also be apprehensive about buying something without touching or feeling it and being unable to return it if it fails to meet their approval. Having said this, however, we must point out that consumers are buying goods on the Internet. This is reflected in the fact that total sales on the Internet are on the increase [8, 11]. Who are the consumers that are patronizing the Internet? Evidently, for them the perception of the risk associated with shopping on the Internet is low or is overshadowed by its relative convenience. This article attempts to determine why certain consumers are drawn to the Internet and why others are not. Since the pioneering research done by Becker [3], it has been accepted that the consumer maximizes his utility subject to not only income constraints but also time constraints. A consumer seeks out his best decision given that he has a limited budget of time and money. While purchasing a product from a store, a consumer has to expend both money and time. Therefore, the consumer patronizes the retail store where his total costs or the money and time spent in the entire process are the least. Since the util-",
"title": ""
},
{
"docid": "28cf177349095e7db4cdaf6c9c4a6cb1",
"text": "Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks.",
"title": ""
},
{
"docid": "8bb5a38908446ca4e6acb4d65c4c817c",
"text": "Column-oriented database systems have been a real game changer for the industry in recent years. Highly tuned and performant systems have evolved that provide users with the possibility of answering ad hoc queries over large datasets in an interactive manner. In this paper we present the column-oriented datastore developed as one of the central components of PowerDrill. It combines the advantages of columnar data layout with other known techniques (such as using composite range partitions) and extensive algorithmic engineering on key data structures. The main goal of the latter being to reduce the main memory footprint and to increase the efficiency in processing typical user queries. In this combination we achieve large speed-ups. These enable a highly interactive Web UI where it is common that a single mouse click leads to processing a trillion values in the underlying dataset.",
"title": ""
},
{
"docid": "4775bf71a5eea05b77cafa53daefcff9",
"text": "There is mounting empirical evidence that interacting with nature delivers measurable benefits to people. Reviews of this topic have generally focused on a specific type of benefit, been limited to a single discipline, or covered the benefits delivered from a particular type of interaction. Here we construct novel typologies of the settings, interactions and potential benefits of people-nature experiences, and use these to organise an assessment of the benefits of interacting with nature. We discover that evidence for the benefits of interacting with nature is geographically biased towards high latitudes and Western societies, potentially contributing to a focus on certain types of settings and benefits. Social scientists have been the most active researchers in this field. Contributions from ecologists are few in number, perhaps hindering the identification of key ecological features of the natural environment that deliver human benefits. Although many types of benefits have been studied, benefits to physical health, cognitive performance and psychological well-being have received much more attention than the social or spiritual benefits of interacting with nature, despite the potential for important consequences arising from the latter. The evidence for most benefits is correlational, and although there are several experimental studies, little as yet is known about the mechanisms that are important for delivering these benefits. For example, we do not know which characteristics of natural settings (e.g., biodiversity, level of disturbance, proximity, accessibility) are most important for triggering a beneficial interaction, and how these characteristics vary in importance among cultures, geographic regions and socio-economic groups. These are key directions for future research if we are to design landscapes that promote high quality interactions between people and nature in a rapidly urbanising world.",
"title": ""
},
{
"docid": "db158f806e56a1aae74aae15252703d2",
"text": "Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has proven empirically to be very intricate to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated with a smooth generative model. We derive fundamental upper bounds on the robustness to perturbations of any classification function, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our analysis of the robustness also provides insights onto key properties of generative models, such as their smoothness and dimensionality of latent space. We conclude with numerical experimental results showing that our bounds provide informative baselines to the maximal achievable robustness on several datasets.",
"title": ""
},
{
"docid": "4f7fbc3f313e68456e57a2d6d3c90cd0",
"text": "This survey paper describes a focused literature survey of machine learning (ML) and data mining (DM) methods for cyber analytics in support of intrusion detection. Short tutorial descriptions of each ML/DM method are provided. Based on the number of citations or the relevance of an emerging method, papers representing each method were identified, read, and summarized. Because data are so important in ML/DM approaches, some well-known cyber data sets used in ML/DM are described. The complexity of ML/DM algorithms is addressed, discussion of challenges for using ML/DM for cyber security is presented, and some recommendations on when to use a given method are provided.",
"title": ""
}
] | scidocsrr |
3c446b49ef831b5440cd70aeec16bf2a | QoE Doctor: Diagnosing Mobile App QoE with Automated UI Control and Cross-layer Analysis | [
{
"docid": "5af83f822ac3d9379c7b477ff1d32a97",
"text": "Sprout is an end-to-end transport protocol for interactive applications that desire high throughput and low delay. Sprout works well over cellular wireless networks, where link speeds change dramatically with time, and current protocols build up multi-second queues in network gateways. Sprout does not use TCP-style reactive congestion control; instead the receiver observes the packet arrival times to infer the uncertain dynamics of the network path. This inference is used to forecast how many bytes may be sent by the sender, while bounding the risk that packets will be delayed inside the network for too long. In evaluations on traces from four commercial LTE and 3G networks, Sprout, compared with Skype, reduced self-inflicted end-to-end delay by a factor of 7.9 and achieved 2.2× the transmitted bit rate on average. Compared with Google’s Hangout, Sprout reduced delay by a factor of 7.2 while achieving 4.4× the bit rate, and compared with Apple’s Facetime, Sprout reduced delay by a factor of 8.7 with 1.9× the bit rate. Although it is end-to-end, Sprout matched or outperformed TCP Cubic running over the CoDel active queue management algorithm, which requires changes to cellular carrier equipment to deploy. We also tested Sprout as a tunnel to carry competing interactive and bulk traffic (Skype and TCP Cubic), and found that Sprout was able to isolate client application flows from one another.",
"title": ""
},
{
"docid": "74227709f4832c3978a21abb9449203b",
"text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.",
"title": ""
},
{
"docid": "e4ade1f0baea7c50d0dff4470bbbfcd9",
"text": "Ad networks for mobile apps require inspection of the visual layout of their ads to detect certain types of placement frauds. Doing this manually is error prone, and does not scale to the sizes of today’s app stores. In this paper, we design a system called DECAF to automatically discover various placement frauds scalably and effectively. DECAF uses automated app navigation, together with optimizations to scan through a large number of visual elements within a limited time. It also includes a framework for efficiently detecting whether ads within an app violate an extensible set of rules that govern ad placement and display. We have implemented DECAF for Windows-based mobile platforms, and applied it to 1,150 tablet apps and 50,000 phone apps in order to characterize the prevalence of ad frauds. DECAF has been used by the ad fraud team in Microsoft and has helped find many instances of ad frauds.",
"title": ""
}
] | [
{
"docid": "5f516d2453d976d015ae28149892af43",
"text": "This two-part study integrates a quantitative review of one year of US newspaper coverage of climate science with a qualitative, comparative analysis of media-created themes and frames using a social constructivist approach. In addition to an examination of newspaper articles, this paper includes a reflexive comparison with attendant wire stories and scientific texts. Special attention is given to articles constructed with and framed by rhetoric emphasising uncertainty, controversy, and climate scepticism. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d993c297753e963aab55718c556783ea",
"text": "Customer relationship management (CRM) initiatives have gained much attention in recent years. With the aid of data mining technology, businesses can formulate specific strategies for different customer bases more precisely. Additionally, personalisation is another important issue in CRM - especially when a company has a huge product range. This paper presents a case model and investigates the use of computational intelligent techniques for CRM. These techniques allow the complex functions of relating customer behaviour to internal business processes to be learned more easily and the industry expertise and experience from business managers to be integrated into the modelling framework directly. Hence, they can be used in the CRM framework to enhance the creation of targeted strategies for specific customer bases.",
"title": ""
},
{
"docid": "83d330486c50fe2ae1d6960a4933f546",
"text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.",
"title": ""
},
{
"docid": "d0603a92425308bec8c53551d018accc",
"text": "It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.",
"title": ""
},
{
"docid": "4aed0c391351671ccb5297b2fe9d4891",
"text": "Applying evolution to generate simple agent behaviours has become a successful and heavily used practice. However the notion of scaling up behaviour into something more noteworthy and complex is far from elementary. In this paper we propose a method of combining neuroevolution practices with the subsumption paradigm; in which we generate Artificial Neural Network (ANN) layers ordered in a hierarchy such that high-level controllers can override lower behaviours. To explore this proposal we apply our controllers to the dasiaEvoTankspsila domain; a small, dynamic, adversarial environment. Our results show that once layers are evolved we can generate competent and capable results that can deal with hierarchies of multiple layers. Further analysis of results provides interesting insights into design decisions for such controllers, particularly when compared to the original suggestions for the subsumption paradigm.",
"title": ""
},
{
"docid": "5694ebf4c1f1e0bf65dd7401d35726ed",
"text": "Data collection is not a big issue anymore with available honeypot software and setups. However malware collections gathered from these honeypot systems often suffer from massive sample counts, data analysis systems like sandboxes cannot cope with. Sophisticated self-modifying malware is able to generate new polymorphic instances of itself with different message digest sums for each infection attempt, thus resulting in many different samples stored for the same specimen. Scaling analysis systems that are fed by databases that rely on sample uniqueness based on message digests is only feasible to a certain extent. In this paper we introduce a non cryptographic, fast to calculate hash function for binaries in the Portable Executable format that transforms structural information about a sample into a hash value. Grouping binaries by hash values calculated with the new function allows for detection of multiple instances of the same polymorphic specimen as well as samples that are broken e.g. due to transfer errors. Practical evaluation on different malware sets shows that the new function allows for a significant reduction of sample counts.",
"title": ""
},
{
"docid": "b4e3d746969860c7a3946487a2609a03",
"text": "People tracking is a key technology for autonomous systems, intelligent cars and social robots operating in populated environments. What makes the task di fficult is that the appearance of humans in range data can change drastically as a function of body pose, distance to the sensor, self-occlusion and occlusion by other objects. In this paper we propose a novel approach to pedestrian detection in 3D range data based on supervised learning techniques to create a bank of classifiers for different height levels of the human body. In particular, our approach applies AdaBoost to train a strong classifier from geometrical and statistical features of groups of neighboring points at the same height. In a second step, the AdaBoost classifiers mutually enforce their evidence across di fferent heights by voting into a continuous space. Pedestrians are finally found efficiently by mean-shift search for local maxima in the voting space. Experimental results carried out with 3D laser range data illustrate the robustness and e fficiency of our approach even in cluttered urban environments. The learned people detector reaches a classification rate up to 96% from a single 3D scan.",
"title": ""
},
{
"docid": "85d31f3940ee258589615661e596211d",
"text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.",
"title": ""
},
{
"docid": "20beb90c2f2024b3976a2ee0c95059c6",
"text": "In this paper, the robust trajectory tracking design of uncertain nonlinear systems is investigated by virtue of a self-learning optimal control formulation. The primary novelty lies in that an effective learning based robust tracking control strategy is developed for nonlinear systems under a general uncertain environment. The augmented system construction is performed by combining the tracking error with the reference trajectory. Then, an improved adaptive critic technique, which does not depend on the initial stabilizing controller, is employed to solve the Hamilton–Jacobi–Bellman (HJB) equation with respect to the nominal augmented system. Using the obtained control law, the closed-loop form of the augmented system is built with stability proof. Moreover, the robust trajectory tracking performance is guaranteed via Lyapunov approach in theory and then through simulation demonstration, where an application to a practical spring–mass–damper system is included.",
"title": ""
},
{
"docid": "94c7fde13a5792a89b7575ac41827f1c",
"text": "The noise sensitivities of nine different QRS detection algorithms were measured for a normal, single-channel, lead-II, synthesized ECG corrupted with five different types of synthesized noise: electromyographic interference, 60-Hz power line interference, baseline drift due to respiration, abrupt baseline shift, and a composite noise constructed from all of the other noise types. The percentage of QRS complexes detected, the number of false positives, and the detection delay were measured. None of the algorithms were able to detect all QRS complexes without any false positives for all of the noise types at the highest noise level. Algorithms based on amplitude and slope had the highest performance for EMG-corrupted ECG. An algorithm using a digital filter had the best performance for the composite-noise-corrupted data.<<ETX>>",
"title": ""
},
{
"docid": "17d1439650efccf83390834ba933db1a",
"text": "The arterial vascularization of the pineal gland (PG) remains a debatable subject. This study aims to provide detailed information about the arterial vascularization of the PG. Thirty adult human brains were obtained from routine autopsies. Cerebral arteries were separately cannulated and injected with colored latex. The dissections were carried out using a surgical microscope. The diameters of the branches supplying the PG at their origin and vascularization areas of the branches of the arteries were investigated. The main artery of the PG was the lateral pineal artery, and it originated from the posterior circulation. The other arteries included the medial pineal artery from the posterior circulation and the rostral pineal artery mainly from the anterior circulation. Posteromedial choroidal artery was an important artery that branched to the PG. The arterial supply to the PG was studied comprehensively considering the debate and inadequacy of previously published studies on this issue available in the literature. This anatomical knowledge may be helpful for surgical treatment of pathologies of the PG, especially in children who develop more pathology in this region than adults.",
"title": ""
},
{
"docid": "5eb03beba0ac2c94e6856d16e90799fc",
"text": "The explosive growth of malware variants poses a major threat to information security. Traditional anti-virus systems based on signatures fail to classify unknown malware into their corresponding families and to detect new kinds of malware programs. Therefore, we propose a machine learning based malware analysis system, which is composed of three modules: data processing, decision making, and new malware detection. The data processing module deals with gray-scale images, Opcode n-gram, and import functions, which are employed to extract the features of the malware. The decision-making module uses the features to classify the malware and to identify suspicious malware. Finally, the detection module uses the shared nearest neighbor (SNN) clustering algorithm to discover new malware families. Our approach is evaluated on more than 20 000 malware instances, which were collected by Kingsoft, ESET NOD32, and Anubis. The results show that our system can effectively classify the unknown malware with a best accuracy of 98.9%, and successfully detects 86.7% of the new malware.",
"title": ""
},
{
"docid": "66e00cb4593c1bc97a10e0b80dcd6a8f",
"text": "OBJECTIVE\nTo determine the probable factors responsible for stress among undergraduate medical students.\n\n\nMETHODS\nThe qualitative descriptive study was conducted at a public-sector medical college in Islamabad, Pakistan, from January to April 2014. Self-administered open-ended questionnaires were used to collect data from first year medical students in order to study the factors associated with the new environment.\n\n\nRESULTS\nThere were 115 students in the study with a mean age of 19±6.76 years. Overall, 35(30.4%) students had mild to moderate physical problems, 20(17.4%) had severe physical problems and 60(52.2%) did not have any physical problem. Average stress score was 19.6±6.76. Major elements responsible for stress identified were environmental factors, new college environment, student abuse, tough study routines and personal factors.\n\n\nCONCLUSIONS\nMajority of undergraduate students experienced stress due to both academic and emotional factors.",
"title": ""
},
{
"docid": "0251f38f48c470e2e04fb14fc7ba34b2",
"text": "The fast development of Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand of smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks are analyzed and backdoors are identified. A detailed security analysis procedure will be elaborated on a home automation system and a smart meter proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.",
"title": ""
},
{
"docid": "2a9d399edc3c2dcc153d966760f38d80",
"text": "Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is over a computer network and the other is on a shared memory system. We establish an ergodic convergence rate O(1/ √ K) for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by √ K (K is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.",
"title": ""
},
{
"docid": "e20d26ce3dea369ae6817139ff243355",
"text": "This article explores the roots of white support for capital punishment in the United States. Our analysis addresses individual-level and contextual factors, paying particular attention to how racial attitudes and racial composition influence white support for capital punishment. Our findings suggest that white support hinges on a range of attitudes wider than prior research has indicated, including social and governmental trust and individualist and authoritarian values. Extending individual-level analyses, we also find that white responses to capital punishment are sensitive to local context. Perhaps most important, our results clarify the impact of race in two ways. First, racial prejudice emerges here as a comparatively strong predictor of white support for the death penalty. Second, black residential proximity functions to polarize white opinion along lines of racial attitude. As the black percentage of county residents rises, so too does the impact of racial prejudice on white support for capital punishment.",
"title": ""
},
{
"docid": "6d396f65f8cb4b7dcd4b502e7b167aca",
"text": "We study cost-sensitive learning of decision trees that incorporate both test costs and misclassification costs. In particular, we first propose a lazy decision tree learning that minimizes the total cost of tests and misclassifications. Then assuming test examples may contain unknown attributes whose values can be obtained at a cost (the test cost), we design several novel test strategies which attempt to minimize the total cost of tests and misclassifications for each test example. We empirically evaluate our treebuilding and various test strategies, and show that they are very effective. Our results can be readily applied to real-world diagnosis tasks, such as medical diagnosis where doctors must try to determine what tests (e.g., blood tests) should be ordered for a patient to minimize the total cost of tests and misclassifications (misdiagnosis). A case study on heart disease is given throughout the paper.",
"title": ""
},
{
"docid": "caac45f02e29295d592ee784697c6210",
"text": "The studies included in this PhD thesis examined the interactions of syphilis, which is caused by Treponema pallidum, and HIV. Syphilis reemerged worldwide in the late 1990s and hereafter increasing rates of early syphilis were also reported in Denmark. The proportion of patients with concurrent HIV has been substantial, ranging from one third to almost two thirds of patients diagnosed with syphilis some years. Given that syphilis facilitates transmission and acquisition of HIV the two sexually transmitted diseases are of major public health concern. Further, syphilis has a negative impact on HIV infection, resulting in increasing viral loads and decreasing CD4 cell counts during syphilis infection. Likewise, HIV has an impact on the clinical course of syphilis; patients with concurrent HIV are thought to be at increased risk of neurological complications and treatment failure. Almost ten per cent of Danish men with syphilis acquired HIV infection within five years after they were diagnosed with syphilis during an 11-year study period. Interestingly, the risk of HIV declined during the later part of the period. Moreover, HIV-infected men had a substantial increased risk of re-infection with syphilis compared to HIV-uninfected men. As one third of the HIV-infected patients had viral loads >1,000 copies/ml, our conclusion supported the initiation of cART in more HIV-infected MSM to reduce HIV transmission. During a five-year study period, including the majority of HIV-infected patients from the Copenhagen area, we observed that syphilis was diagnosed in the primary, secondary, early and late latent stage. These patients were treated with either doxycycline or penicillin and the rate of treatment failure was similar in the two groups, indicating that doxycycline can be used as a treatment alternative - at least in an HIV-infected population. During a four-year study period, the T. pallidum strain type distribution was investigated among patients diagnosed by PCR testing of material from genital lesions. In total, 22 strain types were identified. HIV-infected patients were diagnosed with nine different strains types and a difference by HIV status was not observed indicating that HIV-infected patients did not belong to separate sexual networks. In conclusion, concurrent HIV remains common in patients diagnosed with syphilis in Denmark, both in those diagnosed by serological testing and PCR testing. Although the rate of syphilis has stabilized in recent years, a spread to low-risk groups is of concern, especially due to the complex symptomatology of syphilis. However, given the efficient treatment options and the targeted screening of pregnant women and persons at higher risk of syphilis, control of the infection seems within reach. Avoiding new HIV infections is the major challenge and here cART may play a prominent role.",
"title": ""
},
{
"docid": "124d740d3796d6a707100e0d8c384f1f",
"text": "We present Nodeinfo, an unsupervised algorithm for anomaly detection in system logs. We demonstrate Nodeinfo's effectiveness on data from four of the world's most powerful supercomputers: using logs representing over 746 million processor-hours, in which anomalous events called alerts were manually tagged for scoring, we aim to automatically identify the regions of the log containing those alerts. We formalize the alert detection task in these terms, describe how Nodeinfo uses the information entropy of message terms to identify alerts, and present an online version of this algorithm, which is now in production use. This is the first work to investigate alert detection on (several) publicly-available supercomputer system logs, thereby providing a reproducible performance baseline.",
"title": ""
},
{
"docid": "ef208f640807a377c4301fb22cd172cb",
"text": "Providing access to relevant biomedical literature in a clinical setting has the potential to bridge a critical gap in evidence-based medicine. Here, our goal is specifically to provide relevant articles to clinicians to improve their decision-making in diagnosing, treating, and testing patients. To this end, the TREC 2014 Clinical Decision Support Track evaluated a system’s ability to retrieve relevant articles in one of three categories (Diagnosis, Treatment, Test) using an idealized form of a patient medical record . Over 100 submissions from over 25 participants were evaluated on 30 topics, resulting in over 37k relevance judgments. In this article, we provide an overview of the task, a survey of the information retrieval methods employed by the participants, an analysis of the results, and a discussion on the future directions for this challenging yet important task.",
"title": ""
}
] | scidocsrr |
5a4098d72885cbcbcffd0f1fb7eb6091 | The beliefs behind the teacher that influences their ICT practices | [
{
"docid": "ecddd4f80f417dcec49021065394c89a",
"text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "2683c65d587e8febe45296f1c124e04d",
"text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.",
"title": ""
},
{
"docid": "4f096ba7fc6164cdbf5d37676d943fa8",
"text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.",
"title": ""
},
{
"docid": "1a9e2481abf23501274e67575b1c9be6",
"text": "The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the idealâ€, which originated in the compromise programming method. In VIKOR linear normalization and in TOPSIS vector normalization is used to eliminate the units of criterion functions. The VIKOR method of compromise ranking determines a compromise solution, providing a maximum “group utility†for the “majority†and a minimum of an individual regret for the “opponentâ€. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of these two methods is illustrated with a numerical example, showing their similarity and some differences. a, 1 b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "71aae4cbccf6d3451d35528ceca8b8a9",
"text": "We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments, and also preserves their hierarchical and temporal relationships. Using simple linear SVM on the resultant bag of hierarchical space-time segments representation, we attain better than, or comparable to, state-of-the-art action recognition performance on two challenging benchmark datasets and at the same time produce good action localization results.",
"title": ""
},
{
"docid": "372c5918e55e79c0a03c14105eb50fad",
"text": "Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a loss function in a greedy fashion. The resulted estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on CV or a test set. This paper studies numerical convergence, consistency, and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting’s greedy optimization to the infinimum of the loss function over the linear span. Using the numerical convergence result, we find early stopping strategies under which boosting is shown to be consistent based on iid samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights to practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step sizes, as known in practice through the works of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with ǫ → 0 stepsize becomes an L-margin maximizer when left to run to convergence.",
"title": ""
},
{
"docid": "efc11b77182119202190f97d705b3bb7",
"text": "In many E-commerce recommender systems, a special class of recommendation involves recommending items to users in a life cycle. For example, customers who have babies will shop on Diapers.com within a relatively long period, and purchase different products for babies within different growth stages. Traditional recommendation algorithms produce recommendation lists similar to items that the target user has accessed before (content filtering), or compute recommendation by analyzing the items purchased by the users who are similar to the target user (collaborative filtering). Such recommendation paradigms cannot effectively resolve the situation with a life cycle, i.e., the need of customers within different stages might vary significantly. In this paper, we model users’ behavior with life cycles by employing handcrafted item taxonomies, of which the background knowledge can be tailored for the computation of personalized recommendation. In particular, our method first formalizes a user’s long-term behavior using the item taxonomy, and then identifies the exact stage of the user. By incorporating collaborative filtering into recommendation, we can easily provide a personalized item list to the user through other similar users within the same stage. An empirical evaluation conducted on a purchasing data collection obtained from Diapers.com demonstrates the efficacy of our proposed method. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b3e9c251b2da6c704da6285602773afe",
"text": "It has been well established that most operating system crashes are due to bugs in device drivers. Because drivers are normally linked into the kernel address space, a buggy driver can wipe out kernel tables and bring the system crashing to a halt. We have greatly mitigated this problem by reducing the kernel to an absolute minimum and running each driver as a separate, unprivileged process in user space. In addition, we implemented a POSIX-conformant operating system as multiple user-mode processes. In this design, all that is left in kernel mode is a tiny kernel of under 3800 lines of executable code for catching interrupts, starting and stopping processes, and doing IPC. By moving nearly the entire operating system to multiple, protected user-mode processes we reduce the consequences of faults, since a driver failure no longer is fatal and does not require rebooting the computer. In fact, our system incorporates a reincarnation server that is designed to deal with such errors and often allows for full recovery, transparent to the application and without loss of data. To achieve maximum reliability, our design was guided by simplicity, modularity, least authorization, and fault tolerance. This paper discusses our lightweight approach and reports on its performance and reliability. It also compares our design to other proposals for protecting drivers using kernel wrapping and virtual machines.",
"title": ""
},
{
"docid": "7e5d83af3c6496e41c19b36b2392f076",
"text": "JavaScript is an interpreted programming language most often used for enhancing webpage interactivity and functionality. It has powerful capabilities to interact with webpage documents and browser windows, however, it has also opened the door for many browser-based security attacks. Insecure engineering practices of using JavaScript may not directly lead to security breaches, but they can create new attack vectors and greatly increase the risks of browser-based attacks. In this article, we present the first measurement study on insecure practices of using JavaScript on the Web. Our focus is on the insecure practices of JavaScript inclusion and dynamic generation, and we examine their severity and nature on 6,805 unique websites. Our measurement results reveal that insecure JavaScript practices are common at various websites: (1) at least 66.4% of the measured websites manifest the insecure practices of including JavaScript files from external domains into the top-level documents of their webpages; (2) over 44.4% of the measured websites use the dangerous eval() function to dynamically generate and execute JavaScript code on their webpages; and (3) in JavaScript dynamic generation, using the document.write() method and the innerHTML property is much more popular than using the relatively secure technique of creating script elements via DOM methods. Our analysis indicates that safe alternatives to these insecure practices exist in common cases and ought to be adopted by website developers and administrators for reducing potential security risks.",
"title": ""
},
{
"docid": "54e5cd296371e7e058a00b1835251242",
"text": "In this paper, a quasi-millimeter-wave wideband bandpass filter (BPF) is designed by using a microstrip dual-mode ring resonator and two folded half-wavelength resonators. Based on the transmission line equivalent circuit of the filter, variations of the frequency response of the filter versus the circuit parameters are investigated first by using the derived formulas and circuit simulators. Then a BPF with a 3dB fractional bandwidth (FBW) of 20% at 25.5 GHz is designed, which realizes the desired wide passband, sharp skirt property, and very wide stopband. Finally, the designed BPF is fabricated, and its measured frequency response is found agree well with the simulated result.",
"title": ""
},
{
"docid": "93d06eafb15063a7d17ec9a7429075f0",
"text": "Non-orthogonal multiple access (NOMA) is emerging as a promising, yet challenging, multiple access technology to improve spectrum utilization for the fifth generation (5G) wireless networks. In this paper, the application of NOMA to multicast cognitive radio networks (termed as MCR-NOMA) is investigated. A dynamic cooperative MCR-NOMA scheme is proposed, where the multicast secondary users serve as relays to improve the performance of both primary and secondary networks. Based on the available channel state information (CSI), three different secondary user scheduling strategies for the cooperative MCR-NOMA scheme are presented. To evaluate the system performance, we derive the closed-form expressions of the outage probability and diversity order for both networks. Furthermore, we introduce a new metric, referred to as mutual outage probability to characterize the cooperation benefit compared to non-cooperative MCR-NOMA scheme. Simulation results demonstrate significant performance gains are obtained for both networks, thanks to the use of our proposed cooperative MCR-NOMA scheme. It is also demonstrated that higher spatial diversity order can be achieved by opportunistically utilizing the CSI available for the secondary user scheduling.",
"title": ""
},
{
"docid": "92386ee2988b6d7b6f2f0b3cdcbf44ba",
"text": "In the rst part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weightupdate rule of Littlestone and Warmuth [20] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R n . In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary nite set or a bounded segment of the real line.",
"title": ""
},
{
"docid": "858a5ed092f02d057437885ad1387c9f",
"text": "The current state-of-the-art singledocument summarization method generates a summary by solving a Tree Knapsack Problem (TKP), which is the problem of finding the optimal rooted subtree of the dependency-based discourse tree (DEP-DT) of a document. We can obtain a gold DEP-DT by transforming a gold Rhetorical Structure Theory-based discourse tree (RST-DT). However, there is still a large difference between the ROUGE scores of a system with a gold DEP-DT and a system with a DEP-DT obtained from an automatically parsed RST-DT. To improve the ROUGE score, we propose a novel discourse parser that directly generates the DEP-DT. The evaluation results showed that the TKP with our parser outperformed that with the state-of-the-art RST-DT parser, and achieved almost equivalent ROUGE scores to the TKP with the gold DEP-DT.",
"title": ""
},
{
"docid": "49329aef5ac732cc87b3cc78520c7ff5",
"text": "This paper surveys the previous and ongoing research on surface electromyogram (sEMG) signal processing implementation through various hardware platforms. The development of system that incorporates sEMG analysis capability is essential in rehabilitation devices, prosthesis arm/limb and pervasive healthcare in general. Most advanced EMG signal processing algorithms rely heavily on computational resource of a PC that negates the elements of portability, size and power dissipation of a pervasive healthcare system. Signal processing techniques applicable to sEMG are discussed with aim for proper execution in platform other than full-fledge PC. Performance and design parameters issues in some hardware implementation are also being pointed up. The paper also outlines the trends and alternatives solutions in developing portable and efficient EMG signal processing hardware.",
"title": ""
},
{
"docid": "1785d1d7da87d1b6e5c41ea89e447bf9",
"text": "Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given.",
"title": ""
},
{
"docid": "18e1f1171844fa27905246b9246cc975",
"text": "Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topoIogica1. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments. @ 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "60182038191a764fd7070e8958185718",
"text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm ( g/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, midand end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, trito pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in C by 2 to 21‰ relative ton-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2 -methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high monoand triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples. Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd",
"title": ""
},
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
},
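The peer-assessment passage reports statistics such as median grading error, the fraction of grades within 5% or 10% of the staff grade, and a self/peer grading bias. The snippet below is a minimal sketch of how such statistics could be computed from a matrix of peer grades and a vector of staff grades; the array layout and the 0-100 grading scale are assumptions, not details from the paper.

```python
import numpy as np

def grading_stats(peer_grades, staff_grades):
    """peer_grades: (n_submissions, n_graders) array with NaN where a grader did
    not assess that submission; staff_grades: (n_submissions,) staff ground truth.
    Grades are assumed to lie on a 0-100 scale."""
    peer_mean = np.nanmean(peer_grades, axis=1)           # aggregate peer grade per submission
    error = peer_mean - staff_grades                      # signed deviation from staff grade
    abs_error = np.abs(error)
    return {
        "median_abs_error": float(np.median(abs_error)),
        "mean_bias": float(np.mean(error)),               # > 0 means peers grade higher than staff
        "within_5pct": float(np.mean(abs_error <= 5.0)),
        "within_10pct": float(np.mean(abs_error <= 10.0)),
    }
```

Feeding each grader their own `mean_bias` is one way to implement the bias feedback the study found to improve subsequent accuracy.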
{
"docid": "14bcbfcb6e7165e67247453944f37ac0",
"text": "This study investigated whether psychologists' confidence in their clinical decisions is really justified. It was hypothesized that as psychologists study information about a case (a) their confidence about the case increases markedly and steadily but (b) the accuracy of their conclusions about the case quickly reaches a ceiling. 32 judges, including 8 clinical psychologists, read background information about a published case, divided into 4 sections. After reading each section of the case, judges answered a set of 25 questions involving personality judgments about the case. Results strongly supported the hypotheses. Accuracy did not increase significantly with increasing information, but confidence increased steadily and significantly. All judges except 2 became overconfident, most of them markedly so. Clearly, increasing feelings of confidence are not a sure sign of increasing predictive accuracy about a case.",
"title": ""
},
{
"docid": "1d1ba5f131c9603fe3d919ad493a6dc1",
"text": "By its very nature, software development consists of many knowledge-intensive processes. One of the most difficult to model, however, is requirements elicitation. This paper presents a mathematical model of the requirements elicitation process that clearly shows the critical role of knowledge in its performance. One metaprocess of requirements elicitation, selection of an appropriate elicitation technique, is also captured in the model. The values of this model are: (1) improved understanding of what needs to be performed during elicitation helps analysts improve their elicitation efforts, (2) improved understanding of how elicitation techniques are selected helps less experienced analysts be as successful as more experienced analysts, and (3) as we improve our ability to perform elicitation, we improve the likelihood that the systems we create will meet their intended customers’ needs. Many papers have been written that promulgate specific elicitation methods. A few have been written that model elicitation in general. However, none have yet to model elicitation in a way that makes clear the critical role played by knowledge. This paper’s model captures the critical roles played by knowledge in both elicitation and elicitation technique selection.",
"title": ""
},
{
"docid": "632fc99930154b2caaa83254a0cc3c52",
"text": "Article history: Received 1 May 2012 Received in revised form 1 May 2014 Accepted 3 May 2014 Available online 10 May 2014",
"title": ""
}
] | scidocsrr |
4cb70dbe54b21485773023fd942ae7de | Service-Dominant Strategic Sourcing: Value Creation Versus Cost Saving | [
{
"docid": "dd62fd669d40571cc11d64789314dba1",
"text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.",
"title": ""
}
] | [
{
"docid": "9b0f286b03b3d81942747a98ac0e8817",
"text": "Automated recommendations for next tracks to listen to or to include in a playlist are a common feature on modern music platforms. Correspondingly, a variety of algorithmic approaches for determining tracks to recommend have been proposed in academic research. The most sophisticated among them are often based on conceptually complex learning techniques which can also require substantial computational resources or special-purpose hardware like GPUs. Recent research, however, showed that conceptually more simple techniques, e.g., based on nearest-neighbor schemes, can represent a viable alternative to such techniques in practice.\n In this paper, we describe a hybrid technique for next-track recommendation, which was evaluated in the context of the ACM RecSys 2018 Challenge. A combination of nearest-neighbor techniques, a standard matrix factorization algorithm, and a small set of heuristics led our team KAENEN to the 3rd place in the \"creative\" track and the 7th one in the \"main\" track, with accuracy results only a few percent below the winning teams. Given that offline prediction accuracy is only one of several possible quality factors in music recommendation, practitioners have to validate if slight accuracy improvements truly justify the use of highly complex algorithms in real-world applications.",
"title": ""
},
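The playlist-continuation passage describes a hybrid of nearest-neighbor scores, matrix-factorization scores, and heuristics. The sketch below shows a simple weighted blend of two per-track score vectors into one ranked candidate list; the weights, min-max normalization, and cut-off are illustrative assumptions and not the team's actual configuration.

```python
import numpy as np

def blend_scores(knn_scores, mf_scores, w_knn=0.7, w_mf=0.3, top_k=500):
    """Combine per-track scores from a kNN model and a matrix-factorization model
    into a single ranked list of candidate track indices."""
    def normalize(s):
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    combined = w_knn * normalize(knn_scores) + w_mf * normalize(mf_scores)
    return np.argsort(-combined)[:top_k]      # highest combined score first
```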
{
"docid": "4174c1d49ff8755c6b82c2b453918d29",
"text": "Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.",
"title": ""
},
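The metric studied in the passage above, top-k error, counts a prediction as correct whenever the true label appears among the k highest-scoring classes. A small sketch of the computation (generic, not tied to any particular loss in the paper):

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """scores: (n_samples, n_classes) class scores; labels: (n_samples,) true class ids.
    Returns the fraction of samples whose true label is NOT among the top-k predictions."""
    topk = np.argsort(-scores, axis=1)[:, :k]          # indices of the k best classes
    hit = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()
```

With k = 1 this reduces to the ordinary classification error, which is the special case the authors note has received most prior attention.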
{
"docid": "e6dcc8f80b5b6528531b7f6e617cd633",
"text": "Over 2 million military and civilian personnel per year (over 1 million in the United States) are occupationally exposed, respectively, to jet propulsion fuel-8 (JP-8), JP-8 +100 or JP-5, or to the civil aviation equivalents Jet A or Jet A-1. Approximately 60 billion gallon of these kerosene-based jet fuels are annually consumed worldwide (26 billion gallon in the United States), including over 5 billion gallon of JP-8 by the militaries of the United States and other NATO countries. JP-8, for example, represents the largest single chemical exposure in the U.S. military (2.53 billion gallon in 2000), while Jet A and A-1 are among the most common sources of nonmilitary occupational chemical exposure. Although more recent figures were not available, approximately 4.06 billion gallon of kerosene per se were consumed in the United States in 1990 (IARC, 1992). These exposures may occur repeatedly to raw fuel, vapor phase, aerosol phase, or fuel combustion exhaust by dermal absorption, pulmonary inhalation, or oral ingestion routes. Additionally, the public may be repeatedly exposed to lower levels of jet fuel vapor/aerosol or to fuel combustion products through atmospheric contamination, or to raw fuel constituents by contact with contaminated groundwater or soil. Kerosene-based hydrocarbon fuels are complex mixtures of up to 260+ aliphatic and aromatic hydrocarbon compounds (C(6) -C(17+); possibly 2000+ isomeric forms), including varying concentrations of potential toxicants such as benzene, n-hexane, toluene, xylenes, trimethylpentane, methoxyethanol, naphthalenes (including polycyclic aromatic hydrocarbons [PAHs], and certain other C(9)-C(12) fractions (i.e., n-propylbenzene, trimethylbenzene isomers). While hydrocarbon fuel exposures occur typically at concentrations below current permissible exposure limits (PELs) for the parent fuel or its constituent chemicals, it is unknown whether additive or synergistic interactions among hydrocarbon constituents, up to six performance additives, and other environmental exposure factors may result in unpredicted toxicity. While there is little epidemiological evidence for fuel-induced death, cancer, or other serious organic disease in fuel-exposed workers, large numbers of self-reported health complaints in this cohort appear to justify study of more subtle health consequences. A number of recently published studies reported acute or persisting biological or health effects from acute, subchronic, or chronic exposure of humans or animals to kerosene-based hydrocarbon fuels, to constituent chemicals of these fuels, or to fuel combustion products. This review provides an in-depth summary of human, animal, and in vitro studies of biological or health effects from exposure to JP-8, JP-8 +100, JP-5, Jet A, Jet A-1, or kerosene.",
"title": ""
},
{
"docid": "79079ee1e352b997785dc0a85efed5e4",
"text": "Automatic recognition of the historical letters (XI-XVIII centuries) carved on the stoned walls of St.Sophia cathedral in Kyiv (Ukraine) was demonstrated by means of capsule deep learning neural network. It was applied to the image dataset of the carved Glagolitic and Cyrillic letters (CGCL), which was assembled and pre-processed recently for recognition and prediction by machine learning methods. CGCL dataset contains >4000 images for glyphs of 34 letters which are hardly recognized by experts even in contrast to notMNIST dataset with the better images of 10 letters taken from different fonts. The capsule network was applied for both datasets in three regimes: without data augmentation, with lossless data augmentation, and lossy data augmentation. Despite the much worse quality of CGCL dataset and extremely low number of samples (in comparison to notMNIST dataset) the capsule network model demonstrated much better results than the previously used convolutional neural network (CNN). The training rate for capsule network model was 5-6 times higher than for CNN. The validation accuracy (and validation loss) was higher (lower) for capsule network model than for CNN without data augmentation even. The area under curve (AUC) values for receiver operating characteristic (ROC) were also higher for the capsule network model than for CNN model: 0.88-0.93 (capsule network) and 0.50 (CNN) without data augmentation, 0.91-0.95 (capsule network) and 0.51 (CNN) with lossless data augmentation, and similar results of 0.91-0.93 (capsule network) and 0.9 (CNN) in the regime of lossless data augmentation only. The confusion matrixes were much better for capsule network than for CNN model and gave the much lower type I (false positive) and type II (false negative) values in all three regimes of data augmentation. These results supports the previous claims that capsule-like networks allow to reduce error rates not only on MNIST digit dataset, but on the other notMNIST letter dataset and the more complex CGCL handwriting graffiti letter dataset also. Moreover, capsule-like networks allow to reduce training set sizes to 180 images even like in this work, and they are considerably better than CNNs on the highly distorted and incomplete letters even like CGCL handwriting graffiti. Keywords— machine learning, deep learning, capsule neural network, stone carving dataset, notMNIST, data augmentation",
"title": ""
},
{
"docid": "5ec1cff52a55c5bd873b5d0d25e0456b",
"text": "This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",
"title": ""
},
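The sentiment passage combines the corpus-based classifier and the lexicon-based system with precision-based vote weighting. The sketch below shows one plausible form of such weighting, where each classifier's vote counts in proportion to its estimated precision for the label it predicts; the per-class precision values would come from held-out data, and the exact scheme in the paper may differ.

```python
def precision_weighted_vote(predictions, precisions):
    """predictions: dict classifier_name -> predicted label;
    precisions: dict classifier_name -> dict label -> estimated precision for that label."""
    votes = {}
    for clf, label in predictions.items():
        weight = precisions[clf].get(label, 0.0)     # trust each vote by its precision
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

# illustrative numbers only
preds = {"corpus_clf": "pos", "lexicon_sys": "neg"}
precs = {"corpus_clf": {"pos": 0.62, "neg": 0.71},
         "lexicon_sys": {"pos": 0.55, "neg": 0.80}}
print(precision_weighted_vote(preds, precs))         # -> 'neg'
```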
{
"docid": "0a58aa0c5dff94efa183fcf6fb7952f6",
"text": "When people explore new environments they often use landmarks as reference points to help navigate and orientate themselves. This research paper examines how spatial datasets can be used to build a system for use in an urban environment which functions as a city guide, announcing Features of Interest (FoI) as they become visible to the user (not just proximal), as the user moves freely around the city. Visibility calculations for the FoIs were pre-calculated based on a digital surface model derived from LIDAR (Light Detection and Ranging) data. The results were stored in a textbased relational database management system (RDBMS) for rapid retrieval. All interaction between the user and the system was via a speech-based interface, allowing the user to record and request further information on any of the announced FoI. A prototype system, called Edinburgh Augmented Reality System (EARS) , was designed, implemented and field tested in order to assess the effectiveness of these ideas. The application proved to be an innovating, ‘non-invasive’ approach to augmenting the user’s reality",
"title": ""
},
{
"docid": "4d1be9aebf7534cce625b95bde4696c6",
"text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.",
"title": ""
},
{
"docid": "5c29083624be58efa82b4315976f8dc2",
"text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously lear ns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space . The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization metho d is employed to handle the discrete and low-rank constraints , yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition ra tes using fewer features and samples.",
"title": ""
},
{
"docid": "471af6726ec78126fcf46f4e42b666aa",
"text": "A new thermal tuning circuit for optical ring modulators enables demonstration of an optical chip-to-chip link for the first time with monolithically integrated photonic devices in a commercial 45nm SOI process, without any process changes. The tuning circuit uses independent 1/0 level-tracking and 1/0 bit counting to remain resilient against laser self-heating transients caused by non-DC-balanced transmit data. A 30fJ/bit transmitter and 374fJ/bit receiver with 6μApk-pk photocurrent sensitivity complete the 5Gb/s link. The thermal tuner consumes 275fJ/bit and achieves a 600 GHz tuning range with a heater tuning efficiency of 3.8μW/GHz.",
"title": ""
},
{
"docid": "24a10176ec2367a6a0b5333d57b894b8",
"text": "Automated classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. We have investigated this possibility experimentally and numerically using a diffraction imaging approach. A fast image analysis software based on the gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images. The results of GLCM analysis and subsequent classification demonstrate the potential for rapid classification among six types of cultured cells. Combined with numerical results we show that the method of diffraction imaging flow cytometry has the capacity as a platform for high-throughput and label-free classification of biological cells.",
"title": ""
},
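The diffraction-imaging passage extracts gray level co-occurrence matrix (GLCM) texture features from diffraction images before classification. The sketch below is a minimal single-offset GLCM with two common properties (contrast and energy); production code would typically use several offsets/angles and a library implementation, and the quantization level chosen here is an assumption.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=16):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).
    image: 2D array of ints already quantized to the range [0, levels)."""
    P = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[image[y, x], image[y + dy, x + dx]] += 1   # count co-occurring gray levels
    return P / P.sum()                                    # normalize to joint probabilities

def glcm_features(P):
    i, j = np.indices(P.shape)
    return {"contrast": float(((i - j) ** 2 * P).sum()),
            "energy": float((P ** 2).sum())}
```

Feature vectors built from such properties over several offsets are what the abstract's classifier consumes to separate the six cell types.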
{
"docid": "9edfedc5a1b17481ee8c16151cf42c88",
"text": "Nevus comedonicus is considered a genodermatosis characterized by the presence of multiple groups of dilated pilosebaceous orifices filled with black keratin plugs, with sharply unilateral distribution mostly on the face, neck, trunk, upper arms. Lesions can appear at any age, frequently before the age of 10 years, but they are usually present at birth. We present a 2.7-year-old girl with a very severe form of nevus comedonicus. She exhibited lesions located initially at the left side of the body with a linear characteristic, following Blascko lines T1/T2, T5, T7, S1 /S2, but progressively developed lesions on the right side of the scalp and left gluteal area.",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "3e9a214856235ef36a4dd2e9684543b7",
"text": "Leaf area index (LAI) is a key biophysical variable that can be used to derive agronomic information for field management and yield prediction. In the context of applying broadband and high spatial resolution satellite sensor data to agricultural applications at the field scale, an improved method was developed to evaluate commonly used broadband vegetation indices (VIs) for the estimation of LAI with VI–LAI relationships. The evaluation was based on direct measurement of corn and potato canopies and on QuickBird multispectral images acquired in three growing seasons. The selected VIs were correlated strongly with LAI but with different efficiencies for LAI estimation as a result of the differences in the stabilities, the sensitivities, and the dynamic ranges. Analysis of error propagation showed that LAI noise inherent in each VI–LAI function generally increased with increasing LAI and the efficiency of most VIs was low at high LAI levels. Among selected VIs, the modified soil-adjusted vegetation index (MSAVI) was the best LAI estimator with the largest dynamic range and the highest sensitivity and overall efficiency for both crops. QuickBird image-estimated LAI with MSAVI–LAI relationships agreed well with ground-measured LAI with the root-mean-square-error of 0.63 and 0.79 for corn and potato canopies, respectively. LAI estimated from the high spatial resolution pixel data exhibited spatial variability similar to the ground plot measurements. For field scale agricultural applications, MSAVI–LAI relationships are easy-to-apply and reasonably accurate for estimating LAI. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
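The LAI passage identifies MSAVI as the best-performing vegetation index for LAI estimation. MSAVI itself has a closed form in the red and near-infrared reflectances (Qi et al., 1994), sketched below; the exponential VI-LAI relationship and its coefficients are only a placeholder for the kind of function the paper fits to ground-measured LAI, not the paper's actual calibration.

```python
import numpy as np

def msavi(nir, red):
    """Modified soil-adjusted vegetation index (Qi et al., 1994).
    nir, red: reflectance arrays with values in [0, 1]."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

def lai_from_vi(vi, a=0.3, b=3.0):
    """Hypothetical VI-LAI relationship of the common exponential form LAI = a*exp(b*VI);
    the coefficients must be fitted to field measurements for a given crop and sensor."""
    return a * np.exp(b * vi)
```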
{
"docid": "2848635e59cf2a41871d79748822c176",
"text": "The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito–temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito–temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito–temporal cortex may constitute a multimodal object-related network.",
"title": ""
},
{
"docid": "9960d17cb019350a279e4daccccb8e87",
"text": "Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.",
"title": ""
},
{
"docid": "a2e0163aebb348d3bfab7ebac119e0c0",
"text": "Herein we report the first study of the oxygen reduction reaction (ORR) catalyzed by a cofacial porphyrin scaffold accessed in high yield (overall 53%) using coordination-driven self-assembly with no chromatographic purification steps. The ORR activity was investigated using chemical and electrochemical techniques on monomeric cobalt(II) tetra(meso-4-pyridyl)porphyrinate (CoTPyP) and its cofacial analogue [Ru8(η6-iPrC6H4Me)8(dhbq)4(CoTPyP)2][OTf]8 (Co Prism) (dhbq = 2,5-dihydroxy-1,4-benzoquinato, OTf = triflate) as homogeneous oxygen reduction catalysts. Co Prism is obtained in one self-assembly step that organizes six total building blocks, two CoTPyP units and four arene-Ru clips, into a cofacial motif previously demonstrated with free-base, Zn(II), and Ni(II) porphyrins. Turnover frequencies (TOFs) from chemical reduction (66 vs 6 h-1) and rate constants of overall homogeneous catalysis (kobs) determined from rotating ring-disk experiments (1.1 vs 0.05 h-1) establish a cofacial enhancement upon comparison of the activities of Co Prism and CoTPyP, respectively. Cyclic voltammetry was used to initially probe the electrochemical catalytic behavior. Rotating ring-disk electrode studies were completed to probe the Faradaic efficiency and obtain an estimate of the rate constant associated with the ORR.",
"title": ""
},
{
"docid": "c1632ead357d08c3e019bb12ff75e756",
"text": "Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods by two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting aspect and categorizes existing work based on whether to preserve special network properties, to consider special network types, or to incorporate additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
},
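For part (b) of the exam problem above, the following worked check (standard exponential-family algebra, offered here as a sketch and not part of the original handout) shows that the stated b, T, and a recover the unit-variance Normal density with mean η by completing the square:

```latex
b(y)\exp\bigl(\eta T(y) - a(\eta)\bigr)
  = \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\,\exp\!\Bigl(\eta y - \tfrac{\eta^2}{2}\Bigr)
  = \frac{1}{\sqrt{2\pi}}\,\exp\!\Bigl(-\tfrac{(y-\eta)^2}{2}\Bigr)
  = \mathcal{N}(y;\,\eta,\,1).
```

As a hint toward part (a) (not the official solution): since T(y) does not depend on θ, the Hessian of ℓ(θ) is −Σ_i a''(θ^T x^(i)) x^(i) x^(i)^T, so convexity of a(η) (here a''(η) = 1 ≥ 0) is the kind of condition that guarantees concavity.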
{
"docid": "e3823047ccc723783cf05f24ca60d449",
"text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.",
"title": ""
},
{
"docid": "ec9eb309dd9d6f72bd7286580e75d36d",
"text": "This paper describes SONDY, a tool for analysis of trends and dynamics in online social network data. SONDY addresses two audiences: (i) end-users who want to explore social activity and (ii) researchers who want to experiment and compare mining techniques on social data. SONDY helps end-users like media analysts or journalists understand social network users interests and activity by providing emerging topics and events detection as well as network analysis functionalities. To this end, the application proposes visualizations such as interactive time-lines that summarize information and colored user graphs that reflect the structure of the network. SONDY also provides researchers an easy way to compare and evaluate recent techniques to mine social data, implement new algorithms and extend the application without being concerned with how to make it accessible. In the demo, participants will be invited to explore information from several datasets of various sizes and origins (such as a dataset consisting of 7,874,772 messages published by 1,697,759 Twitter users during a period of 7 days) and apply the different functionalities of the platform in real-time.",
"title": ""
}
] | scidocsrr |
7f905cac740516a87c460e9988988718 | Automatic detection of cyber-recruitment by violent extremists | [
{
"docid": "e07cb04e3000607d4a3f99d47f72a906",
"text": "As part of the NSF-funded Dark Web research project, this paper presents an exploratory study of cyber extremism on the Web 2.0 media: blogs, YouTube, and Second Life. We examine international Jihadist extremist groups that use each of these media. We observe that these new, interactive, multimedia-rich forms of communication provide effective means for extremists to promote their ideas, share resources, and communicate among each other. The development of automated collection and analysis tools for Web 2.0 can help policy makers, intelligence analysts, and researchers to better understand extremistspsila ideas and communication patterns, which may lead to strategies that can counter the threats posed by extremists in the second-generation Web.",
"title": ""
},
{
"docid": "4d791fa53f7ed8660df26cd4dbe9063a",
"text": "The Internet is a powerful political instrument, wh ich is increasingly employed by terrorists to forward their goals. The fiv most prominent contemporary terrorist uses of the Net are information provision , fi ancing, networking, recruitment, and information gathering. This article describes a nd explains each of these uses and follows up with examples. The final section of the paper describes the responses of government, law enforcement, intelligence agencies, and others to the terrorism-Internet nexus. There is a particular emphasis within the te xt on the UK experience, although examples from other jurisdictions are also employed . ___________________________________________________________________ “Terrorists use the Internet just like everybody el se” Richard Clarke (2004) 1 ___________________________________________________________________",
"title": ""
}
] | [
{
"docid": "126b62a0ae62c76b43b4fb49f1bf05cd",
"text": "OBJECTIVE\nThe aim of the study was to evaluate efficacy of fractional CO2 vaginal laser treatment (Laser, L) and compare it to local estrogen therapy (Estriol, E) and the combination of both treatments (Laser + Estriol, LE) in the treatment of vulvovaginal atrophy (VVA).\n\n\nMETHODS\nA total of 45 postmenopausal women meeting inclusion criteria were randomized in L, E, or LE groups. Assessments at baseline, 8 and 20 weeks, were conducted using Vaginal Health Index (VHI), Visual Analog Scale for VVA symptoms (dyspareunia, dryness, and burning), Female Sexual Function Index, and maturation value (MV) of Meisels.\n\n\nRESULTS\nForty-five women were included and 3 women were lost to follow-up. VHI average score was significantly higher at weeks 8 and 20 in all study arms. At week 20, the LE arm also showed incremental improvement of VHI score (P = 0.01). L and LE groups showed a significant improvement of dyspareunia, burning, and dryness, and the E arm only of dryness (P < 0.001). LE group presented significant improvement of total Female Sex Function Index (FSFI) score (P = 0.02) and individual domains of pain, desire, and lubrication. In contrast, the L group showed significant worsening of pain domain in FSFI (P = 0.04), but FSFI total scores were comparable in all treatment arms at week 20.\n\n\nCONCLUSIONS\nCO2 vaginal laser alone or in combination with topical estriol is a good treatment option for VVA symptoms. Sexual-related pain with vaginal laser treatment might be of concern.",
"title": ""
},
{
"docid": "96e56dcf3d38c8282b5fc5c8ae747a66",
"text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.",
"title": ""
},
{
"docid": "a21513f9cf4d5a0e6445772941e9fba2",
"text": "Superficial dorsal penile vein thrombosis was diagnosed 8 times in 7 patients between 19 and 40 years old (mean age 27 years). All patients related the onset of the thrombosis to vigorous sexual intercourse. No other etiological medications, drugs or constricting devices were implicated. Three patients were treated acutely with anti-inflammatory medications, while 4 were managed expectantly. The mean interval to resolution of symptoms was 7 weeks. Followup ranged from 3 to 30 months (mean 11) at which time all patients noticed normal erectile function. Only 1 patient had recurrent thrombosis 3 months after the initial episode, again related to intercourse. We conclude that this is a benign self-limited condition. Anti-inflammatory agents are useful for acute discomfort but they do not affect the rate of resolution.",
"title": ""
},
{
"docid": "688ee7a4bde400a6afbd6972d729fad4",
"text": "Learning-to-Rank ( LtR ) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each of the documents, in turn exploited to effectively rank them. Although the scoring efficiency of LtR models is critical in several applications – e.g., it directly impacts on response time and throughput of Web query processing – it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than other algorithms that induce simpler models like linear combination of features. We extensively analyze the quality versus efficiency trade-off of a wide spectrum of stateof-the-art LtR , and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we used publicly available datasets and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees ( GBRT ), Lambda-Mart ( λ-MART ), and the first public-domain implementation of Oblivious Lambda-Mart ( λ-MART ), an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact on the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the qualitycost space. The experiments conducted show that there is not an overall best algorithm, but the optimal choice depends on the time budget. © 2016 Elsevier Ltd. All rights reserved. ∗ Corresponding author. E-mail addresses: [email protected] (G. Capannini), [email protected] , [email protected] (C. Lucchese), [email protected] (F.M. Nardini), [email protected] (S. Orlando), [email protected] (R. Perego), [email protected] (N. Tonellotto). http://dx.doi.org/10.1016/j.ipm.2016.05.004 0306-4573/© 2016 Elsevier Ltd. All rights reserved. Please cite this article as: G. Capannini et al., Quality versus efficiency in document scoring with learning-to-rank models, Information Processing and Management (2016), http://dx.doi.org/10.1016/j.ipm.2016.05.004 2 G. Capannini et al. / Information Processing and Management 0 0 0 (2016) 1–17 ARTICLE IN PRESS JID: IPM [m3Gsc; May 17, 2016;19:28 ] Document Index Base Ranker Top Ranker Features Learning to Rank Algorithm Query First step Second step N docs K docs 1. ............ 2. ............ 3. ............ K. ............ ... ... Results Page(s) Fig. 1. The architecture of a generic machine-learned ranking pipeline.",
"title": ""
},
{
"docid": "350137bf3c493b23aa6d355df946440f",
"text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.",
"title": ""
},
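The wrist-IMU passage infers steering-wheel angle from wrist rotation once steering-wheel usage has been detected. The sketch below shows the core idea of integrating gyroscope angular rate about the wheel axis to track the turning angle; it assumes the signal has already been projected onto that axis, and it ignores drift correction and hand repositioning, which a real system (and the paper's approach) must handle.

```python
import numpy as np

def estimate_wheel_angle(gyro_rate, timestamps):
    """Integrate wrist angular rate (rad/s) about the steering-wheel axis to track
    the wheel angle (rad). gyro_rate and timestamps are 1D arrays of equal length,
    covering a period in which the hand is assumed to grip the wheel."""
    dt = np.diff(timestamps)
    angle = np.concatenate([[0.0], np.cumsum(gyro_rate[1:] * dt)])
    return angle    # gyroscope drift accumulates; re-zero whenever the hands leave the wheel
```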
{
"docid": "bccb8e4cf7639dbcd3896e356aceec8d",
"text": "Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians from lengthy EEG recording that contains the presence of seizure (ictal) activities. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activities, especially in areas short of medical resources. We demonstrate in this paper that we can use the interictal scalp EEG data, which is much easier to collect than the ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy.",
"title": ""
},
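The epilepsy passage feeds three classes of EEG features into probabilistic neural networks (PNNs) that are combined through voting. The snippet below is a compact numpy sketch of a Parzen-window PNN and a majority vote over feature-specific PNNs; the Gaussian smoothing parameter and the way features are split are assumptions for illustration.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=1.0):
    """Probabilistic neural network: the class with the highest average Gaussian
    kernel response over its training examples wins."""
    classes = np.unique(y_train)
    d2 = np.sum((X_train - x) ** 2, axis=1)            # squared distances to all patterns
    k = np.exp(-d2 / (2 * sigma ** 2))
    scores = [k[y_train == c].sum() / (y_train == c).sum() for c in classes]
    return classes[int(np.argmax(scores))]

def vote(feature_sets, y_train, x_parts, sigma=1.0):
    """Majority vote over PNNs trained on different feature classes.
    feature_sets[i] and x_parts[i] hold the i-th feature class for training/test data."""
    preds = [pnn_predict(F, y_train, xp, sigma) for F, xp in zip(feature_sets, x_parts)]
    vals, counts = np.unique(preds, return_counts=True)
    return vals[int(np.argmax(counts))]
```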
{
"docid": "866b9d88e90a93357ca6caa4979ef2d7",
"text": "This paper describes a new speech corpus, the RSR2015 database designed for text-dependent speaker recognition with scenario based on fixed pass-phrases. This database consists of over 71 hours of speech recorded from English speakers covering the diversity of accents spoken in Singapore. Acquisition has been done using a set of six portable devices including smart phones and tablets. The pool of speakers consists of 300 participants (143 female and 157 male speakers) from 17 to 42 years old. We propose a protocol for the case of user-dependent pass-phrases in text-dependent speaker recognition and we also report speaker recognition experiments on RSR2015 database.",
"title": ""
},
{
"docid": "d95cc1187827e91601cb5711dbdb1550",
"text": "As data sparsity remains a significant challenge for collaborative filtering (CF, we conjecture that predicted ratings based on imputed data may be more accurate than those based on the originally very sparse rating data. In this paper, we propose a framework of imputation-boosted collaborative filtering (IBCF), which first uses an imputation technique, or perhaps machine learned classifier, to fill-in the sparse user-item rating matrix, then runs a traditional Pearson correlation-based CF algorithm on this matrix to predict a novel rating. Empirical results show that IBCF using machine learning classifiers can improve predictive accuracy of CF tasks. In particular, IBCF using a classifier capable of dealing well with missing data, such as naïve Bayes, can outperform the content-boosted CF (a representative hybrid CF algorithm) and IBCF using PMM (predictive mean matching, a state-of-the-art imputation technique), without using external content information.",
"title": ""
},
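The IBCF passage describes a two-step pipeline: impute the sparse rating matrix, then run Pearson-correlation CF on the densified matrix. The sketch below uses simple item-mean imputation as a stand-in for the machine-learned imputer (the paper's best results use classifiers such as naïve Bayes), followed by user-based Pearson prediction; neighborhood size and the positive-weight filter are illustrative choices.

```python
import numpy as np

def impute_item_means(R):
    """R: (n_users, n_items) rating matrix with NaN for missing entries.
    Stand-in imputer: replace each missing rating with its item mean."""
    filled = R.copy()
    item_means = np.nanmean(R, axis=0)
    idx = np.where(np.isnan(filled))
    filled[idx] = np.take(item_means, idx[1])
    return filled

def predict_rating(R_filled, user, item, k=20):
    """User-based Pearson CF on the imputed (dense) matrix."""
    target = R_filled[user]
    sims = np.array([np.corrcoef(target, row)[0, 1] for row in R_filled])
    sims = np.nan_to_num(sims, nan=0.0)     # constant rows give undefined correlation
    sims[user] = -np.inf                    # exclude the target user
    neighbors = np.argsort(-sims)[:k]
    w = np.clip(sims[neighbors], 0, None)   # keep positively correlated neighbors only
    return float(np.dot(w, R_filled[neighbors, item]) / (w.sum() + 1e-9))
```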
{
"docid": "0e68120ea21beb2fdaff6538aa342aa5",
"text": "The development of a truly non-invasive continuous glucose sensor is an elusive goal. We describe the rise and fall of the Pendra device. In 2000 the company Pendragon Medical introduced a truly non-invasive continuous glucose-monitoring device. This system was supposed to work through so-called impedance spectroscopy. Pendra was Conformité Européenne (CE) approved in May 2003. For a short time the Pendra was available on the Dutch direct-to-consumer market. A post-marketing reliability study was performed in six type 1 diabetes patients. Mean absolute difference between Pendra glucose values and values obtained through self-monitoring of blood glucose was 52%; the Pearson’s correlation coefficient was 35.1%; and a Clarke error grid showed 4.3% of the Pendra readings in the potentially dangerous zone E. We argue that the CE certification process for continuous glucose sensors should be made more transparent, and that a consensus on specific requirements for continuous glucose sensors is needed to prevent patient exposure to potentially dangerous situations.",
"title": ""
},
{
"docid": "b47127a755d7bef1c5baf89253af46e7",
"text": "In an effort to explain pro-environmental behavior, environmental sociologists often study environmental attitudes. While much of this work is atheoretical, the focus on attitudes suggests that researchers are implicitly drawing upon attitude theory in psychology. The present research brings sociological theory to environmental sociology by drawing on identity theory to understand environmentally responsive behavior. We develop an environment identity model of environmental behavior that includes not only the meanings of the environment identity, but also the prominence and salience of the environment identity and commitment to the environment identity. We examine the identity process as it relates to behavior, though not to the exclusion of examining the effects of environmental attitudes. The findings reveal that individual agency is important in influencing environmentally responsive behavior, but this agency is largely through identity processes, rather than attitude processes. This provides an important theoretical and empirical advance over earlier work in environmental sociology.",
"title": ""
},
{
"docid": "284c056db69549efe956b81d5316ac6d",
"text": "PURPOSE\nThe aim of this study was to identify the effects of a differential-learning program, embedded in small-sided games, on the creative and tactical behavior of youth soccer players. Forty players from under-13 (U13) and under-15 (U15) were allocated into control and experimental groups and were tested using a randomized pretest to posttest design using small-sided games situations.\n\n\nMETHOD\nThe experimental group participated in a 5-month differential-learning program embodied in small-sided games situations, while the control group participated in a typical small-sided games training program. In-game creativity was assessed through notational analyses of the creative components, and the players' positional data were used to compute tactical-derived variables.\n\n\nRESULTS\nThe findings suggested that differential learning facilitated the development of creative components, mainly concerning attempts (U13, small; U15, small), versatility (U13, moderate; U15, small), and originality (U13, unclear; U15, small) of players' actions. Likewise, the differential-learning approach provided a decrease in fails during the game in both experimental groups (moderate). Moreover, differential learning seemed to favor regularity in pitch-positioning behavior for the distance between players' dyads (U13, small; U15, small), the distance to the team target (U13, moderate; U15, small), and the distance to the opponent target (U13, moderate; U15, small).\n\n\nCONCLUSIONS\nThe differential-learning program stressed creative and positional behavior in both age groups with a distinct magnitude of effects, with the U13 players demonstrating higher improvements over the U15 players. Overall, these findings confirmed that the technical variability promoted by differential learning nurtures regularity of positioning behavior.",
"title": ""
},
{
"docid": "883c37c54b86cea09d04657426a96f97",
"text": "temporal relations tend to emerge later in development than simple comparatives. 2.4.7. Spatial Relations A large number of relations deal with the arrangement of objects or aspects of objects in space, relative to each other, such as in-out, front-back, over-under and so on. These spatial relations are like comparative relations, but often they imply or specify frames of reference that make them quite specific. For example, if you are told that house A faces the back of house B, you could order the front and back doors of both houses into a linear sequence (back door of A, front door of A, back door of B, front door of B). This is because front and back doors are relative to each individual house, and knowing the orientation of the two houses implies the more detailed information. 2.4.8. Conditionality and Causality Conditionality and causality share features with both hierarchical relations and comparative relations. Forexample, if a listener is told, “A causes B and B causes C,” s/he may simply derive, via a frame of comparison, that “A caused C and C was caused by A.” Hierarchical class membership is involved, however, if the listener derives “B was caused by A alone, but C was caused by both A and B.” That is, the listener constructs a precise hierarchy of causeeffect relations, and therefore such relational responding extends beyond the basic frame of comparison. The same type of analysis may be applied to conditional relations such as “ifthen.” The constructed nature of this relation is more obvious than with temporal relations, particularly as one begins to attribute cause to conditional properties. Events are said to cause events based on many features: sequences, contiguity, manipulability, practical exigencies, cultural beliefs, and so on. Causality itself is not a physical dimension of any event. 2.4.9. Deictic Relations By deictic relations we mean those that specify a relation in terms of the perspective of the speaker such as left-right; I-you (and all of its correlates, such as “mine”); here-there; and now-then (see Barnes and Roche, 1997a; Hayes, 1984). Some relations may or may not be deictic, such as front-back or above-below, depending on the perspective applied. For example, the sentence “the back door of my house is in front of me” contains both spatial and deictic forms of “front-back.” Deictic relations seem to be a particularly important family of relational frames that may be critical for perspective-taking. Consider, for example, the three frames of I and YOU, HERE and THERE, and NOW and THEN (when it seems contextually useful, we will capitalize relational terms if they refer to specific relational frames). These frames are unlike DERIVED RELATIONAL RESPONDING AS LEARNED BEHAVIOR 39 the others mentioned previously in that they do not appear to have any formal or nonarbitrary counterparts. Coordination, for instance, is based on formal identity or sameness, while “bigger than” is based on relative size. Temporal frames are more inherently verbal in that they are based on the nonarbitrary experience of change, but the dimensional nature of that experience must be verbally constructed. Frames that depend on perspective, however, cannot be traced to formal dimensions in the environment at all. Instead, the relationship between the individual and other events serves as the constant variable upon which these frames are based. 
Learning to respond appropriately to (and ask) the following kinds of questions appears to be critical in establishing these kinds of relational frames: “What are you doing now?” “What did you do then?” “What are you doing here?” “What are you doing there?” “What am I doing now?” “What did I do then?” “What am I doing here?” “What will I do there?” Each time one or more of these questions is asked or answered, the physical environment will likely be different. The only constant across all of the questions are the relational properties of I versus You, Here versus There, and Now versus Then. These properties appear to be abstracted through learning to talk about one’s own perspective in relation to other perspectives. For example, I is always from this perspective here, not from someone else’s perspective there. Clearly, a speaker must learn to respond in accordance with these relational frames. For example, if Peter is asked, “What did you do when you got there?” he should not simply describe what someone else is doing now (unless he wishes to hide what he actually did, or annoy and confuse the questioner). We shall consider the relational frames of perspective in greater detail in subsequent chapters. 2.4.10. Interactions Among Relational Frames At the present time very little is known about the effects of learning to respond in accordance with one type of frame on other framing activities. We have seen evidence in our research of such effects. For example, training in SAME may make OPPOSITION easier; training in deictic relations may make appreciation of contingencies easier and so on. One fairly clear prediction from RFT is that there should be some generalization of relational responding, particularly within families of relational frames. For example, an individual who learns to respond in accordance with sameness, may learn to respond in accordance with similarity (or opposition, since sameness is a combinatorially entailed aspect of opposition) more rapidly than, say, comparison. Similarly, learning across more closely associated families of relations may be more expected than learning across more distinct families. For example, to frame in accordance with comparison may facilitate hierarchical framing more readily than a frame of coordination. For the time being, however, such issues will have to await systematic empirical investigation. 40 RELATIONAL FRAME THEORY 2.4.11. Relational Frames: A Caveat In listing the foregoing families of relational frames, we are not suggesting that they are somehow final or absolute. If RFT is correct, the number of relational frames is limited only by the creativity of the social/verbal community that trains them. Some frames, such as coordination, have been the subject of many empirical analyses. Others such as opposition and more-than/less-than have also been studied experimentally, but the relevant database is much smaller than for coordination. Many of the frames listed, however, have not been analyzed empirically, or have only been subjected to the most preliminary of experimental analyses. Thus the list we have presented is to some degree tentative in that some of the relational frames we have identified are based on our preliminary, non-experimental analyses of human language. For example, TIME and CAUSALITY can be thought of as one or two types of relations. It is not yet clear if thinking of them as separate or related may be the most useful. 
Thus, while the generic concept of a relational frame is foundational to RFT, the concept of any particular relational frame is not. Our aim in presenting this list is to provide a set of conceptual tools, some more firmly grounded in data than others, that may be modified and refined as subsequent empirical analyses are conducted. 2.5. COMPLEX RELATIONAL NETWORKS It is possible to create relational networks from mixtures of various relational frames and to relate entire relational classes with other relational classes. Forexample, if one equivalence class is the opposite of another equivalence class, then normally each member of the first class is the opposite of all members of the second and vice versa. This can continue to virtually any level of complexity. For example, consider the relations that surround a given word, such as “car.” It is part of many hierarchical classes, such as the class “noun,” or the class “vehicles.” Other terms are in a hierarchical relation with it, such as “windshield” or “wheel.” It enters into many comparisons: it is faster then a snail, bigger than a breadbox, heavier than a book. It is the same as “automobile,” but different than a house, and so on. The participation of the word “car” in these relations is part of the training required for the verbal community to use the stimulus “car” in the way that it does. Even the simplest verbal concept quickly becomes the focus of a complex network of stimulus relations in natural language use. We will deal with this in detail in the next three chapters because this is a crucial form of relational responding in such activities as problem-solving, reasoning, and thinking. The generative implications of this process are spectacular. A single specified relation between two sets of relata might give rise to myriad derived relations in an instant. Entire sets of relations can change in an instant. This kind of phenomenon seems to be part of what is being described with terms like “insight.” 2.6. EMPIRICAL EVIDENCE FOR RELATIONAL FRAMES AS OPERANTS Operant behavior can be originated, maintained, modified, or eliminated in the laboratory and it is relatively easy to identify operants in that context. Many naturally occurring DERIVED RELATIONAL RESPONDING AS LEARNED BEHAVIOR 41 behaviors, however, are difficult to bring into the laboratory in such a highly controlled fashion. Nevertheless, we can examine the characteristics of these naturalistic behaviors to see if they have some of the properties characteristic of operants. Four such properties seem most relevant: first, they should develop over time rather than emerging in whole cloth; second, they should have flexible form; third, they should be under antecedent stimulus control; and fourth, they should be under consequential control. If derived stimulus relations are based upon operant behavior, they should show these four characteristics. Although much work remains to be done, there is some supporting evidence for each of them.",
"title": ""
},
{
"docid": "1d0ca28334542ed2978f986cd3550150",
"text": "Recent success of deep learning models for the task of extractive Question Answering (QA) is hinged on the availability of large annotated corpora. However, large domain specific annotated corpora are limited and expensive to construct. In this work, we envision a system where the end user specifies a set of base documents and only a few labelled examples. Our system exploits the document structure to create cloze-style questions from these base documents; pre-trains a powerful neural network on the cloze style questions; and further finetunes the model on the labeled examples. We evaluate our proposed system across three diverse datasets from different domains, and find it to be highly effective with very little labeled data. We attain more than 50% F1 score on SQuAD and TriviaQA with less than a thousand labelled examples. We are also releasing a set of 3.2M cloze-style questions for practitioners to use while building QA systems1.",
"title": ""
},
{
"docid": "802280cdb72ad33987ad57772d932537",
"text": "It is usually believed that drugs of abuse are smuggled into the United States or are clandestinely produced for illicit distribution. Less well known is that many hallucinogens and dissociative agents can be obtained from plants and fungi growing wild or in gardens. Some of these botanical sources can be located throughout the United States; others have a more narrow distribution. This article reviews plants containing N,N-dimethyltryptamine, reversible type A monoamine oxidase inhibitors (MAOI), lysergic acid amide, the anticholinergic drugs atropine and scopolamine, or the diterpene salvinorin-A (Salvia divinorum). Also reviewed are mescaline-containing cacti, psilocybin/psilocin-containing mushrooms, and the Amanita muscaria and Amanita pantherina mushrooms that contain muscimol and ibotenic acid. Dangerous misidentification is most common with the mushrooms, but even a novice forager can quickly learn how to properly identify and prepare for ingestion many of these plants. Moreover, through the ever-expanding dissemination of information via the Internet, this knowledge is being obtained and acted upon by more and more individuals. This general overview includes information on the geographical range, drug content, preparation, intoxication, and the special health risks associated with some of these plants. Information is also offered on the unique issue of when bona fide religions use such plants as sacraments in the United States. In addition to the Native American Church's (NAC) longstanding right to peyote, two religions of Brazilian origin, the Santo Daime and the Uniao do Vegetal (UDV), are seeking legal protection in the United States for their use of sacramental dimethyltryptamine-containing \"ayahuasca.\"",
"title": ""
},
{
"docid": "8976eea8c39d9cb9dea21c42bae8ebea",
"text": "Continuously monitoring schizophrenia patients’ psychiatric symptoms is crucial for in-time intervention and treatment adjustment. The Brief Psychiatric Rating Scale (BPRS) is a survey administered by clinicians to evaluate symptom severity in schizophrenia. The CrossCheck symptom prediction system is capable of tracking schizophrenia symptoms based on BPRS using passive sensing from mobile phones. We present results from an ongoing randomized control trial, where passive sensing data, self-reports, and clinician administered 7-item BPRS surveys are collected from 36 outpatients with schizophrenia recently discharged from hospital over a period ranging from 2-12 months. We show that our system can predict a symptom scale score based on a 7-item BPRS within ±1.45 error on average using automatically tracked behavioral features from phones (e.g., mobility, conversation, activity, smartphone usage, the ambient acoustic environment) and user supplied self-reports. Importantly, we show our system is also capable of predicting an individual BPRS score within ±1.59 error purely based on passive sensing from phones without any self-reported information from outpatients. Finally, we discuss how well our predictive system reflects symptoms experienced by patients by reviewing a number of case studies.",
"title": ""
},
{
"docid": "e9c383d71839547d41829348bebaabcf",
"text": "Receiver operating characteristic (ROC) analysis, which yields indices of accuracy such as the area under the curve (AUC), is increasingly being used to evaluate the performances of diagnostic tests that produce results on continuous scales. Both parametric and nonparametric ROC approaches are available to assess the discriminant capacity of such tests, but there are no clear guidelines as to the merits of each, particularly with non-binormal data. Investigators may worry that when data are non-Gaussian, estimates of diagnostic accuracy based on a binormal model may be distorted. The authors conducted a Monte Carlo simulation study to compare the bias and sampling variability in the estimates of the AUCs derived from parametric and nonparametric procedures. Each approach was assessed in data sets generated from various configurations of pairs of overlapping distributions; these included the binormal model and non-binormal pairs of distributions where one or both pair members were mixtures of Gaussian (MG) distributions with different degrees of departures from binormality. The biases in the estimates of the AUCs were found to be very small for both parametric and nonparametric procedures. The two approaches yielded very close estimates of the AUCs and the corresponding sampling variability even when data were generated from non-binormal models. Thus, for a wide range of distributions, concern about bias or imprecision of the estimates of the AUC should not be a major factor in choosing between the nonparametric and parametric approaches.",
"title": ""
},
{
"docid": "063598613ce313e2ad6d2b0697e0c708",
"text": "Contour shape descriptors are among the important shape description methods. Fourier descriptors (FD) and curvature scale space descriptors (CSSD) are widely used as contour shape descriptors for image retrieval in the literature. In MPEG-7, CSSD has been proposed as one of the contour-based shape descriptors. However, no comprehensive comparison has been made between these two shape descriptors. In this paper we study and compare FD and CSSD using standard principles and standard database. The study targets image retrieval application. Our experimental results show that FD outperforms CSSD in terms of robustness, low computation, hierarchical representation, retrieval performance and suitability for efficient indexing.",
"title": ""
},
{
"docid": "e881c2ab6abc91aa8e7cbe54d861d36d",
"text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.",
"title": ""
},
{
"docid": "df5aaa0492fc07b76eb7f8da97ebf08e",
"text": "The aim of the present case report is to describe the orthodontic-surgical treatment of a 17-year-and-9-month-old female patient with a Class III malocclusion, poor facial esthetics, and mandibular and chin protrusion. She had significant anteroposterior and transverse discrepancies, a concave profile, and strained lip closure. Intraorally, she had a negative overjet of 5 mm and an overbite of 5 mm. The treatment objectives were to correct the malocclusion, and facial esthetic and also return the correct function. The surgical procedures included a Le Fort I osteotomy for expansion, advancement, impaction, and rotation of the maxilla to correct the occlusal plane inclination. There was 2 mm of impaction of the anterior portion of the maxilla and 5 mm of extrusion in the posterior region. A bilateral sagittal split osteotomy was performed in order to allow counterclockwise rotation of the mandible and anterior projection of the chin, accompanying the maxillary occlusal plane. Rigid internal fixation was used without any intermaxillary fixation. It was concluded that these procedures were very effective in producing a pleasing facial esthetic result, showing stability 7 years posttreatment.",
"title": ""
},
{
"docid": "9c6f2a1eb23fc35e5a3a2b54c5dcb0c4",
"text": "Some of the current assembly issues of fine-pitch chip-on-flex (COF) packages for LCD applications are reviewed. Traditional underfill material, anisotropic conductive adhesive (ACA), and nonconductive adhesive (NCA) are considered in conjunction with two applicable bonding methods including thermal and laser bonding. Advantages and disadvantages of each material/process combination are identified. Their applicability is further investigated to identify a process most suitable to the next-generation fine-pitch packages (less than 35 mum). Numerical results and subsequent testing results indicate that the NCA/laser bonding process is advantageous for preventing both lead crack and excessive misalignment compared to the conventional bonding process",
"title": ""
}
] | scidocsrr |
676750cc6699250834bbba06c106c5c6 | Cyber-Physical-Social Based Security Architecture for Future Internet of Things | [
{
"docid": "de8e9537d6b50467d014451dcaae6c0e",
"text": "With increased global interconnectivity, reliance on e-commerce, network services, and Internet communication, computer security has become a necessity. Organizations must protect their systems from intrusion and computer-virus attacks. Such protection must detect anomalous patterns by exploiting known signatures while monitoring normal computer programs and network usage for abnormalities. Current antivirus and network intrusion detection (ID) solutions can become overwhelmed by the burden of capturing and classifying new viral stains and intrusion patterns. To overcome this problem, a self-adaptive distributed agent-based defense immune system based on biological strategies is developed within a hierarchical layered architecture. A prototype interactive system is designed, implemented in Java, and tested. The results validate the use of a distributed-agent biological-system approach toward the computer-security problems of virus elimination and ID.",
"title": ""
},
{
"docid": "e33dd9c497488747f93cfcc1aa6fee36",
"text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.",
"title": ""
}
] | [
{
"docid": "bc5b77c532c384281af64633fcf697a3",
"text": "The purpose of this study was to investigate the effects of a 12-week resistance-training program on muscle strength and mass in older adults. Thirty-three inactive participants (60-74 years old) were assigned to 1 of 3 groups: high-resistance training (HT), moderate-resistance training (MT), and control. After the training period, both HT and MT significantly increased 1-RM body strength, the peak torque of knee extensors and flexors, and the midthigh cross-sectional area of the total muscle. In addition, both HT and MT significantly decreased the abdominal circumference. HT was more effective in increasing 1-RM strength, muscle mass, and peak knee-flexor torque than was MT. These data suggest that muscle strength and mass can be improved in the elderly with both high- and moderate-intensity resistance training, but high-resistance training can lead to greater strength gains and hypertrophy than can moderate-resistance training.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "5c819727ba80894e72531a62e402f0c4",
"text": "omega-3 fatty acids, alpha-tocopherol, ascorbic acid, beta-carotene and glutathione determined in leaves of purslane (Portulaca oleracea), grown in both a controlled growth chamber and in the wild, were compared in composition to spinach. Leaves from both samples of purslane contained higher amounts of alpha-linolenic acid (18:3w3) than did leaves of spinach. Chamber-grown purslane contained the highest amount of 18:3w3. Samples from the two kinds of purslane contained higher leaves of alpha-tocopherol, ascorbic acid and glutathione than did spinach. Chamber-grown purslane was richer in all three and the amount of alpha-tocopherol was seven times higher than that found in spinach, whereas spinach was slightly higher in beta-carotene. One hundred grams of fresh purslane leaves (one serving) contain about 300-400 mg of 18:3w3; 12.2 mg of alpha-tocopherol; 26.6 mg of ascorbic acid; 1.9 mg of beta-carotene; and 14.8 mg of glutathione. We confirm that purslane is a nutritious food rich in omega-3 fatty acids and antioxidants.",
"title": ""
},
{
"docid": "ede12c734b2fb65b427b3d47e1f3c3d8",
"text": "Battery management systems in hybrid-electric-vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state-of-charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose methods, based on extended Kalman filtering (EKF), that are able to accomplish these goals for a lithium ion polymer battery pack. We expect that they will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. This third paper concludes the series by presenting five additional applications where either an EKF or results from EKF may be used in typical BMS algorithms: initializing state estimates after the vehicle has been idle for some time; estimating state-of-charge with dynamic error bounds on the estimate; estimating pack available dis/charge power; tracking changing pack parameters (including power fade and capacity fade) as the pack ages, and therefore providing a quantitative estimate of state-of-health; and determining which cells must be equalized. Results from pack tests are presented. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e4e26cc61b326f8d60dc3f32909d340c",
"text": "We propose two secure protocols namely private equality test (PET) for single comparison and private batch equality test (PriBET) for batch comparisons of l-bit integers. We ensure the security of these secure protocols using somewhat homomorphic encryption (SwHE) based on ring learning with errors (ring-LWE) problem in the semi-honest model. In the PET protocol, we take two private integers input and produce the output denoting their equality or non-equality. Here the PriBET protocol is an extension of the PET protocol. So in the PriBET protocol, we take single private integer and another set of private integers as inputs and produce the output denoting whether single integer equals at least one integer in the set of integers or not. To serve this purpose, we also propose a new packing method for doing the batch equality test using few homomorphic multiplications of depth one. Here we have done our experiments at the 140-bit security level. For the lattice dimension 2048, our experiments show that the PET protocol is capable of doing any equality test of 8-bit to 2048-bit that require at most 107 milliseconds. Moreover, the PriBET protocol is capable of doing about 600 (resp., 300) equality comparisons per second for 32-bit (resp., 64-bit) integers. In addition, our experiments also show that the PriBET protocol can do more computations within the same time if the data size is smaller like 8-bit or 16-bit.",
"title": ""
},
{
"docid": "1cc4048067cc93c2f1e836c77c2e06dc",
"text": "Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. High-complex image analysis tasks often make the implementation of static and predefined processing rules a cumbersome effort. Machine-learning methods, instead, seek to use intrinsic data structure, as well as the expert annotations of biologists to infer models that can be used to solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays and we therefore include extensive discussion on how to optimize experimental workflow as well as the data analysis pipeline.",
"title": ""
},
{
"docid": "440436a887f73c599452dc57c689dc9d",
"text": "This paper will explore the process of desalination by reverse osmosis (RO) and the benefits that it can contribute to society. RO may offer a sustainable solution to the water crisis, a global problem that is not going away without severe interference and innovation. This paper will go into depth on the processes involved with RO and how contaminants are removed from sea-water. Additionally, the use of significant pressures to force water through the semipermeable membranes, which only allow water to pass through them, will be investigated. Throughout the paper, the topics of environmental and economic sustainability will be covered. Subsequently, the two primary methods of desalination, RO and multi-stage flash distillation (MSF), will be compared. It will become clear that RO is a better method of desalination when compared to MSF. This paper will study examples of RO in action, including; the Carlsbad Plant, the Sorek Plant, and applications beyond the potable water industry. It will be shown that The Claude \"Bud\" Lewis Carlsbad Desalination Plant (Carlsbad), located in San Diego, California is a vital resource in the water economy of the area. The impact of the Sorek Plant, located in Tel Aviv, Israel will also be explained. Both plants produce millions of gallons of fresh, drinkable water and are vital resources for the people that live there.",
"title": ""
},
{
"docid": "10496d5427035670d89f72a64b68047f",
"text": "A challenge for human-computer interaction researchers and user interf ace designers is to construct information technologies that support creativity. This ambitious goal can be attained by building on an adequate understanding of creative processes. This article offers a four-phase framework for creativity that might assist designers in providing effective tools for users: (1)Collect: learn from provious works stored in libraries, the Web, etc.; (2) Relate: consult with peers and mentors at early, middle, and late stages, (3)Create: explore, compose, evaluate possible solutions; and (4) Donate: disseminate the results and contribute to the libraries. Within this integrated framework, this article proposes eight activities that require human-computer interaction research and advanced user interface design. A scenario about an architect illustrates the process of creative work within such an environment.",
"title": ""
},
{
"docid": "c19b63a2c109c098c22877bcba8690ae",
"text": "A monolithic current-mode pulse width modulation (PWM) step-down dc-dc converter with 96.7% peak efficiency and advanced control and protection circuits is presented in this paper. The high efficiency is achieved by \"dynamic partial shutdown strategy\" which enhances circuit speed with less power consumption. Automatic PWM and \"pulse frequency modulation\" switching boosts conversion efficiency during light load operation. The modified current sensing circuit and slope compensation circuit simplify the current-mode control circuit and enhance the response speed. A simple high-speed over-current protection circuit is proposed with the modified current sensing circuit. The new on-chip soft-start circuit prevents the power on inrush current without additional off-chip components. The dc-dc converter has been fabricated with a 0.6 mum CMOS process and measured 1.35 mm2 with the controller measured 0.27 mm2. Experimental results show that the novel on-chip soft-start circuit with longer than 1.5 ms soft-start time suppresses the power-on inrush current. This converter can operate at 1.1 MHz with supply voltage from 2.2 to 6.0 V. Measured power efficiency is 88.5-96.7% for 0.9 to 800 mA output current and over 85.5% for 1000 mA output current.",
"title": ""
},
{
"docid": "cc5f814338606b92c92aa6caf2f4a3f5",
"text": "The purpose of this study was to report the outcome of infants with antenatal hydronephrosis. Between May 1999 and June 2006, all patients diagnosed with isolated fetal renal pelvic dilatation (RPD) were prospectively followed. The events of interest were: presence of uropathy, need for surgical intervention, RPD resolution, urinary tract infection (UTI), and hypertension. RPD was classified as mild (5–9.9 mm), moderate (10–14.9 mm) or severe (≥15 mm). A total of 192 patients was included in the analysis; 114 were assigned to the group of non-significant findings (59.4%) and 78 to the group of significant uropathy (40.6%). Of 89 patients with mild dilatation, 16 (18%) presented uropathy. Median follow-up time was 24 months. Twenty-seven patients (15%) required surgical intervention. During follow-up, UTI occurred in 27 (14%) children. Of 89 patients with mild dilatation, seven (7.8%) presented UTI during follow-up. Renal function, blood pressure, and somatic growth were within normal range at last visit. The majority of patients with mild fetal RPD have no significant findings during infancy. Nevertheless, our prospective study has shown that 18% of these patients presented uropathy and 7.8% had UTI during a medium-term follow-up time. Our findings suggested that, in contrast to patients with moderate/severe RPD, infants with mild RPD do not require invasive diagnostic procedures but need strict clinical surveillance for UTI and progression of RPD.",
"title": ""
},
{
"docid": "2f3bb54596bba8cd7a073ef91964842c",
"text": "BACKGROUND AND PURPOSE\nRecent meta-analyses have suggested similar wound infection rates when using single- or multiple-dose antibiotic prophylaxis in the operative management of closed long bone fractures. In order to assist clinicians in choosing the optimal prophylaxis strategy, we performed a cost-effectiveness analysis comparing single- and multiple-dose prophylaxis.\n\n\nMETHODS\nA cost-effectiveness analysis comparing the two prophylactic strategies was performed using time horizons of 60 days and 1 year. Infection probabilities, costs, and quality-adjusted life days (QALD) for each strategy were estimated from the literature. All costs were reported in 2007 US dollars. A base case analysis was performed for the surgical treatment of a closed ankle fracture. Sensitivity analysis was performed for all variables, including probabilistic sensitivity analysis using Monte Carlo simulation.\n\n\nRESULTS\nSingle-dose prophylaxis results in lower cost and a similar amount of quality-adjusted life days gained. The single-dose strategy had an average cost of $2,576 for an average gain of 272 QALD. Multiple doses had an average cost of $2,596 for 272 QALD gained. These results are sensitive to the incidence of surgical site infection and deep wound infection for the single-dose treatment arm. Probabilistic sensitivity analysis using all model variables also demonstrated preference for the single-dose strategy.\n\n\nINTERPRETATION\nAssuming similar infection rates between the prophylactic groups, our results suggest that single-dose prophylaxis is slightly more cost-effective than multiple-dose regimens for the treatment of closed fractures. Extensive sensitivity analysis demonstrates these results to be stable using published meta-analysis infection rates.",
"title": ""
},
{
"docid": "3aa58539c69d6706bc0a9ca0256cdf80",
"text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.",
"title": ""
},
{
"docid": "bf4a991dbb32ec1091a535750637dbd7",
"text": "As cutting-edge experiments display ever more extreme forms of non-classical behavior, the prevailing view on the interpretation of quantum mechanics appears to be gradually changing. A (highly unscientific) poll taken at the 1997 UMBC quantum mechanics workshop gave the once alldominant Copenhagen interpretation less than half of the votes. The Many Worlds interpretation (MWI) scored second, comfortably ahead of the Consistent Histories and Bohm interpretations. It is argued that since all the above-mentioned approaches to nonrelativistic quantum mechanics give identical cookbook prescriptions for how to calculate things in practice, practical-minded experimentalists, who have traditionally adopted the “shut-up-and-calculate interpretation”, typically show little interest in whether cozy classical concepts are in fact real in some untestable metaphysical sense or merely the way we subjectively perceive a mathematically simpler world where the Schrödinger equation describes everything — and that they are therefore becoming less bothered by a profusion of worlds than by a profusion of words. Common objections to the MWI are discussed. It is argued that when environment-induced decoherence is taken into account, the experimental predictions of the MWI are identical to those of the Copenhagen interpretation except for an experiment involving a Byzantine form of “quantum suicide”. This makes the choice between them purely a matter of taste, roughly equivalent to whether one believes mathematical language or human language to be more fundamental.",
"title": ""
},
{
"docid": "f274062a188fb717b8645e4d2352072a",
"text": "CPU-FPGA heterogeneous acceleration platforms have shown great potential for continued performance and energy efficiency improvement for modern data centers, and have captured great attention from both academia and industry. However, it is nontrivial for users to choose the right platform among various PCIe and QPI based CPU-FPGA platforms from different vendors. This paper aims to find out what microarchitectural characteristics affect the performance, and how. We conduct our quantitative comparison and in-depth analysis on two representative platforms: QPI-based Intel-Altera HARP with coherent shared memory, and PCIe-based Alpha Data board with private device memory. We provide multiple insights for both application developers and platform designers.",
"title": ""
},
{
"docid": "c9c29c091c9851920315c4d4b38b4c9f",
"text": "BACKGROUND\nThe presence of six or more café au lait (CAL) spots is a criterion for the diagnosis of neurofibromatosis type 1 (NF-1). Children with multiple CAL spots are often referred to dermatologists for NF-1 screening. The objective of this case series is to characterize a subset of fair-complected children with red or blond hair and multiple feathery CAL spots who did not meet the criteria for NF-1 at the time of their last evaluation.\n\n\nMETHODS\nWe conducted a chart review of eight patients seen in our pediatric dermatology clinic who were previously identified as having multiple CAL spots and no other signs or symptoms of NF-1.\n\n\nRESULTS\nWe describe eight patients ages 2 to 9 years old with multiple, irregular CAL spots with feathery borders and no other signs or symptoms of NF-1. Most of these patients had red or blond hair and were fair complected. All patients were evaluated in our pediatric dermatology clinic, some with a geneticist. The number of CAL spots per patient ranged from 5 to 15 (mean 9.4, median 9).\n\n\nCONCLUSION\nA subset of children, many with fair complexions and red or blond hair, has an increased number of feathery CAL spots and appears unlikely to develop NF-1, although genetic testing was not conducted. It is important to recognize the benign nature of CAL spots in these patients so that appropriate screening and follow-up recommendations may be made.",
"title": ""
},
{
"docid": "fc07af4d49f7b359e484381a0a88aff7",
"text": "In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov Complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee’s level of intelligence in order to obtain an intelligence score within a limited time.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1ec62f70be9d006b7e1295ef8d9cb1e3",
"text": "The aim of this research is to explore social media and its benefits especially from business-to-business innovation and related customer interface perspective, and to create a more comprehensive picture of the possibilities of social media for the business-to-business sector. Business-to-business context was chosen because it is in many ways a very different environment for social media than business-to-consumer context, and is currently very little academically studied. A systematic literature review on B2B use of social media and achieved benefits in the inn ovation con text was performed to answer the questions above and achieve the research goals. The study clearly demonstrates that not merely B2C's, as commonly believed, but also B2B's can benefit from the use of social media in a variety of ways. Concerning the broader classes of innovation --related benefits, the reported benefits of social media use referred to increased customer focus and understanding, increased level of customer service, and decreased time-to-market. The study contributes to the existing social media --related literature, because there were no found earlier comprehensive academic studies on the use of social media in the innovation process in the context of B2B customer interface.",
"title": ""
},
{
"docid": "97c162261666f145da6e81d2aa9a8343",
"text": "Shape optimization is a growing field of interest in many areas of academic research, marine design, and manufacturing. As part of the CREATE Ships Hydromechanics Product, an effort is underway to develop a computational tool set and process framework that can aid the ship designer in making informed decisions regarding the influence of the planned hull shape on its hydrodynamic characteristics, even at the earliest stages where decisions can have significant cost implications. The major goal of this effort is to utilize the increasing experience gained in using these methods to assess shape optimization techniques and how they might impact design for current and future naval ships. Additionally, this effort is aimed at establishing an optimization framework within the bounds of a collaborative design environment that will result in improved performance and better understanding of preliminary ship designs at an early stage. The initial effort demonstrated here is aimed at ship resistance, and examples are shown for full ship and localized bow dome shaping related to the Joint High Speed Sealift (JHSS) hull concept. Introduction Any ship design inherently involves optimization, as competing requirements and design parameters force the design to evolve, and as designers strive to deliver the most effective and efficient platform possible within the constraints of time, budget, and performance requirements. A significant number of applications of computational fluid dynamics (CFD) tools to hydrodynamic optimization, mostly for reducing calm-water drag and wave patterns, demonstrate a growing interest in optimization. In addition, more recent ship design programs within the US Navy illustrate some fundamental changes in mission and performance requirements, and future ship designs may be radically different from current ships in the fleet. One difficulty with designing such new concepts is the lack of experience from which to draw from when performing design studies; thus, optimization techniques may be particularly useful. These issues point to a need for greater fidelity, robustness, and ease of use in the tools used in early stage ship design. The Computational Research and Engineering Acquisition Tools and Environments (CREATE) program attempts to address this in its plan to develop and deploy sets of computational engineering design and analysis tools. It is expected that advances in computers will allow for highly accurate design and analyses studies that can be carried out throughout the design process. In order to evaluate candidate designs and explore the design space more thoroughly shape optimization is an important component of the CREATE Ships Hydromechanics Product. The current program development plan includes fast parameterized codes to bound the design space and more accurate Reynolds-Averaged Navier-Stokes (RANS) codes to better define the geometry and performance of the specified hull forms. The potential for hydrodynamic shape optimization has been demonstrated for a variety of different hull forms, including multi-hulls, in related efforts (see e.g., Wilson et al, 2009, Stern et al, Report Documentation Page Form Approved",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
}
] | scidocsrr |
b5058bb2c8ad7534f010c04fa0032c83 | SurroundSense: mobile phone localization via ambience fingerprinting | [
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "8718d91f37d12b8ff7658723a937ea84",
"text": "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).\n To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and/or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.",
"title": ""
}
] | [
{
"docid": "a5ed1ebf973e3ed7ea106e55795e3249",
"text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.",
"title": ""
},
{
"docid": "cbde86d9b73371332a924392ae1f10d0",
"text": "The difficulty to solve multiple objective combinatorial optimization problems with traditional techniques has urged researchers to look for alternative, better performing approaches for them. Recently, several algorithms have been proposed which are based on the Ant Colony Optimization metaheuristic. In this contribution, the existing algorithms of this kind are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objective genetic algorithms.",
"title": ""
},
{
"docid": "446af0ad077943a77ac4a38fd84df900",
"text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.",
"title": ""
},
{
"docid": "de0d2808f949723f1c0ee8e87052f889",
"text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.",
"title": ""
},
{
"docid": "e0d6212e77cbd54b54db5d38eca29814",
"text": "Summarization aims to represent source documents by a shortened passage. Existing methods focus on the extraction of key information, but often neglect coherence. Hence the generated summaries suffer from a lack of readability. To address this problem, we have developed a graph-based method by exploring the links between text to produce coherent summaries. Our approach involves finding a sequence of sentences that best represent the key information in a coherent way. In contrast to the previous methods that focus only on salience, the proposed method addresses both coherence and informativeness based on textual linkages. We conduct experiments on the DUC2004 summarization task data set. A performance comparison reveals that the summaries generated by the proposed system achieve comparable results in terms of the ROUGE metric, and show improvements in readability by human evaluation.",
"title": ""
},
{
"docid": "d9b19dd523fd28712df61384252d331c",
"text": "Purpose – The purpose of this paper is to examine the ways in which governments build social media and information and communication technologies (ICTs) into e-government transparency initiatives, to promote collaboration with members of the public and the ways in members of the public are able to employ the same social media to monitor government activities. Design/methodology/approach – This study used an iterative strategy that involved conducting a literature review, content analysis, and web site analysis, offering multiple perspectives on government transparency efforts, the role of ICTs and social media in these efforts, and the ability of e-government initiatives to foster collaborative transparency through embedded ICTs and social media. Findings – The paper identifies key initiatives, potential impacts, and future challenges for collaborative e-government as a means of transparency. Originality/value – The paper is one of the first to examine the interrelationships between ICTs, social media, and collaborative e-government to facilitate transparency.",
"title": ""
},
{
"docid": "d7c27413eb3f379618d1aafd85a43d3f",
"text": "This paper presents a tool Altair that automatically generates API function cross-references, which emphasizes reliable structural measures and does not depend on specific client code. Altair ranks related API functions for a given query according to pair-wise overlap, i.e., how they share state, and clusters tightly related ones into meaningful modules.\n Experiments against several popular C software packages show that Altair recommends related API functions for a given query with remarkably more precise and complete results than previous tools, that it can extract modules from moderate-sized software (e.g., Apache with 1000+ functions) at high precision and recall rates (e.g., both exceeding 70% for two modules in Apache), and that the computation can finish within a few seconds.",
"title": ""
},
{
"docid": "44b7ed6c8297b6f269c8b872b0fd6266",
"text": "vii",
"title": ""
},
{
"docid": "ee18a820614aac64d26474796464b518",
"text": "Recommender systems have already proved to be valuable for coping with the information overload problem in several application domains. They provide people with suggestions for items which are likely to be of interest for them; hence, a primary function of recommender systems is to help people make good choices and decisions. However, most previous research has focused on recommendation techniques and algorithms, and less attention has been devoted to the decision making processes adopted by the users and possibly supported by the system. There is still a gap between the importance that the community gives to the assessment of recommendation algorithms and the current range of ongoing research activities concerning human decision making. Different decision-psychological phenomena can influence the decision making of users of recommender systems, and research along these lines is becoming increasingly important and popular. This special issue highlights how the coupling of recommendation algorithms with the understanding of human choice and decision making theory has the potential to benefit research and practice on recommender systems and to enable users to achieve a good balance between decision accuracy and decision effort.",
"title": ""
},
{
"docid": "dd1e7bb3ba33c5ea711c0d066db53fa9",
"text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers are described in detail. Test results show the proposed methods works properly.",
"title": ""
},
{
"docid": "79287d0ca833605430fefe4b9ab1fd92",
"text": "Passwords are frequently used in data encryption and user authentication. Since people incline to choose meaningful words or numbers as their passwords, lots of passwords are easy to guess. This paper introduces a password guessing method based on Long Short-Term Memory recurrent neural networks. After training our LSTM neural network with 30 million passwords from leaked Rockyou dataset, the generated 3.35 billion passwords could cover 81.52% of the remaining Rockyou dataset. Compared with PCFG and Markov methods, this method shows higher coverage rate.",
"title": ""
},
{
"docid": "27ffdb0d427d2e281ffe84e219e6ed72",
"text": "UNLABELLED\nHitherto, noncarious cervical lesions (NCCLs) of teeth have been generally ascribed to either toothbrush-dentifrice abrasion or acid \"erosion.\" The last two decades have provided a plethora of new studies concerning such lesions. The most significant studies are reviewed and integrated into a practical approach to the understanding and designation of these lesions. A paradigm shift is suggested regarding use of the term \"biocorrosion\" to supplant \"erosion\" as it continues to be misused in the United States and many other countries of the world. Biocorrosion embraces the chemical, biochemical, and electrochemical degradation of tooth substance caused by endogenous and exogenous acids, proteolytic agents, as well as the piezoelectric effects only on dentin. Abfraction, representing the microstructural loss of tooth substance in areas of stress concentration, should not be used to designate all NCCLs because these lesions are commonly multifactorial in origin. Appropriate designation of a particular NCCL depends upon the interplay of the specific combination of three major mechanisms: stress, friction, and biocorrosion, unique to that individual case. Modifying factors, such as saliva, tongue action, and tooth form, composition, microstructure, mobility, and positional prominence are elucidated.\n\n\nCLINICAL SIGNIFICANCE\nBy performing a comprehensive medical and dental history, using precise terms and concepts, and utilizing the Revised Schema of Pathodynamic Mechanisms, the dentist may successfully identify and treat the etiology of root surface lesions. Preventive measures may be instituted if the causative factors are detected and their modifying factors are considered.",
"title": ""
},
{
"docid": "598dd39ec35921242b94f17e24b30389",
"text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.",
"title": ""
},
{
"docid": "159e040b0e74ad1b6124907c28e53daf",
"text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. 
In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. 
This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to ",
"title": ""
},
{
"docid": "b142873eed364bd471fbe231cd19c27d",
"text": "Robotics have long sought an actuation technology comparable to or as capable as biological muscle tissue. Natural muscles exhibit a high power-to-weight ratio, inherent compliance and damping, fast action, and a high dynamic range. They also produce joint displacements and forces without the need for gearing or additional hardware. Recently, supercoiled commercially available polymer threads (sewing thread or nylon fishing lines) have been used to create significant mechanical power in a muscle-like form factor. Heating and cooling the polymer threads causes contraction and expansion, which can be utilized for actuation. In this paper, we describe the working principle of supercoiled polymer (SCP) actuation and explore the controllability and properties of these threads. We show that under appropriate environmental conditions, the threads are suitable as a building block for a controllable artificial muscle. We leverage off-the-shelf silver-coated threads to enable rapid electrical heating while the low thermal mass allows for rapid cooling. We utilize both thermal and thermomechanical models for feed-forward and feedback control. The resulting SCP actuator regulates to desired force levels in as little as 28 ms. Together with its inherent stiffness and damping, this is sufficient for a position controller to execute large step movements in under 100 ms. This controllability, high performance, the mechanical properties, and the extremely low material cost are indicative of a viable artificial muscle.",
"title": ""
},
{
"docid": "2cb0c74e57dea6fead692d35f8a8fac6",
"text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.",
"title": ""
},
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "e39ad8ee1d913cba1707b6aafafceefb",
"text": "Thoracic Outlet Syndrome (TOS) is the constellation of symptoms caused by compression of neurovascular structures at the superior aperture of the thorax, properly the thoracic inlet! The diagnosis and treatment is contentious and some even question its existence. Symptoms are often confused with distal compression neuropathies or cervical",
"title": ""
},
{
"docid": "f56c5a623b29b88f42bf5d6913b2823e",
"text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.",
"title": ""
},
{
"docid": "863ec0a6a06ce9b3cc46c85b09a7af69",
"text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x2 + y2 + z2 + w2 = 12(x + y + z + w) 2. Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element −n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple.",
"title": ""
}
] | scidocsrr |
01b5af49bd41891b0e9c7c78fbcc468b | Collaborative Networks of Cognitive Systems | [
{
"docid": "8bc04818536d2a8deff01b0ea0419036",
"text": "Research in IT must address the design tasks faced by practitioners. Real problems must be properly conceptualized and represented, appropriate techniques for their solution must be constructed, and solutions must be implemented and evaluated using appropriate criteria. If significant progress is to be made, IT research must also develop an understanding of how and why IT systems work or do not work. Such an understanding must tie together natural laws governing IT systems with natural laws governing the environments in which they operate. This paper presents a two dimensional framework for research in information technology. The first dimension is based on broad types of design and natural science research activities: build, evaluate, theorize, and justify. The second dimension is based on broad types of outputs produced by design research: representational constructs, models, methods, and instantiations. We argue that both design science and natural science activities are needed to insure that IT research is both relevant and effective.",
"title": ""
},
{
"docid": "86b12f890edf6c6561536a947f338feb",
"text": "Looking for qualified reading resources? We have process mining discovery conformance and enhancement of business processes to check out, not only review, yet also download them or even read online. Discover this great publication writtern by now, simply right here, yeah just right here. Obtain the data in the sorts of txt, zip, kindle, word, ppt, pdf, as well as rar. Once again, never ever miss out on to read online as well as download this publication in our site here. Click the link. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files.",
"title": ""
}
] | [
{
"docid": "b2a7c0a96f29a554ecdba2d56778b7c7",
"text": "Existing video streaming algorithms use various estimation approaches to infer the inherently variable bandwidth in cellular networks, which often leads to reduced quality of experience (QoE). We ask the question: \"If accurate bandwidth prediction were possible in a cellular network, how much can we improve video QoE?\". Assuming we know the bandwidth for the entire video session, we show that existing streaming algorithms only achieve between 69%-86% of optimal quality. Since such knowledge may be impractical, we study algorithms that know the available bandwidth for a few seconds into the future. We observe that prediction alone is not sufficient and can in fact lead to degraded QoE. However, when combined with rate stabilization functions, prediction outperforms existing algorithms and reduces the gap with optimal to 4%. Our results lead us to believe that cellular operators and content providers can tremendously improve video QoE by predicting available bandwidth and sharing it through APIs.",
"title": ""
},
{
"docid": "b922460e2a1d8b6dff6cc1c8c8c459ed",
"text": "This paper presents a new dynamic latched comparator which shows lower input-referred latch offset voltage and higher load drivability than the conventional dynamic latched comparators. With two additional inverters inserted between the input- and output-stage of the conventional double-tail dynamic comparator, the gain preceding the regenerative latch stage was improved and the complementary version of the output-latch stage, which has bigger output drive current capability at the same area, was implemented. As a result, the circuit shows up to 25% less input-referred latch offset voltage and 44% less sensitivity of the delay versus the input voltage difference (delay/log(ΔVin)), which is about 17.2ps/decade, than the conventional double-tail latched comparator at approximately the same area and power consumption.",
"title": ""
},
{
"docid": "7cef2ade99ffacfe1df5108665870988",
"text": "We describe improvements of the currently most popular method for prediction of classically secreted proteins, SignalP. SignalP consists of two different predictors based on neural network and hidden Markov model algorithms, where both components have been updated. Motivated by the idea that the cleavage site position and the amino acid composition of the signal peptide are correlated, new features have been included as input to the neural network. This addition, combined with a thorough error-correction of a new data set, have improved the performance of the predictor significantly over SignalP version 2. In version 3, correctness of the cleavage site predictions has increased notably for all three organism groups, eukaryotes, Gram-negative and Gram-positive bacteria. The accuracy of cleavage site prediction has increased in the range 6-17% over the previous version, whereas the signal peptide discrimination improvement is mainly due to the elimination of false-positive predictions, as well as the introduction of a new discrimination score for the neural network. The new method has been benchmarked against other available methods. Predictions can be made at the publicly available web server",
"title": ""
},
{
"docid": "6a19410817766b052a2054b2cb3efe42",
"text": "Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.",
"title": ""
},
{
"docid": "d14812771115b4736c6d46aecadb2d8a",
"text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.",
"title": ""
},
{
"docid": "90aceb010cead2fbdc37781c686bf522",
"text": "The present article examines the relationship between age and dominance in bilingual populations. Age in bilingualism is understood as the point in devel10 opment at which second language (L2) acquisition begins and as the chronological age of users of two languages. Age of acquisition (AoA) is a factor in determining which of a bilingual’s two languages is dominant and to what degree, and it, along with age of first language (L1) attrition, may be associated with shifts in dominance from the L1 to the L2. In turn, dominance and chron15 ological age, independently and in interaction with lexical frequency, predict performance on naming tasks. The article also considers the relevance of criticalperiod accounts of the relationships of AoA and age of L1 attrition to L2 dominance, and of usage-based and cognitive-aging accounts of the roles of age and dominance in naming.",
"title": ""
},
{
"docid": "0f85ce6afd09646ee1b5242a4d6122d1",
"text": "Environmental concern has resulted in a renewed interest in bio-based materials. Among them, plant fibers are perceived as an environmentally friendly substitute to glass fibers for the reinforcement of composites, particularly in automotive engineering. Due to their wide availability, low cost, low density, high-specific mechanical properties, and eco-friendly image, they are increasingly being employed as reinforcements in polymer matrix composites. Indeed, their complex microstructure as a composite material makes plant fiber a really interesting and challenging subject to study. Research subjects about such fibers are abundant because there are always some issues to prevent their use at large scale (poor adhesion, variability, low thermal resistance, hydrophilic behavior). The choice of natural fibers rather than glass fibers as filler yields a change of the final properties of the composite. One of the most relevant differences between the two kinds of fiber is their response to humidity. Actually, glass fibers are considered as hydrophobic whereas plant fibers have a pronounced hydrophilic behavior. Composite materials are often submitted to variable climatic conditions during their lifetime, including unsteady hygroscopic conditions. However, in humid conditions, strong hydrophilic behavior of such reinforcing fibers leads to high level of moisture absorption in wet environments. This results in the structural modification of the fibers and an evolution of their mechanical properties together with the composites in which they are fitted in. Thereby, the understanding of these moisture absorption mechanisms as well as the influence of water on the final properties of these fibers and their composites is of great interest to get a better control of such new biomaterials. This is the topic of this review paper.",
"title": ""
},
{
"docid": "ceb270c07d26caec5bc20e7117690f9f",
"text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].",
"title": ""
},
{
"docid": "94105f6e64a27b18f911d788145385b6",
"text": "Low socioeconomic status (SES) is generally associated with high psychiatric morbidity, more disability, and poorer access to health care. Among psychiatric disorders, depression exhibits a more controversial association with SES. The authors carried out a meta-analysis to evaluate the magnitude, shape, and modifiers of such an association. The search found 51 prevalence studies, five incidence studies, and four persistence studies meeting the criteria. A random effects model was applied to the odds ratio of the lowest SES group compared with the highest, and meta-regression was used to assess the dose-response relation and the influence of covariates. Results indicated that low-SES individuals had higher odds of being depressed (odds ratio = 1.81, p < 0.001), but the odds of a new episode (odds ratio = 1.24, p = 0.004) were lower than the odds of persisting depression (odds ratio = 2.06, p < 0.001). A dose-response relation was observed for education and income. Socioeconomic inequality in depression is heterogeneous and varies according to the way psychiatric disorder is measured, to the definition and measurement of SES, and to contextual features such as region and time. Nonetheless, the authors found compelling evidence for socioeconomic inequality in depression. Strategies for tackling inequality in depression are needed, especially in relation to the course of the disorder.",
"title": ""
},
{
"docid": "d8752c40782d8189d454682d1d30738e",
"text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.",
"title": ""
},
{
"docid": "6df45b11d623e8080cc7163632dde893",
"text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamicallyscalable and often virtualized resources are provided as a service over the Internet has become a significant issues. In this paper, we aim to pinpoint the challenges and issues of Cloud computing. We first discuss two related computing paradigms Service-Oriented Computing and Grid computing, and their relationships with Cloud computing. We then identify several challenges from the Cloud computing adoption perspective. Last, we will highlight the Cloud interoperability issue that deserves substantial further research and development. __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "ad967dca901ccdd3f33b83da29e9f18b",
"text": "Energy consumption limits battery life in mobile devices and increases costs for servers and data centers. Approximate computing addresses energy concerns by allowing applications to trade accuracy for decreased energy consumption. Approximation frameworks can guarantee accuracy or performance and generally reduce energy usage; however, they provide no energy guarantees. Such guarantees would be beneficial for users who have a fixed energy budget and want to maximize application accuracy within that budget. We address this need by presenting JouleGuard: a runtime control system that coordinates approximate applications with system resource usage to provide control theoretic formal guarantees of energy consumption, while maximizing accuracy. We implement JouleGuard and test it on three different platforms (a mobile, tablet, and server) with eight different approximate applications created from two different frameworks. We find that JouleGuard respects energy budgets, provides near optimal accuracy, adapts to phases in application workload, and provides better outcomes than application approximation or system resource adaptation alone. JouleGuard is general with respect to the applications and systems it controls, making it a suitable runtime for a number of approximate computing frameworks.",
"title": ""
},
{
"docid": "7da0a472f0a682618eccbfd4229ca14f",
"text": "A Search Join is a join operation which extends a user-provided table with additional attributes based on a large corpus of heterogeneous data originating from the Web or corporate intranets. Search Joins are useful within a wide range of application scenarios: Imagine you are an analyst having a local table describing companies and you want to extend this table with attributes containing the headquarters, turnover, and revenue of each company. Or imagine you are a film enthusiast and want to extend a table describing films with attributes like director, genre, and release date of each film. This article presents the Mannheim Search Join Engine which automatically performs such table extension operations based on a large corpus of Web data. Given a local table, the Mannheim Search Join Engine searches the corpus for additional data describing the entities contained in the input table. The discovered data is then joined with the local table and is consolidated using schema matching and data fusion techniques. As result, the user is presented with an extended table and given the opportunity to examine the provenance of the added data. We evaluate the Mannheim Search Join Engine using heterogeneous data originating from over one million different websites. The data corpus consists of HTML tables, as well as Linked Data and Microdata annotations which are converted into tabular form. Our experiments show that the Mannheim Search Join Engine achieves a coverage close to 100% and a precision of around 90% for the tasks of extending tables describing cities, companies, countries, drugs, books, films, and songs.",
"title": ""
},
{
"docid": "442680dcfbe4651eb5434e6b6703d25e",
"text": "The mammalian genome is transcribed into large numbers of long noncoding RNAs (lncRNAs), but the definition of functional lncRNA groups has proven difficult, partly due to their low sequence conservation and lack of identified shared properties. Here we consider promoter conservation and positional conservation as indicators of functional commonality. We identify 665 conserved lncRNA promoters in mouse and human that are preserved in genomic position relative to orthologous coding genes. These positionally conserved lncRNA genes are primarily associated with developmental transcription factor loci with which they are coexpressed in a tissue-specific manner. Over half of positionally conserved RNAs in this set are linked to chromatin organization structures, overlapping binding sites for the CTCF chromatin organiser and located at chromatin loop anchor points and borders of topologically associating domains (TADs). We define these RNAs as topological anchor point RNAs (tapRNAs). Characterization of these noncoding RNAs and their associated coding genes shows that they are functionally connected: they regulate each other’s expression and influence the metastatic phenotype of cancer cells in vitro in a similar fashion. Furthermore, we find that tapRNAs contain conserved sequence domains that are enriched in motifs for zinc finger domain-containing RNA-binding proteins and transcription factors, whose binding sites are found mutated in cancers. This work leverages positional conservation to identify lncRNAs with potential importance in genome organization, development and disease. The evidence that many developmental transcription factors are physically and functionally connected to lncRNAs represents an exciting stepping-stone to further our understanding of genome regulation.",
"title": ""
},
{
"docid": "7a3441773c79b9fde64ebcf8103616a1",
"text": "SIMD parallelism has become an increasingly important mechanism for delivering performance in modern CPUs, due its power efficiency and relatively low cost in die area compared to other forms of parallelism. Unfortunately, languages and compilers for CPUs have not kept up with the hardware's capabilities. Existing CPU parallel programming models focus primarily on multi-core parallelism, neglecting the substantial computational capabilities that are available in CPU SIMD vector units. GPU-oriented languages like OpenCL support SIMD but lack capabilities needed to achieve maximum efficiency on CPUs and suffer from GPU-driven constraints that impair ease of use on CPUs. We have developed a compiler, the Intel R® SPMD Program Compiler (ispc), that delivers very high performance on CPUs thanks to effective use of both multiple processor cores and SIMD vector units. ispc draws from GPU programming languages, which have shown that for many applications the easiest way to program SIMD units is to use a single-program, multiple-data (SPMD) model, with each instance of the program mapped to one SIMD lane. We discuss language features that make ispc easy to adopt and use productively with existing software systems and show that ispc delivers up to 35x speedups on a 4-core system and up to 240× speedups on a 40-core system for complex workloads (compared to serial C++ code).",
"title": ""
},
{
"docid": "892bad91cfae82dfe3d06d2f93edfe8b",
"text": "Fine-grained image recognition is a challenging computer vision problem, due to the small inter-class variations caused by highly similar subordinate categories, and the large intra-class variations in poses, scales and rotations. In this paper, we prove that selecting useful deep descriptors contributes well to fine-grained image recognition. Specifically, a novel Mask-CNN model without the fully connected layers is proposed. Based on the part annotations, the proposed model consists of a fully convolutional network to both locate the discriminative parts ( e.g. , head and torso), and more importantly generate weighted object/part masks for selecting useful and meaningful convolutional descriptors. After that, a three-stream Mask-CNN model is built for aggregating the selected objectand part-level descriptors simultaneously. Thanks to discarding the parameter redundant fully connected layers, our Mask-CNN has a small feature dimensionality and efficient inference speed by comparing with other fine-grained approaches. Furthermore, we obtain a new state-of-the-art accuracy on two challenging fine-grained bird species categorization datasets, which validates the effectiveness of both the descriptor selection scheme and the proposed",
"title": ""
},
{
"docid": "acc700d965586f5ea65bdcb67af38fca",
"text": "OBJECTIVE\nAttention deficit hyperactivity disorder (ADHD) symptoms are associated with the deficit in executive functions. Playing Go involves many aspect of cognitive function and we hypothesized that it would be effective for children with ADHD.\n\n\nMETHODS\nSeventeen drug naïve children with ADHD and seventeen age and sex matched comparison subjects were participated. Participants played Go under the instructor's education for 2 hours/day, 5 days/week. Before and at the end of Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with Dupaul's ADHD scale (ARS), Child depression inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea).\n\n\nRESULTS\nThere were significant improvements of ARS total score (z=2.93, p<0.01) and inattentive score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvement of digit total score (z=2.60, p<0.01; z=2.06, p=0.03), digit forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both ADHD and healthy comparisons. In addition, ADHD children showed decreased time of CCTT-2 (z=2.21, p=0.03). The change of theta/beta right of prefrontal cortex during 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change of right theta/beta in prefrontal cortex has a positive correlation with ARS-inattention score in children with ADHD (r=0.44, p=0.03).\n\n\nCONCLUSION\nWe suggest that playing Go would be effective for children with ADHD by activating hypoarousal prefrontal function and enhancing executive function.",
"title": ""
},
{
"docid": "486417082d921eba9320172a349ee28f",
"text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.",
"title": ""
},
{
"docid": "106df67fa368439db4f5684b4a9f7bd9",
"text": "Issues in cybersecurity; understanding the potential risks associated with hackers/crackers Alan D. Smith William T. Rupp Article information: To cite this document: Alan D. Smith William T. Rupp, (2002),\"Issues in cybersecurity; understanding the potential risks associated with hackers/ crackers\", Information Management & Computer Security, Vol. 10 Iss 4 pp. 178 183 Permanent link to this document: http://dx.doi.org/10.1108/09685220210436976",
"title": ""
},
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
}
] | scidocsrr |
42186b00d162e07d15164ac508e4a539 | Motivation in Software Engineering: A systematic literature review | [
{
"docid": "46ad437443c58d90d4956d4e8ba99888",
"text": "The attributes of individual software engineers are perhaps the most important factors in determining the success of software development. Our goal is to identify the professional competencies that are most essential. In particular, we seek to identify the attributes that di erentiate between exceptional and non-exceptional software engineers. Phase 1 of our research is a qualitative study designed to identify competencies to be used in the quantitative analysis performed in Phase 2. In Phase 1, we conduct an in-depth review of ten exceptional and ten non-exceptional software engineers working for a major computing rm. We use biographical data and Myers-Briggs Type Indicator test results to characterize our sample. We conduct Critical Incident Interviews focusing on the subjects experience in software and identify 38 essential competencies of software engineers. Phase 2 of this study surveys 129 software engineers to determine the competencies that are di erential between exceptional and non-exceptional engineers. Years of experience in software is the only biographical predictor of performance. Analysis of the participants Q-Sort of the 38 competencies identi ed in Phase 1 reveals that nine of these competencies are di erentially related to engineer performance using a t-test. A ten variable Canonical Discrimination Function consisting of three biographical variables and seven competencies is capable of correctly classifying 81% of the cases. The statistical analyses indicate that exceptional engineers (at the company studied) can be distinguished by behaviors associated with an external focus | behaviors directed at people or objects outside the individual. Exceptional engineers are more likely than non-exceptional engineers to maintain a \\big picture\", have a bias for action, be driven by a sense of mission, exhibit and articulate strong convictions, play a pro-active role with management, and help other engineers. Authors addresses: R. Turley, Colorado Memory Systems, Inc., 800 S. Taft Ave., Loveland, CO 80537. Email: RICKTURL.COMEMSYS@CMS SMTP.gr.hp.com, (303) 635-6490, Fax: (303) 635-6613; J. Bieman, Department of Computer Science, Colorado State University, Fort Collins, CO 80523. Email: [email protected], (303)4917096, Fax: (303) 491-6639. Copyright c 1993 by Richard T. Turley and James M. Bieman. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the author. Direct correspondence concerning this paper to: J. Bieman, Department of Computer Science, Colorado State University, Fort Collins, CO 80523, [email protected], (303)491-7096, Fax: (303)491-6639.",
"title": ""
}
] | [
{
"docid": "7c5abed8220171f38e3801298f660bfa",
"text": "Heavy metal remediation of aqueous streams is of special concern due to recalcitrant and persistency of heavy metals in environment. Conventional treatment technologies for the removal of these toxic heavy metals are not economical and further generate huge quantity of toxic chemical sludge. Biosorption is emerging as a potential alternative to the existing conventional technologies for the removal and/or recovery of metal ions from aqueous solutions. The major advantages of biosorption over conventional treatment methods include: low cost, high efficiency, minimization of chemical or biological sludge, regeneration of biosorbents and possibility of metal recovery. Cellulosic agricultural waste materials are an abundant source for significant metal biosorption. The functional groups present in agricultural waste biomass viz. acetamido, alcoholic, carbonyl, phenolic, amido, amino, sulphydryl groups etc. have affinity for heavy metal ions to form metal complexes or chelates. The mechanism of biosorption process includes chemisorption, complexation, adsorption on surface, diffusion through pores and ion exchange etc. The purpose of this review article is to provide the scattered available information on various aspects of utilization of the agricultural waste materials for heavy metal removal. Agricultural waste material being highly efficient, low cost and renewable source of biomass can be exploited for heavy metal remediation. Further these biosorbents can be modified for better efficiency and multiple reuses to enhance their applicability at industrial scale.",
"title": ""
},
{
"docid": "63d26f3336960c1d92afbd3a61a9168c",
"text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.",
"title": ""
},
{
"docid": "fcc021f052f261c27cb67205692cd9ab",
"text": "Various studies showed that inhaled fine particles with diameter less than 10 micrometers (PM10) in the air can cause adverse health effects on human, such as heart disease, asthma, stroke, bronchitis and the like. This is due to their ability to penetrate further into the lung and alveoli. The aim of this study is to develop a state-of-art reliable technique to use surveillance camera for monitoring the temporal patterns of PM10 concentration in the air. Once the air quality reaches the alert thresholds, it will provide warning alarm to alert human to prevent from long exposure to these fine particles. This is important for human to avoid the above mentioned adverse health effects. In this study, an internet protocol (IP) network camera was used as an air quality monitoring sensor. It is a 0.3 mega pixel charge-couple-device (CCD) camera integrates with the associate electronics for digitization and compression of images. This network camera was installed on the rooftop of the school of physics. The camera observed a nearby hill, which was used as a reference target. At the same time, this network camera was connected to network via a cat 5 cable or wireless to the router and modem, which allowed image data transfer over the standard computer networks (Ethernet networks), internet, or even wireless technology. Then images were stored in a server, which could be accessed locally or remotely for computing the air quality information with a newly developed algorithm. The results were compared with the alert thresholds. If the air quality reaches the alert threshold, alarm will be triggered to inform us this situation. The newly developed algorithm was based on the relationship between the atmospheric reflectance and the corresponding measured air quality of PM10 concentration. In situ PM10 air quality values were measured with DustTrak meter and the sun radiation was measured simultaneously with a spectroradiometer. Regression method was use to calibrate this algorithm. Still images captured by this camera were separated into three bands namely red, green and blue (RGB), and then digital numbers (DN) were determined. These DN were used to determine the atmospherics reflectance values of difference bands, and then used these values in the newly developed algorithm to determine PM10 concentration. The results of this study showed that the proposed algorithm produced a high correlation coefficient (R2) of 0.7567 and low root-mean-square error (RMS) of plusmn 5 mu g/m3 between the measured and estimated PM10 concentration. A program was written by using microsoft visual basic 6.0 to download the still images automatically from the camera via the internet and utilize the newly developed algorithm to determine PM10 concentration automatically and continuously. This concluded that surveillance camera can be used for temporal PM10 concentration monitoring. It is more than an air pollution monitoring device; it provides continuous, on-line, real-time monitoring for air pollution at multi location and air pollution warning or alert system. This system also offers low implementation, operation and maintenance cost of ownership because the surveillance cameras become cheaper and cheaper now.",
"title": ""
},
{
"docid": "7af557e5fb3d217458d7b635ee18fee0",
"text": "The growth of mobile commerce, or the purchase of services or goods using mobile technology, heavily depends on the availability, reliability, and acceptance of mobile wallet systems. Although several researchers have attempted to create models on the acceptance of such mobile payment systems, no single comprehensive framework has yet emerged. Based upon a broad literature review of mobile technology adoption, a comprehensive model integrating eleven key consumer-related variables affecting the adoption of mobile payment systems is proposed. This model, based on established theoretical underpinnings originally established in the technology acceptance literature, extends existing frameworks by including attractiveness of alternatives and by proposing relationships between the key constructs. Japan is at the forefront of such technology and a number of domestic companies have been effectively developing and marketing mobile wallets for some time. Using this proposed framework, we present the case of the successful adoption of Mobile Suica in Japan, which can serve as a model for the rapid diffusion of such payment systems for other countries where adoption has been unexpectedly slow.",
"title": ""
},
{
"docid": "173f6fa3b43d2ec394c9bec0d45753dd",
"text": "Semantic instance segmentation remains a challenging task. In this work we propose to tackle the problem with a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step. The loss function encourages the network to map each pixel to a point in feature space so that pixels belonging to the same instance lie close together while different instances are separated by a wide margin. Our approach of combining an offthe-shelf network with a principled loss function inspired by a metric learning objective is conceptually simple and distinct from recent efforts in instance segmentation. In contrast to previous works, our method does not rely on object proposals or recurrent mechanisms. A key contribution of our work is to demonstrate that such a simple setup without bells and whistles is effective and can perform onpar with more complex methods. Moreover, we show that it does not suffer from some of the limitations of the popular detect-and-segment approaches. We achieve competitive performance on the Cityscapes and CVPPP leaf segmentation benchmarks.",
"title": ""
},
{
"docid": "4dd2fc66b1a2f758192b02971476b4cc",
"text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.",
"title": ""
},
{
"docid": "50471274efcc7fd7547dc6c0a1b3d052",
"text": "Recently, the UAS has been extensively exploited for data collection from remote and dangerous or inaccessible areas. While most of its existing applications have been directed toward surveillance and monitoring tasks, the UAS can play a significant role as a communication network facilitator. For example, the UAS may effectively extend communication capability to disaster-affected people (who have lost cellular and Internet communication infrastructures on the ground) by quickly constructing a communication relay system among a number of UAVs. However, the distance between the centers of trajectories of two neighboring UAVs, referred to as IUD, plays an important role in the communication delay and throughput. For instance, the communication delay increases rapidly while the throughput is degraded when the IUD increases. In order to address this issue, in this article, we propose a simple but effective dynamic trajectory control algorithm for UAVs. Our proposed algorithm considers that UAVs with queue occupancy above a threshold are experiencing congestion resulting in communication delay. To alleviate the congestion at UAVs, our proposal adjusts their center coordinates and also, if needed, the radius of their trajectory. The performance of our proposal is evaluated through computer-based simulations. In addition, we conduct several field experiments in order to verify the effectiveness of UAV-aided networks.",
"title": ""
},
{
"docid": "588b6979d71edbcc82769c1782eacb5c",
"text": "Following a centuries-long decline in the rate of self-employment, a discontinuity in this downward trend is observed for many advanced economies starting in the 1970s and 1980s. In some countries the rate of self-employment appears to increase. At the same time, cross-sectional analysis shows a U-shaped relationship between start-up rates of enterprise and levels of economic development. We provide an overview of the empirical evidence concerning the relationship between independent entrepreneurship, also known as self-employment or business ownership, and economic development. We argue that the reemergence of independent entrepreneurship is based on at least two ‘revolutions’. If we distinguish between solo selfemployed at the lower end of the entrepreneurship spectrum, and ambitious and/or innovative entrepreneurs at the upper end, many advanced economies show a revival at both extremes. Policymakers in advanced economies should be aware of both revolutions and tailor their policies accordingly.",
"title": ""
},
{
"docid": "70859cc5754a4699331e479a566b70f1",
"text": "The relationship between mind and brain has philosophical, scientific, and practical implications. Two separate but related surveys from the University of Edinburgh (University students, n= 250) and the University of Liège (health-care workers, lay public, n= 1858) were performed to probe attitudes toward the mind-brain relationship and the variables that account for differences in views. Four statements were included, each relating to an aspect of the mind-brain relationship. The Edinburgh survey revealed a predominance of dualistic attitudes emphasizing the separateness of mind and brain. In the Liège survey, younger participants, women, and those with religious beliefs were more likely to agree that the mind and brain are separate, that some spiritual part of us survives death, that each of us has a soul that is separate from the body, and to deny the physicality of mind. Religious belief was found to be the best predictor for dualistic attitudes. Although the majority of health-care workers denied the distinction between consciousness and the soma, more than one-third of medical and paramedical professionals regarded mind and brain as separate entities. The findings of the study are in line with previous studies in developmental psychology and with surveys of scientists' attitudes toward the relationship between mind and brain. We suggest that the results are relevant to clinical practice, to the formulation of scientific questions about the nature of consciousness, and to the reception of scientific theories of consciousness by the general public.",
"title": ""
},
{
"docid": "76d260180b588f881f1009a420a35b3b",
"text": "Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving.",
"title": ""
},
{
"docid": "1f56f045a9b262ce5cd6566d47c058bb",
"text": "The growing popularity and development of data mining technologies bring serious threat to the security of individual,'s sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way so as to perform data mining algorithms effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., the data mining results) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss his privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation on the sensitive information. By differentiating the responsibilities of different users with respect to security of sensitive information, we would like to provide some useful insights into the study of PPDM.",
"title": ""
},
{
"docid": "013270914bfee85265f122b239c9fc4c",
"text": "Current study is with the aim to identify similarities and distinctions between irony and sarcasm by adopting quantitative sentiment analysis as well as qualitative content analysis. The result of quantitative sentiment analysis shows that sarcastic tweets are used with more positive tweets than ironic tweets. The result of content analysis corresponds to the result of quantitative sentiment analysis in identifying the aggressiveness of sarcasm. On the other hand, from content analysis it shows that irony owns two senses. The first sense of irony is equal to aggressive sarcasm with speaker awareness. Thus, tweets of first sense of irony may attack a specific target, and the speaker may tag his/her tweet irony because the tweet itself is ironic. These tweets though tagged as irony are in fact sarcastic tweets. Different from this, the tweets of second sense of irony is tagged to classify an event to be ironic. However, from the distribution in sentiment analysis and examples in content analysis, irony seems to be more broadly used in its second sense.",
"title": ""
},
{
"docid": "aae97dd982300accb15c05f9aa9202cd",
"text": "Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, with an active participation to joint works and community life with humans, as partner and as friends for us. The authors think that the emotion expression of a robot is effective in joint activities of human and robot. In addition, we also think that bipedal walking is necessary to robots which are active in human living environment. But, there was no robot which has those functions. And, it is not clear what kinds of functions are effective actually. Therefore we developed a new bipedal walking robot which is capable to express emotions. In this paper, we present the design and the preliminary evaluation of the new head of the robot with only a small number of degrees of freedom for facial expression.",
"title": ""
},
{
"docid": "e50ba614fc997f058f8d495b59c18af5",
"text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.",
"title": ""
},
{
"docid": "f7cdf631c12567fd37b04419eb8e4daa",
"text": "A multiple-beam photonic beamforming receiver is proposed and demonstrated. The architecture is based on a large port-count demultiplexer and fast tunable lasers to achieve a passive design, with independent beam steering for multiple beam operation. A single true time delay module with four independent beams is experimentally demonstrated, showing extremely smooth RF response in the -band, fast switching capabilities, and negligible crosstalk.",
"title": ""
},
{
"docid": "653bdddafdb40af00d5d838b1a395351",
"text": "Advances in electronic location technology and the coming of age of mobile computing have opened the door for location-aware applications to permeate all aspects of everyday life. Location is at the core of a large number of high-value applications ranging from the life-and-death context of emergency response to serendipitous social meet-ups. For example, the market for GPS products and services alone is expected to grow to US$200 billion by 2015. Unfortunately, there is no single location technology that is good for every situation and exhibits high accuracy, low cost, and universal coverage. In fact, high accuracy and good coverage seldom coexist, and when they do, it comes at an extreme cost. Instead, the modern localization landscape is a kaleidoscope of location systems based on a multitude of different technologies including satellite, mobile telephony, 802.11, ultrasound, and infrared among others. This lecture introduces researchers and developers to the most popular technologies and systems for location estimation and the challenges and opportunities that accompany their use. For each technology, we discuss the history of its development, the various systems that are based on it, and their trade-offs and their effects on cost and performance. We also describe technology-independent algorithms that are commonly used to smooth streams of location estimates and improve the accuracy of object tracking. Finally, we provide an overview of the wide variety of application domains where location plays a key role, and discuss opportunities and new technologies on the horizon. KEyWoRDS localization, location systems, location tracking, context awareness, navigation, location sensing, tracking, Global Positioning System, GPS, infrared location, ultrasonic location, 802.11 location, cellular location, Bayesian filters, RFID, RSSI, triangulation",
"title": ""
},
{
"docid": "80153230a2ffba44c827b965955eab9d",
"text": "Th e environmentally friendly Eff ective Microorganisms (EM) technology claims an enormous amount of benefi ts (claimed by the companies). Th e use of EM as an addictive to manure or as a spray directly in the fi elds may increase the microfauna diversity of the soil and many benefi ts are derived from that increase. It seems that suffi cient information is available about this new technology.",
"title": ""
},
{
"docid": "1778e5f82da9e90cbddfa498d68e461e",
"text": "Today’s business environment is characterized by fast and unexpected changes, many of which are driven by technological advancement. In such environment, the ability to respond effectively and adapt to the new requirements is not only desirable but essential to survive. Comprehensive and quick understanding of intricacies of market changes facilitates firm’s faster and better response. Two concepts contribute to the success of this scenario; organizational agility and business intelligence (BI). As of today, despite BI’s capabilities to foster organizational agility and consequently improve organizational performance, a clear link between BI and organizational agility has not been established. In this paper we argue that BI solutions have the potential to be facilitators for achieving agility. We aim at showing how BI capabilities can help achieve agility at operational, portfolio, and strategic levels.",
"title": ""
},
{
"docid": "4e1ba3178e40738ccaf2c2d76dd417d8",
"text": "We present the results of a recent large-scale subjective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to assess the visual quality of digital videos as perceived by human observers are becoming increasingly important, due to the large number of applications that target humans as the end users of video. Owing to the many approaches to video quality assessment (VQA) that are being developed, there is a need for a diverse independent public database of distorted videos and subjective scores that is freely available. The resulting Laboratory for Image and Video Engineering (LIVE) Video Quality Database contains 150 distorted videos (obtained from ten uncompressed reference videos of natural scenes) that were created using four different commonly encountered distortion types. Each video was assessed by 38 human subjects, and the difference mean opinion scores (DMOS) were recorded. We also evaluated the performance of several state-of-the-art, publicly available full-reference VQA algorithms on the new database. A statistical evaluation of the relative performance of these algorithms is also presented. The database has a dedicated web presence that will be maintained as long as it remains relevant and the data is available online.",
"title": ""
}
] | scidocsrr |