source_id | question | response | metadata |
---|---|---|---|
10 | I'm new to A.I. and I'd like to know, in simple words, what is the fuzzy logic concept? How does it help, and when is it used? | "As complexity rises, precise statements lose meaning and meaningful statements lose precision." (Lotfi Zadeh). Fuzzy logic deals with reasoning that is approximate rather than fixed and exact. This may make the reasoning more meaningful for a human: fuzzy logic is an extension of Boolean logic, introduced by Lotfi Zadeh in 1965 and based on the mathematical theory of fuzzy sets, which is a generalization of classical set theory.
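For instance (an illustrative sketch that is not part of the original answer), a fuzzy set such as "warm temperature" assigns every input a degree of membership between 0 and 1 instead of a strict yes/no:

```python
def warm_degree(temp_c):
    """Degree (0..1) to which a temperature counts as 'warm':
    a simple triangular membership function peaking at 25 °C."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10.0
    return (35 - temp_c) / 10.0

print(warm_degree(20))  # 0.5 -- partly warm, rather than strictly true or false
```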
By introducing a degree to which a condition is verified, thus allowing a condition to be in a state other than simply true or false, fuzzy logic provides very valuable flexibility for reasoning, which makes it possible to take inaccuracies and uncertainties into account. One advantage of fuzzy logic for formalizing human reasoning is that its rules are stated in natural language. For example, here are some rules of conduct that a driver follows, assuming that he does not want to lose his driver’s licence: Intuitively, it thus seems that input variables like the ones in this example are assessed approximately by the brain, much like the degree to which a condition is verified in fuzzy logic. I've written a short introduction to fuzzy logic that goes into a bit more detail but should be very accessible. | {
"source": [
"https://ai.stackexchange.com/questions/10",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
17 | I've heard the idea of the technological singularity; what is it, and how does it relate to Artificial Intelligence? Is this the theoretical point where Artificial Intelligence machines have progressed to the point where they grow and learn on their own, beyond what humans can do, and their growth takes off? How would we know when we reach this point? | The technological singularity is a theoretical point in time at which a self-improving artificial general intelligence becomes able to understand and manipulate concepts outside of the human brain's range, that is, the moment when it can understand things humans, by biological design, can't. The fuzziness about the singularity comes from the fact that, from the singularity onwards, history is effectively unpredictable. Humankind would be unable to predict any future events, or explain any present events, as science itself becomes incapable of describing machine-triggered events. Essentially, machines would think of us the same way we think of ants. Thus, we can make no predictions past the singularity. Furthermore, as a logical consequence, we'd be unable to define the point at which the singularity may occur at all, or even recognize it when it happens. However, in order for the singularity to take place, AGI needs to be developed, and whether that is possible is quite a hot debate right now. Moreover, an algorithm that creates superhuman intelligence (or superintelligence) out of bits and bytes would have to be designed. By definition, a human programmer wouldn't be able to do such a thing, as his/her brain would need to be able to comprehend concepts beyond its range. There is also the argument that an intelligence explosion (the mechanism by which a technological singularity would theoretically come about) would be impossible: the difficulty of the design challenge of making itself more intelligent would grow in proportion to its intelligence, and might outpace the intelligence available to solve it. Also, there are related theories involving machines taking over humankind and all of that sci-fi narrative. However, that's unlikely to happen if Asimov's laws are followed appropriately. Even if Asimov's laws were not enough, a series of constraints would still be necessary in order to avoid the misuse of AGI by ill-intentioned individuals, and Asimov's laws are the nearest we have to that. | {
"source": [
"https://ai.stackexchange.com/questions/17",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/55/"
]
} |
35 | These two terms seem to be related, especially in their application in computer science and software engineering. Is one a subset of the other? Is one a tool used to build a system for the other? What are their differences and why are they significant? | Machine learning has been defined by many people in multiple (often similar) ways [1, 2]. One definition says that machine learning (ML) is the field of study that gives computers the ability to learn without being explicitly programmed. Given the above definition, we might say that machine learning is geared towards problems for which we have (lots of) data (experience), from which a program can learn and get better at a task. Artificial intelligence has many more aspects, where machines may not get better at tasks by learning from data, but may exhibit intelligence through rules (e.g. expert systems like Mycin), logic, or algorithms (e.g. path-finding). The book Artificial Intelligence: A Modern Approach shows more research fields of AI, like Constraint Satisfaction Problems, Probabilistic Reasoning or Philosophical Foundations. | {
"source": [
"https://ai.stackexchange.com/questions/35",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/69/"
]
} |
36 | What aspects of quantum computers, if any, can help to further develop Artificial Intelligence? | Quantum computers are super awesome at matrix multiplication, with some limitations . Quantum superposition allows each bit to be in a lot more states than just zero or one, and quantum gates can fiddle those bits in many different ways. Because of that, a quantum computer can process a lot of information at once for certain applications. One of those applications is the Fourier transform , which is useful in a lot of problems, like signal analysis and array processing. There's also Grover's quantum search algorithm , which finds the single value for which a given function returns something different. If an AI problem can be expressed in a mathematical form amenable to quantum computing , it can receive great speedups. Sufficient speedups could transform an AI idea from "theoretically interesting but insanely slow" to "quite practical once we get a good handle on quantum computing." | {
"source": [
"https://ai.stackexchange.com/questions/36",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/29/"
]
} |
74 | I've heard the terms strong-AI and weak-AI used. Are these well-defined terms or subjective ones? How are they generally defined? | The terms strong and weak don't actually refer to processing or optimization power, or any interpretation leading to "strong AI" being stronger than "weak AI". It holds conveniently in practice, but the terms come from elsewhere. In 1980, John Searle formulated the following hypotheses: AI hypothesis, strong form: an AI system can think and have a mind (in the philosophical definition of the term); AI hypothesis, weak form: an AI system can only act like it thinks and has a mind. So strong AI is a shortcut for an AI system that verifies the strong AI hypothesis; similarly for the weak form. The terms have since evolved: strong AI refers to AI that performs as well as humans (who have minds), weak AI refers to AI that doesn't. The problem with these definitions is that they're fuzzy. For example, AlphaGo is an example of weak AI, but is "strong" by Go-playing standards. A hypothetical AI replicating a human baby would be a strong AI, while being "weak" at most tasks. Other terms exist: Artificial General Intelligence (AGI), which has cross-domain capability (like humans) and can learn from a wide range of experiences (like humans), among other features. Artificial Narrow Intelligence refers to systems bound to a certain range of tasks (where they may nevertheless have superhuman ability), lacking the capacity to significantly improve themselves. Beyond AGI, we find Artificial Superintelligence (ASI), based on the idea that a system with the capabilities of an AGI, without the physical limitations of humans, would learn and improve far beyond human level. | {
"source": [
"https://ai.stackexchange.com/questions/74",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/55/"
]
} |
86 | How is a neural network having the "deep" adjective actually distinguished from other similar networks? | The difference is mostly in the number of layers. For a long time, it was believed that "1-2 hidden layers are enough for most tasks" and it was impractical to use more than that, because training neural networks can be very computationally demanding. Nowadays, computers are capable of much more, so people have started to use networks with more layers and found that they work very well for some tasks. The word "deep" is there simply to distinguish these networks from the traditional, "more shallow" ones. | {
"source": [
"https://ai.stackexchange.com/questions/86",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
92 | The following page / study demonstrates that deep neural networks are easily fooled, giving high-confidence predictions for unrecognisable images. How is this possible? Can you please explain, ideally in plain English? | First up, those images (even the first few) aren't complete trash despite being junk to humans; they're actually finely tuned with various advanced techniques, including another neural network. The deep neural network is the pre-trained network modeled on AlexNet provided by Caffe . To evolve images, both the directly encoded and indirectly encoded images, we use the Sferes evolutionary framework. The entire code base to conduct the evolutionary experiments can be download [sic] here . The code for the images produced by gradient ascent is available here . Images that are actually random junk were correctly recognized as nothing meaningful: In response to an unrecognizable image, the networks could have output a low confidence for each of the 1000 classes, instead of an extremely high confidence value for one of the classes. In fact, they do just that for randomly generated images (e.g. those in generation 0 of the evolutionary run). The original goal of the researchers was to use the neural networks to automatically generate images that look like the real things (by getting the recognizer's feedback and trying to change the image to get a more confident result), but they ended up creating the above art. Notice how even in the static-like images there are little splotches - usually near the center - which, it's fair to say, are triggering the recognition. We were not trying to produce adversarial, unrecognizable images. Instead, we were trying to produce recognizable images, but these unrecognizable images emerged. Evidently, these images had just the right distinguishing features to match what the AI looked for in pictures. The "paddle" image does have a paddle-like shape, the "bagel" is round and the right color, the "projector" image is a camera-lens-like thing, the "computer keyboard" is a bunch of rectangles (like the individual keys), and the "chainlink fence" legitimately looks like a chain-link fence to me. Figure 8. Evolving images to match DNN classes produces a tremendous diversity of images. Shown are images selected to showcase diversity from 5 evolutionary runs. The diversity suggests that the images are non-random, but that instead evolutions producing [sic] discriminative features of each target class. Further reading: the original paper (large PDF) | {
"source": [
"https://ai.stackexchange.com/questions/92",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
111 | Obviously, self-driving cars aren't perfect, so imagine that the Google car (as an example) got into a difficult situation. Here are a few examples of unfortunate situations caused by a set of events: The car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers), Avoiding killing the rider of the motorcycle considering that the probability of survival is greater for the passenger of the car, Killing an animal on the street in favour of a human being, Purposely changing lanes to crash into another car to avoid killing a dog, And here are a few dilemmas: Does the algorithm recognize the difference between a human being and an animal? Does the size of the human being or animal matter? Does it count how many passengers it has vs. people in the front? Does it "know" when babies/children are on board? Does it take into account the age (e.g. killing the older first)? How would an algorithm decide what it should do from the technical perspective? Is it aware of the above (counting the probability of kills), or not (killing people just to avoid its own destruction)? Related articles: Why Self-Driving Cars Must Be Programmed to Kill How to Help Self-Driving Cars Make Ethical Decisions | How could self-driving cars make ethical decisions about who to kill? They shouldn't. Self-driving cars are not moral agents. Cars fail in predictable ways. Horses fail in predictable ways. the car is heading toward a crowd of 10 people crossing the road, so
it cannot stop in time, but it can avoid killing 10 people by hitting
the wall (killing the passengers), In this case, the car should slam on the brakes. If the 10 people die, that's just unfortunate. We simply cannot trust all of our beliefs about what is taking place outside the car. What if those 10 people are really robots made to look like people? What if they're trying to kill you? avoiding killing the rider of the motorcycle considering that the
probability of survival is greater for the passenger of the car, Again, hard-coding these kinds of sentiments into a vehicle opens the rider of the vehicle up to all kinds of attacks, including "fake" motorcyclists. Humans are barely equipped to make these decisions on their own, if at all. When in doubt, just slam on the brakes. killing an animal on the street in favour of a human being, Again, just hit the brakes. What if it was a baby? What if it was a bomb? changing lanes to crash into another car to avoid killing a dog, Nope. The dog was in the wrong place at the wrong time. The other car wasn't. Just slam on the brakes, as safely as possible. Does the algorithm recognize the difference between a human being and an animal? Does a human? Not always. What if the human has a gun? What if the animal has large teeth? Is there no context? Does the size of the human being or animal matter? Does it count how many passengers it has vs. people in the front? Does it "know" when babies/children are on board? Does it take into account the age (e.g. killing the older first)? Humans can't agree on these things. If you ask a cop what to do in any of these situations, the answer won't be, "You should have swerved left, weighed all the relevant parties in your head, assessed the relevant ages between all parties, then veered slightly right, and you would have saved 8% more lives." No, the cop will just say, "You should have brought the vehicle to a stop, as quickly and safely as possible." Why? Because cops know people normally aren't equipped to deal with high-speed crash scenarios. Our target for "self-driving car" should not be 'a moral agent on par with a human.' It should be an agent with the reactive complexity of a cockroach, which fails predictably. | {
"source": [
"https://ai.stackexchange.com/questions/111",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
154 | I'm aware that neural networks are probably not designed to do that; however, asking hypothetically, is it possible to train a deep neural network (or similar) to solve math equations? So given the 3 inputs: 1st number, operator sign represented by a number (1 - + , 2 - - , 3 - / , 4 - * , and so on), and the 2nd number, then after training the network should give me the valid results. Example 1 ( 2+2 ): Input 1: 2 ; Input 2: 1 ( + ); Input 3: 2 ; Expected output: 4 Input 1: 10 ; Input 2: 2 ( - ); Input 3: 10 ; Expected output: 0 Input 1: 5 ; Input 2: 4 ( * ); Input 3: 5 ; Expected output: 25 and so on. The above can be extended to more sophisticated examples. Is that possible? If so, what kind of network can learn/achieve that? | Yes, it has been done! However, the applications aren't to replace calculators or anything like that. The lab I'm associated with develops neural network models of equational reasoning to better understand how humans might solve these problems. This is a part of the field known as Mathematical Cognition . Unfortunately, our website isn't terribly informative, but here's a link to an example of such work. Apart from that, recent work on extending neural networks to include external memory stores (e.g. Neural Turing Machines) was used to solve math problems as a good proof of concept. This is because many arithmetic problems involve long procedures with stored intermediate results. See the sections of this paper on long binary addition and multiplication. | {
"source": [
"https://ai.stackexchange.com/questions/154",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
1,294 | Geoffrey Hinton has been researching something he calls "capsules theory" in neural networks. What is it? How do capsule neural networks work? | It appears to not be published yet; the best available online are these slides for this talk . (Several people reference an earlier talk with this link , but sadly it's broken at time of writing this answer.) My impression is that it's an attempt to formalize and abstract the creation of subnetworks inside a neural network. That is, if you look at a standard neural network, layers are fully connected (that is, every neuron in layer 1 has access to every neuron in layer 0, and is itself accessed by every neuron in layer 2). But this isn't obviously useful; one might instead have, say, n parallel stacks of layers (the 'capsules') that each specializes on some separate task (which may itself require more than one layer to complete successfully). If I'm imagining its results correctly, this more sophisticated graph topology seems like something that could easily increase both the effectiveness and the interpretability of the resulting network. | {
"source": [
"https://ai.stackexchange.com/questions/1294",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/144/"
]
} |
1,396 | On the Wikipedia page about AI, we can read: Optical character recognition is no longer perceived as an exemplar of "artificial intelligence", having become a routine technology. On the other hand, the MNIST database of handwritten digits is specially designed for training and testing neural networks and their error rates (see: Classifiers ). So, why does the above quote state that OCR is no longer an example of AI? | Whenever a problem becomes solvable by a computer, people start arguing that it does not require intelligence. John McCarthy is often quoted: "As soon as it works, no one calls it AI anymore" ( Referenced in CACM ). One of my teachers in college said that in the 1950s, a professor was asked what he thought was intelligent for a machine. The professor reputedly answered that if a vending machine gave him the right change, that would be intelligent. Later, playing chess was considered intelligent. However, computers can now defeat grandmasters at chess, and people are no longer saying that it is a form of intelligence. Now we have OCR. It's already stated in another answer that our methods do not have the recognition facilities of a 5-year-old. As soon as this is achieved, people will say "meh, that's not intelligence, a 5-year-old can do that!" A psychological bias, a need to state that we are somehow superior to machines, is at the root of this. | {
"source": [
"https://ai.stackexchange.com/questions/1396",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
1,420 | Are there any research teams that have attempted to create, or have already created, an AI robot that is as close to intelligent as those found in the movies Ex Machina or I, Robot? I'm not talking about full awareness, but an artificial being that can make its own decisions and perform the physical and intellectual tasks that a human being can do. | We are absolutely nowhere near, nor do we have any idea how to bridge the gap between what we can currently do and what is depicted in these films. The current trend for DL approaches (coupled with the emergence of data science as a mainstream discipline) has led to a lot of popular interest in AI. However, researchers and practitioners would do well to learn the lessons of the 'AI Winter' and not engage in hubris or read too much into current successes. For example: Success in transfer learning is very limited. The 'hard problem' (i.e. presenting the 'raw, unwashed environment' to the machine and having it come up with a solution from scratch) is not being
addressed by DL to the extent that it is popularly portrayed: expert human knowledge is still required to help decide how the input should be framed, tune parameters, interpret output etc. Someone who has enthusiasm for AGI would hopefully agree that the 'hard problem' is actually the only one that matters. Some years ago, a famous cognitive scientist said "We have yet to successfully represent even a single concept on a computer". In my opinion, recent research trends have done little to change this. All of this perhaps sounds pessimistic - it's not intended to. None of us want another AI Winter, so we should challenge (and be honest about) the limits of our current techniques rather than mythologizing them. | {
"source": [
"https://ai.stackexchange.com/questions/1420",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
1,479 | Do scientists or research experts know, behind the scenes, what is happening inside a complex "deep" neural network with at least millions of connections firing at an instant? Do they understand the process behind this (e.g. what is happening inside and how it works exactly), or is it a subject of debate? For example, this study says: However there is no clear understanding of why they perform so well, or how they might be improved. So does this mean that scientists actually don't know how complex convolutional network models work? | There are many approaches that aim to make a trained neural network more interpretable and less like a "black box", specifically the convolutional neural networks that you've mentioned. Visualizing the activations and layer weights Activations visualization is the first obvious and straightforward one. For ReLU networks, the activations usually start out looking relatively blobby and dense, but as the training progresses the activations usually become more sparse (most values are zero) and localized. This sometimes shows what exactly a particular layer is focused on when it sees an image. Another great work on activations that I'd like to mention is deepvis, which shows the reaction of every neuron at each layer, including pooling and normalization layers. Here's how they describe it:
“triangulate” what feature a neuron has learned, which can help you
better understand how DNNs work. The second common strategy is to visualize the weights (filters). These are usually most interpretable on the first CONV layer which is looking directly at the raw pixel data, but it is possible to also show the filter weights deeper in the network. For example, the first layer usually learns gabor-like filters that basically detect edges and blobs. Occlusion experiments Here's the idea. Suppose that a ConvNet classifies an image as a dog. How can we be certain that it’s actually picking up on the dog in the image as opposed to some contextual cues from the background or some other miscellaneous object? One way of investigating which part of the image some classification prediction is coming from is by plotting the probability of the class of interest (e.g. dog class) as a function of the position of an occluder object.
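In code, the occlusion idea is just a double loop over patch positions. The following is a minimal sketch that is not part of the original answer; the `predict_proba` callable, patch size, and stride are illustrative assumptions:

```python
import numpy as np

def occlusion_heatmap(image, predict_proba, target_class, patch=32, stride=16):
    """Slide a zeroed-out patch over the image and record the target-class
    probability at each position; dips in the map mark the regions the
    network actually relies on for its prediction."""
    h, w, _ = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch, :] = 0  # replace this region with zeros
            heatmap[i, j] = predict_proba(occluded)[target_class]
    return heatmap
```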
If we iterate over regions of the image like this, replace each with all zeros, and check the classification result, we can build a 2-dimensional heat map of what's most important for the network on a particular image. This approach has been used in Matthew Zeiler’s Visualizing and Understanding Convolutional Networks (that you refer to in your question): Deconvolution Another approach is to synthesize an image that causes a particular neuron to fire, basically what the neuron is looking for. The idea is to compute the gradient with respect to the image, instead of the usual gradient with respect to the weights. So you pick a layer, set the gradient there to be all zeros except for a one at the neuron of interest, and backpropagate to the image. Deconv actually does something called guided backpropagation to make a nicer-looking image, but it's just a detail. Similar approaches to other neural networks I highly recommend this post by Andrej Karpathy, in which he plays a lot with Recurrent Neural Networks (RNNs). In the end, he applies a similar technique to see what the neurons actually learn: The neuron highlighted in this image seems to get very excited about
URLs and turns off outside of the URLs. The LSTM is likely using this
neuron to remember if it is inside a URL or not. Conclusion I've mentioned only a small fraction of the results in this area of research. It's pretty active, and new methods that shed light on the inner workings of neural networks appear each year. To answer your question, there's always something that scientists don't know yet, but in many cases they have a good picture (literally) of what's going on inside and can answer many particular questions. To me, the quote from your question simply highlights the importance of research not only on accuracy improvement, but on the inner structure of the network as well. As Matt Zeiler explains in this talk, sometimes a good visualization can lead, in turn, to better accuracy. | {
"source": [
"https://ai.stackexchange.com/questions/1479",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
1,507 | I believe the term artificial intelligence (AI) is overused nowadays. For example, people see that something is self-moving and they call it AI, even if it's on autopilot (like cars or planes) or there is some simple algorithm behind it. What are the minimum general requirements so that we can say something is AI? | It's true that the term has become a buzzword, and is now widely used to the point of confusion - however, if you look at the definition provided by Stuart Russell and Peter Norvig, they write it as follows: We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems. We explain the role of learning as extending the reach of the designer into unknown environments, and we show how that role constrains agent design, favoring explicit knowledge representation and reasoning. Artificial Intelligence: A Modern Approach - Stuart Russell and Peter Norvig So the example you cite, "autopilot for cars/planes", is actually a (famous) form of AI, as it has to use a form of knowledge representation to deal with unknown environments and circumstances. Ultimately, these systems also collect data so that the knowledge representation can be updated to deal with the new inputs that they have found. They do this with autopilot for cars all the time. So, directly to your question, for something to be considered as "having AI", it needs to be able to deal with unknown environments/circumstances in order to achieve its objective/goal, and render knowledge in a manner that provides for new learning/information to be added easily. There are many different types of well-defined knowledge representation methods, ranging from the popular neural net through to probabilistic models like Bayesian networks (belief networks) - but fundamentally, actions by the system must be derived from whichever representation of knowledge you choose for it to be considered as AI. | {
"source": [
"https://ai.stackexchange.com/questions/1507",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
1,508 | I believe normally you can use genetic programming for sorting, however I'd like to check whether it's possible using ANN. Given the unsorted text data from input, which neural network is suitable for doing sorting tasks? | | {
"source": [
"https://ai.stackexchange.com/questions/1508",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/8/"
]
} |
1,768 | In Portal 2 we see that AI's can be " killed " by thinking about a paradox. I assume this works by forcing the AI into an infinite loop which would essentially " freeze " the computer's consciousness. Questions: Would this confuse the AI technology we have today to the point of destroying it? If so, why? And if not, could it be possible in the future? | This classic problem exhibits a basic misunderstanding of what an artificial general intelligence would likely entail. First, consider this programmer's joke: The programmer's wife couldn't take it anymore. Every discussion with her husband turned into an argument over semantics, picking over every piece of trivial detail. One day she sent him to the grocery store to pick up some eggs. On his way out the door, she said, "While you are there, pick up milk." And he never returned. It's a cute play on words, but it isn't terribly realistic. You are assuming because AI is being executed by a computer, it must exhibit this same level of linear, unwavering pedantry outlined in this joke. But AI isn't simply some long-winded computer program hard-coded with enough if-statements and while-loops to account for every possible input and follow the prescribed results. while (command not completed)
    find_solution()
"source": [
"https://ai.stackexchange.com/questions/1768",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/1812/"
]
} |
2,008 | As far as I can tell, neural networks have a fixed number of neurons in the input layer. If neural networks are used in a context like NLP, sentences or blocks of text of varying sizes are fed to a network.
How is the varying input size reconciled with the fixed size of the input layer of the network? In other words, how is such a network made flexible enough to deal with an input that might be anywhere from one word to multiple pages of text? If my assumption of a fixed number of input neurons is wrong and new input neurons are added to/removed from the network to match the input size I don't see how these can ever be trained. I give the example of NLP, but lots of problems have an inherently unpredictable input size. I'm interested in the general approach for dealing with this. For images, it's clear you can up/downsample to a fixed size, but, for text, this seems to be an impossible approach since adding/removing text changes the meaning of the original input. | Three possibilities come to mind. The easiest is the zero-padding . Basically, you take a rather big input size and just add zeroes if your concrete input is too small. Of course, this is pretty limited and certainly not useful if your input ranges from a few words to full texts. Recurrent NNs (RNN) are a very natural NN to choose if you have texts of varying size as input. You input words as word vectors (or embeddings) just one after another and the internal state of the RNN is supposed to encode the meaning of the full string of words. This is one of the earlier papers. Another possibility is using recursive NNs . This is basically a form of preprocessing in which a text is recursively reduced to a smaller number of word vectors until only one is left - your input, which is supposed to encode the whole text. This makes a lot of sense from a linguistic point of view if your input consists of sentences (which can vary a lot in size), because sentences are structured recursively. For example, the word vector for "the man", should be similar to the word vector for "the man who mistook his wife for a hat", because noun phrases act like nouns, etc. Often, you can use linguistic information to guide your recursion on the sentence. If you want to go way beyond the Wikipedia article, this is probably a good start . | {
"source": [
"https://ai.stackexchange.com/questions/2008",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/2522/"
]
} |
2,236 | I've heard before from computer scientists and from researchers in the area of AI that Lisp is a good language for research and development in artificial intelligence. Does this still apply, with the proliferation of neural networks and deep learning? What was their reasoning for this? What languages are current deep-learning systems built in? | First, I guess that you mean Common Lisp (which is a standard language specification, see its HyperSpec ) with efficient implementations (à la SBCL ). But some recent implementations of Scheme could also be relevant (with good implementations such as Bigloo or Chicken/Scheme ). Both Common Lisp and Scheme (and even Clojure ) are from the same Lisp family. And as a scripting language driving big data or machine learning applications, Guile might be a useful replacement for Python and is also a Lisp dialect. BTW, I do recommend reading SICP , an excellent introduction to programming using Scheme. Then, Common Lisp (and other dialects of Lisp) is great for symbolic AI. However, many recent machine learning libraries are coded in more mainstream languages; for example, TensorFlow is coded in C++ & Python. Deep learning libraries are mostly coded in C++ or Python or C (and sometimes using OpenCL or CUDA for the GPU computing parts). Common Lisp is great for symbolic artificial intelligence because: it has very good implementations (e.g. SBCL , which compiles every expression given to the REPL to machine code); it is homoiconic , so it is easy to deal with programs as data, and in particular it is easy to generate [sub-]programs, that is, to use meta-programming techniques; it has a Read-Eval-Print Loop to ease interactive programming; it provides very powerful macro machinery (essentially, you define your own domain-specific sublanguage for your problem), much more powerful than in other languages like C; it mandates a garbage collector (even code can be garbage collected); it provides many container abstract data types, and can easily handle symbols; and you can code both high-level (dynamically typed) and low-level (more or less statically typed) code, through appropriate annotations. However, most machine learning & neural network libraries are not coded in CL. Notice that neither neural networks nor deep learning is in the symbolic artificial intelligence field. See also this question . Several symbolic AI systems like Eurisko or CyC have been developed in CL (actually, in some DSL built above CL). Notice that the programming language might not be very important. In the Artificial General Intelligence research topic, some people work on the idea of an AI system which would generate all its own code (so they are designing it with a bootstrapping approach). Then, the code which is generated by such a system can even be generated in low-level programming languages like C. See J. Pitrat's blog , which has inspired the RefPerSys project. | {
"source": [
"https://ai.stackexchange.com/questions/2236",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/3323/"
]
} |
3,494 | First of all, I'm a beginner studying AI and this is not an opinion-oriented question or one to compare programming languages. I'm not implying that Python is the best language. But the fact is that most of the famous AI frameworks have primary support for Python. They can even be multi-language supported, for example, TensorFlow, which supports Python and C++, or CNTK from Microsoft, which supports C# and C++, but the most used is Python (I mean more documentation, examples, a bigger community, support, etc.). Even if you choose C# (developed by Microsoft and my primary programming language), you must have the Python environment set up. I read in other forums that Python is preferred for AI because the code is simplified and cleaner, good for fast prototyping. I was watching a movie with AI themes (Ex Machina). In some scenes, the main character hacks the interface of the house automation. Guess which language was on the scene? Python. So, what is the big deal with Python? Why is there a growing association between Python and AI? | Python comes with a huge number of readily available libraries, many of which are for Artificial Intelligence and Machine Learning. Some of these libraries are TensorFlow (which is a high-level neural network library), scikit-learn (for data mining, data analysis and machine learning), pylearn2 (more flexible than scikit-learn), etc. The list keeps going and never ends. You can find some libraries here . Python also has an easy-to-use interface for OpenCV. What makes Python a favorite for everyone is its power and ease of use. For other languages, students and researchers need to get to know the language before getting into ML or AI with that language. This is not the case with Python. Even a programmer with very basic knowledge can easily handle Python. Apart from that, the time someone spends on writing and debugging code in Python is way less when compared to C, C++ or Java. This is exactly what students of AI and ML want. They don't want to spend time debugging code for syntax errors; they want to spend more time on their algorithms and heuristics related to AI and ML. Not just the libraries, but also their tutorials and guides for handling their interfaces, are easily available online. People build their own libraries and upload them to GitHub or elsewhere to be used by others. All these features make Python suitable for them. | {
"source": [
"https://ai.stackexchange.com/questions/3494",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/7268/"
]
} |
3,938 | Suppose that I have 10K images of sizes $2400 \times 2400$ to train a CNN. How do I handle such large image sizes without downsampling? Here are a few more specific questions. Are there any techniques to handle such large images which are to be trained? What batch size is reasonable to use? Are there any precautions to take, or any increase and decrease in hardware resources that I can do? Here are the system requirements Ubuntu 16.04 64-bit
RAM 16 GB
GPU 8 GB
HDD 500 GB | How do I handle such large image sizes without downsampling? I assume that by downsampling you mean scaling down the input before passing it into the CNN. A convolutional layer allows you to downsample the image within the network by picking a large stride, which is going to save resources for the next layers. In fact, that's what it has to do, otherwise your model won't fit in GPU memory. Are there any techniques to handle such large images which are to be trained? Commonly, researchers scale the images to a reasonable size. But if that's not an option for you, you'll need to restrict your CNN. In addition to downsampling in early layers, I would recommend getting rid of the FC layer (which normally takes most of the parameters) in favor of a convolutional layer . Also, you will have to stream your data in each epoch, because it won't fit into your GPU. Note that none of this will prevent heavy computational load in the early layers, exactly because the input is so large: convolution is an expensive operation and the first layers will perform a lot of them in each forward and backward pass. In short, training will be slow. What batch size is reasonable to use? Here's another problem. A single image takes 2400x2400x3x4 bytes (3 channels and 4 bytes per pixel), which is ~69 MB, so you can hardly afford even a batch size of 10. A more realistic value would be 5. Note that most of the memory will be taken by the CNN parameters. I think in this case it makes sense to reduce the size by using 16-bit values rather than 32-bit - this way you'll be able to double the batch size. Are there any precautions to take, or any increase and decrease in hardware resources that I can do? Your bottleneck is GPU memory. If you can afford another GPU, get it and split the network across them. Everything else is insignificant compared to GPU memory. | {
"source": [
"https://ai.stackexchange.com/questions/3938",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/6382/"
]
} |
4,456 | What's the difference between model-free and model-based reinforcement learning? It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based. In that case, when would model-free learners be appropriate? | What's the difference between model-free and model-based reinforcement learning? In Reinforcement Learning, the terms "model-based" and "model-free" do not refer to the use of a neural network or other statistical learning model to predict values, or even to predict next state (although the latter may be used as part of a model-based algorithm and be called a "model" regardless of whether the algorithm is model-based or model-free). Instead, the term refers strictly as to whether, whilst during learning or acting, the agent uses predictions of the environment response. The agent can use a single prediction from the model of next reward and next state (a sample), or it can ask the model for the expected next reward, or the full distribution of next states and next rewards. These predictions can be provided entirely outside of the learning agent - e.g. by computer code that understands the rules of a dice or board game. Or they can be learned by the agent, in which case they will be approximate. Just because there is a model of the environment implemented, does not mean that a RL agent is "model-based". To qualify as "model-based", the learning algorithms have to explicitly reference the model: Algorithms that purely sample from experience such as Monte Carlo Control, SARSA, Q-learning, Actor-Critic are "model free" RL algorithms. They rely on real samples from the environment and never use generated predictions of next state and next reward to alter behaviour (although they might sample from experience memory, which is close to being a model). The archetypical model-based algorithms are Dynamic Programming (Policy Iteration and Value Iteration) - these all use the model's predictions or distributions of next state and reward in order to calculate optimal actions. Specifically in Dynamic Programming, the model must provide state transition probabilities, and expected reward from any state, action pair. Note this is rarely a learned model. Basic TD learning, using state values only, must also be model-based in order to work as a control system and pick actions. In order to pick the best action, it needs to query a model that predicts what will happen on each action, and implement a policy like $\pi(s) = \text{argmax}_a \sum_{s',r} p(s',r|s,a)(r + v(s'))$ where $p(s',r|s,a)$ is the probability of receiving reward $r$ and next state $s'$ when taking action $a$ in state $s$ . That function $p(s',r|s,a)$ is essentially the model. The RL literature differentiates between "model" as a model of the environment for "model-based" and "model-free" learning, and use of statistical learners, such as neural networks. In RL, neural networks are often employed to learn and generalise value functions, such as the Q value which predicts total return (sum of discounted rewards) given a state and action pair. Such a trained neural network is often called a "model" in e.g. supervised learning. However, in RL literature, you will see the term "function approximator" used for such a network to avoid ambiguity. It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based. I think here you are using the general understanding of the word "model" to include any structure that makes useful predictions. 
That would apply to e.g. table of Q values in SARSA. However, as explained above, that's not how the term is used in RL. So although your understanding that RL builds useful internal representations is correct, you are not technically correct that this can be used to re-frame between "model-free" as "model-based", because those terms have a very specific meaning in RL. In that case, when would model-free learners be appropriate? Generally with current state of art in RL, if you don't have an accurate model provided as part of the problem definition, then model-free approaches are often superior. There is lots of interest in agents that build predictive models of the environment, and doing so as a "side effect" (whilst still being a model-free algorithm) can still be useful - it may regularise a neural network or help discover key predictive features that can also be used in policy or value networks. However, model-based agents that learn their own models for planning have a problem that inaccuracy in these models can cause instability (the inaccuracies multiply the further into the future the agent looks). Some promising inroads are being made using imagination-based agents and/or mechanisms for deciding when and how much to trust the learned model during planning. Right now (in 2018), if you have a real-world problem in an environment without an explicit known model at the start, then the safest bet is to use a model-free approach such as DQN or A3C. That may change as the field is moving fast and new more complex architectures could well be the norm in a few years. | {
"source": [
"https://ai.stackexchange.com/questions/4456",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/10720/"
]
} |
4,864 | What are "bottlenecks" in the context of neural networks? This term is mentioned, for example, in this TensorFlow article , which also uses the term "bottleneck values". How does one calculate bottleneck values? How do these values help image classification? Please explain in simple words. | The bottleneck in a neural network is just a layer with fewer neurons than the layer below or above it. Having such a layer encourages the network to compress feature representations (of salient features for the target variable) to best fit in the available space. Improvements to compression occur due to the goal of reducing the cost function, as for all weight updates. In a CNN (such as Google's Inception network), bottleneck layers are added to reduce the number of feature maps (aka channels) in the network, which, otherwise, tend to increase in each layer. This is achieved by using 1x1 convolutions with fewer output channels than input channels. You don't usually calculate weights for bottleneck layers directly, the training process handles that, as for all other weights. Selecting a good size for a bottleneck layer is something you have to guess, and then experiment, in order to find network architectures that work well. The goal here is usually finding a network that generalises well to new images, and bottleneck layers help by reducing the number of parameters in the network whilst still allowing it to be deep and represent many feature maps. | {
"source": [
"https://ai.stackexchange.com/questions/4864",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/11837/"
]
} |
5,246 | For instance, the title of this paper reads: "Sample Efficient Actor-Critic with Experience Replay". What is sample efficiency , and how can importance sampling be used to achieve it? | An algorithm is sample efficient if it can get the most out of every sample. Imagine yourself playing PONG for the first time. As a human, it would take you within seconds to learn how to play the game based on very few samples. This makes you very "sample efficient". Modern RL algorithms would have to see $100$ thousand times more data than you so they are, relatively, sample inefficient. In the case of off-policy learning, not all samples are useful in that they are not part of the distribution that we are interested in. Importance sampling is a technique to filter these samples. Its original use was to understand one distribution while only being able to take samples from a different but related distribution. In RL, this often comes up when trying to learn off-policy. Namely, that your samples are produced by some behaviour policy but you want to learn a target policy. Thus one needs to measure how important/similar the samples generated are to samples that the target policy may have made. Thus, one is sampling from a weighted distribution which favours these "important" samples. There are many methods, however, for characterizing what is important, and their effectiveness may differ depending on the application. The most common approach to this off-policy style of importance sampling is finding a ratio of how likely a sample is to be generated by the target policy. The paper On a Connection between Importance Sampling and
the Likelihood Ratio Policy Gradient (2010) by Tang and Abbeel covers this topic. | {
"source": [
"https://ai.stackexchange.com/questions/5246",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/12574/"
]
} |
5,493 | It is said that activation functions in neural networks help introduce non-linearity . What does this mean? What does non-linearity mean in this context? How does the introduction of this non-linearity help? Are there any other purposes of activation functions ? | Almost all of the functionalities provided by the non-linear activation functions are given by other answers. Let me sum them up: First, what does non-linearity mean? It means something (a function in this case) which is not linear with respect to a given variable/variables i.e. $f(c1.x1 + c2.x2...cn.xn + b) != c1.f(x1) + c2.f(x2) ... cn.f(xn) + f(b).$ NOTE: There is some ambiguity about how one might define linearity. In polynomial equations we define linearity in somewhat a different way as compared to in vectors or some systems which take an input $x$ and give an output $f(x)$ . See the second answer . What does non-linearity mean in this context? It means that the Neural Network can successfully approximate functions (up-to a certain error $e$ decided by the user) which does not follow linearity or it can successfully predict the class of a function that is divided by a decision boundary that is not linear. Why does it help? I hardly think you can find any physical world phenomenon which follows linearity straightforwardly. So you need a non-linear function that can approximate the non-linear phenomenon. Also, a good intuition would be any decision boundary or a function is a linear combination of polynomial combinations of the input features (so ultimately non-linear). Purposes of activation function? In addition to introducing non-linearity, every activation function has its own features. Sigmoid $\frac{1} {(1 + e ^ {-(w1*x1...wn*xn + b)})}$ This is one of the most common activation function and is monotonically increasing everywhere. This is generally used at the final output node as it squashes values between 0 and 1 (if the output is required to be 0 or 1 ). Thus above 0.5 is considered 1 while below 0.5 as 0 , although a different threshold (not 0.5 ) maybe set. Its main advantage is that its differentiation is easy and uses already calculated values and supposedly horseshoe crab neurons have this activation function in their neurons. Tanh $\frac{e ^ {(w1*x1...wn*xn + b)} - e ^ {-(w1*x1...wn*xn + b)})}{(e ^ { (w1*x1...wn*xn + b)} + e ^ {-(w1*x1...wn*xn + b)}}$ This has an advantage over the sigmoid activation function as it tends to centre the output to 0 which has an effect of better learning on the subsequent layers (acts as a feature normaliser). A nice explanation here . Negative and positive output values maybe considered as 0 and 1 respectively. Used mostly in RNN's. Re-Lu activation function - This is another very common simple non-linear (linear in positive range and negative range exclusive of each other) activation function that has the advantage of removing the problem of vanishing gradient faced by the above two i.e. gradient tends to 0 as x tends to +infinity or -infinity. Here is an answer about Re-Lu's approximation power in-spite of its apparent linearity. ReLu's have a disadvantage of having dead neurons which result in larger NN's. Also, you can design your own activation functions depending on your specialized problem. You may have a quadratic activation function which will approximate quadratic functions much better. 
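As an illustration of such a custom activation (a minimal sketch that is not part of the original answer; in practice the weight and bias would be trainable parameters learned by the optimizer), shown next to the standard choices discussed above:

```python
import numpy as np

def quadratic_activation(x, w1=1.0, b=0.0):
    # Custom activation w1 * x**2 + b; w1 and b would be learned during training.
    return w1 * x**2 + b

# The standard activations discussed above, for comparison:
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                  # zero-centred, squashes into (-1, 1)

def relu(x):
    return np.maximum(0.0, x)          # linear for x > 0, zero otherwise
```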
But then, you have to design a cost function that should be somewhat convex in nature, so that you can optimise it using first-order differentials and the NN actually converges to a decent result. This is the main reason why standard activation functions are used. But I believe that, with proper mathematical tools, there is huge potential for new and eccentric activation functions. For example, say you are trying to approximate a single-variable quadratic function, say $a.x^2 + c$ . This will be best approximated by a quadratic activation $w1.x^2 + b$ , where $w1$ and $b$ will be the trainable parameters. But designing a loss function that follows the conventional first-order derivative method (gradient descent) can be quite tough for a non-monotonically increasing function. For Mathematicians: In the sigmoid activation function $1 / (1 + e ^ {-(w1*x1...wn*xn + b)})$ we see that $e ^ {-(w1*x1...wn*xn + b)}$ is always < 1 . By binomial expansion, or by reverse calculation of the infinite GP series, we get $sigmoid(y)$ = $1 - y + y^2 - y^3 + \cdots$ . Now in a NN $y = e ^ {-(w1*x1...wn*xn + b)}$ . Thus we get all the powers of $y$ , which is equal to $e ^ {-(w1*x1...wn*xn + b)}$ , so each power of $y$ can be thought of as a multiplication of several decaying exponentials based on a feature $x$ , for example $y^2 = e^ {-2(w1x1)} * e^ {-2(w2x2)} * e^ {-2(w3x3)} *...... e^ {-2(b)}$ . Thus each feature has a say in the scaling of the graph of $y^2$ . Another way of thinking would be to expand the exponentials according to the Taylor Series: $$e^{x}=1+\frac{x}{1 !}+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\cdots$$ So we get a very complex combination, with all the possible polynomial combinations of input variables present. I believe that if a Neural Network is structured correctly, the NN can fine-tune these polynomial combinations by just modifying the connection weights, selecting the polynomial terms that are most useful, and rejecting terms by subtracting the output of 2 nodes weighted properly. The $tanh$ activation can work in the same way since the output satisfies $|tanh| < 1$ . I am not sure how ReLUs work though, but due to their rigid structure and the problem of dead neurons, we require larger networks with ReLUs for a good approximation. But for a formal mathematical proof, one has to look at the Universal Approximation Theorem. A visual proof that neural nets can compute any function The Universal Approximation Theorem For Neural Networks - An Elegant Proof For non-mathematicians, some better insights are available at these links: Activation Functions by Andrew Ng - for a more formal and scientific answer How does a neural network classifier classify from just drawing a decision plane? Differentiable activation function A visual proof that neural nets can compute any function | {
"source": [
"https://ai.stackexchange.com/questions/5493",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/12957/"
]
} |
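To make the non-linearity property discussed in the answer above concrete, here is a minimal NumPy sketch (my own illustration, not part of the original answer): it checks whether $f(a+b) = f(a) + f(b)$ holds for sigmoid, tanh and ReLU, and contrasts them with a purely linear activation. The function names and test values are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

a, b = 1.5, -0.7
for name, f in [("sigmoid", sigmoid), ("tanh", tanh), ("relu", relu)]:
    lhs = f(a + b)        # f applied to a linear combination of inputs
    rhs = f(a) + f(b)     # linear combination of f applied separately
    print(f"{name}: f(a+b)={lhs:.4f}  f(a)+f(b)={rhs:.4f}  equal? {np.isclose(lhs, rhs)}")

# A purely linear activation satisfies the equality, which is why stacking
# linear layers without a non-linearity collapses into a single linear map.
linear = lambda x: 2.0 * x
print("linear:", np.isclose(linear(a + b), linear(a) + linear(b)))
```

Since the equality fails for the non-linear activations, stacking layers that use them does not collapse into one linear map, which is exactly what lets a network represent non-linear decision boundaries.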
5,728 | Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_i$ nodes in each layer. What is the time complexity to train this NN using back-propagation? I have a basic idea about how they find the time complexity of algorithms, but here there are 4 different factors to consider here i.e. iterations, layers, nodes in each layer, training examples, and maybe more factors. I found an answer here but it was not clear enough. Are there other factors, apart from those I mentioned above, that influence the time complexity of the training algorithm of a NN? | I haven't seen an answer from a trusted source, but I'll try to answer this myself, with a simple example (with my current knowledge). In general, note that training an MLP using back-propagation is usually implemented with matrices. Time complexity of matrix multiplication The time complexity of matrix multiplication for $M_{ij} * M_{jk}$ is simply $\mathcal{O}(i*j*k)$ . Notice that we are assuming the simplest multiplication algorithm here: there exist some other algorithms with somewhat better time complexity. Feedforward pass algorithm The feedforward propagation algorithm is as follows. First, to go from layer $i$ to $j$ , you do $$S_j = W_{ji}*Z_i$$ Then you apply the activation function $$Z_j = f(S_j)$$ If we have $N$ layers (including input and output layer), this will run $N-1$ times. Example As an example, let's compute the time complexity for the forward pass algorithm for an MLP with $4$ layers, where $i$ denotes the number of nodes of the input layer, $j$ the number of nodes in the second layer, $k$ the number of nodes in the third layer and $l$ the number of nodes in the output layer. Since there are $4$ layers, you need $3$ matrices to represent weights between these layers. Let's denote them by $W_{ji}$ , $W_{kj}$ and $W_{lk}$ , where $W_{ji}$ is a matrix with $j$ rows and $i$ columns ( $W_{ji}$ thus contains the weights going from layer $i$ to layer $j$ ). Assume you have $t$ training examples. For propagating from layer $i$ to $j$ , we have first $$S_{jt} = W_{ji} * Z_{it}$$ and this operation (i.e. matrix multiplication) has $\mathcal{O}(j*i*t)$ time complexity. Then we apply the activation function $$
Z_{jt} = f(S_{jt})
$$ and this has $\mathcal{O}(j*t)$ time complexity, because it is an element-wise operation. So, in total, we have $$\mathcal{O}(j*i*t + j*t) = \mathcal{O}(j*t*(i + 1)) = \mathcal{O}(j*i*t)$$ Using same logic, for going $j \to k$ , we have $\mathcal{O}(k*j*t)$ , and, for $k \to l$ , we have $\mathcal{O}(l*k*t)$ . In total, the time complexity for feedforward propagation will be $$\mathcal{O}(j*i*t + k*j*t + l*k*t) = \mathcal{O}(t*(ij + jk + kl))$$ I'm not sure if this can be simplified further or not. Maybe it's just $\mathcal{O}(t*i*j*k*l)$ , but I'm not sure. Back-propagation algorithm The back-propagation algorithm proceeds as follows. Starting from the output layer $l \to k$ , we compute the error signal, $E_{lt}$ , a matrix containing the error signals for nodes at layer $l$ $$
E_{lt} = f'(S_{lt}) \odot {(Z_{lt} - O_{lt})}
$$ where $\odot$ means element-wise multiplication. Note that $E_{lt}$ has $l$ rows and $t$ columns: it simply means each column is the error signal for training example $t$ . We then compute the "delta weights", $D_{lk} \in \mathbb{R}^{l \times k}$ (between layer $l$ and layer $k$ ) $$
D_{lk} = E_{lt} * Z_{tk}
$$ where $Z_{tk}$ is the transpose of $Z_{kt}$ . We then adjust the weights $$
W_{lk} = W_{lk} - D_{lk}
$$ For $l \to k$ , we thus have the time complexity $\mathcal{O}(lt + lt + ltk + lk) = \mathcal{O}(l*t*k)$ . Now, going back from $k \to j$ . We first have $$
E_{kt} = f'(S_{kt}) \odot (W_{kl} * E_{lt})
$$ Then $$
D_{kj} = E_{kt} * Z_{tj}
$$ And then $$W_{kj} = W_{kj} - D_{kj}$$ where $W_{kl}$ is the transpose of $W_{lk}$ . For $k \to j$ , we have the time complexity $\mathcal{O}(kt + klt + ktj + kj) = \mathcal{O}(k*t(l+j))$ . And finally, for $j \to i$ , we have $\mathcal{O}(j*t(k+i))$ . In total, we have $$\mathcal{O}(ltk + tk(l + j) + tj (k + i)) = \mathcal{O}(t*(lk + kj + ji))$$ which is the same as the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be $$O(t*(ij + jk + kl)).$$ This time complexity is then multiplied by the number of iterations (epochs). So, we have $$O(n*t*(ij + jk + kl)),$$ where $n$ is number of iterations. Notes Note that these matrix operations can greatly be parallelized by GPUs. Conclusion We tried to find the time complexity for training a neural network that has 4 layers with respectively $i$ , $j$ , $k$ and $l$ nodes, with $t$ training examples and $n$ epochs. The result was $\mathcal{O}(nt*(ij + jk + kl))$ . We assumed the simplest form of matrix multiplication that has cubic time complexity. We used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think the otherwise: note that batch gradient descent is the general form, with little modification, it becomes stochastic or mini-batch) Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise operations, hence they will not affect the time complexity of the algorithm. I'm not sure what the results would be using other optimizers such as RMSprop. Sources The following article http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5 describes an implementation using matrices. Although this implementation is using "row major", the time complexity is not affected by this. If you're not familiar with back-propagation, check this article: http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4 | {
"source": [
"https://ai.stackexchange.com/questions/5728",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/-1/"
]
} |
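As a rough sanity check of the $\mathcal{O}(t(ij + jk + kl))$ result derived above, here is a small Python sketch that counts the multiply-accumulate operations of one forward pass for an illustrative 4-layer MLP; the layer sizes are made up, and the count ignores the cheaper element-wise activation work.

```python
import numpy as np

# Illustrative layer sizes: i (input), j, k, l (output), and t training examples.
i, j, k, l, t = 784, 128, 64, 10, 1000

def forward_mac_count(sizes, t):
    """Multiply-accumulate count of one forward pass over t examples."""
    return sum(n_out * n_in * t for n_in, n_out in zip(sizes[:-1], sizes[1:]))

sizes = [i, j, k, l]
macs = forward_mac_count(sizes, t)
print("forward-pass MACs:", macs)                      # equals t*(i*j + j*k + k*l)
print("matches formula:", macs == t * (i * j + j * k + k * l))

# The backward pass has the same asymptotic cost, so one epoch is O(t*(ij + jk + kl))
# and n epochs give O(n*t*(ij + jk + kl)), as derived in the answer.
```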
6,196 | As far as I understand, Q-learning and policy gradients (PG) are the two major approaches used to solve RL problems. While Q-learning aims to predict the reward of a certain action taken in a certain state, policy gradients directly predict the action itself. However, both approaches appear identical to me, i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG). Is the difference in the way the loss is back-propagated? | However, both approaches appear identical to me i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG). Both methods are theoretically driven by the Markov Decision Process construct, and as a result use similar notation and concepts. In addition, in simple solvable environments you should expect both methods to result in the same - or at least equivalent - optimal policies. However, they are actually different internally. The most fundamental differences between the approaches is in how they approach action selection, both whilst learning, and as the output (the learned policy). In Q-learning, the goal is to learn a single deterministic action from a discrete set of actions by finding the maximum value. With policy gradients, and other direct policy searches, the goal is to learn a map from state to action, which can be stochastic, and works in continuous action spaces. As a result, policy gradient methods can solve problems that value-based methods cannot: Large and continuous action space. However, with value-based methods, this can still be approximated with discretisation - and this is not a bad choice, since the mapping function in policy gradient has to be some kind of approximator in practice. Stochastic policies. A value-based method cannot solve an environment where the optimal policy is stochastic requiring specific probabilities, such as Scissor/Paper/Stone. That is because there are no trainable parameters in Q-learning that control probabilities of action, the problem formulation in TD learning assumes that a deterministic agent can be optimal. However, value-based methods like Q-learning have some advantages too: Simplicity. You can implement Q functions as simple discrete tables, and this gives some guarantees of convergence. There are no tabular versions of policy gradient, because you need a mapping function $p(a \mid s, \theta)$ which also must have a smooth gradient with respect to $\theta$ . Speed. TD learning methods that bootstrap are often much faster to learn a policy than methods which must purely sample from the environment in order to evaluate progress. There are other reasons why you might care to use one or other approach: You may want to know the predicted return whilst the process is running, to help other planning processes associated with the agent. The state representation of the problem lends itself more easily to either a value function or a policy function. A value function may turn out to have very simple relationship to the state and the policy function very complex and hard to learn, or vice-versa . Some state-of-the-art RL solvers actually use both approaches together, such as Actor-Critic. This combines strengths of value and policy gradient methods. | {
"source": [
"https://ai.stackexchange.com/questions/6196",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/15298/"
]
} |
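The contrast between the two approaches can be illustrated with a toy sketch of how each one selects actions: a value-based agent acts greedily on a table of Q-values, while a policy-gradient agent samples from a parameterised distribution over actions. The state/action counts and parameter values below are made up, and a real agent would of course also include the learning updates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3

# Value-based: a table Q(s, a); acting greedily gives a deterministic policy.
Q = rng.normal(size=(n_states, n_actions))
def q_learning_action(state):
    return int(np.argmax(Q[state]))           # deterministic (epsilon-greedy in practice)

# Policy-based: parameters theta define pi(a|s) directly; acting samples from it,
# so stochastic optimal policies (e.g. Scissors/Paper/Stone) are representable.
theta = rng.normal(size=(n_states, n_actions))
def policy_gradient_action(state):
    logits = theta[state]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over actions
    return int(rng.choice(n_actions, p=probs))

print("greedy Q action in state 2:", q_learning_action(2))
print("sampled policy actions in state 2:", [policy_gradient_action(2) for _ in range(5)])
```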
7,763 | I am studying reinforcement learning and the variants of it. I am starting to get an understanding of how the algorithms work and how they apply to an MDP. What I don't understand is the process of defining the states of the MDP. In most examples and tutorials, they represent something simple like a square in a grid or similar. For more complex problems, like a robot learning to walk, etc., How do you go about defining those states? Can you use learning or classification algorithms to "learn" those states? | The problem of state representation in Reinforcement Learning (RL) is similar to problems of feature representation, feature selection and feature engineering in supervised or unsupervised learning. Literature that teaches the basics of RL tends to use very simple environments so that all states can be enumerated. This simplifies value estimates into basic rolling averages in a table, which are easier to understand and implement. Tabular learning algorithms also have reasonable theoretical guarantees of convergence, which means if you can simplify your problem so that it has, say, less than a few million states, then this is worth trying. Most interesting control problems will not fit into that number of states, even if you discretise them. This is due to the " curse of dimensionality ". For those problems, you will typically represent your state as a vector of different features - e.g. for a robot, various positions, angles, velocities of mechanical parts. As with supervised learning, you may want to treat these for use with a specific learning process. For instance, typically you will want them all to be numeric, and if you want to use a neural network you should also normalise them to a standard range (e.g. -1 to 1). In addition to the above concerns which apply for other machine learning, for RL, you also need to be concerned with the Markov Property - that the state provides enough information, so that you can accurately predict expected next rewards and next states given an action, without the need for any additional information. This does not need to be perfect, small differences due to e.g. variations in air density or temperature for a wheeled robot will not usually have a large impact on its navigation, and can be ignored. Any factor which is essentially random can also be ignored whilst sticking to RL theory - it may make the agent less optimal overall, but the theory will still work. If there are consistent unknown factors that influence result, and could logically be deduced - maybe from history of state or actions - but you have excluded them from the state representation, then you may have a more serious problem, and the agent may fail to learn. It is worth noting the difference here between observation and state . An observation is some data that you can collect. E.g. you may have sensors on your robot that feed back the positions of its joints. Because the state should possess the Markov Property, a single raw observation might not be enough data to make a suitable state. If that is the case, you can either apply your domain knowledge in order to construct a better state from available data, or you can try to use techniques designed for partially observable MDPs (POMDPs) - these effectively try to build missing parts of state data statistically. You could use a RNN or hidden markov model (also called a "belief state") for this, and in some way this is using a " learning or classification algorithms to "learn" those states " as you asked. 
Finally, you need to consider the type of approximation model you want to use. A similar approach applies here as for supervised learning. A simple linear regression with features engineered based on domain knowledge can do very well. You may need to work hard on trying different state representations so that the linear approximation works. The advantage is that this simpler approach is more robust against stability issues than non-linear approximation. Alternatively, a more complex non-linear function approximator, such as a multi-layer neural network, can be used. You can feed in a more "raw" state vector and hope that the hidden layers will find some structure or representation that leads to good estimates. In some ways, this too is " learning or classification algorithms to "learn" those states " , but in a different way to a RNN or HMM. This might be a sensible approach if your state was expressed naturally as a screen image - figuring out the feature engineering for image data by hand is very hard. The Atari DQN work by the DeepMind team used a combination of feature engineering and relying on a deep neural network to achieve its results. The feature engineering included downsampling the image, reducing it to grey-scale and - importantly for the Markov Property - using four consecutive frames to represent a single state, so that information about velocity of objects was present in the state representation. The DNN then processed the images into higher-level features that could be used to make predictions about state values.
"source": [
"https://ai.stackexchange.com/questions/7763",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/17853/"
]
} |
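A small illustrative sketch of the feature-engineering ideas above: normalising raw sensor readings into a fixed-range state vector, and stacking the last few observations so that velocity-like information is present (as in the Atari DQN preprocessing mentioned in the answer). The observation fields and value ranges are invented for the example.

```python
import numpy as np
from collections import deque

def normalise(value, low, high):
    """Map a raw sensor reading into [-1, 1] for use with a neural network."""
    return 2.0 * (value - low) / (high - low) - 1.0

def make_state(raw_obs):
    # raw_obs is a dict of sensor readings; the ranges below are illustrative.
    return np.array([
        normalise(raw_obs["joint_angle"], -np.pi, np.pi),
        normalise(raw_obs["joint_velocity"], -10.0, 10.0),
        normalise(raw_obs["distance_to_goal"], 0.0, 5.0),
    ])

# Stacking the last 4 observations helps satisfy the Markov property when a
# single observation does not contain enough information (e.g. velocities).
history = deque(maxlen=4)
for t in range(6):
    obs = {"joint_angle": 0.1 * t, "joint_velocity": 0.5, "distance_to_goal": 5.0 - t}
    history.append(make_state(obs))
stacked_state = np.concatenate(list(history))   # shape (4 * 3,) once the deque is full
print(stacked_state.shape)
```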
9,141 | I am a new learner in NLP. I am interested in the sentence generating task. As far as I am concerned, one state-of-the-art method is the CharRNN , which uses RNN to generate a sequence of words. However, BERT has come out several weeks ago and is very powerful. Therefore, I am wondering whether this task can also be done with the help of BERT? I am a new learner in this field, and thank you for any advice! | For newbies, NO. Sentence generation requires sampling from a language model, which gives the probability distribution of the next word given previous contexts. But BERT can't do this due to its bidirectional nature. For advanced researchers, YES. You can start with a sentence of all [MASK] tokens, and generate words one by one in arbitrary order (instead of the common left-to-right chain decomposition). Though the text generation quality is hard to control. Here's the technical report BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model , its errata and the source code . In summary: If you would like to do some research in the area of decoding with
BERT, there is a huge space to explore. If you would like to generate
high-quality texts, I personally recommend you to check out GPT-2 . | {
"source": [
"https://ai.stackexchange.com/questions/9141",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/20170/"
]
} |
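For the "advanced researchers" route described above, here is a rough sketch of the all-[MASK] decoding idea, written against the Hugging Face transformers API as I recall it; the checkpoint name, the `.logits` attribute and the simple left-to-right filling order are assumptions to verify against the library documentation, and the sample quality will be poor, as the answer warns.

```python
# Sketch only: assumes the Hugging Face `transformers` package and the
# `bert-base-uncased` checkpoint; check API details against the docs.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

n_tokens = 8
tokens = [tokenizer.cls_token] + [tokenizer.mask_token] * n_tokens + [tokenizer.sep_token]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Fill the masked positions one by one (arbitrary orders are also possible).
with torch.no_grad():
    for pos in range(1, n_tokens + 1):
        logits = model(input_ids).logits                 # (1, seq_len, vocab_size)
        probs = torch.softmax(logits[0, pos], dim=-1)
        input_ids[0, pos] = torch.multinomial(probs, 1).item()

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```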
10,623 | What is self-supervised learning in machine learning? How is it different from supervised learning? | Introduction The term self-supervised learning (SSL) has been used (sometimes differently) in different contexts and fields, such as representation learning [ 1 ], neural networks, robotics [ 2 ], natural language processing, and reinforcement learning. In all cases, the basic idea is to automatically generate some kind of supervisory signal to solve some task (typically, to learn representations of the data or to automatically label a dataset). I will describe what SSL means more specifically in three contexts: representation learning, neural networks and robotics. Representation learning The term self-supervised learning has been widely used to refer to techniques that do not use human-annotated datasets to learn (visual) representations of the data (i.e. representation learning). Example In [ 1 ], two patches are randomly selected and cropped from an unlabelled image and the goal is to predict the relative position of the two patches. Of course, we have the relative position of the two patches once you have chosen them (i.e. we can keep track of their centers), so, in this case, this is the automatically generated supervisory signal. The idea is that, to solve this task (known as a pretext or auxiliary task in the literature [ 3 , 4 , 5 , 6 ]), the neural network needs to learn features in the images. These learned representations can then be used to solve the so-called downstream tasks, i.e. the tasks you are interested in (e.g. object detection or semantic segmentation). So, you first learn representations of the data (by SSL pre-training), then you can transfer these learned representations to solve a task that you actually want to solve, and you can do this by fine-tuning the neural network that contains the learned representations on a labeled (but smaller dataset), i.e. you can use SSL for transfer learning. This example is similar to the example given in this other answer . Neural networks Some neural networks, for example, autoencoders (AE) [ 7 ] are sometimes called self-supervised learning tools. In fact, you can train AEs without images that have been manually labeled by a human. More concretely, consider a de-noising AE, whose goal is to reconstruct the original image when given a noisy version of it. During training, you actually have the original image, given that you have a dataset of uncorrupted images and you just corrupt these images with some noise, so you can calculate some kind of distance between the original image and the noisy one, where the original image is the supervisory signal. In this sense, AEs are self-supervised learning tools, but it's more common to say that AEs are unsupervised learning tools, so SSL has also been used to refer to unsupervised learning techniques. Robotics In [ 2 ], the training data is automatically but approximately labeled by finding and exploiting the relations or correlations between inputs coming from different sensor modalities (and this technique is called SSL by the authors). So, as opposed to representation learning or auto-encoders, in this case, an actual labeled dataset is produced automatically. Example Consider a robot that is equipped with a proximity sensor (which is a short-range sensor capable of detecting objects in front of the robot at short distances) and a camera (which is long-range sensor, but which does not provide a direct way of detecting objects). 
You can also assume that this robot is capable of performing odometry . An example of such a robot is Mighty Thymio . Consider now the task of detecting objects in front of the robot at longer ranges than the range the proximity sensor allows. In general, we could train a CNN to achieve that. However, to train such CNN, in supervised learning, we would first need a labelled dataset, which contains labelled images (or videos), where the labels could e.g. be "object in the image" or "no object in the image". In supervised learning, this dataset would need to be manually labelled by a human, which clearly would require a lot of work. To overcome this issue, we can use a self-supervised learning approach. In this example, the basic idea is to associate the output of the proximity sensors at a time step $t' > t$ with the output of the camera at time step $t$ (a smaller time step than $t'$ ). More specifically, suppose that the robot is initially at coordinates $(x, y)$ (on the plane), at time step $t$ . At this point, we still do not have enough info to label the output of the camera (at the same time step $t$ ). Suppose now that, at time $t'$ , the robot is at position $(x', y')$ . At time step $t'$ , the output of the proximity sensor will e.g. be "object in front of the robot" or "no object in front of the robot". Without loss of generality, suppose that the output of the proximity sensor at $t' > t$ is "no object in front of the robot", then the label associated with the output of the camera (an image frame) at time $t$ will be "no object in front of the robot". | {
"source": [
"https://ai.stackexchange.com/questions/10623",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/2444/"
]
} |
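The robotics example above boils down to a few lines: camera frames recorded at time $t$ receive their labels from the proximity sensor at a later time $t'$, so the labelled dataset is produced automatically, with no human annotation. The data below is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                     # number of time steps in one robot run
lag = 5                     # t' = t + lag: the robot reaches at t' what it saw at t

camera_frames = rng.random((T, 32, 32))        # fake image stream from the camera
proximity_hits = rng.random(T) < 0.3           # True = "object in front of the robot"

# Automatically build a labelled dataset: frame at t, label from proximity at t + lag.
X = camera_frames[: T - lag]
y = proximity_hits[lag:]

print("auto-labelled examples:", X.shape[0])
print("fraction labelled 'object ahead':", y.mean())
# X, y can now be used to train an ordinary supervised classifier (e.g. a CNN),
# even though no human ever labelled a single frame.
```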
10,812 | I came across these 2 algorithms, but I cannot understand the difference between these 2, both in terms of implementation as well as intuitionally. So, what difference does the second point in both the slides refer to? | The first-visit and the every-visit Monte-Carlo (MC) algorithms are both used to solve the prediction problem (or, also called, "evaluation problem"), that is, the problem of estimating the value function associated with a given (as input to the algorithms) fixed (that is, it does not change during the execution of the algorithm) policy, denoted by $\pi$ . In general, even if we are given the policy $\pi$ , we are not necessarily able to find the exact corresponding value function, so these two algorithms are used to estimate the value function associated with $\pi$ . Intuitively, we care about the value function associated with $\pi$ because we might want or need to know "how good it is to be in a certain state", if the agent behaves in the environment according to the policy $\pi$ . For simplicity, assume that the value function is the state value function (but it could also be e.g. the state-action value function), denoted by $v_\pi(s)$ , where $v_\pi(s)$ is the expected return (or, in other words, expected cumulative future discounted reward ), starting from state $s$ (at some time step $t$ ) and then following (after time step $t$ ) the given policy $\pi$ . Formally, $v_\pi(s) = \mathbb{E}_\pi [ G_t \mid S_t = s ]$ , where $G_t = \sum_{k=0}^\infty \gamma^k R_{t+k+1}$ is the return (after time step $t$ ). In the case of MC algorithms, $G_t$ is often defined as $\sum_{k=0}^T R_{t+k+1}$ , where $T \in \mathbb{N}^+$ is the last time step of the episode, that is, the sum goes up to the final time step of the episode, $T$ . This is because MC algorithms, in this context, often assume that the problem can be naturally split into episodes and each episode proceeds in a discrete number of time steps (from $t=0$ to $t=T$ ). As I defined it here, the return, in the case of MC algorithms, is only associated with a single episode (that is, it is the return of one episode). However, in general, the expected return can be different from one episode to the other, but, for simplicity, we will assume that the expected return (of all states) is the same for all episodes. To recapitulate, the first-visit and every-visit MC (prediction) algorithms are used to estimate $v_\pi(s)$ , for all states $s \in \mathcal{S}$ . To do that, at every episode, these two algorithms use $\pi$ to behave in the environment, so that to obtain some knowledge of the environment in the form of sequences of states, actions and rewards. This knowledge is then used to estimate $v_\pi(s)$ . How is this knowledge used in order to estimate $v_\pi$ ? Let us have a look at the pseudocode of these two algorithms. $N(s)$ is a "counter" variable that counts the number of times we visit state $s$ throughout the entire algorithm (i.e. from episode one to $num\_episodes$ ). $\text{Returns(s)}$ is a list of (undiscounted) returns for state $s$ . I think it is more useful for you to read the pseudocode (which should be easily translatable to actual code) and understand what it does rather than explaining it with words. Anyway, the basic idea (of both algorithms) is to generate trajectories (of states, actions and rewards) at each episode, keep track of the returns (for each state) and number of visits (of each state), and then, at the end of all episodes, average these returns (for all states). 
This average of returns should be an approximation of the expected return (which is what we wanted to estimate). The differences of the two algorithms are highlighted in $\color{red}{\text{red}}$ . The part " If state $S_t$ is not in the sequence $S_0, S_1, \dots, S_{t-1}$ " means that the associated block of code will be executed only if $S_t$ is not part of the sequence of states that were visited (in the episode sequence generated with $\pi$ ) before the time step $t$ . In other words, that block of code will be executed only if it is the first time we encounter $S_t$ in the sequence of states, action and rewards: $S_0, A_0, R_1, S_1, A_1, R_2 \ldots, S_{T-1}, A_{T-1}, R_T$ (which can be collectively be called "episode sequence"), with respect to the time step and not the way the episode sequence is processed. Note that a certain state $s$ might appear more than once in $S_0, A_0, R_1, S_1, A_1, R_2 \ldots, S_{T-1}, A_{T-1}, R_T$ : for example, $S_3 = s$ and $S_5 = s$ . Do not get confused by the fact that, within each episode, we proceed from the time step $T-1$ to time step $t = 0$ , that is, we process the "episode sequence" backwards. We are doing that only to more conveniently compute the returns (given that the returns are iteratively computed as follows $G \leftarrow G + R_{t+1}$ ). So, intuitively, in the first-visit MC, we only update the $\text{Returns}(S_t)$ (that is, the list of returns for state $S_t$ , that is, the state of the episode at time step $t$ ) the first time we encounter $S_t$ in that same episode (or trajectory). In the every-visit MC, we update the list of returns for the state $S_t$ every time we encounter $S_t$ in that same episode. For more info regarding these two algorithms (for example, the convergence properties), have a look at section 5.1 (on page 92) of the book " Reinforcement Learning: An Introduction " (2nd edition), by Andrew Barto and Richard S. Sutton. | {
"source": [
"https://ai.stackexchange.com/questions/10812",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/-1/"
]
} |
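For concreteness, here is a compact Python version of the two prediction algorithms described above. It is my own sketch, not the book's pseudocode: it assumes episodes have already been generated by the policy $\pi$ and are given as lists of (state, reward) pairs, the reward being the one received after leaving that state.

```python
from collections import defaultdict

def mc_prediction(episodes, gamma=1.0, first_visit=True):
    """Estimate v_pi(s) from episodes = [[(s0, r1), (s1, r2), ...], ...]."""
    returns_sum = defaultdict(float)
    visit_count = defaultdict(int)
    for episode in episodes:
        G = 0.0
        # Process the episode backwards so G can be accumulated incrementally.
        for t in reversed(range(len(episode))):
            state, reward = episode[t]
            G = gamma * G + reward
            earlier_states = [s for s, _ in episode[:t]]
            if first_visit and state in earlier_states:
                continue                      # only the first visit updates Returns(s)
            returns_sum[state] += G
            visit_count[state] += 1
    return {s: returns_sum[s] / visit_count[s] for s in returns_sum}

episodes = [[("A", 0), ("B", 1), ("A", 2), ("B", 0)],
            [("B", 1), ("A", 3)]]
print("first-visit:", mc_prediction(episodes, first_visit=True))
print("every-visit:", mc_prediction(episodes, first_visit=False))
```

The only difference between the two estimators is the `continue` line: first-visit skips a state that already appeared earlier in the same episode, every-visit updates on every occurrence.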
11,285 | In general, the word "latent" means "hidden" and "to embed" means "to incorporate". In machine learning, the expressions "hidden (or latent) space" and "embedding space" occur in several contexts. More specifically, an embedding can refer to a vector representation of a word. An embedding space can refer to a subspace of a bigger space, so we say that the subspace is embedded in the bigger space. The word "latent" comes up in contexts like hidden Markov models (HMMs) or auto-encoders . What is the difference between these spaces? In some contexts, do these two expressions refer to the same concept? | Embedding vs Latent Space Due to Machine Learning's recent and rapid renaissance, and the fact that it draws from many distinct areas of mathematics, statistics, and computer science, it often has a number of different terms for the same or similar concepts. "Latent space" and "embedding" both refer to an (often lower-dimensional) representation of high-dimensional data: Latent space refers specifically to the space from which the low-dimensional representation is drawn. Embedding refers to the way the low-dimensional data is mapped to ("embedded in") the original higher dimensional space. For example, in the classic "Swiss roll" data, the 3d data is sensibly modelled as a 2d manifold 'embedded' in 3d space. The function mapping the 'latent' 2d data to its 3d representation is the embedding , and the underlying 2d space itself is the latent space (or embedded space ). Synonyms Depending on the specific impression you wish to give, "embedding" often goes by different terms, each with its usual context: "dimensionality reduction" (combating the "curse of dimensionality"); "feature extraction", "feature projection", "feature embedding", "feature learning" and "representation learning" (extracting 'meaningful' features from raw data); "embedding", "manifold learning" and "latent feature representation" (understanding the underlying topology of the data). However this is not a hard-and-fast rule, and they are often completely interchangeable. | {
"source": [
"https://ai.stackexchange.com/questions/11285",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/2444/"
]
} |
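The Swiss-roll example in the answer can be reproduced with scikit-learn, assuming `make_swiss_roll` and `Isomap` behave as I remember; the 3-d points are the data embedded in the ambient space, and the 2-d output approximates coordinates in the latent space.

```python
# Sketch assuming scikit-learn; check make_swiss_roll / Isomap against the docs.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)
print("ambient (embedded) space:", X.shape)        # (1000, 3)

# Recover a 2-D latent representation of the 3-D points.
latent = Isomap(n_components=2).fit_transform(X)
print("latent space:", latent.shape)               # (1000, 2)
```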
12,870 | Explainable artificial intelligence (XAI) is concerned with the development of techniques that can enhance the interpretability, accountability, and transparency of artificial intelligence and, in particular, machine learning algorithms and models, especially black-box ones, such as artificial neural networks, so that these can also be adopted in areas, like healthcare, where the interpretability and understanding of the results (e.g. classifications) are required. Which XAI techniques are there? If there are many, to avoid making this question too broad, you can just provide a few examples (the most famous or effective ones), and, for people interested in more techniques and details, you can also provide one or more references/surveys/books that go into the details of XAI. The idea of this question is that people could easily find one technique that they could study to understand what XAI really is or how it can be approached. | Explainable AI and model interpretability are hyper-active and hyper-hot areas of current research (think of holy grail, or something), which have been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks, plus the necessity of algorithmic fairness & accountability. Here are some state of the art algorithms and approaches, together with implementations and frameworks. Model-agnostic approaches LIME: Local Interpretable Model-agnostic Explanations ( paper , code , blog post , R port ) SHAP: A Unified Approach to Interpreting Model Predictions ( paper , Python package , R package ). GPU implementation for tree models by NVIDIA using RAPIDS - GPUTreeShap ( paper , code , blog post ) Anchors: High-Precision Model-Agnostic Explanations ( paper , authors' Python code , Java implementation) Diverse Counterfactual Explanations (DiCE) by Microsoft ( paper , code , blog post ) Black Box Auditing and Certifying and Removing Disparate Impact (authors' Python code ) FairML: Auditing Black-Box Predictive Models, by Cloudera Fast Forward Labs ( blog post , paper , code ) SHAP seems to enjoy high popularity among practitioners; the method has firm theoretical foundations on co-operational game theory (Shapley values), and it has in a great degree integrated the LIME approach under a common framework. Although model-agnostic, specific & efficient implementations are available for neural networks ( DeepExplainer ) and tree ensembles ( TreeExplainer , paper ). 
Neural network approaches (mostly, but not exclusively, for computer vision models) The Layer-wise Relevance Propagation (LRP) toolbox for neural networks ( 2015 paper @ PLoS ONE , 2016 paper @ JMLR , project page , code , TF Slim wrapper ) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization ( paper , authors' Torch code , Tensorflow code , PyTorch code , yet another Pytorch implementation , Keras example notebook , Coursera Guided Project ) Axiom-based Grad-CAM (XGrad-CAM): Towards Accurate Visualization and Explanation of CNNs, a refinement of the existing Grad-CAM method ( paper , code ) SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability ( paper , code , Google blog post ) TCAV: Testing with Concept Activation Vectors ( ICML 2018 paper , Tensorflow code ) Integrated Gradients ( paper , code , Tensorflow tutorial , independent implementations ) Network Dissection: Quantifying Interpretability of Deep Visual Representations, by MIT CSAIL ( project page , Caffe code , PyTorch port ) GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, by MIT CSAIL ( project page , with links to paper & code) Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions ( paper , code ) Transparecy-by-Design (TbD) networks ( paper , code , demo ) Distilling a Neural Network Into a Soft Decision Tree , a 2017 paper by Geoff Hinton, with various independent PyTorch implementations Understanding Deep Networks via Extremal Perturbations and Smooth Masks ( paper ), implemented in TorchRay (see below) Understanding the Role of Individual Units in a Deep Neural Network ( preprint , 2020 paper @ PNAS , code , project page ) GNNExplainer: Generating Explanations for Graph Neural Networks ( paper , code ) Benchmarking Deep Learning Interpretability in Time Series Predictions ( paper @ NeurIPS 2020, code utilizing Captum ) Concept Whitening for Interpretable Image Recognition ( paper , preprint , code ) Libraries & frameworks As interpretability moves toward the mainstream, there are already frameworks and toolboxes that incorporate more than one of the algorithms and techniques mentioned and linked above; here is a partial list: The ELI5 Python library ( code , documentation ) DALEX - moDel Agnostic Language for Exploration and eXplanation ( homepage , code , JMLR paper ), part of the DrWhy.AI project The What-If tool by Google, a feature of the open-source TensorBoard web application, which let users analyze an ML model without writing code ( project page , code , blog post ) The Language Interpretability Tool (LIT) by Google, a visual, interactive model-understanding tool for NLP models ( project page , code , blog post ) Lucid, a collection of infrastructure and tools for research in neural network interpretability by Google ( code ; papers: Feature Visualization , The Building Blocks of Interpretability ) TorchRay by Facebook, a PyTorch package implementing several visualization methods for deep CNNs iNNvestigate Neural Networks ( code , JMLR paper ) tf-explain - interpretability methods as Tensorflow 2.0 callbacks ( code , docs , blog post ) InterpretML by Microsoft ( homepage , code still in alpha, paper ) Captum by Facebook AI - model interpetability for Pytorch ( homepage , code , intro blog post ) Skater, by Oracle ( code , docs ) Alibi, by SeldonIO ( code , docs ) AI Explainability 360, commenced by IBM and moved to the Linux Foundation ( homepage , code , docs , IBM Bluemix , 
blog post ) Ecco: explaining transformer-based NLP models using interactive visualizations ( homepage , code , article ). Recipes for Machine Learning Interpretability in H2O Driverless AI ( repo ) Reviews & general papers A Survey of Methods for Explaining Black Box Models (2018, ACM Computing Surveys) Definitions, methods, and applications in interpretable machine learning (2019, PNAS) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (2019, Nature Machine Intelligence, preprint ) Machine Learning Interpretability: A Survey on Methods and Metrics (2019, Electronics) Principles and Practice of Explainable Machine Learning (2020, preprint) Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges (keynote at 2020 ECML XKDD workshop by Christoph Molnar, video & slides ) Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2020, Information Fusion) Counterfactual Explanations for Machine Learning: A Review (2020, preprint, critique by Judea Pearl) Interpretability 2020 , an applied research report by Cloudera Fast Forward, updated regularly Interpreting Predictions of NLP Models (EMNLP 2020 tutorial) Explainable NLP Datasets ( site , preprint , highlights ) Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges eBooks (available online) Interpretable Machine Learning , by Christoph Molnar, with R code available Explanatory Model Analysis , by DALEX creators Przemyslaw Biecek and Tomasz Burzykowski, with both R & Python code snippets An Introduction to Machine Learning Interpretability (2nd ed. 2019), by H2O Online courses & tutorials Machine Learning Explainability , Kaggle tutorial Explainable AI: Scene Classification and GradCam Visualization , Coursera guided project Explainable Machine Learning with LIME and H2O in R , Coursera guided project Interpretability and Explainability in Machine Learning , Harvard COMPSCI 282BR Other resources explained.ai blog A Twitter thread , linking to several interpretation tools available for R A whole bunch of resources in the Awesome Machine Learning Interpetability repo The online comic book (!) The Hitchhiker's Guide to Responsible Machine Learning , by the team behind the textbook Explanatory Model Analysis and the DALEX package mentioned above ( blog post and backstage ) | {
"source": [
"https://ai.stackexchange.com/questions/12870",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/2444/"
]
} |
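As a taste of how one of the model-agnostic tools listed above is used in practice, here is a hedged SHAP sketch on a tree ensemble; the `shap` calls (TreeExplainer, shap_values, summary_plot) are written from memory and should be checked against the SHAP documentation.

```python
# Sketch assuming the `shap` and `scikit-learn` packages; verify API details in their docs.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:50])   # per-feature contribution per prediction

# Global view: which features drive the model's predictions overall.
shap.summary_plot(shap_values, data.data[:50], feature_names=data.feature_names)
```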
13,289 | Imagine you show a neural network a picture of a lion 100 times and label it with "dangerous", so it learns that lions are dangerous. Now imagine that previously you have shown it millions of images of lions and alternatively labeled it as "dangerous" and "not dangerous", such that the probability of a lion being dangerous is 50%. But those last 100 times have pushed the neural network into being very positive about regarding the lion as "dangerous", thus ignoring the last million lessons. Therefore, it seems there is a flaw in neural networks, in that they can change their mind too quickly based on recent evidence. Especially if that previous evidence was in the middle. Is there a neural network model that keeps track of how much evidence it has seen? (Or would this be equivalent to letting the learning rate decrease by $1/T$ where $T$ is the number of trials?) | Yes, indeed, neural networks are very prone to catastrophic forgetting (or interference) . Currently, this problem is often ignored because neural networks are mainly trained offline (sometimes called batch training ), where this problem does not often arise, and not online or incrementally , which is fundamental to the development of artificial general intelligence . There are some people that work on continual lifelong learning in neural networks, which attempts to adapt neural networks to continual lifelong learning, which is the ability of a model to learn from a stream of data continually, so that they do not completely forget previously acquired knowledge while learning new information. See, for example, the paper Continual lifelong learning with neural networks: A review (2019), by German I. Parisi et al., which summarises the problems and existing solutions related to catastrophic forgetting of neural networks. | {
"source": [
"https://ai.stackexchange.com/questions/13289",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/4199/"
]
} |
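A quick experiment one could run to see the effect described above: train a network on one task, then train it only on a second task, and watch the accuracy on the first task degrade. The sketch below uses scikit-learn's `MLPClassifier` with `partial_fit`; the layer size, iteration counts and the two synthetic tasks are arbitrary choices.

```python
# Sketch assuming scikit-learn's MLPClassifier.partial_fit; details may differ by version.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

Xa, ya = make_classification(n_samples=2000, n_features=20, random_state=0)   # "task A"
Xb, yb = make_classification(n_samples=2000, n_features=20, random_state=1)   # "task B"

net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)

for _ in range(30):                      # phase 1: learn task A only
    net.partial_fit(Xa, ya, classes=[0, 1])
acc_a_before = net.score(Xa, ya)

for _ in range(30):                      # phase 2: learn task B, never revisiting task A
    net.partial_fit(Xb, yb)

print("task A accuracy before/after learning B:", acc_a_before, net.score(Xa, ya))
# The drop in task-A accuracy is the catastrophic forgetting discussed above.
```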
13,317 | The Wikipedia article for the universal approximation theorem cites a version of the universal approximation theorem for Lebesgue-measurable functions from this conference paper . However, the paper does not include the proofs of the theorem. Does anybody know where the proof can be found? | There are multiple papers on the topic because there have been multiple attempts to prove that neural networks are universal (i.e. they can approximate any continuous function) from slightly different perspectives and using slightly different assumptions (e.g. assuming that certain activation functions are used). Note that these proofs tell you that neural networks can approximate any continuous function, but they do not tell you exactly how you need to train your neural network so that it approximates your desired function. Moreover, most papers on the topic are quite technical and mathematical, so, if you do not have a solid knowledge of approximation theory and related fields, they may be difficult to read and understand. Nonetheless, below there are some links to some possibly useful articles and papers. The article A visual proof that neural nets can compute any function (by Michael Nielsen) should give you some intuition behind the universality of neural networks, so this is probably the first article you should read. Then you should probably read the paper Approximation by Superpositions of a Sigmoidal Function (1989), by G. Cybenko, who proves that multi-layer perceptrons (i.e. feed-forward neural networks with at least one hidden layer) can approximate any continuous function . However, he assumes that the neural network uses sigmoid activations functions, which, nowadays, have been replaced in many scenarios by ReLU activation functions. Other works (e.g. [1] , [2] ) showed that you don't necessarily need sigmoid activation functions, but only certain classes of activation functions do not make neural networks universal. The universality property (i.e. the ability to approximate any continuous function) has also been proved in the case of convolutional neural networks . For example, see Universality of Deep Convolutional Neural Networks (2020), by Ding-Xuan Zhou, which shows that convolutional neural networks can approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough. See also Refinement and Universal Approximation via Sparsely Connected ReLU Convolution Nets (by A. Heinecke et al., 2020) See also page 632 of Recurrent Neural Networks Are Universal Approximators (2006), by Schäfer et al., which shows that recurrent neural networks are universal function approximators. See also On the computational power of neural nets (1992, COLT) by Siegelmann and Sontag. This answer could also be useful. For graph neural networks , see Universal Function Approximation on Graphs (by Rickard Brüel Gabrielsson, 2020, NeurIPS) | {
"source": [
"https://ai.stackexchange.com/questions/13317",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/27047/"
]
} |
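The theorems cited above are existence results, but a quick empirical illustration (not a proof) is easy to run: a single hidden layer with enough units can drive the error on a continuous 1-D target very low. The target function, layer width and solver settings below are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(-3, 3, 2000).reshape(-1, 1)
y = np.sin(2 * X).ravel() + 0.1 * X.ravel() ** 2      # an arbitrary continuous target

net = MLPRegressor(hidden_layer_sizes=(200,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, y)

print("max absolute error on the training interval:",
      np.max(np.abs(net.predict(X) - y)))
# The theorem says the error can be made arbitrarily small by widening the hidden
# layer; it does not say gradient descent will always find such weights.
```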
13,775 | I just finished a 1-year Data Science master's program where we were taught R. I found that Python is more popular and has a larger community in AI. What are the advantages that Python may have over R in terms of features applicable to the field of Data Science and AI (other than popularity and larger community)? What positions in Data Science and AI would be more Python-heavy than R-heavy (especially comparing industry, academic, and government job positions)? In short, is Python worthwhile in all job situations or can I get by with only R in some positions? | I want to reframe your question. Don't think about switching, think about adding. In data science you'll be able to go very far with either python or r but you'll go farthest with both. Python and r integrate very well, thanks to the reticulate package. I often tidy data in r because it is easier for me, train a model in python to benefit from superior speed and visualize the outcomes in r in beautiful ggplot all in one notebook! If you already know r there is no sense in abandoning it, use it where sensible and easy to you. But it is 100% a good idea to add python for many uses. Once you feel comfortable in both you'll have a workflow that fits you best dominated by your favorite language. | {
"source": [
"https://ai.stackexchange.com/questions/13775",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/27652/"
]
} |
14,224 | If the original purpose for developing AI was to help humans in some tasks and that purpose still holds, why should we care about its explainability? For example, in deep learning, as long as the intelligence helps us to the best of their abilities and carefully arrives at its decisions, why would we need to know how its intelligence works? | As argued by Selvaraju et al. , there are three stages of AI evolution, in which interpretability is helpful. In the early stages of AI development, when AI is weaker than human performance, transparency can help us build better models . It can give a better understanding of how a model works and helps us answer several key questions. For example, why a model works in some cases and doesn't in others, why some examples confuse the model more than others, why these types of models work and the others don't, etc. When AI is on par with human performance and ML models are starting to be deployed in several industries, it can help build trust for these models. I'll elaborate a bit on this later, because I think that it is the most important reason. When AI significantly outperforms humans (e.g. AI playing chess or Go), it can help with machine teaching (i.e. learning from the machine on how to improve human performance on that specific task). Why is trust so important? First, let me give you a couple of examples of industries where trust is paramount: In healthcare, imagine a Deep Neural Net performing diagnosis for a specific disease. A classic black box NN would just output a binary "yes" or "no". Even if it could outperform humans in sheer predictability, it would be utterly useless in practice. What if the doctor disagreed with the model's assessment, shouldn't he know why the model made that prediction; maybe it saw something the doctor missed. Furthermore, if it made a misdiagnosis (e.g. a sick person was classified as healthy and didn't get the proper treatment), who would take responsibility: the model's user? the hospital? the company that designed the model? The legal framework surrounding this is a bit blurry. Another example is self-driving cars. The same questions arise: if a car crashes, whose fault is it: the driver's? the car manufacturer's? the company that designed the AI? Legal accountability, is key for the development of this industry. In fact, according to many, this lack of trust has hindered the adoption of AI in many fields (sources: [1] , [2] , [3] ). While there is a running hypothesis that with more transparent, interpretable or explainable systems users will be better equipped to understand and therefore trust the intelligent agents (sources: [4] , [5] , [6] ). In several real-world applications, you can't just say "it works 94% of the time". You might also need to provide a justification... Government regulations Several governments are slowly proceeding to regulate AI and transparency seems to be at the center of all of this. The first to move in this direction is the EU, which has set several guidelines where they state that AI should be transparent (sources: [7] , [8] , [9] ). For instance, the GDPR states that if a person's data has been subject to "automated decision-making" or "profiling" systems, then he has a right to access "meaningful information about the logic involved" ( Article 15, EU GDPR ) Now, this is a bit blurry, but there is clearly the intent of requiring some form of explainability from these systems. 
The general idea the EU is trying to convey is that "if you have an automated decision-making system affecting people's lives then they have a right to know why a certain decision has been made." For example, if a bank has an AI accepting and declining loan applications, then the applicants have a right to know why their application was rejected. To sum up... Explainable AI is necessary because: it gives us a better understanding of our models, which helps us improve them; in some cases, we can learn from AI how to make better decisions in some tasks; it helps users trust AI, which leads to a wider adoption of AI; and deployed AIs in the (not too distant) future might be required to be more "transparent".
"source": [
"https://ai.stackexchange.com/questions/14224",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/16565/"
]
} |
15,449 | We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous. How could artificial intelligence harm us? | tl;dr There are many valid reasons why people might fear (or better be concerned about ) AI, not all involve robots and apocalyptic scenarios. To better illustrate these concerns, I'll try to split them into three categories. Conscious AI This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The terminator" , "The Matrix" , "Age of Ultron" . The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, robot" , which was also adapted as a movie). The basic premise under most of these works are that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on "ambiguous intelligence" (which I think is more realistic). In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to "think" and become conscious. Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans. Using AI with malicious intent Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today , that don't involve robots!
The second category I want to focus a bit more on is several malicious uses of today's AI. I'll focus only on AI applications that are available today . Some examples of AI that can be used for malicious intent: DeepFake : a technique for imposing someone's face onto an image or video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1 , 2 , 3 With the use of mass surveillance systems and facial recognition software capable of recognizing millions of faces per second , AI can be used for mass surveillance. Even though when we think of mass surveillance we think of China, many western cities like London , Atlanta and Berlin are among the most-surveilled cities in the world . China has taken things a step further by adopting the social credit system , an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984. Influencing people through social media . Aside from recognizing users' tastes with the goal of targeted marketing and ad placements (a common practice by many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: 1 , 2 , 3 . Hacking . Military applications, e.g. drone attacks, missile targeting systems.
"source": [
"https://ai.stackexchange.com/questions/15449",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/29713/"
]
} |
15,730 | As a human being, we can think infinity. In principle, if we have enough resources (time etc.), we can count infinitely many things (including abstract, like numbers, or real). For example, at least, we can take into account integers. We can think, principally, and "understand" infinitely many numbers that are displayed on the screen. Nowadays, we are trying to design artificial intelligence which is capable at least human being. However, I am stuck with infinity. I try to find a way how can teach a model (deep or not) to understand infinity. I define "understanding' in a functional approach. For example, If a computer can differentiate 10 different numbers or things, it means that it really understand these different things somehow. This is the basic straight forward approach to "understanding". As I mentioned before, humans understand infinity because they are capable, at least, counting infinite integers, in principle. From this point of view, if I want to create a model, the model is actually a function in an abstract sense, this model must differentiate infinitely many numbers. Since computers are digital machines which have limited capacity to model such an infinite function, how can I create a model that differentiates infinitely many integers? For example, we can take a deep learning vision model that recognizes numbers on the card. This model must assign a number to each different card to differentiate each integer. Since there exist infinite numbers of integer, how can the model assign different number to each integer, like a human being, on the digital computers? If it cannot differentiate infinite things, how does it understand infinity? If I take into account real numbers, the problem becomes much harder. What is the point that I am missing? Are there any resources that focus on the subject? | I think this is a fairly common misconception about AI and computers, especially among laypeople. There are several things to unpack here. Let's suppose that there's something special about infinity (or about continuous concepts) that makes them especially difficult for AI. For this to be true, it must both be the case that humans can understand these concepts while they remain alien to machines, and that there exist other concepts that are not like infinity that both humans and machines can understand. What I'm going to show in this answer is that wanting both of these things leads to a contradiction. The root of this misunderstanding is the problem of what it means to understand . Understanding is a vague term in everyday life, and that vague nature contributes to this misconception. If by understanding, we mean that a computer has the conscious experience of a concept, then we quickly become trapped in metaphysics. There is a long running , and essentially open debate about whether computers can "understand" anything in this sense, and even at times, about whether humans can! You might as well ask whether a computer can "understand" that 2+2=4. Therefore, if there's something special about understanding infinity, it cannot be related to "understanding" in the sense of subjective experience. So, let's suppose that by "understand", we have some more specific definition in mind. Something that would make a concept like infinity more complicated for a computer to "understand" than a concept like arithmetic. 
Our more concrete definition for "understanding" must relate to some objectively measurable capacity or ability related to the concept (otherwise, we're back in the land of subjective experience). Let's consider what capacity or ability might we pick that would make infinity a special concept, understood by humans and not machines, unlike say, arithmetic. We might say that a computer (or a person) understands a concept if it can provide a correct definition of that concept. However, if even one human understands infinity by this definition, then it should be easy for them to write down the definition. Once the definition is written down, a computer program can output it. Now the computer "understands" infinity too. This definition doesn't work for our purposes. We might say that an entity understands a concept if it can apply the concept correctly. Again, if even the one person understands how to apply the concept of infinity correctly, then we only need to record the rules they are using to reason about the concept, and we can write a program that reproduces the behavior of this system of rules. Infinity is actually very well characterized as a concept, captured in ideas like Aleph Numbers . It is not impractical to encode these systems of rules in a computer, at least up to the level that any human understands them. Therefore, computers can "understand" infinity up to the same level of understanding as humans by this definition as well. So this definition doesn't work for our purposes. We might say that an entity "understands" a concept if it can logically relate that concept to arbitrary new ideas. This is probably the strongest definition, but we would need to be pretty careful here: very few humans (proportionately) have a deep understanding of a concept like infinity. Even fewer can readily relate it to arbitrary new concepts. Further, algorithms like the General Problem Solver can, in principal, derive any logical consequences from a given body of facts, given enough time. Perhaps under this definition computers understand infinity better than most humans, and there is certainly no reason to suppose that our existing algorithms will not further improve this capability over time. This definition does not seem to meet our requirements either. Finally, we might say that an entity "understands" a concept if it can generate examples of it. For example, I can generate examples of problems in arithmetic, and their solutions. Under this definition, I probably do not "understand" infinity, because I cannot actually point to or create any concrete thing in the real world that is definitely infinite. I cannot, for instance, actually write down an infinitely long list of numbers, merely formulas that express ways to create ever longer lists by investing ever more effort in writing them out. A computer ought to be at least as good as me at this. This definition also does not work. This is not an exhaustive list of possible definitions of "understands", but we have covered "understands" as I understand it pretty well. Under every definition of understanding, there isn't anything special about infinity that separates it from other mathematical concepts. So the upshot is that, either you decide a computer doesn't "understand" anything at all, or there's no particularly good reason to suppose that infinity is harder to understand than other logical concepts. 
If you disagree, you need to provide a concrete definition of "understanding" that does separate understanding of infinity from other concepts, and that doesn't depend on subjective experiences (unless you want to claim your particular metaphysical views are universally correct, but that's a hard argument to make). Infinity has a sort of semi-mystical status among the lay public, but it's really just like any other mathematical system of rules: if we can write down the rules by which infinity operates, a computer can do them as well as a human can (or better). | {
"source": [
"https://ai.stackexchange.com/questions/15730",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/19102/"
]
} |
15,820 | Is there any research on the development of attacks against artificial intelligence systems? For example, is there a way to generate a letter "A", which every human being in this world can recognize but, if it is shown to the state-of-the-art character recognition system, this system will fail to recognize it? Or spoken audio which can be easily recognized by everyone but will fail on the state-of-the-art speech recognition system. If there exists such a thing, is this technology a theory-based science (mathematics proved) or an experimental science (randomly add different types of noise and feed into the AI system and see how it works)? Where can I find such material? | Yes, there is some research on this topic, which can be called adversarial machine learning , which is more an experimental field. An adversarial example is an input similar to the ones used to train the model, but that leads the model to produce an unexpected outcome. For example, consider an artificial neural network (ANN) trained to distinguish between oranges and apples. You are then given an image of an apple similar to another image used to train the ANN, but that is slightly blurred. Then you pass it to the ANN, which unexpectedly predicts the object to be an orange. Several machine learning and optimization methods have been used to detect the boundary behaviour of machine learning models, that is, the unexpected behaviour of the model that produces different outcomes given two slightly different inputs (but that correspond to the same object). For example, evolutionary algorithms have been used to develop tests for self-driving cars. See, for example, Automatically testing self-driving cars with search-based procedural content generation (2019) by Alessio Gambi et al. | {
"source": [
"https://ai.stackexchange.com/questions/15820",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/30335/"
]
} |
15,857 | Is the gradient at a layer (of a feed-forward neural network) independent of the activations of the previous layers? I read this in a paper titled Mean Field Residual Networks: On the Edge of Chaos (2017). I am not sure how far this is true, because the error depends on those activations. | Yes, there is some research on this topic, which can be called adversarial machine learning , which is more an experimental field. An adversarial example is an input similar to the ones used to train the model, but that leads the model to produce an unexpected outcome. For example, consider an artificial neural network (ANN) trained to distinguish between oranges and apples. You are then given an image of an apple similar to another image used to train the ANN, but that is slightly blurred. Then you pass it to the ANN, which unexpectedly predicts the object to be an orange. Several machine learning and optimization methods have been used to detect the boundary behaviour of machine learning models, that is, the unexpected behaviour of the model that produces different outcomes given two slightly different inputs (but that correspond to the same object). For example, evolutionary algorithms have been used to develop tests for self-driving cars. See, for example, Automatically testing self-driving cars with search-based procedural content generation (2019) by Alessio Gambi et al. | {
"source": [
"https://ai.stackexchange.com/questions/15857",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/30384/"
]
} |
17,721 | I trained a simple CNN on the MNIST database of handwritten digits to 99% accuracy. I'm feeding in a bunch of handwritten digits, and non-digits from a document. I want the CNN to report errors, so I set a threshold of 90% certainty below which my algorithm assumes that what it's looking at is not a digit. My problem is that the CNN is 100% certain of many incorrect guesses. In the example below, the CNN reports 100% certainty that it's a 0. How do I make it report failure? My thoughts on this :
Maybe the CNN is not really 100% certain that this is a zero. Maybe it just thinks that it can't be anything else, and it's being forced to choose (because of normalisation on the output vector). Is there any way I can get insight into what the CNN "thought" before I forced it to choose? PS: I'm using Keras on Tensorflow with Python. Edit Because someone asked. Here is the context of my problem: This came from me applying a heuristic algorithm for segmentation of sequences of connected digits. In the image above, the left part is actually a 4, and the right is the curve bit of a 2 without the base. The algorithm is supposed to step through segment cuts, and when it finds a confident match, remove that cut and continue moving along the sequence. It works really well for some cases, but of course it's totally reliant on being able to tell if what it's looking at is not a good match for a digit. Here's an example of where it kind of did okay. My next best option is to do inference on all permutations and maximise combined score. That's more expensive. | The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. You want the model to produce meaningful calibrated probabilities that quantify the real confidence of the model. This is generally not possible with simple neural networks as they simply do not have this property, for this you need a Bayesian Neural Network (BNN). This kind of network learns a distribution of weights instead of scalar or point-wise weights, which then allow to encode model uncertainty, as then the distribution of the output is calibrated and has the properties you want. This problem is also called out of distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is untractable, so we use approximations. As a reference, one of these approximations is Deep Ensembles, which train several instances of a model in the same dataset and then average the softmax probabilities, and has good out of distribution detection properties. Check the paper here , in particular section 3.5 which shows results for OOD based on entropy of the ensemble probabilities. | {
"source": [
"https://ai.stackexchange.com/questions/17721",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/16871/"
]
} |
17,732 | The famous Nvidia paper Progressive Growing of GANs for Improved Quality, Stability, and Variation , the GAN can generate hyperrealistic human faces. But, in the very same paper, images of other categories are rather disappointing and there hasn't seemed to be any improvements since then. Why is it the case? Is it because they didn't have enough training data for other categories? Or is it due to some fundamental limitation of GAN? I have come across a paper talking about the limitations of GAN: Seeing What a GAN Cannot Generate . Anybody using GAN for image synthesis other than human faces? Any success stories? | The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. You want the model to produce meaningful calibrated probabilities that quantify the real confidence of the model. This is generally not possible with simple neural networks as they simply do not have this property, for this you need a Bayesian Neural Network (BNN). This kind of network learns a distribution of weights instead of scalar or point-wise weights, which then allow to encode model uncertainty, as then the distribution of the output is calibrated and has the properties you want. This problem is also called out of distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is untractable, so we use approximations. As a reference, one of these approximations is Deep Ensembles, which train several instances of a model in the same dataset and then average the softmax probabilities, and has good out of distribution detection properties. Check the paper here , in particular section 3.5 which shows results for OOD based on entropy of the ensemble probabilities. | {
"source": [
"https://ai.stackexchange.com/questions/17732",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/33082/"
]
} |
17,755 | We have hundreds of thousands of customers records, and we need to take the benefits of our data to train a model that will recognize fake entries or unrealistic ones for our platform, where customers are asked to enter their names, phone number and zip code. So, our attributes are name, phone number, zip code and IP address to train the model with. We have only data associated with real users. Can we train a model provided with only positive labels (as we do not have a negative dataset to train the model with)? | The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. You want the model to produce meaningful calibrated probabilities that quantify the real confidence of the model. This is generally not possible with simple neural networks as they simply do not have this property, for this you need a Bayesian Neural Network (BNN). This kind of network learns a distribution of weights instead of scalar or point-wise weights, which then allow to encode model uncertainty, as then the distribution of the output is calibrated and has the properties you want. This problem is also called out of distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is untractable, so we use approximations. As a reference, one of these approximations is Deep Ensembles, which train several instances of a model in the same dataset and then average the softmax probabilities, and has good out of distribution detection properties. Check the paper here , in particular section 3.5 which shows results for OOD based on entropy of the ensemble probabilities. | {
"source": [
"https://ai.stackexchange.com/questions/17755",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/33135/"
]
} |
18,576 | Background: It's well-known that neural networks offer great performance across a large number of tasks, and this is largely a consequence of their universal approximation capabilities . However, in this post I'm curious about the opposite: Question: Namely, what are some well-known cases, problems or real-world applications where neural networks don't do very well? Specification: I'm looking for specific regression tasks (with accessible data-sets) where neural networks are not the state-of-the-art. The regression task should be "naturally suitable", so no sequential or time-dependent data (in which case an RNN or reservoir computer would be more natural). | Here's a snippet from an article by Gary Marcus: In particular, they showed that standard deep learning nets often fall
apart when confronted with common stimuli rotated in three dimensional
space into unusual positions, like the top right corner of this
figure, in which a schoolbus is mistaken for a snowplow: [image omitted] Mistaking an overturned schoolbus is
not just a mistake, it’s a revealing mistake: one that shows not only
that deep learning systems can get confused, but they are challenged
in making a fundamental distinction known to all philosophers: the
distinction between features that are merely contingent associations
(snow is often present when there are snowplows, but not necessary)
and features that are inherent properties of the category itself
(snowplows ought, other things being equal, to have plows, unless e.g. they
have been dismantled). We’d already seen similar examples with
contrived stimuli, like Anish Athalye’s carefully designed, 3-d
printed foam covered dimensional baseball that was mistaken for an
espresso. Alcorn’s results — some from real photos from the natural
world — should have pushed worry about this sort of anomaly to the top
of the stack. Please note that the opinions of the author are his alone and I do not necessarily share all of them with him. Edit: Some more fun stuff 1) DeepMind's neural network that could play Breakout and Starcraft saw a dramatic dip in performance when the paddle was moved up by a few pixels. See: General Game Playing With Schema Networks While in the latter, it performed well with one race of the character but not on a different map and with different characters. Source 2) AlphaZero searches just 80,000 positions per second in chess and
40,000 in shogi, compared to 70 million for Stockfish and 35 million
for elmo. What the team at Deepmind did was to build a very good search algorithm. A search algorithm that includes the capability to remember facets of previous searches to apply better results to new searches. This is very clever; it undoubtedly has immense value in many areas, but it cannot be considered general intelligence. See: AlphaZero: How Intuition Demolished Logic (Medium) | {
"source": [
"https://ai.stackexchange.com/questions/18576",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/31649/"
]
} |
18,587 | It's an idea I heard a while back but couldn't remember the name of. It involves the existence and development of an AI that will eventually rule the world and that if you don't fund or progress the AI then it will see you as "hostile" and kill you. Also, by knowing about this concept, it essentially makes you a candidate for such consideration, as people who didn't know about it won't understand to progress such an AI. From my understanding, this idea isn't taken that seriously, but I'm curious to know the name nonetheless. | If I'm not mistaken you're looking for Roko's Basilisk , in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence | {
"source": [
"https://ai.stackexchange.com/questions/18587",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/34205/"
]
} |
20,075 | I am reading the article How Transformers Work where the author writes Another problem with RNNs, and LSTMs, is that it’s hard to parallelize the work for processing sentences, since you have to process word by word. Not only that but there is no model of long and short-range dependencies . Why exactly does the transformer do better than RNN and LSTM in long-range context dependencies ? | I'll list some bullet points of the main innovations introduced by transformers , followed by bullet points of the main characteristics of the other architectures you mentioned, so we can then compared them. Transformers Transformers ( Attention is all you need ) were introduced in the context of machine translation with the purpose to avoid recursion in order to allow parallel computation (to reduce training time) and also to reduce drops in performance due to long dependencies. The main characteristics are: Non sequential : sentences are processed as a whole rather than word by word. Self Attention : this is the newly introduced 'unit' used to compute similarity scores between words in a sentence. Positional embeddings : another innovation introduced to replace recurrence. The idea is to use fixed or learned weights which encode information related to a specific position of a token in a sentence. The first point is the main reason why transformer do not suffer from long dependency issues. The original transformers do not rely on past hidden states to capture dependencies with previous words. They instead process a sentence as a whole. That is why there is no risk to lose (or "forget") past information. Moreover, multi-head attention and positional embeddings both provide information about the relationship between different words. RNN / LSTM Recurrent neural networks and Long-short term memory models, for what concerns this question, are almost identical in their core properties: Sequential processing : sentences must be processed word by word. Past information retained through past hidden states : sequence to sequence models follow the Markov property: each state is assumed to be dependent only on the previously seen state. The first property is the reason why RNN and LSTM can't be trained in parallel. In order to encode the second word in a sentence I need the previously computed hidden states of the first word, therefore I need to compute that first. The second property is a bit more subtle, but not hard to grasp conceptually. Information in RNN and LSTM are retained thanks to previously computed hidden states. The point is that the encoding of a specific word is retained only for the next time step, which means that the encoding of a word strongly affects only the representation of the next word, so its influence is quickly lost after a few time steps. LSTM (and also GruRNN) can boost a bit the dependency range they can learn thanks to a deeper processing of the hidden states through specific units (which comes with an increased number of parameters to train) but nevertheless the problem is inherently related to recursion. Another way in which people mitigated this problem is to use bi-directional models. These encode the same sentence from the start to end, and from the end to the start, allowing words at the end of a sentence to have stronger influence in the creation of the hidden representation. However, this is just a workaround rather than a real solution for very long dependencies. 
CNN Convolutional neural networks are also widely used in NLP since they are quite fast to train and effective with short texts. The way they tackle dependencies is by applying different kernels to the same sentence, and indeed since their first application to text ( Convolutional Neural Networks for Sentence Classification ) they have been implemented as multichannel CNNs. Why do different kernels allow the model to learn dependencies? Because a kernel of size 2, for example, would learn relationships between pairs of words, a kernel of size 3 would capture relationships between triplets of words, and so on. The evident problem here is that the number of different kernels required to capture dependencies among all possible combinations of words in a sentence would be enormous and impractical, because of the exponentially growing number of combinations when increasing the maximum length of input sentences. To summarize, Transformers are better than all the other architectures because they totally avoid recursion, by processing sentences as a whole and by learning relationships between words thanks to multi-head attention mechanisms and positional embeddings. Nevertheless, it must be pointed out that transformers too can capture only dependencies within the fixed input size used to train them, i.e. if I use a maximum sentence size of 50, the model will not be able to capture dependencies between the first word of a sentence and words that occur more than 50 words later, like in another paragraph. Newer transformers like Transformer-XL try to overcome exactly this issue, by re-introducing a form of recursion: hidden states of already encoded sentences are stored and leveraged in the subsequent encoding of the next sentences. | {
"source": [
"https://ai.stackexchange.com/questions/20075",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/9863/"
]
} |
22,957 | The transformer, introduced in the paper Attention Is All You Need , is a popular new neural network architecture that is commonly viewed as an alternative to recurrent neural networks, like LSTMs and GRUs. However, having gone through the paper, as well as several online explanations, I still have trouble wrapping my head around how they work. How can a non-recurrent structure be able to deal with inputs of arbitrary length? | Actually, there is usually an upper bound on the input length of transformers, because they cannot handle very long sequences. At the current stage, that limit is usually set to 512 or 1024 tokens. However, if you are asking about handling inputs of varying sizes, adding a padding token such as [PAD] , as in the BERT model, is a common solution. The positions of [PAD] tokens can be masked in self-attention and therefore have no influence. Let's say we use a transformer model with a sequence-length limit of 512, and we pass it an input sequence of 103 tokens. We pad it to 512 tokens. In the attention layer, positions 104 to 512 are all masked, that is, they are not attending or being attended to. | {
"source": [
"https://ai.stackexchange.com/questions/22957",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/12201/"
]
} |
22,959 | I recently read some introductions to AI alignment, AIXI and decision theory things. As far as I understood, one of the main problems in AI alignment is how to define a utility function well, not causing something like the paperclip apocalypse. Then a question comes to my mind that whatever the utility function would be, we need a computer to compute the utility and reward, so that there is no way to prevent AGI from seeking it to manipulate the utility function to always give the maximum reward. Just like we humans know that we can give happiness to ourselves in chemical ways and some people actually do so. Is there any way to prevent this from happening? Not just protecting the utility calculator physically from AGI (How can we sure it works forever?), but preventing AGI from thinking of it? | Actually, there is usually an upper bound for inputs of transformers, due to the inability of handling long-sequence. Usually, the value is set as 512 or 1024 at current stage. However, if you are asking handling the various input size, adding padding token such as [PAD] in BERT model is a common solution. The position of [PAD] token could be masked in self-attention, therefore, causes no influence. Let's say we use a transformer model with 512 limit of sequence length, then we pass a input sequence of 103 tokens. We padded it to 512 tokens. In the attention layer, positions from 104 to 512 are all masked, that is, they are not attending or being attended. | {
"source": [
"https://ai.stackexchange.com/questions/22959",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/40196/"
]
} |
38,270 | After seeing StackOverflow's banning of ChatGPT , I explored it out of curiosity. It's marvellous as it can write code by itself! Later to check if it knows chess as well like Google-Deepmind's AlphaZero AI, I asked below questions: Me : Does openai know how to play chess? OpenAI : It is not clear whether OpenAI knows how to play chess. As an artificial intelligence research organization, OpenAI has developed a number of advanced machine learning algorithms and techniques that allow AI systems to learn and adapt to new tasks and situations. ... Me : In a chess if white has a king & bishop and if black has only a king, then who can win with a perfect play? OpenAI : In a chess game where white has a king and a bishop and black has only a king, it is possible for either player to win with perfect play . This type of endgame is known as a king and bishop versus king endgame, and it is considered one of the most challenging in chess. ... Now this answer is entirely wrong. Because as soon as only a bishop (or a knight) remains apart from the kings, it's instantly declared as a draw ! Question : Shouldn't the AI reject my question on a subject which is not in its known criteria? It does so for many other subjects. Note : Should we replace bishop = rook then ChatGPT answers exactly the same answer with replacing those pieces. However that happens to be true. | ChatGPT is a large language model. That means it's very good at stringing together words in ways that humans tend to use them. It's able to construct sentences that are grammatically correct and sound natural, for the most part, because it's been trained on language. Because it's good at stringing together words, it's able to take your prompt and generate words in a grammatically correct way that's similar to what it's seen before.
But that's all that it's doing: generating words and making sure it sounds natural. It doesn't have any built-in fact checking capabilities, and the manual limitations that OpenAI placed can be fairly easily worked around. Someone in the OpenAI Discord server a few days ago shared a screenshot of the question "What mammal lays the largest eggs?" ChatGPT confidently declared that the elephant lays the largest eggs of any mammal. While much of the information that ChatGPT was trained on is accurate, always keep in mind that it's just stringing together words with no way to check if what it's saying is accurate. Its sources may have been accurate, but just writing in the style of your sources doesn't mean that the results will themselves be true. | {
"source": [
"https://ai.stackexchange.com/questions/38270",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/64749/"
]
} |
39,293 | Sorry if this question makes no sense. I'm a software developer but know very little about AI. Quite a while ago, I read about the Chinese room, and the person inside who has had a lot of training/instructions how to combine symbols, and, as a result, is very good at combining symbols in a "correct" way, for whatever definition of correct. I said "training/instructions" because, for the purpose of this question, it doesn't really make a difference if the "knowledge" was acquired by parsing many many examples and getting a "feeling" for what's right and what's wrong (AI/learning), or by a very detailed set of instructions (algorithmic). So, the person responds with perfectly reasonable sentences, without ever understanding Chinese, or the content of its input. Now, as far as I understand ChatGPT (and I might be completely wrong here), that's exactly what ChatGPT does. It has been trained on a huge corpus of text, and thus has a very good feeling which words go together well and which don't, and, given a sentence, what's the most likely continuation of this sentence. But that doesn't really mean it understands the content of the sentence, it only knows how to chose words based on what it has seen. And because it doesn't really understand any content, it mostly gives answers that are correct, but sometimes it's completely off because it "doesn't really understand Chinese" and doesn't know what it's talking about. So, my question: is this "juggling of Chinese symbols without understanding their meaning" an adequate explanation of how ChatGPT works, and if not, where's the difference? And if yes, how far is AI from models that can actually understand (for some definition of "understand") textual content? | Yes, the Chinese Room argument by John Searle essentially demonstrates that at the very least it is hard to locate intelligence in a system based on its inputs and outputs. And the ChatGPT system is built very much as a machine for manipulating symbols according to opaque rules, without any grounding provided for what those symbols mean. The large language models are trained without ever getting to see, touch, or get any experience reference for any of their language components, other than yet more written language. It is much like trying to learn the meaning of a word by looking up its dictionary definition and finding that composed of other words that you don't know the meaning of, recursively without any way of resolving it. If you possessed such a dictionary and no knowledge of the words defined, you would still be able to repeat those definitions, and if they were received by someone who did understand some of the words, the result would look like reasoning and "understanding". But this understanding is not yours, you are simply able to retrieve it on demand from where someone else stored it. This is also related to the symbol grounding problem in cognitive science. It is possible to argue that pragmatically the "intelligence" shown by the overall system is still real and resides somehow in the rules of how to manipulate the symbols. This argument and other similar ones try to side-step or dismiss some proposed hard problems in AI - for instance, by focusing on behaviour of the whole system and not trying to address the currently impossible task of asking whether any system has subjective experience. 
This is beyond the scope of this answer (and not really what the question is about), but it is worth noting that the Chinese Room argument has attracted some criticism, and is not the only way to think about issues with AI systems based on language and symbols. I would agree with you that the latest language models, ChatGPT included, are good examples of the Chinese Room made real. The room part, that is: there is no pretend human in the middle, but actually that's not hugely important - the role of the human in the Chinese room is to demonstrate that, from the perspective of an entity inside the room processing a database of rules, nothing needs to possess any understanding or subjective experience that is relevant to the text. Now that next-symbol predictors (which all Large Language Models are to date) are demonstrating quite sophisticated, even surprising behaviour, this may lead to some better insights into the role that symbol-to-symbol references can take in more generally intelligent systems. | {
"source": [
"https://ai.stackexchange.com/questions/39293",
"https://ai.stackexchange.com",
"https://ai.stackexchange.com/users/68600/"
]
} |
1 | I'm making a simple Arduino web server and I want to keep it turned on all the time, so it must be able to keep working continuously. I'm using an Arduino Uno with an Ethernet Shield.
It's powered with a simple outlet power supply, 5V @ 1A. My Questions: Will I have any problems leaving the Arduino turned on all the time? Is there some other Arduino board better recommended for this? Are there any precautions that I need to heed regarding this? | You shouldn't have any issues keeping it on all the time; however, something to consider is any counters you may have, like the use of millis() . From the Arduino docs on millis : This number will overflow (go back to zero), after approximately 50 days. So for projects that are on for long periods of time, you may not see an issue immediately, but something like this could pop up and cause errors down the road. | {
"source": [
"https://arduino.stackexchange.com/questions/1",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/3/"
]
} |
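A minimal sketch illustrating the millis() point from the answer above. The one-minute interval and the placeholder comments are assumptions for the example; the important detail is that the comparison uses unsigned subtraction, which keeps working across the ~50-day rollover:

const unsigned long INTERVAL_MS = 60000UL;  // housekeeping once a minute (arbitrary choice)
unsigned long lastRun = 0;

void setup() {
  // start the Ethernet shield / web server here as usual
}

void loop() {
  // serve client requests here...

  unsigned long now = millis();
  // Unsigned subtraction wraps cleanly, so this test stays correct
  // even when millis() overflows back to zero after ~50 days.
  if (now - lastRun >= INTERVAL_MS) {
    lastRun = now;
    // periodic housekeeping goes here
  }
}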
6 | For solar & battery powered projects, decreasing power consumption is a necessity. I know a little bit about using a timer and interrupt to put the microcontroller into sleep when it isn't doing anything. I have also read that you can disable some peripherals to further save power. Here is a page explaining these: Power Saving Techniques My question: What other techniques are there to save power? Are there any libraries that make using these AVR features easier? | Note: I wanted to point out that, even though my answer was accepted, the reader (you) should really read Anindo's answer first if you really want to save energy with any Arduino board. Once you address those items in your design, then you should read my answer and worry about how to set the MCU into low power mode. Having said that, there are several techniques to make an Arduino save power. I'll list a few and then point you to a page that explains them all in more detail. While the controller isn't doing anything important (between one read of a sensor and the next, for example), you can put the controller into one of the sleep modes below, with the command set_sleep_mode (SLEEP_MODE_PWR_DOWN) . Next to each mode is the approximate power consumption of each mode. SLEEP_MODE_IDLE: 15 mA SLEEP_MODE_ADC: 6.5 mA SLEEP_MODE_PWR_SAVE: 1.62 mA SLEEP_MODE_EXT_STANDBY: 1.62 mA SLEEP_MODE_STANDBY : 0.84 mA SLEEP_MODE_PWR_DOWN : 0.36 mA Disable brown-out detection (the circuitry that turns off the controller when low voltage is detected). Turn off ADC (analog to digita conversion) Use the internal clock Then, when you put the controller to sleep, you need to use one or more mechanisms below to wake up the controller and do something with it: Wake up with a signal Wake up with a timer This is a summary I made from - Nick Gammon's article: Power saving techniques for microprocessors . That article applies mostly to ATmega328P, but the technique applies to other Arduino compatible controllers as well. As TheDoctor said well, you will need to check the datashet to make sure your controller suports any of those techniques and how to do it more precisely. | {
"source": [
"https://arduino.stackexchange.com/questions/6",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/11/"
]
} |
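A minimal power-down sketch based on the avr/sleep.h API mentioned in the answer above, assuming an ATmega328P board and a wake-up signal wired to pin 2 (both assumptions for the example):

#include <avr/sleep.h>

void wakeUp() {
  // Empty ISR: its only job is to bring the MCU out of sleep.
}

void goToSleep() {
  ADCSRA = 0;                                // turn off the ADC to save a little more power
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);       // deepest sleep mode listed above
  noInterrupts();
  sleep_enable();
  attachInterrupt(digitalPinToInterrupt(2), wakeUp, LOW);  // only a LOW level wakes INT0 from power-down
  interrupts();
  sleep_cpu();                               // execution stops here until the interrupt fires
  sleep_disable();
  detachInterrupt(digitalPinToInterrupt(2));
}

void setup() {
  pinMode(2, INPUT_PULLUP);
}

void loop() {
  // read the sensor / do the work here, then sleep until the next event
  goToSleep();
}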
17 | I made an awesome program the other day, and I wanted to upload it to my Arduino. After clicking the upload button, some mean dude named avr came along and stopped me, saying: avrdude: stk500_getsync(): not in sync: resp=0x00 All I want to do is just upload my program, but avr won't let me. He's even unintelligible, so can someone tell me what the heck he's trying to say and how to get rid of him? i.e.: Whenever I try to upload a program to my Arduino, I get this error message: avrdude: stk500_getsync(): not in sync: resp=0x00 What does this mean, and how can I fix it? | This is caused by a generic connection error between your computer and the Arduino, and can result from many different specific problems. Here are some easy things that can often fix this error: Disconnect and reconnect the USB cable. Press the reset button on the board. Restart the Arduino IDE. Make sure you select the right board in Tools ► Board ► , e.g. If you are using the Duemilanove 328, select that instead of Duemilanove 128. The board should say what version it is on the microchip. Make sure you selected the right port in Tools ► Serial Port ► . One way to figure out which port it is on is by following these steps: Disconnect the USB cable. Go to Tools ► Serial Port ► and see which ports are listed (e.g. COM4 COM5 COM14). Reconnect the USB cable. Go back to Tools ► Serial Port ► , and see which port appeared that wasn't there before. Make sure digital pins 0 and 1 do not have any parts connected, including any shields. If none of those work, you will want to try to isolate the issue by replacing things: try a different computer on the same arduino, try a different arduino on the same computer, and try using a different USB cable. If the issue is with the computer: Double-check all computer-related issues in the "easy fixes" list above. Reinstall the IDE. Reinstall the drivers. If the issue is with the Arduino: Double-check all board-related issues in the "easy fixes" list above. Make sure the microcontroller is seated correctly. You may need to burn the bootloader . Replace the microcontroller if you have another one handy nearby. You may have bricked your Arduino. Sorry :( | {
"source": [
"https://arduino.stackexchange.com/questions/17",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/37/"
]
} |
40 | The basic Arduino IDE lacks a lot of the sophistication present in other IDEs such as code completion, code collapsing, folder organisation, etc. Are there other IDEs that allow programming in C or C++ and improve on these aspects? | There is an Arduino Eclipse plugin named sloeber ! And Eclipse is an awesome cross-platform open-source IDE! Stino is good. It requires Sublime Text 2 which has an indefinite free trial. Visual Micro provides a full build system with debugger for Arduino in Microsoft Visual Studio . For advanced users it also allows the underlying Arduino source code to be viewed or modified, and enables projects and/or libraries to be edited from any location and shared in multiple projects alongside true cross-platform intellisense . For more, go to The Official Arduino Site. For development on Windows, there is a special edition of the official Arduino IDE called arduino-erw . This edition is much better than the last one because it fixed a lot of lagging and stability issues! | {
"source": [
"https://arduino.stackexchange.com/questions/40",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/76/"
]
} |
61 | I would like to start the development of some basic Arduino projects but I don't own an Arduino board yet. Is there a way I can write my code and emulate/test it using a desktop computer so after my board arrives I just have to upload and run my project on it? | There are a whole slew of Arduino simulators out there, many free, and some paid products as well. The CodeBlocks Arduino development environment includes a free Arduino simulator, still under development but functional. Simuino simulates the Arduino Uno and Mega pins - not a pretty-looking realistic simulator, but it works. The Python based Arduino Simulator is another option, that plays well with the official IDE Virtronics Simulator for Arduino looks promising, but I don't see why I would pay $14.99 for it, when I could buy one or more actual Arduino clones for that price Many other Arduino simulators are out there if you search, and new ones are being announced, even crowdfunded, all the time. | {
"source": [
"https://arduino.stackexchange.com/questions/61",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/87/"
]
} |
85 | I wanted to make a fairly simple circuit which would flash a series of LEDs in sequence, using my Arduino Uno (more specifically, a SainSmart clone). I wrote my sketch and it compiled fine. After that, I connected 8 LEDS+resistors to pins 0 through 7, and then connected the Uno to my computer via USB. I've uploaded sketches successfully in the past, so I'm sure my settings and drivers etc. are correct. However, when I tried to upload my sketch this time, it didn't work. I tried removing everything I'd connected to the Arduino's pins, and suddenly the upload worked again. Why does this happen? Does it mean I have to disconnect everything from the board every time I upload a sketch? | The problem is specifically pins 0 and 1. Although they can be used as regular digital IO pins, they also serve as the RX and TX pins for the Uno's serial port. The USB connection (for uploading sketches etc.) is routed to the same pins internally. Unfortunately that means anything connected on pins 0 and 1 can interfere with the serial connection, preventing communication via USB. In short, it's not necessary to disconnect everything when uploading a sketch. It should only be necessary to disconnect anything from pins 0 and 1. Rather than going through that hassle every time a sketch is uploaded though, it may be best just to avoid using those pins unless necessary (e.g. you run out of other pins, or your project needs a serial connection to another device). | {
"source": [
"https://arduino.stackexchange.com/questions/85",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/42/"
]
} |
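If the project does need a serial link to another device, one way to keep pins 0 and 1 free for uploading is a software serial port on other pins. A minimal sketch with the standard SoftwareSerial library (pins 10 and 11 are an arbitrary choice for the example):

#include <SoftwareSerial.h>

SoftwareSerial deviceSerial(10, 11);  // RX, TX - leaves hardware pins 0 and 1 free for USB uploads

void setup() {
  Serial.begin(9600);        // USB serial monitor
  deviceSerial.begin(9600);  // external device
}

void loop() {
  // forward anything the device sends to the serial monitor
  if (deviceSerial.available() > 0) {
    Serial.write(deviceSerial.read());
  }
}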
105 | I am not very skilled with the C Language and I was wondering if there is a way in which python could be used to program an Arduino. This would most likely require a different IDE in order to be able to debug the scripts them self. | It's going to be extremely difficult to get any kind of Python script running directly on the Arduino. The reason is that it's an interpreted language, so you would need the interpreter on-board in addition to the plain text script. There's probably not going to be enough memory for all of that. Your best bet would probably be finding a way to compile a Python script to native machine code (which is how C/C++ works). I believe there are projects around to do something like that for other platforms, but (as far as I know) none which does it successfully for Arduino yet. You might find some more useful information on this question at Stack Overflow: Is there a way to "compile" Python code onto an Arduino (Uno) . | {
"source": [
"https://arduino.stackexchange.com/questions/105",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/24/"
]
} |
117 | Is it possible to have more than 14 output pins on the Arduino, I am working on a project in which I need to light up several LEDs individually. I only have an Arduino Uno, and I don't want to get a Mega. | A common way to expand the set of available output pins on the Arduino is to use shift registers like the 74HC595 IC ( link to datasheet ). You need 3 pins to control these chips: Clock Latch Data In a program, you pass the data one bit at a time to the shift register using the shiftOut() command , like so: shiftOut(dataPin, clockPin, MSBFIRST, data); With that command, you set each of the 8 outputs on the 595 IC with the 8 bits in the data variable. With one 595, you gain 5 pins (8 on the IC, but you spend 3 to talk to it). To get more outputs, you can daisy-chain a series of 595s together, by connecting each one's serial-out pin to the data pin of the next one. You also must connect together the clock and latch pins of all of the 595 ICs. The resulting circuit (using one 595) would look like this: The figure above was taken from this codeproject.com webpage: Arduino Platform - Working with Shift Registers The latch pin is used to keep the 595 outputs steady while you are shifting out data into it, like so: digitalWrite(latchPin, LOW);
shiftOut(dataPin, clockPin, MSBFIRST, data);
digitalWrite(latchPin, HIGH); | {
"source": [
"https://arduino.stackexchange.com/questions/117",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/24/"
]
} |
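A complete sketch for one 74HC595, to put the snippets above in context. The pin numbers are an arbitrary choice for the example, and the call uses the four-argument form of the core shiftOut() function (data pin, clock pin, bit order, value):

const int latchPin = 8;   // ST_CP of the 74HC595
const int clockPin = 12;  // SH_CP
const int dataPin  = 11;  // DS

void setup() {
  pinMode(latchPin, OUTPUT);
  pinMode(clockPin, OUTPUT);
  pinMode(dataPin, OUTPUT);
}

void loop() {
  // light each of the 8 outputs in turn
  for (int i = 0; i < 8; i++) {
    byte data = 1 << i;
    digitalWrite(latchPin, LOW);                  // hold the outputs steady while shifting
    shiftOut(dataPin, clockPin, MSBFIRST, data);  // push 8 bits into the register
    digitalWrite(latchPin, HIGH);                 // latch the new pattern onto the output pins
    delay(200);
  }
}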
132 | There are some pins on the Arduino which I haven't been able to find out anything about: IOREF AREF An unlabeled one next to IOREF What are they? | AREF: This is the voltage reference for the analog-to-digital converter (ADC). It can be used instead of the standard 5V reference for the top end of the analog range – for example, if you wanted to use the ADC to monitor a signal that had a 0-1.5 volt range, you could get the full scale of the ADC by connecting AREF to a 1.5V signal. DO NOT CONNECT A SIGNAL OUTSIDE THE 0V TO 5V RANGE! Note that in order for this to work, you must run analogReference(EXTERNAL); before using analogRead() . Also: After changing the analog reference, the first few readings from analogRead() may not be accurate. For more information, see AnalogReference . IOREF: This is a voltage corresponding to the I/O of that board; for example, an Uno would supply 5V on this pin, but a Due would supply 3.3V. Sending a signal to this pin does nothing. Unlabeled pin: This pin is unused, but is there to ensure compatibility with future products. It is not connected to anything on any R3 boards I have seen. | {
"source": [
"https://arduino.stackexchange.com/questions/132",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/144/"
]
} |
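A small sketch showing the analogReference(EXTERNAL) usage described above, assuming a 1.5 V reference is wired to AREF (the voltage value and the use of A0 are assumptions for the example):

void setup() {
  Serial.begin(9600);
  analogReference(EXTERNAL);   // full scale of the ADC is now whatever is on AREF
  for (int i = 0; i < 5; i++) {
    analogRead(A0);            // throw away the first few readings after changing the reference
  }
}

void loop() {
  int raw = analogRead(A0);             // 0..1023 now spans 0 V .. V(AREF)
  float volts = raw * (1.5 / 1023.0);   // assumes 1.5 V on AREF
  Serial.println(volts);
  delay(500);
}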
176 | Say I have some variables that I want to print out to the terminal, what's the easiest way to print them in a string? Currently I do something like this: Serial.print("Var 1:");Serial.println(var1);
Serial.print(" Var 2:");Serial.println(var2);
Serial.print(" Var 3:");Serial.println(var3); Is there a better way to do this? | ardprintf is a function that I hacked together which simulates printf over the serial connection. This function (given at the bottom) can be pasted in the beginning of the files where the function is needed. It should not create any conflicts. It can be called similar to printf . See it in action in this example: void setup()
{
Serial.begin(9600);
}
void loop()
{
int l=2;
char *j = "test";
long k = 123456789;
char s = 'g';
float f = 2.3;
ardprintf("test %d %l %c %s %f", l, k, s, j, f);
delay(5000);
} The output as expected is: test 2 123456789 g test 2.30 The function prototype is: int ardprintf(char *, ...); It returns the number of arguments detected in the function call. This is the function definition: #ifndef ARDPRINTF
#define ARDPRINTF
#define ARDBUFFER 16
#include <stdarg.h>
#include <Arduino.h>
int ardprintf(char *str, ...)
{
int i, count=0, j=0, flag=0;
char temp[ARDBUFFER+1];
for(i=0; str[i]!='\0';i++) if(str[i]=='%') count++;
va_list argv;
va_start(argv, str);
for(i=0,j=0; str[i]!='\0';i++)
{
if(str[i]=='%')
{
temp[j] = '\0';
Serial.print(temp);
j=0;
temp[0] = '\0';
switch(str[++i])
{
case 'd': Serial.print(va_arg(argv, int));
break;
case 'l': Serial.print(va_arg(argv, long));
break;
case 'f': Serial.print(va_arg(argv, double));
break;
case 'c': Serial.print((char)va_arg(argv, int));
break;
case 's': Serial.print(va_arg(argv, char *));
break;
default: ;
};
}
else
{
temp[j] = str[i];
j = (j+1)%ARDBUFFER;
if(j==0)
{
temp[ARDBUFFER] = '\0';
Serial.print(temp);
temp[0]='\0';
}
}
};
Serial.println();
return count + 1;
}
#undef ARDBUFFER
#endif To print the % character, use %% . Now available on Github gists . | {
"source": [
"https://arduino.stackexchange.com/questions/176",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/11/"
]
} |
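A common alternative to a hand-rolled helper like ardprintf is to format into a local buffer with snprintf() and print that. A minimal sketch (note that on AVR the standard printf family does not format %f unless extra linker options are enabled, so this example sticks to integers):

#include <stdio.h>

void setup() {
  Serial.begin(9600);
}

void loop() {
  int var1 = 10, var2 = 20, var3 = 30;

  char buf[48];
  // snprintf never writes past the end of buf
  snprintf(buf, sizeof(buf), "Var 1:%d Var 2:%d Var 3:%d", var1, var2, var3);
  Serial.println(buf);

  delay(1000);
}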
179 | I made a sketch, but then I lost it. However, I uploaded it to the Arduino before losing it. Is there any way I can get it back? | It should be possible as long as the security bit isn't set. This question was asked on EE a while back. Is it possible to extract code from an arduino board? But you won't get the Arduino code you wrote back. The code is compiled into assembly and you'll have to convert that back to C yourself. | {
"source": [
"https://arduino.stackexchange.com/questions/179",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/37/"
]
} |
210 | According to the Arduino reference for analogWrite() , the PWM frequency on most pins is ~490 Hz. However, it's ~980 Hz for pins 5 and 6 on the Uno, and for pins 3 and 11 on the Leonardo. Why are these different? Is it a deliberate design feature, or is it somehow dictated by the hardware? | Those aren't the only frequencies available for the PWM signals. However, they are the frequencies as determined by the applied prescaler (which you can readily change as detailed below). Each of the 3 pairs of PWM pins is tied to one timer, each of which has its own base frequency, as follows: Pins 5 and 6 are paired on timer0, with base frequency of 62500Hz Pins 9 and 10 are paired on timer1, with base frequency of 31250Hz Pins 3 and 11 are paired on timer2, with base frequency of 31250Hz Then each set of pins have a number of prescaler values that can be chosen, that will divide the base frequency of that pair of pins. The prescaler values available are: Pins 5 and 6 have prescaler values of 1, 8, 64, 256, and 1024 Pins 9 and 10 have prescaler values of 1, 8, 64, 256, and 1024 Pins 3 and 11 have prescaler values of 1, 8, 32, 64, 128, 256, and 1024 The different combinations yield different frequencies in a given PWM pin. Notice that timer 2 (tied to pins 3 and 11) have more prescaler values available, resulting in more frequencies available. Now, why timer 2 is different, that's a separate question. Edit: Here's a list of possible PWM frequencies per pin (from this article ): For pins 6 and 5 (OC0A and OC0B): If TCCR0B = xxxxx001, frequency is 64kHz If TCCR0B = xxxxx010, frequency is 8 kHz If TCCR0B = xxxxx011, frequency is 1kHz (this is the default from the Diecimila bootloader) If TCCR0B = xxxxx100, frequency is 250Hz If TCCR0B = xxxxx101, frequency is 62.5 Hz For pins 9, 10, 11 and 3 (OC1A, OC1B, OC2A, OC2B): If TCCRnB = xxxxx001, frequency is 32kHz If TCCRnB = xxxxx010, frequency is 4 kHz If TCCRnB = xxxxx011, frequency is 500Hz (this is the default from the Diecimila bootloader) If TCCRnB = xxxxx100, frequency is 125Hz If TCCRnB = xxxxx101, frequency is 31.25 Hz TCCRnB is where you set the prescaler bits for timer n , replacing n by 0, 1 or 2, depending on the timer you want to set. If you are still unsure about bitwise operations, read this bit math tutorial . My sources: http://playground.arduino.cc/Code/PwmFrequency http://arduino.cc/en/Tutorial/SecretsOfArduinoPWM http://arduino.cc/en/Tutorial/PWM http://arduino-info.wikispaces.com/Arduino-PWM-Frequency Note that there seems to be divergence in those sources about whether pins 9 and 10 have the same behavior as 5 and 6 or 3 and 11, but you get the idea anyway. I'm reading the datashet to try and figure out which is correct, or whether this is a difference between boards. | {
"source": [
"https://arduino.stackexchange.com/questions/210",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/42/"
]
} |
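A short sketch showing how the prescaler bits discussed above are changed in practice. It targets timer2 (pins 3 and 11 on the Uno), which is the safest timer to retune because timer0 also drives millis() and delay():

void setup() {
  pinMode(3, OUTPUT);
  pinMode(11, OUTPUT);

  // Clear the three clock-select bits of timer2 and set the prescaler to 1,
  // which gives the highest PWM frequency available on pins 3 and 11 (roughly 31-32 kHz).
  TCCR2B = (TCCR2B & 0b11111000) | 0x01;

  analogWrite(3, 128);   // 50% duty cycle at the new frequency
}

void loop() {
}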
221 | According to the Arduino documentation, the ATmega328 has 32KB of Flash memory for the bootloader + uploaded sketch, and only 2KB SRAM for runtime data. The ATmega2560 has quite a bit more, totalling 256KB and 8KB respectively. In either case, those limits seem rather small, especially when compared to similarly sized consumer devices, such as smartphones. What can you do if you run out? E.g. if your sketch is too big, or you need to process a lot of data (such as strings) at runtime? Is there any way to expand the Flash or SRAM? | Optimisation Low-level programming for embedded systems is quite different from programming for general purpose devices, such as computers and cell phones. Efficiency (in terms of speed and space) is far more important because resources are at a premium. That means the very first thing to do if you run out of space is to look at what parts of your code you can optimise. In terms of reducing program space (Flash) usage, the code size can be quite difficult to optimise if you're inexperienced, or if you're more used to programming for desktop computers which don't tend to need that skill. Unfortunately, there's no 'magic bullet' approach which will work for all situations, although it helps if you consider seriously what your sketch really needs to have. If a feature isn't needed, take it out. Sometimes it's also helpful to identify where multiple parts of your code are the same (or very similar). You may be able to condense them into reusable functions which can be called from multiple places. However, be aware that sometimes trying to make code too reusable actually ends up making it more verbose. It's a tricky balance to strike that tends to come with practice. Spending some time looking at how code changes affect the compiler output can help. Runtime data (SRAM) optimisation tends to be a bit easier when you're used to it. A very common pitfall for beginner programmers is using too much global data. Anything declared at global scope will exist for the entire lifetime of the sketch, and that isn't always necessary. If a variable is only used inside one function, and it doesn't need to persist between calls, then make it a local variable. If a value needs to be shared between functions, consider if you can pass it as a parameter instead of making it global. That way you'll only use SRAM for those variables when you actually need it. Another killer for SRAM usage is text processing (e.g. using the String class). Generally speaking, you should avoid doing String operations if possible. They are massive memory hogs. For example, if you're outputting lots of text to serial, use multiple calls to Serial.print() instead of using string concatenation. Also try to reduce the number of string literals in your code if possible. Avoid recursion if possible as well. Each time a recursive call is made, it takes the stack a level deeper. Refactor your recursive functions to be iterative instead. Use EEPROM EEPROM is used for long-term storage of things that only change occasionally. If you need to use large lists or look-up tables of fixed data, then consider storing it in EEPROM in advance, and only pulling out what you need when necessary. Obviously EEPROM is quite limited in size and speed though, and has a limited number of write cycles. It's not a great solution to data limitations, but it might be enough to ease the burden on Flash or SRAM. It's also quite possible to interface with similar external storage, such as an SD card. 
Expansion If you've exhausted all other options, then expansion may be a possibility. Unfortunately, expanding Flash memory to increase program space isn't possible. However, it is possible to expand SRAM. This means you may be able to refactor your sketch to reduce code size at the expense of increasing data size. Getting more SRAM is actually fairly straightforward. One option is to use one or more 23K256 chips. They are accessed via SPI, and there is the SpiRAM library to help you use them. Just beware that they operate at 3.3V not 5V! If you're using the Mega, you could alternatively get SRAM expansion shields from Lagrangian Point or Rugged Circuits . | {
"source": [
"https://arduino.stackexchange.com/questions/221",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/42/"
]
} |
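Two of the SRAM tips above in sketch form: the F() macro keeps string literals in flash instead of SRAM, and printing values directly avoids building String objects (the sensor on A0 is an assumption for the example):

void setup() {
  Serial.begin(9600);
  Serial.println(F("Logger starting"));   // literal stays in flash, not copied to SRAM
}

void loop() {
  int reading = analogRead(A0);           // local variable: uses SRAM only while loop() runs

  Serial.print(F("reading="));            // several small prints instead of String concatenation
  Serial.println(reading);

  delay(1000);
}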
226 | ATMEL says the lifetime of an EEPROM cell is about 100,000 write cycles per cell.
Is this actually how the EEPROM performs in the wild? If I do not change the value of a cell, does this stress the lifetime? For example, if I write the value 0xFF to the same cell again and again, is this any different to writing 0x00 , 0xFF , 0x00 etc. | As you state, the internal EEPROM has a lifetime of 100,000 write cycles. This isn't a guess - a very significant proportion of ATmega328 will reach this number with no issues. I have tested three processors before, and all reached 150,000 cycles with no issues. It is important to note the failure mode of EEPROM. Most "EEPROM destroyer" projects repeatedly read/write until the data is not written at all. Before this point, the EEPROM will still be damaged. This would be manifested by data not being retained for a reasonable period. It is unwise to rely on anything more than 100,000 write cycles for this reason. EEPROM is different to the RAM on an ATmega. Writing to it is not simple or quick, but it is wrapped up in a friendly Arduino library , hiding this complexity from the user. The first level of indirection is the EEPROM library , which is trivially simple], just calling two other functions for read and write. This calls eeprom_write_byte, found here . This function uses inline assembly, so might not be easily understood. There is a comment that is easily understood though: Set programming mode: erase and write This hints to one of the complexities of dealing with EEPROM - to write to
it, you first need to erase it. This means that if you call EEPROM.write(), it will perform a write cycle regardless of the value you are writing. This means that repeatedly writing 0xFF will likely have the same effect as writing 0xFF,0x00,0xFF,0x00 etc. There are ways to work around this - you can try calling EEPROM.read() before EEPROM.write() to see if the value is already the same, but this takes additional time. There are other techniques to avoid excessive EEPROM wear, but their use depends on your application. | {
"source": [
"https://arduino.stackexchange.com/questions/226",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/190/"
]
} |
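A small helper implementing the read-before-write idea from the answer above, so that rewriting an unchanged value costs no erase/write cycle (recent Arduino cores ship an equivalent EEPROM.update() method):

#include <EEPROM.h>

void writeIfChanged(int address, byte value) {
  if (EEPROM.read(address) != value) {   // reading does not wear the cell
    EEPROM.write(address, value);        // only erase/write when the value actually differs
  }
}

void setup() {
  writeIfChanged(0, 0xFF);   // may perform a real write
  writeIfChanged(0, 0xFF);   // second call finds the same value and skips the write
}

void loop() {
}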
286 | Is there a way I can have multiple parts of the program running together without doing multiple things in the same code block? One thread waiting for an external device while also blinking a LED in another thread. | There is no multi-process, nor multi-threading, support on the Arduino. You can do something close to multiple threads with some software though. You want to look at Protothreads : Protothreads are extremely lightweight stackless threads designed for
severely memory constrained systems, such as small embedded systems or
wireless sensor network nodes. Protothreads provide linear code
execution for event-driven systems implemented in C. Protothreads can
be used with or without an underlying operating system to provide
blocking event-handlers. Protothreads provide sequential flow of
control without complex state machines or full multi-threading. Of course, there is an Arduino example here with example code . This SO question might be useful, too. ArduinoThread is a good one too. | {
"source": [
"https://arduino.stackexchange.com/questions/286",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/193/"
]
} |
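For reference, the bare-bones pattern that libraries like Protothreads and ArduinoThread wrap is cooperative scheduling around millis(). This library-free sketch blinks an LED while polling a device (here, the serial port) without blocking either "thread":

const unsigned long BLINK_INTERVAL = 500;
unsigned long lastBlink = 0;
bool ledState = false;

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // task 1: blink without delay()
  unsigned long now = millis();
  if (now - lastBlink >= BLINK_INTERVAL) {
    lastBlink = now;
    ledState = !ledState;
    digitalWrite(LED_BUILTIN, ledState);
  }

  // task 2: poll the external device
  if (Serial.available() > 0) {
    char c = Serial.read();
    // react to c here
  }
}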
296 | The standard is 9600 baud. That's just the standard . Using an Arduino Uno SMD R2, what is the highest practical baud rate I can achieve? Bonus points for the audacious: How would you go about creating an error checking mechanism and then increasing the baud rate ridiculously high to get high transfer rates? | There are several factors here: How high of a baud-rate can the ATmega328P MCU achieve? How high of a baud-rate can the USB-Serial interface achieve? What is the oscillator frequency on the ATmega328P? What is the oscillator frequency on the USB-serial interface (if it has one)? How tolerant is the USB-serial interface of baud-rate mismatch? All of these factors are relevant to determining the maximum achievable baud rate. The ATmega328P uses a hardware divisor from its clock rate to generate the base clock for the serial interface. If there is no integer ratio from the main clock to the bit-time of the desired baud rate, the MCU will not be able to exactly produce the desired rate. This can lead to potential issues, as some devices are much more sensitive to baud-rate mismatch than others. FTDI-based interfaces are quite tolerant of baud-rate mismatch, up to several percent error. However, I have worked with specialized embedded GPS modules that were unable to handle even a 0.5% baud rate error. General serial interfaces are tolerant of ~5% baud-rate error. However, since each end can be off, a more common spec is +-2.5%. This way, if one end is 2.5% fast, and the other is 2.5% slow, your overall error is still only 5%. Anyways. The Uno uses an ATmega328P as the primary MCU, and an ATmega16U2 as the USB-serial interface. We're also fortunate here in that both these MCUs use similar hardware USARTs, as well as 16 MHz clocks. Since both MCUs have the same hardware and clock rate, they'll both have the same baud-rate error in the same direction, so we can functionally ignore the baud error issue. Anyways, the "proper" answer to this question would involve digging up the source for the ATmega16U2, and working out the possible baud-rates from there, but since I'm lazy, I figure simple, empirical testing will work. A quick glance at the ATmega328P datasheet produces the following table: [table omitted] So given the max stated baud-rate of 2 Mbps, I wrote a quick test program: void setup(){};
void loop()
{
delay(1000);
Serial.begin(57600);
Serial.println("\r\rBaud-rate = 57600");
delay(1000);
Serial.begin(76800);
Serial.println("\r\rBaud-rate = 76800");
delay(1000);
Serial.begin(115200);
Serial.println("\r\rBaud-rate = 115200");
delay(1000);
Serial.begin(230400);
Serial.println("\r\rBaud-rate = 230400");
delay(1000);
Serial.begin(250000);
Serial.println("\r\rBaud-rate = 250000");
delay(1000);
Serial.begin(500000);
Serial.println("\r\rBaud-rate = 500000");
delay(1000);
Serial.begin(1000000);
Serial.println("\r\rBaud-rate = 1000000");
delay(1000);
Serial.begin(2000000);
Serial.println("\r\rBaud-rate = 2000000");
} Then, looking at the relevant serial port with a serial terminal, it appears the hardware can run at 2,000,000 baud without problems. Note that this baud rate only gives the MCU 80 clock cycles per byte, so it would be very challenging to keep the serial interface busy. While the individual bytes may be transferred very rapidly, there is likely to be lots of time when the interface is simply idle. Edit: Actual testing! The 2 Mbps is real: each bit-time is 500 ns, which matches exactly what is expected. Performance issues! (Scope captures of the overall packet length at 500 Kbaud, 1 Mbaud and 2 Mbaud.) Note: The noticeable overshoot is due to poor scope-probe grounding practices and is probably not real. I'm using the ground-clip lead that's part of my scope probe, and the lead inductance is likely the cause of the majority of the overshoot. As you can see, the overall transmission length is the same for 0.5, 1 and 2 Mbaud. This is because the code that is placing the bytes in the serial buffer is poorly optimized. As such, you will never achieve anything better than an effective 500 Kbaud unless you write your own serial libraries. The Arduino libraries are very poorly optimized, so it probably wouldn't be too hard to get a proper 2 Mbaud, at least for burst transmissions, if you spent a bit of time on it. | {
"source": [
"https://arduino.stackexchange.com/questions/296",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/220/"
]
} |
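To give a sense of what "write your own serial libraries" could look like in practice, here is a rough, hedged sketch that programs the ATmega328P USART registers directly for a sustained transmit stream at 2 Mbaud. It assumes the Uno's 16 MHz clock; with the double-speed bit set, UBRR0 = 0 yields 16 MHz / 8 = 2,000,000 baud, the same maximum figure quoted from the datasheet above. It bypasses the Serial library entirely, so it is a starting point rather than a drop-in replacement.

void setup()
{
  UCSR0A = (1 << U2X0);                    // double-speed mode
  UBRR0  = 0;                              // 16 MHz / (8 * (0 + 1)) = 2,000,000 baud
  UCSR0B = (1 << TXEN0);                   // enable the transmitter only
  UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);  // 8 data bits, no parity, 1 stop bit
}

void loop()
{
  for (uint8_t b = 0; ; b++) {
    while (!(UCSR0A & (1 << UDRE0))) {}    // spin until the data register is empty
    UDR0 = b;                              // hand the next byte straight to the UART
  }
}

Keeping the data register topped up in a tight loop like this is what lets the link approach its theoretical throughput, which the buffering overhead in the stock Serial library prevents.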
316 | Currently, my sketch is checking an input pin every time round the main loop. If it detects a change, it calls a custom function to respond to it. Here's the code (trimmed down to the essentials): int pinValue = LOW;
void pinChanged()
{
//...
}
void setup()
{
pinMode(2, INPUT);
}
void loop()
{
// Read current input
int newValue = digitalRead(2);
// Has the input changed?
if (newValue != pinValue) {
pinValue = newValue;
pinChanged();
}
} Unfortunately, this doesn't always work properly for very short changes on the input (e.g. brief pulses), especially if loop() is running a bit slowly. Is there a way to make the Arduino detect the input change and call my function automatically? | You can do this using external interrupts. Most Arduinos only support this on a limited number of pins though. For full details, see the documentation on attachInterrupt(). Assuming you're using an Uno, you could do it like this: void pinChanged()
{
//...
}
void setup()
{
pinMode(2, INPUT);
attachInterrupt(0, pinChanged, CHANGE);
}
void loop()
{
} This will call pinChanged() whenever a change is detected on external interrupt 0. On the Uno, that corresponds to GPIO pin 2. The external interrupt numbering is different on other boards, so it's important to check the relevant documentation. There are limitations to this approach though. The custom pinChanged() function is being used as an Interrupt Service Routine (ISR). That means the rest of the code (everything in loop()) is temporarily stopped while the call is executing. In order to prevent disrupting any important timing, you should aim to make ISRs as fast as possible. It's also important to note that no other interrupts will run during your ISR. That means anything relying on interrupts (such as the core delay() and millis() functions) may not work properly inside it. Lastly, if your ISR needs to change any global variables in the sketch, they should usually be declared as volatile, e.g.: volatile int someNumber; That's important because it tells the compiler that the value could change unexpectedly, so it should be careful not to use any out-of-date copies/caches of it. | {
"source": [
"https://arduino.stackexchange.com/questions/316",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/42/"
]
} |
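One common way to apply the volatile advice from the answer above is to have the ISR do nothing but set a flag, and to do the real work in loop(). The following is only a sketch of that pattern, using the same Uno/pin-2 assumptions as the answer:

volatile bool pinChangedFlag = false;   // shared between the ISR and loop(), hence volatile

void pinChanged()
{
  pinChangedFlag = true;                // keep the ISR as short as possible
}

void setup()
{
  pinMode(2, INPUT);
  Serial.begin(9600);
  attachInterrupt(0, pinChanged, CHANGE);   // external interrupt 0 = pin 2 on the Uno
}

void loop()
{
  if (pinChangedFlag) {
    pinChangedFlag = false;
    Serial.println("Pin 2 changed");    // slower work (printing, timing) happens here, not in the ISR
  }
}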
432 | I'm working on building a solar-powered, Arduino-based weather station. The weather station consists of a temperature sensor and a photoresistor, and I plan to add an anemometer in the future. I would like to connect the weather station to my wireless network so that I can retrieve the sensor data from my computer without having to run wires (I live in a rental). What are the different options for connecting the Arduino to WiFi? I've looked at Ethernet shields, WiFi shields, and something called XBee, but I don't understand what each of them is for. I also have a wireless home router that I could use. Is it possible to connect my Arduino Uno to the router via the router's Ethernet or USB port and then receive data from and send commands to the Arduino wirelessly over my home network? If so, how would this be accomplished? I currently have a bare Arduino Uno. | You have a few options for connecting your Arduino to the network/Internet. Ethernet: Something like the Arduino Ethernet Shield allows you to plug an Ethernet cable from the wall or router into your Arduino. Obviously, the main limitation is that your device is now tethered by the cable. For outdoor use, I wouldn't do this. WiFi: The Arduino WiFi Shield allows you to connect to your home WiFi network. This is just like the Ethernet option, except it's now wireless. The ESP8266 is a cheaper alternative that, with the default firmware, has the same functionality as the WiFi Shield. Be careful that you power it with 3.3V and not 5V like the rest of the Arduino. It also uses 3.3V logic levels, so don't connect the Arduino's TX pin directly to the ESP's RX pin; use a voltage divider. RF: If you have a lot of sensors or other devices that need to communicate with each other, the best option is usually an RF module. You have many options here, XBee being one of them. Check out the Sparkfun XBee Buying Guide to look at all the options available. And that's just XBee. There are many other wireless options available, at all sorts of prices. The thing with RF is that none of these will connect to the Internet. You will have all your devices communicate with each other or with a base station, which will then be connected to the network by either a WiFi or Ethernet module. Wireless Router Serial: Depending on what kind of wireless router you use, you can have the Arduino communicate directly with it and use that as your connection to the network. See: Arduino - Cheap wifi connectivity, and Converting your Ethernet Shield to a wireless shield. | {
"source": [
"https://arduino.stackexchange.com/questions/432",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/225/"
]
} |
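For the WiFi Shield route described above, a minimal sketch using the stock WiFi library might look like the following. The network credentials, the TCP port, and the idea of exposing raw analog readings are all placeholders for illustration; an ESP8266 or XBee setup would look quite different.

#include <SPI.h>
#include <WiFi.h>

char ssid[] = "yourNetwork";   // placeholder credentials for the home network
char pass[] = "yourPassword";
WiFiServer server(8080);       // any free TCP port

void setup()
{
  // Keep trying to join the home network until it succeeds
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(5000);
  }
  server.begin();
}

void loop()
{
  WiFiClient client = server.available();
  if (client) {
    client.print("temp=");
    client.println(analogRead(A0));   // e.g. the temperature sensor divider
    client.print("light=");
    client.println(analogRead(A1));   // e.g. the photoresistor divider
    client.stop();
  }
}

On the computer side, anything that can open a TCP connection (netcat, a short Python script, etc.) can then poll the station for its latest readings.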
439 | If I upload any sketch that sends serial data, I immediately see the TX/RX LEDs flash once the sketch is uploaded. If I then start the serial monitor, the sketch appears to restart. A bare minimum sketch that shows this behaviour: void setup()
{
Serial.begin(9600);
Serial.println("Setup");
}
void loop()
{
Serial.println("Loop");
delay(1000);
} Tested with several boards and Mac and Windows versions of the IDE. Example output - it goes back to "Setup" when I open the serial monitor: Why is this? | The Arduino uses the RTS (Request To Send) (and I think DTR, Data Terminal Ready) signals to auto-reset. If you get a serial terminal that allows you to change the flow control settings, you can change this functionality. The Arduino terminal doesn't give you a lot of options, and that's the default. Others will allow you to configure a lot more. Setting the flow control to none will allow you to connect to and disconnect from the serial port without resetting your board. It's quite useful for debugging when you want to be able to just plug in the connector and see the output without having to start the sketch over. Another way to disable the auto-reset is to put a pull-up resistor on the reset pin. Disabling Auto Reset On Serial Connection | {
"source": [
"https://arduino.stackexchange.com/questions/439",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/136/"
]
} |
452 | I've traditionally used a text editor with avr-gcc and makefiles for working with Arduino boards. I'm now trying to develop projects for the wider Arduino user-base, so I am trying to use the Arduino libraries and common IDEs for ease of use. I started using Stino, but then found out that the Arduino IDE has some toolchain "quirks" that mean I will need to test everything in Arduino IDE expressly. Since starting to use it more, I have found it frequently hangs or crashes. The triggers for this are: Creating a new sketch takes about 15s, and shows me the OS X beach
ball. Saving a sketch takes about 15s. Going to File->Examples frequently crashes the IDE - I need to force quit. Going to
File->Sketchbook always crashes the IDE. It sometimes randomly hangs. I don't have any other stability issues, and other Java-based IDEs like PyCharm work flawlessly. I have installed numerous libraries, including the entire Teensyduino suite (which is a lot of libraries!). Outside of that, no changes have been made. Is this a common issue? Is there any logging, or any way I can try to work out what is causing this issue? This is happening with:
- Arduino 1.05
- Arduino 1.55 I have tried uninstalling and reinstalling. | The Arduino uses the RTS (Request To Send) (and I think DTR (Data Terminal Ready) ) signals to auto-reset. If you get a serial terminal that allows you to change the flow control settings you can change this functionality. The Arduino terminal doesn't give you a lot of options and that's the default. Others will allow you to configure a lot more. Setting the flow control to none will allow you to connect/disconnect from the serial without resetting your board. it's quite useful for debugging when you want to be able to just plug in the connector and see the output without having to start the sketch over. Another way to disable the auto reset is to put a pull up resistor on the reset pin. Disabling Auto Reset On Serial Connection | {
"source": [
"https://arduino.stackexchange.com/questions/452",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/136/"
]
} |
506 | Arduino is an odd hybrid, where some C++ functionality is used in the embedded world, traditionally a C environment. Indeed, a lot of Arduino code is very C-like, though. C has traditionally used #defines for constants. There are a number of reasons for this: You can't set array sizes using const int. You can't use const int as case statement labels (though this does work in some compilers). You can't initialize a const with another const. You can check this question on StackOverflow for more reasoning. So, what should we use for Arduino? I tend towards #define, but I see some code using const and some using a blend. | It's important to note that const int does not behave identically in C and in C++, so in fact several of the objections against it that have been alluded to in the original question and in Peter Bloomfield's extensive answer are not valid: In C++, const int constants are compile-time values and can be used to set array limits, as case labels, etc. const int constants do not necessarily occupy any storage. Unless you take their address or declare them extern, they will generally just have a compile-time existence. However, for integer constants, it might often be preferable to use a (named or anonymous) enum. I often like this because: It's backward compatible with C. It's nearly as type-safe as const int (every bit as type-safe in C++11). It provides a natural way of grouping related constants. You can even use them for some amount of namespace control. So in an idiomatic C++ program, there is no reason whatsoever to use #define to define an integer constant. Even if you want to remain C-compatible (because of technical requirements, because you're kickin' it old school, or because people you work with prefer it that way), you can still use enum and should do so, rather than use #define. | {
"source": [
"https://arduino.stackexchange.com/questions/506",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/136/"
]
} |
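To make the comparison above concrete, here is a small sketch (all names are arbitrary) showing const int and an enum covering the cases usually cited as reasons to reach for #define:

const int ledPin = 13;            // typed and scoped; occupies no storage unless its address is taken
const int numReadings = 8;
int readings[numReadings];        // legal array bound in C++ (this is where C89 would object)

enum Command { CMD_STOP, CMD_START, CMD_RESET };   // groups related constants, and is C-compatible

void handle(int cmd)
{
  switch (cmd) {
    case CMD_START: digitalWrite(ledPin, HIGH); break;   // enumerators work as case labels
    case CMD_STOP:  digitalWrite(ledPin, LOW);  break;
    case CMD_RESET: /* ... */                   break;
  }
}

void setup()
{
  pinMode(ledPin, OUTPUT);
}

void loop()
{
  handle(CMD_START);
}

Unlike a macro, both the const int and the enumerators obey scope and show up with their real names in compiler errors, which is a large part of the argument made above.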
564 | Anyone with kids knows they never help with the toilet paper. Anyone know how to track when it's low or out and sound an audible alarm? I just don't know what sensor to use that may help. Some that came to mind are: by weight, by reflection (the color of the paper), or some laser tripwire - all right on the spool. I don't mind building it, it's just that I don't know which sensor. Anyone know which to use? | Bring up several rolls at a time and hang one for use. Put the other two on a short vertical pole within reach of the sitter. Sitter can take another roll when needed. Mechanically sense lack of weight on the shelf at the bottom of the pole. Alarm triggers when the last roll is removed. No one has to get caught short. To sense the weight, use a force-sensitive resistor such as the FSR 400 (see the datasheet). Alternatively, you could use a lightweight coil spring to rest the toilet paper on, with a microswitch that is released when both rolls are removed. Another option would be an IR beam-break detector where the circuit is completed when the last roll is removed. | {
"source": [
"https://arduino.stackexchange.com/questions/564",
"https://arduino.stackexchange.com",
"https://arduino.stackexchange.com/users/381/"
]
} |
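As a rough sketch of the FSR suggestion above: the FSR 400 forms a voltage divider with a fixed resistor (10 kΩ is a common starting point) feeding A0, and a buzzer on pin 8 sounds when the reading drops below a threshold. The pin choices and the threshold are assumptions; the threshold in particular would need calibrating against the weight of one spare roll on the shelf.

const int fsrPin = A0;           // FSR + 10k fixed resistor as a voltage divider
const int buzzerPin = 8;
const int emptyThreshold = 100;  // ADC counts; calibrate for your shelf and divider

void setup()
{
  pinMode(buzzerPin, OUTPUT);
  Serial.begin(9600);
}

void loop()
{
  int reading = analogRead(fsrPin);   // with the FSR on the Vcc side, more weight -> higher reading
  Serial.println(reading);            // handy while calibrating the threshold
  if (reading < emptyThreshold) {     // the last roll has been taken
    tone(buzzerPin, 2000, 500);       // 2 kHz beep for half a second
  }
  delay(1000);
}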