Most Popular

1500 questions
5
votes
1 answer

Is a decision tree less suitable for incremental learning than e.g. a neural net?

I can recall that a professor once said that decision trees are not good for incremental learning, as they have to be rebuilt from the ground up if new training examples arrive. Is this basically true? Quick googling just brought me to a lot of…
Ulu83
  • 153
  • 4
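
As a concrete illustration of the incremental-learning question above, here is a minimal scikit-learn sketch (the toy data and estimator choices are assumptions): a `DecisionTreeClassifier` exposes no `partial_fit`, so absorbing new examples means refitting on the full data, whereas an SGD-trained linear model can be updated batch by batch.

```python
# Minimal sketch: batch refitting a decision tree vs. incrementally updating
# an SGD-trained linear model. The toy data is random and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(100, 4)), rng.integers(0, 2, size=100)
X_new, y_new = rng.normal(size=(20, 4)), rng.integers(0, 2, size=20)

# Decision tree: no partial_fit, so new examples mean rebuilding the whole tree.
tree = DecisionTreeClassifier().fit(X_old, y_old)
tree = DecisionTreeClassifier().fit(np.vstack([X_old, X_new]),
                                    np.concatenate([y_old, y_new]))

# SGD-trained linear model: can be updated in place as new batches arrive.
sgd = SGDClassifier()
sgd.partial_fit(X_old, y_old, classes=[0, 1])
sgd.partial_fit(X_new, y_new)   # incremental update, no retraining from scratch
```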
5
votes
2 answers

Concrete examples of unintentional adversarial AI behaviour

Are there any real-world examples of unintentional "bad" AI behaviour? I'm not looking for hypothetical arguments of malicious AI (AI in a box, paperclip maximizer), but for actual instances in history where some AI directly did something bad due to…
k.c. sayz 'k.c sayz'
  • 2,091
  • 10
  • 26
5
votes
1 answer

Does a bias also have a chance to be dropped out in a Dropout layer?

Suppose that you have 80 neurons in a layer, where one neuron is the bias. Then you add a dropout layer after the activation function of this layer. In this case, does it have a chance to drop out the bias neuron, or does the dropout only affect the…
Blaszard
  • 1,037
  • 3
  • 11
  • 25
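
For the dropout question above, a minimal PyTorch-style sketch (the framework and layer sizes are assumptions) of how this is usually wired: dropout masks the activation vector a layer outputs, while the bias lives inside `nn.Linear` as a parameter, so it is never part of the tensor being masked.

```python
# Minimal sketch: dropout zeroes entries of the activation tensor; the bias is
# a parameter of nn.Linear and is not itself an element of that tensor.
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(80, 80)       # weight (80x80) and bias (80) are parameters
drop = nn.Dropout(p=0.5)

x = torch.randn(1, 80)
h = torch.relu(layer(x))        # the bias has already been added here
h_dropped = drop(h)             # some activations are zeroed, the rest rescaled

print((h_dropped == 0).sum().item(), "of 80 activations were dropped")
print(layer.bias.shape)         # the bias parameter itself is untouched
```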
5
votes
2 answers

What is the intuition behind self-attention?

I've been watching a few lectures on transformers, especially for language translation, though it seemingly becomes more confusing the more I watch. In this lecture, there seem to be two conflicting views of self-attention. First, there's an Iron…
User
  • 175
  • 5
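
For the self-attention question, a minimal NumPy sketch of scaled dot-product self-attention, softmax(QKᵀ / √d_k) V; the toy sequence length and random projection matrices are assumptions for illustration only.

```python
# Minimal sketch of scaled dot-product self-attention on a toy sequence.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))          # one token embedding per row

W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_k)                  # how much each token attends to every other token
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
output = weights @ V                             # each output mixes all value vectors

print(weights.round(2))                          # attention rows sum to 1
```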
5
votes
1 answer

Do Support Vector Machines have the ability to learn while in use?

I've read in some literature that SVMs are characterized by their adaptivity. Does that mean they can learn while in use?
anon
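
A minimal sketch relevant to the SVM question, assuming scikit-learn: a standard kernel SVC is fit in one batch and has no `partial_fit`, but a linear SVM trained with SGD (`SGDClassifier` with hinge loss) can keep updating as new labelled examples arrive "while in use". The streaming batches below are synthetic placeholders.

```python
# Minimal sketch: a linear SVM updated incrementally as new batches arrive.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
online_svm = SGDClassifier(loss="hinge")   # hinge loss = linear SVM objective

for step in range(5):   # pretend each step is a new batch seen during deployment
    X_batch = rng.normal(size=(32, 10))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    if step == 0:
        online_svm.partial_fit(X_batch, y_batch, classes=[0, 1])
    else:
        online_svm.partial_fit(X_batch, y_batch)   # update in place, no full refit
```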
5
votes
0 answers

What exactly is non-delusional Q-learning?

Problems occur when we combine Q-learning with a function approximator. What exactly are delusional bias and non-delusional Q-learning? I am talking about the NeurIPS 2018 best paper Non-delusional Q-learning and value iteration. I have trouble…
wrek
  • 183
  • 4
5
votes
3 answers

Why does a Decision Tree Learning Algorithm preferably output the smallest Decision Tree?

I have been following the ML course by Tom Mitchell. The inherent assumption when using a Decision Tree Learning Algorithm is that the algorithm preferably chooses the smallest Decision Tree. Why is this so, when we can have bigger extensions of the…
imflash217
  • 499
  • 5
  • 14
5
votes
3 answers

How to implement an Automatic Learning Rate for a Neural Network?

I'm learning Neural Networks, and everything works as planned. But, just as humans adjust themselves to learn more efficiently, I'm trying to understand conceptually how one might implement an auto-adjusting learning rate for a Neural Network. I…
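
For the automatic-learning-rate question, a minimal sketch of one common rule: shrink the rate whenever the validation loss stops improving. The "validation loss" below is a made-up placeholder rather than a real network, and the patience/factor values are arbitrary assumptions.

```python
# Minimal sketch: halve the learning rate when the validation loss has not
# improved for `patience` epochs (a simple "reduce on plateau" rule).
def adjust_lr(lr, history, patience=3, factor=0.5, min_lr=1e-6):
    if len(history) > patience and min(history[-patience:]) >= min(history[:-patience]):
        lr = max(lr * factor, min_lr)
    return lr

lr = 0.1
val_losses = []
for epoch in range(15):
    val_loss = max(0.3, 1.0 / (epoch + 1))   # fake loss that plateaus at 0.3
    val_losses.append(val_loss)
    lr = adjust_lr(lr, val_losses)
    print(f"epoch {epoch:2d}  val_loss {val_loss:.3f}  lr {lr:.4f}")
```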
5
votes
2 answers

Do genetic algorithms also evolve?

After witnessing the rise of deep learning as automatic feature/pattern recognition over classic machine learning techniques, I had an insight that the more you automate at each level, the better the results, and I therefore turned my focus to…
5
votes
3 answers

How can I determine if an input sentence is consistent with a certain subject?

How can I determine if an input sentence is consistent with a certain subject? For example, suppose I am given the following dataset. | Subject | User input | Output | |---------------|----------------------|--------| | Dog ownership…
bleand
  • 161
  • 2
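
For the subject-consistency question, a minimal baseline sketch, assuming scikit-learn: score TF-IDF similarity between the user input and a short subject description and compare it to a hand-picked threshold. The subject text, example inputs, and threshold are illustrative assumptions; a real system would more likely use sentence embeddings or a trained classifier.

```python
# Minimal sketch: flag an input as "consistent" with a subject when its TF-IDF
# cosine similarity to a subject description exceeds a chosen threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

subject = "dog ownership: feeding, walking and caring for a pet dog"
inputs = [
    "I walk my dog twice a day and feed him in the evening",
    "The stock market dropped sharply this morning",
]

vectorizer = TfidfVectorizer().fit([subject] + inputs)
subject_vec = vectorizer.transform([subject])

for text in inputs:
    score = cosine_similarity(subject_vec, vectorizer.transform([text]))[0, 0]
    print(f"{score:.2f}  consistent={score > 0.1}  {text!r}")
```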
5
votes
2 answers

How to generate new data given a trained VAE - sample from the learned latent space or from multivariate Gaussian?

To generate a synthetic dataset using a trained VAE, there is confusion between two approaches: Use the learned latent space: z = mu + (eps * log_var) to generate (theoretically, infinite amounts of) data. Here, we are learning the mu and log_var vectors…
Arun
  • 235
  • 2
  • 8
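
For the VAE question, a minimal NumPy sketch of the two sampling routes being contrasted. The `mu`, `log_var`, and decoder weights below are random stand-ins for what a trained VAE would provide. Note that in the standard reparameterization, eps scales the standard deviation exp(0.5 * log_var) rather than log_var itself, and generating genuinely new data typically means sampling z from the prior N(0, I).

```python
# Minimal sketch: posterior-style sampling (reparameterization) vs. sampling
# from the prior; everything here is a placeholder for a trained model.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 16
W_dec = rng.normal(size=(latent_dim, 784))    # stand-in for trained decoder weights

def decode(z):
    return z @ W_dec

# (a) sampling around one input's approximate posterior:
#     z = mu + eps * sigma, with sigma = exp(0.5 * log_var)
mu, log_var = rng.normal(size=latent_dim), rng.normal(size=latent_dim)
eps = rng.standard_normal(latent_dim)
z_posterior = mu + eps * np.exp(0.5 * log_var)
x_like_input = decode(z_posterior)

# (b) generating new data: sample z from the prior N(0, I) and decode it
z_prior = rng.standard_normal(latent_dim)
x_new = decode(z_prior)
```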
5
votes
6 answers

Why is exploitation necessary during training?

I have read many blog articles making all kinds of broad analogies to explain the exploration/exploitation trade-off. However, I still can't fully grasp it. On an extremely abstract level, I understand why you would want to "try new things to gain…
Vladimir Belik
  • 352
  • 3
  • 14
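
For the exploration/exploitation question, a minimal epsilon-greedy bandit sketch (the arm reward probabilities are assumed toy values): with epsilon = 1 the agent only explores and its reward during training is just the average over arms, while with a small epsilon it mostly exploits the arm its current estimates favour.

```python
# Minimal sketch: epsilon-greedy on a 3-armed Bernoulli bandit, comparing
# pure exploration (eps=1.0) with mostly-exploitation (eps=0.1).
import numpy as np

def run_bandit(epsilon, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    true_means = np.array([0.1, 0.5, 0.9])    # assumed arm reward probabilities
    estimates = np.zeros(3)
    counts = np.zeros(3)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = int(rng.integers(3))          # explore: pick a random arm
        else:
            arm = int(np.argmax(estimates))     # exploit: pick the current best estimate
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

print("pure exploration (eps=1.0):", run_bandit(1.0))
print("mostly exploitation (eps=0.1):", run_bandit(0.1))
```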
5
votes
1 answer

Why are only neural networks (and not SVMs, for example) used for reinforcement learning?

I know that neural networks are the "universal function approximator", but they also have a huge number of trainable parameters and are extremely prone to overfitting. So my question is: Why aren't SVMs or Random Forests used as the mediators in…
Vladimir Belik
  • 352
  • 3
  • 14
5
votes
1 answer

Why can't computers be random?

I talked with a computer science graduate who said one challenge of making artificial intelligence human-like is making random decisions, and that computers can't be random: they always need a "seed." But if a computer's outcomes are determined by the…
RayOfHope
  • 151
  • 1
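
For the randomness question, a minimal Python sketch of the distinction involved: a seeded pseudo-random generator is fully deterministic (the same seed replays the same sequence), while `os.urandom` draws on entropy the operating system collects from hardware and environmental noise rather than from a user-supplied seed.

```python
# Minimal sketch: deterministic seeded PRNG vs. OS-provided entropy.
import os
import random

random.seed(42)
a = [random.random() for _ in range(3)]
random.seed(42)
b = [random.random() for _ in range(3)]
print(a == b)               # True: the PRNG just replays the same sequence

print(os.urandom(8).hex())  # drawn from OS entropy, not reproducible from a seed
```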
5
votes
1 answer

Does the term "data augmentation" imply increasing the training dataset?

I have a manuscript that has been reviewed, and one of the reviewers commented on my use of the term "data augmentation", saying that it might not be the appropriate term in my case (explained below). I collected a large dataset of short audio files…