FANN in PHP: how do you make the output neurons return the recognized characters rather than probabilities?



  • I'm training a neural network: as input I feed a bitmap of the image's pixel colors, and I want to get back the text shown in the picture.

    • Input neurons: height × width of the picture
    • Output neurons: 6 (the text has a fixed length)

    I train it by calling fann_train($ann, $inputs_array, $outputs_array) for each picture (I couldn't figure out how fann_train_on_data works — specifically, how to generate the training-data file for it).

    The question is this:

    FANN produces weights (probabilities) on the output neurons — how do I turn them into the finished text?

    A possibly naive solution:

    Add one more input neuron (the index of the character I want to recognize) and set the number of output neurons accordingly, so the text can be read off directly.

    Downsides of my solution:

    • Extra neurons are added, which means a larger memory footprint.
    • Training time, and the program's running time overall, grows at least sixfold.

    P.S. I don't think a code listing is required here, but if it's needed, I'll add it to the question.
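On the fann_train_on_data part of the question: FANN reads its training sets from a plain-text file — a header line with the number of pairs, the number of inputs, and the number of outputs, followed by alternating input/output lines. A minimal sketch of generating such a file in plain PHP (the function name, the $samples structure, and the file name are illustrative assumptions, not part of FANN's API):

```php
<?php
// Write a FANN training-data file: a header "num_pairs num_inputs num_outputs",
// then, for each pair, one line of inputs and one line of outputs.
function write_fann_train_file(string $path, array $samples): void
{
    $numInputs  = count($samples[0]['in']);
    $numOutputs = count($samples[0]['out']);
    $lines = [count($samples) . " $numInputs $numOutputs"];
    foreach ($samples as $s) {
        $lines[] = implode(' ', $s['in']);   // pixel values for one picture
        $lines[] = implode(' ', $s['out']);  // desired output-neuron values
    }
    file_put_contents($path, implode("\n", $lines) . "\n");
}

// Two toy samples: 4 "pixels" in, 1 value out (a real picture would have
// height * width inputs, as in the question).
$samples = [
    ['in' => [0, 1, 1, 0], 'out' => [0.05]],
    ['in' => [1, 0, 0, 1], 'out' => [0.15]],
];
write_fann_train_file('ocr.data', $samples);
```

With the php-fann extension installed, the file can then be used for training: `$data = fann_read_train_from_file('ocr.data');` followed by `fann_train_on_data($ann, $data, $max_epochs, $epochs_between_reports, $desired_error);`.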



  • This is supervised learning. You will never get text directly at the output — only probabilities from 0 to 1, which you then have to interpret. So start with a picture one character wide, with the number of output neurons equal to the number of characters in the picture — in our case, one. Say we train the network to recognize the digits 0 through 9: split the probability range (0 to 1) into equal intervals. 0.00–0.10 stands for "0" (if the probability lands in 0.03–0.07, we confidently read it as "0"), 0.10–0.20 stands for "1" (0.13–0.17 means "1"), and so on for every digit — that way we avoid mistakes at the interval boundaries.

    So we feed a "0" to the input and train the network to output 0.05, feed a "1" and train it to output 0.15, and so on, allowing for an inaccuracy of ±0.02.

    Try it with one character first, then with the whole alphabet — though I suspect recognition will be quite slow this way. Different tasks call for different types of neural networks, and the right one needs to be chosen competently.
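The interval scheme above can be sketched in plain PHP — hypothetical helpers for illustration, not part of FANN itself. The training target for a digit is the midpoint of its 0.1-wide interval, and decoding maps the network's output back to a digit, flagging it as a confident read only inside the ±0.02 band:

```php
<?php
// Target output for a digit: the midpoint of its interval,
// e.g. "0" -> 0.05, "1" -> 0.15, ... "9" -> 0.95.
function digit_to_target(int $digit): float
{
    return $digit * 0.1 + 0.05;
}

// Map a network output in [0, 1] back to a digit, and report whether it
// fell within +-0.02 of the interval midpoint (a "confident" read).
function decode_digit(float $p): array
{
    $digit = min(9, max(0, (int) floor($p * 10)));
    $confident = abs($p - digit_to_target($digit)) <= 0.02;
    return [$digit, $confident];
}
```

So an output of 0.15 decodes to the digit 1 with confidence, while 0.19 still falls inside the "1" interval but lands outside the ±0.02 band and would be treated as an uncertain read.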



