Gradient descent in Python



  • How do you minimise a function of several arguments when neither the function itself nor its derivative is known in advance? How do you compute the gradient numerically with respect to each argument (a finite-difference sketch follows below)? Mathematically it is clear, but the implementation isn't.
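
    What "numerically" can look like, as a minimal sketch: if the function is only available as a black-box callable f, each partial derivative can be estimated with a central difference, df/dxk ≈ (f(x + h·ek) − f(x − h·ek)) / (2h). The name numeric_gradient and the step h below are illustrative, not taken from any library:

    def numeric_gradient(f, x, h=1e-6):
        # Estimate every partial derivative of a black-box f at the point x
        # (a list of floats) using a central difference.
        grad = []
        for k in range(len(x)):
            plus  = x[:k] + [x[k] + h] + x[k + 1:]
            minus = x[:k] + [x[k] - h] + x[k + 1:]
            grad.append((f(plus) - f(minus)) / (2 * h))
        return grad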



  • Why do you need the derivative at all? In coordinate descent you simply take a random coordinate (one argument of the function) and try to step away from the current point by some distance d (also called the "learning rate"). If the function value decreases relative to the current point, you move there. If not, you try the same distance in the opposite direction. Then you take the next random coordinate, and so on. If you are stuck at a local minimum and the function no longer decreases along any coordinate, the step is reduced (e.g. halved) and you try to move again. The process ends when either the step becomes smaller than some threshold or the total number of steps exceeds a limit. Beyond that there are implementation details: in what order to walk the coordinates, how to shrink the step (and sometimes grow it, to slide out of a local minimum), and so on; hence the different variants of gradient descent. Surely it is no problem for you to compute:

    f(x1, x2, xk    , ..., xn) # the current point
    f(x1, x2, xk + d, ..., xn) # a step of +d along coordinate xk
    f(x1, x2, xk - d, ..., xn) # a step of -d along coordinate xk
    

    And then compare the three values to decide where (if anywhere) to take the step? And so on, step by step, coordinate by coordinate; a runnable sketch of the whole loop follows below.
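
    Putting that recipe into code, as a minimal sketch: the objective is assumed to be an arbitrary black-box callable f over a list of coordinates, and the name coordinate_descent as well as all parameter defaults are illustrative, not taken from any library.

    def coordinate_descent(f, x0, step=1.0, min_step=1e-8, max_evals=100_000):
        # f: black-box objective taking a list of floats; x0: starting point.
        x = list(x0)
        fx = f(x)
        evals = 1
        while step > min_step and evals < max_evals:
            improved = False
            for k in range(len(x)):              # coordinate by coordinate
                for d in (step, -step):          # try +d first, then -d
                    trial = x[:k] + [x[k] + d] + x[k + 1:]
                    f_trial = f(trial)
                    evals += 1
                    if f_trial < fx:             # the value decreased: move there
                        x, fx = trial, f_trial
                        improved = True
                        break
            if not improved:                     # stuck along every coordinate:
                step /= 2                        # halve the step and try again
        return x, fx

    # Toy check on a paraboloid whose minimum is at (3, -1):
    bowl = lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2
    print(coordinate_descent(bowl, [0.0, 0.0]))  # -> ([3.0, -1.0], 0.0)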


