What is the benefit of a positive-only sigmoid function?

  • Thread starter: ADDA
  • Tags: Function
SUMMARY

The positive-only sigmoid function is widely used in neural networks because it provides a monotonically increasing output, typically ranging from 0 to 1. This introduces the non-linearity needed to model complex behavior, loosely mirroring biological neurons. The standard equation is 1.0 / (1.0 + e^(-x)). Because each node's output represents a degree of activation between 0 (none) and 1 (full), the function supports learning and error correction in neural network architectures.
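The equation above translates directly into code. A minimal sketch in plain Python (the function name is our own choice, not from the thread):

```python
import math

def sigmoid(x: float) -> float:
    """Standard logistic sigmoid: monotonically increasing, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# The output saturates toward 0 for large negative x
# and toward 1 for large positive x.
print(sigmoid(0.0))    # 0.5: the midpoint of the curve
print(sigmoid(10.0))   # ~0.99995: near full activation
print(sigmoid(-10.0))  # ~0.00005: near no activation
```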

PREREQUISITES
  • Understanding of neural network architecture
  • Familiarity with activation functions, specifically the sigmoid function
  • Basic knowledge of mathematical concepts, including exponential functions
  • Experience with programming frameworks for neural networks, such as TensorFlow or PyTorch
NEXT STEPS
  • Research the implementation of the sigmoid activation function in TensorFlow 2.x
  • Explore the differences between sigmoid and other activation functions like ReLU and Tanh
  • Learn about the impact of activation functions on neural network convergence
  • Investigate advanced techniques for tuning activation functions in deep learning models
USEFUL FOR

Data scientists, machine learning engineers, and anyone involved in designing or optimizing neural networks will benefit from this discussion, particularly those focusing on activation functions and their effects on model performance.

ADDA said:
What is the benefit of a positive-only sigmoid function? Or why is it 'most often' used?

NOTES:
The return value is monotonically increasing, most often from 0 to 1 or alternatively from −1 to 1, depending on convention. Source: https://en.wikipedia.org/wiki/Sigmoid_function
Can you give more context to your question? Why are you asking this in the Computing forum? Is it for some modeling work you are doing? More information would make it much easier to try to answer your questions.
 
Is this for a neural net node?

Having the activation function of each node range from 0 (no activation) to 1 (full activation) is a common convention in neural net design.

Here are some common activation functions:

https://en.wikipedia.org/wiki/Activation_function

The sigmoid activation function provides some non-linearity to the neural net to simulate biological systems better.
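For a concrete comparison of the activation functions listed on that page, here is a short sketch of three common ones and their output ranges (our own illustration, not from the thread):

```python
import math

def sigmoid(x):
    # Output in (0, 1): positive-only, bounded.
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Output in (-1, 1): can produce negative activations.
    return math.tanh(x)

def relu(x):
    # Output in [0, inf): positive-only, but unbounded above.
    return max(0.0, x)

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  tanh={tanh(x):+.3f}  relu={relu(x):.1f}")
```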
 
jedishrfu, you are correct. The equation, 1.0 / (1.0 + e^(-x)), comes from this video:

When I implemented a network, however, the output always converged to the error vector. Perhaps I was wrong; I no longer have the code.

My question comes from the idea that the network has to pull down on wrong input. How can a node pull down with a positive only activation function?
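One way a network can still push a value down with a positive-only activation is through negative weights: the node's output is always positive, but the weight applied to that output can be negative, making its contribution to the next layer negative. A minimal sketch (the variable names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A node's activation is always positive with a sigmoid...
activation = sigmoid(3.0)           # ~0.95, strictly in (0, 1)

# ...but a negative outgoing weight turns that into a negative
# contribution to the next node's input, i.e. it "pulls down".
weight = -2.0
contribution = weight * activation  # negative value
print(contribution)
```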
 
Take a look at this function with only positive x values: y at x = 0 is approximately 0.
 

Attachments

  • Sans titre.png (14.9 KB)
You can slide it further to the right by increasing the 2 at the end of the equation, but past about 7 the curve is no longer smooth enough.
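Assuming the equation in the attachment is the shifted sigmoid 1/(1 + e^(-(x - 2))) (our reading of the post above), the shift can be sketched as:

```python
import math

def shifted_sigmoid(x, shift=2.0):
    # Sliding the curve right by `shift`; a larger shift moves
    # the transition further into positive x.
    return 1.0 / (1.0 + math.exp(-(x - shift)))

print(shifted_sigmoid(0.0))       # ~0.12: nearly 0 at x = 0
print(shifted_sigmoid(2.0))       # 0.5: midpoint moved to x = 2
print(shifted_sigmoid(0.0, 7.0))  # ~0.0009: a larger shift pushes y(0) even lower
```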
 
