this post was submitted on 15 Jun 2023

Machine Learning - Training | Fine Tuning

cross-posted from: https://sh.itjust.works/post/116346

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I have a dataset of vectors of shape 1xN, where N is the number of features; each value is a float between -4 and 5. For my project I need to build an autoencoder. However, activation functions like ReLU or tanh constrain what passes through the layers: ReLU only lets positive values through, and tanh squashes everything into (-1, 1). My concern is that, upon decoding from the latent space, the data will not be represented on the same scale: I will get vectors with only positive values, or values confined to a narrow range, when I want the reconstruction to be close to the original.
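For illustration, a minimal sketch (assuming PyTorch; the sample vector is made up) of how each activation distorts values spanning the -4 to 5 range described above:

```python
import torch

x = torch.tensor([-4.0, -1.5, 0.0, 2.3, 5.0])  # made-up values spanning the data range

print(torch.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 2.3000, 5.0000]) -- negatives lost
print(torch.tanh(x))  # tensor([-0.9993, -0.9051, 0.0000, 0.9801, 0.9999]) -- squashed into (-1, 1)
```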

Should I apply some kind of transformation first, like adding a positive constant, taking exp(), or raising the data to the power 2, train the VAE on that, and then, when I want the original representation, just take log() or log2() of the output? Or am I missing some configuration of activation functions that can give me an output similar to the original input?
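As a discussion starter: one invertible variant of the transform idea above is a plain affine rescaling of the data into the activation's output range, applied before training and undone after decoding. A minimal sketch, assuming PyTorch, a tanh output layer, and the -4/5 bounds from the question (the layer sizes here are made up):

```python
import torch
import torch.nn as nn

LO, HI = -4.0, 5.0  # data bounds taken from the question

def scale(x):
    # affine map [-4, 5] -> [-1, 1], matching tanh's output range
    return 2.0 * (x - LO) / (HI - LO) - 1.0

def unscale(y):
    # inverse affine map [-1, 1] -> [-4, 5]
    return (y + 1.0) / 2.0 * (HI - LO) + LO

# toy autoencoder; tanh on the decoder output now covers the scaled data range
model = nn.Sequential(
    nn.Linear(10, 4), nn.ReLU(),   # encoder (N=10 features, made up)
    nn.Linear(4, 10), nn.Tanh(),   # decoder
)

x = torch.rand(8, 10) * (HI - LO) + LO  # fake batch of vectors in [-4, 5]
recon = unscale(model(scale(x)))        # reconstruction back on the original scale
```

Another configuration worth discussing is simply leaving the decoder's final layer linear (no activation at all), which places no range constraint on the reconstruction.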
