Machine Learning - Training | Fine Tuning

cross-posted from: https://sh.itjust.works/post/223572

This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question: Autoencoders and auto-associative memory seem to be closely related. It appears the terminology changed; is there a difference between the two, or did the wording simply change over time?
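For readers comparing the two terms: both describe a network whose output is trained (or constructed) to reproduce its own input pattern. Below is a minimal sketch of that shared auto-associative objective, assuming PyTorch; the layer sizes and toy data are illustrative, not from the original post.

```python
# A small autoencoder trained to reproduce its own input, which is
# exactly the auto-associative objective: target == input.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 4),   # encoder: compress 16 features to a 4-d code
    nn.Tanh(),
    nn.Linear(4, 16),   # decoder: map the code back to the input space
)

x = torch.randn(256, 16)  # toy data standing in for stored patterns
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)  # reconstruct the input itself
    loss.backward()
    opt.step()
```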

cross-posted from: https://sh.itjust.works/post/116346

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I have a dataset that contains vectors of shape 1xN, where N is the number of features. Each value is a float between -4 and 5. For my project I need to build an autoencoder; however, activation functions like ReLU or tanh will either pass only positive values through the layers or constrain values to the range [-1, 1]. My concern is that, upon decoding from the latent space, the data will not be represented in the same way: I will get vectors with only positive values, or with negative values squashed into a narrow range, while I want the reconstruction to be close to the original.

Should I apply some kind of transformation, such as adding a positive constant, applying exp(), or raising the data to the power of 2, train the VAE, and then, if I want the original representation, just apply log() or log2() to the output? Or am I missing some configuration of activation functions that can give me an output similar to the original input?
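One common answer to this kind of question is that the hidden layers can use ReLU or tanh freely as long as the decoder's output layer is linear (no activation), so the reconstruction can take any real value, including the original [-4, 5] range. A minimal sketch, assuming PyTorch; the architecture and toy data are illustrative assumptions, not the poster's code.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features),  # linear output: unbounded, covers negatives
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder(n_features=20)
x = torch.empty(64, 20).uniform_(-4.0, 5.0)  # toy data in the stated [-4, 5] range
loss = nn.functional.mse_loss(model(x), x)   # reconstruction can match negatives directly
```

Alternatively, an affine rescaling of the inputs to [0, 1] (with a sigmoid output) works and is exactly invertible afterwards. Note that squaring followed by log() would not recover the original data in any case, since squaring discards the sign.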

cross-posted from: https://sh.itjust.works/post/67956

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I'm training an autoencoder on a time series that consists of repeating patterns (because the same process is repeated again and again). If I then use this autoencoder to reconstruct another one of these patterns, I expect the reconstruction to be worse if the pattern differs from the ones it was trained on.

Is the fact that the time series consists of repeating patterns something that needs to be considered in any way during training or data preprocessing? I am currently applying this to the raw channels.

Thank you.
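If the period of the repetition is known (an assumption here), one common preprocessing step is to align the training windows with the repeating pattern, so that each sample the autoencoder sees is one cycle. A minimal NumPy sketch; the function name, the known period, and the toy signal are all illustrative assumptions.

```python
import numpy as np

def to_windows(series: np.ndarray, period: int) -> np.ndarray:
    """Reshape a 1-D series into (n_cycles, period) windows, dropping the tail."""
    n_cycles = len(series) // period
    windows = series[: n_cycles * period].reshape(n_cycles, period)
    # z-score each window so amplitude drift doesn't dominate reconstruction error
    mean = windows.mean(axis=1, keepdims=True)
    std = windows.std(axis=1, keepdims=True) + 1e-8
    return (windows - mean) / std

series = np.sin(np.linspace(0, 40 * np.pi, 4000))  # toy repeating signal
X = to_windows(series, period=200)                 # one training sample per cycle
```

With this framing, a deviating cycle shows up as a window with unusually high reconstruction error, which matches the expectation stated in the question.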
