Paco Zamora-Martínez
http://pakozm.hol.es
About
Paco Zamora-Martínez's posts

Our latest publication, in PLOS ONE

I have written a wrapper over the qsub command that simplifies how PBS scripts are managed. Find it here; it may be useful to people working with PBS clusters.

https://github.com/pakozm/qsub-wrapper
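To illustrate the kind of boilerplate such a wrapper can hide, here is a minimal sketch of generating a PBS job script and submitting it in one step. The function name, flags, and directives below are illustrative assumptions, not the actual qsub-wrapper interface.

```shell
# Hypothetical helper: writes the PBS directive boilerplate around a
# command so each job does not need a hand-written script.
generate_pbs_script() {
  local name="$1" queue="$2" cmd="$3"
  cat <<EOF
#!/bin/bash
#PBS -N $name
#PBS -q $queue
#PBS -l nodes=1:ppn=1
cd \$PBS_O_WORKDIR
$cmd
EOF
}

# Without a wrapper: write the script by hand, then run `qsub script.sh`.
# With one, this collapses to a single call:
generate_pbs_script myjob batch "echo hello" > myjob.sh
# qsub myjob.sh   # actual submission requires a PBS cluster
```

The generated `myjob.sh` carries the `#PBS` directives that qsub reads (job name, queue, resource request), which is exactly the repetitive part a wrapper takes care of.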

Our paper at ICANN 2016, on the integration of supervised and unsupervised losses for training deep neural networks:
http://www.slideshare.net/franciscozamoraceu/integration-of-unsupervised-and-supervised-criteria-for-dnns-training

Our finding is that the unsupervised loss should be decreased during training in order to ensure model optimality.

The paper is available at Springer:
http://link.springer.com/chapter/10.1007/978-3-319-44781-0_7
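One common way to realize "decrease the unsupervised loss during training" is to weight it with a schedule that decays over epochs. The sketch below assumes that reading; the schedule shape, names, and constants are illustrative, not the paper's actual method.

```python
def beta_schedule(epoch, beta0=1.0, decay=0.9):
    """Exponentially decaying weight for the unsupervised term.

    beta0 and decay are arbitrary illustrative constants.
    """
    return beta0 * decay ** epoch

def combined_loss(sup_loss, unsup_loss, epoch):
    """Training criterion L = L_sup + beta(epoch) * L_unsup.

    Early in training the unsupervised term dominates regularization;
    as beta decays, optimization focuses on the supervised objective.
    """
    return sup_loss + beta_schedule(epoch) * unsup_loss

# The unsupervised contribution shrinks as training proceeds:
early = combined_loss(0.5, 2.0, epoch=0)
late = combined_loss(0.5, 2.0, epoch=50)
```

With this kind of schedule the total criterion converges toward the pure supervised loss, which is one way to interpret the optimality claim above.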

