KyungHyun Cho
533 followers
KyungHyun's posts

Post has attachment
A Post-Doctoral Researcher Position in
Deep Learning for Medical Image Analysis

Prof. Kyunghyun Cho (http://www.kyunghyuncho.me/) at the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Group (http://cilvr.cs.nyu.edu/), Department of Computer Science (https://cs.nyu.edu/), New York University invites applications for a postdoctoral position on deep learning for medical image analysis.

Applicants are expected to have a strong background and experience in developing and investigating deep neural networks for computer vision, in addition to good knowledge of machine learning and excellent programming skills. Applicants should be able to implement deep neural networks, including multilayered convolutional networks and recurrent networks, for large-scale data consisting of many high-resolution images and associated textual descriptions.
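
As a rough illustration of those two building blocks (not a description of any planned project), here is a minimal, framework-free NumPy sketch of a single 2-D convolution and one step of a vanilla recurrent network; all shapes and values are made-up stand-ins:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: h_t = tanh(x_t W_xh + h_{t-1} W_hh + b)."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Toy usage: convolve a random "image", then run a short "text" through the RNN.
feat = conv2d(np.random.randn(8, 8), np.random.randn(3, 3))
print(feat.shape)  # (6, 6)

h = np.zeros(4)
W_xh, W_hh, b_h = np.random.randn(5, 4), np.random.randn(4, 4), np.zeros(4)
for x_t in np.random.randn(3, 5):  # three 5-dimensional "word vectors"
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # (4,)
```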

The appointment will be for one year, with the option of renewal for a further year contingent on satisfactory performance. The candidate will be expected to interact with students and faculty in CILVR.

To be considered for the position, send your CV, a list of publications, and the contact details of two references to kyunghyun.cho@nyu.edu.




Post has attachment
CALL FOR PAPERS

================================================
RepEval 2016: The 1st Workshop on Evaluating Vector-Space Representations for NLP
================================================

Mission Statement: To foster the development of new and improved ways of measuring the quality and understanding the properties of vector space representations in NLP.

Time & Location: Berlin, Germany, August 12th 2016 (ACL 2016 workshop).

Website: https://sites.google.com/site/repevalacl16

===Motivation===

Models that learn real-valued vector representations of words, phrases, sentences, and even documents are ubiquitous in today's NLP landscape. These representations are usually obtained by training a model on large amounts of unlabeled data, and then employed in NLP tasks and downstream applications. While such representations should ideally be evaluated according to their value in these applications, doing so is laborious, and it can be hard to rigorously isolate the effects of different representations for comparison. There is therefore a need for evaluation via simple and generalizable proxy tasks. To date, these proxy tasks have been mainly focused on lexical similarity and relatedness, and do not capture the full spectrum of interesting linguistic properties that are useful for downstream applications. This workshop challenges its participants to propose methods and/or design benchmarks for evaluating the next generation of vector space representations, for presentation and detailed discussion at the event. Following the workshop, the highest-quality proposals will receive the support of the organizers and participants, and some financial support, to help produce their proposed resource to the highest standard.
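
To make the lexical-similarity proxy concrete: the usual protocol rank-correlates a model's cosine similarities with human relatedness ratings. Below is a minimal sketch of that protocol; the vectors and ratings are random/made-up stand-ins, not a real benchmark:

```python
import numpy as np
from scipy.stats import spearmanr

# Toy word vectors and human relatedness ratings (illustrative only).
vectors = {w: np.random.randn(50) for w in ["cat", "dog", "car", "truck"]}
human_ratings = {("cat", "dog"): 8.5, ("car", "truck"): 8.0, ("cat", "car"): 2.0}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Rank-correlate the model's similarity scores with the human judgments.
model_scores = [cosine(vectors[a], vectors[b]) for a, b in human_ratings]
rho, _ = spearmanr(model_scores, list(human_ratings.values()))
print(f"Spearman correlation with human judgments: {rho:.2f}")
```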


===Submissions===

We encourage researchers at all levels of experience to consider contributing to the discussion at RepEval by making a short submission, either as an analysis of existing benchmarks or as a proposal for new ones.

=Analysis Track=

An analysis submission should analyze and discuss the strengths and weaknesses of existing evaluation tasks, providing helpful insights for designers of new tasks. Analysis papers will be reviewed, accepted, and published before the proposal track's camera-ready deadline, so that new task proposals can benefit from these findings.

As part of their analysis, papers in this track might like to consider the following questions:
- What are the pros and cons of existing evaluations?
- What are the limitations of task-independent representations or their evaluation?
- Given a specific downstream application, which existing evaluation (or family of evaluations) is a good predictor of performance improvement?
- Which linguistic/semantic/psychological properties are captured by existing evaluations? Which are not?
- What methodological mistakes were made in the creation of existing evaluation datasets?

The analysis track is not limited to these topics. We believe that any manuscript presenting a sound argument on representation evaluation would be a great addition to the workshop.

=Proposal Track=

A proposal submission should propose a novel method for evaluating representations. It need not construct an actual dataset, but it should describe a way (or several alternative ways) of collecting one. Proposals are expected to provide roughly 5-10 examples as a proof of concept.

In addition, each proposal should explicitly mention:
- Which type of representation it evaluates (e.g., word, sentence, document)
- For which downstream application(s) it functions as a proxy
- Any linguistic/semantic/psychological properties it captures

Among other important points, proposals should take the following into consideration:
- If the task captures some linguistic phenomenon via annotators, what evidence is there that it is robustly observed in humans (e.g., inter-annotator agreement)? (A minimal agreement sketch follows this list.)
- How easy would it be for other researchers to accurately reproduce the evaluation (not necessarily the dataset)?
- Will the dataset be cost-effective to produce?
- Is a specific family of models expected to perform particularly better (or worse) on the task? In other words, which types of models is this evaluation targeted at?
- How should the evaluation's results be interpreted?
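
As a minimal illustration of the agreement check raised in the first point above, here is a sketch of Cohen's kappa between two hypothetical annotators; all labels are made up:

```python
import numpy as np

def cohen_kappa(a, b, k):
    """Cohen's kappa for two annotators who labeled the same items with k classes."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.mean(a == b)
    # Expected agreement if the annotators labeled independently with these marginals.
    expected = sum(np.mean(a == c) * np.mean(b == c) for c in range(k))
    return (observed - expected) / (1.0 - expected)

ann1 = [0, 1, 1, 2, 0, 2, 1, 0]  # hypothetical annotator 1
ann2 = [0, 1, 2, 2, 0, 2, 1, 1]  # hypothetical annotator 2
print(f"kappa = {cohen_kappa(ann1, ann2, k=3):.2f}")
```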

=Submission Format=

Submissions to both tracks should be 2-4 pages of content in ACL format, with an unlimited number of pages for references. For the proposal track, we encourage shorter content (2-3 pages), leaving more room for examples and their visualization.


===Best Proposal Awards Sponsored by Facebook AI Research===

Two proposal-track papers will be selected by a special committee and awarded financial support for turning their idea into a large-scale, high-quality dataset via crowdsourcing or other annotation efforts. We hope that the workshop community's endorsement will also promote the use of these new evaluations.


===Important Dates===

Submission: May 8th 2016
Notification: June 5th 2016
Camera-Ready (Analysis Track): June 12th 2016
Camera-Ready (Proposal Track): June 26th 2016*
Workshop Date: August 12th 2016

*This will give proposal-track authors enough time to review any relevant results that may arise from the analysis track and cite them as motivation.



===Organizers===

Omer Levy, Bar-Ilan University
Felix Hill, Cambridge University
Roi Reichart, Technion - Israel Institute of Technology
Kyunghyun Cho, New York University
Anna Korhonen, Cambridge University
Yoav Goldberg, Bar-Ilan University
Antoine Bordes, Facebook AI Research


Post has attachment
1st Workshop on Representation Learning for NLP: Call for Papers

The 1st Workshop on Representation Learning for NLP (https://sites.google.com/site/repl4nlp2016/) invites papers of a theoretical or experimental nature on all relevant topics. Relevant topics for the workshop include, but are not limited to, the following areas (in alphabetical order):

- Analysis of language using eigenvalue, singular value and tensor decompositions (see the sketch after this list)
- Distributional compositional semantics
- Integration of distributional representations with other models
- Knowledge base embedding
- Language embeddings and their applications
- Language modeling for automatic speech recognition, statistical machine translation, and information retrieval
- Language modeling for logical and natural reasoning
- Latent-variable and representation learning for language
- Multi-modal learning for distributional representations
- Neural networks and deep learning in NLP
- Spectral learning and the method of moments in NLP
- The role of syntax in compositional models
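
As a concrete illustration of the first topic, here is a minimal sketch deriving word vectors from a singular value decomposition of a toy co-occurrence matrix; real systems would typically apply PPMI weighting to counts from a large corpus:

```python
import numpy as np

rng = np.random.default_rng(0)
cooc = rng.poisson(2.0, size=(6, 6)).astype(float)  # toy co-occurrence counts
cooc = np.log1p(cooc)                               # dampen raw counts

U, S, Vt = np.linalg.svd(cooc, full_matrices=False)
d = 3                                               # embedding dimension
word_vectors = U[:, :d] * S[:d]                     # rank-d word embeddings
print(word_vectors.shape)                           # (6, 3): one vector per word
```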

Important Dates

- Deadline for submission: 8 May 2016
- Notification of acceptance: 5 June 2016
- Deadline for camera-ready version: 22 June 2016
- Early registration deadline (ACL'16): To be announced.
- Workshop: 11 August 2016

Submissions

Authors should submit a full paper of up to 8 pages in PDF format, with up to 2 additional pages for references. The reported research should be substantially original. Accepted papers will be presented as posters. Selected papers may also be presented orally at the discretion of the committee.

All submissions must be in PDF format and must follow the ACL 2016 formatting requirements. See the ACL 2016 Call For Papers for reference: http://acl2016.org/index.php?article_id=9.

Reviewing will be double-blind, so no author information should be included in the papers; self-references that identify the authors should be avoided or anonymized.

Submissions must be made through the Softconf website set up for this workshop: https://www.softconf.com/acl2016/repl4nlp2016/. Style files and other information about paper formatting requirements will be made available on the conference website, http://acl2016.org.

Accepted papers will appear in the workshop proceedings, where no distinction will be made on the basis of length or mode of presentation.


Post has shared content
Very proud to be open-sourcing TensorFlow, Google's newest Deep Learning framework! TensorFlow is both a production-grade C++ backend, which runs on Intel CPUs, NVIDIA GPUs, Android, iOS and OS X, and a very simple and research-friendly Python front-end that interfaces with NumPy, IPython notebooks, and all the familiar Python-based scientific tooling that we love. TensorFlow is what we use every day in the Google Brain team, and while it's still very early days and there are a ton of rough edges to be ironed out, I'm excited about the opportunity to build a community of researchers, developers and infrastructure providers around it. Try it out!
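
To get a feel for that Python front-end, here is a minimal sketch using the graph-building API roughly as it looked at release (tf.placeholder and tf.Session were later removed in TensorFlow 2.x in favor of eager execution); it builds and evaluates a small affine map:

```python
import numpy as np
import tensorflow as tf

# Build a symbolic graph for y = xW + b; nothing runs yet.
x = tf.placeholder(tf.float32, shape=[None, 3])  # symbolic input
W = tf.Variable(tf.random_normal([3, 2]))        # learnable weights
b = tf.Variable(tf.zeros([2]))
y = tf.matmul(x, W) + b

# A session executes the graph on whichever device is available.
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())      # release-era initializer
    out = sess.run(y, feed_dict={x: np.random.randn(4, 3).astype("float32")})
    print(out.shape)                             # (4, 2), back as plain NumPy
```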

Post has attachment
=====================================================
NIPS 2015 Workshop: Multimodal Machine Learning

Montreal, Quebec, Canada

https://sites.google.com/site/multiml2015/
=====================================================

IMPORTANT DATES

- Submission Deadline: October 9th, 2015
- Author Notification: October 24th, 2015
- Workshop: December 11th, 2015

KEYNOTE SPEAKERS

- Shih-Fu Chang (Columbia University)
- Li Deng (Microsoft Research)
- Raymond Mooney (University of Texas at Austin)
- Ruslan Salakhutdinov (Carnegie Mellon University)

OVERVIEW

Multimodal machine learning aims at building models that can process and relate information from multiple modalities. From the early research on audio-visual speech recognition to the recent explosion of interest in models mapping images to natural language, multimodal machine learning is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential.

Learning from paired multimodal sources offers the possibility of capturing correspondences between modalities and gaining an in-depth understanding of natural phenomena. Multimodal data thus provides a means of reducing our dependence on the more standard supervised learning paradigm, which is inherently limited by the availability of labeled examples.
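
One common way to capture such correspondences is a margin-based ranking loss that scores matched pairs (e.g., an image and its caption) above mismatched ones; the sketch below uses random vectors as stand-ins for real learned embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((5, 8))   # 5 image embeddings (hypothetical)
txt = rng.standard_normal((5, 8))   # their 5 paired text embeddings

def ranking_loss(img, txt, margin=0.2):
    scores = img @ txt.T                       # similarity of every image/text pair
    pos = np.diag(scores)                      # scores of the matched pairs
    # Hinge: mismatched pairs should score at least `margin` below matches.
    loss = np.maximum(0.0, margin + scores - pos[:, None])
    np.fill_diagonal(loss, 0.0)                # matched pairs incur no loss
    return loss.mean()

print(f"ranking loss on random embeddings: {ranking_loss(img, txt):.3f}")
```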

This research field brings unique challenges for machine learning researchers, given the heterogeneity of the data and the complementarity often found between modalities. This workshop will facilitate progress in multimodal machine learning by bringing together researchers from natural language processing, multimedia, computer vision, speech processing and machine learning to discuss the current challenges and identify the research infrastructure needed to enable stronger multidisciplinary collaboration.

TOPICS

We are looking for contributed papers that apply machine learning to multimodal data. We are interested in application-oriented papers as well as more fundamental algorithmic or theoretical work.

A non-exhaustive list of relevant topics:

- Automatic image and video description
- Multimodal signal processing
- Audio-visual speech recognition
- Multimodal affect recognition
- Cross-modal multimedia retrieval
- Multi-view multi-task learning
- Multimodal representation learning
- Multi-sensory computational modeling
- Multilingual, multimodal language processing
- Multimodal modeling for robotics control
- Multimodal human behavior modeling

SUBMISSIONS

Authors should submit an extended abstract of 4 to 6 pages (including references). To emphasize the multidisciplinary aspect of this research area, we particularly encourage submissions that have been previously published outside the machine learning community (i.e., outside venues such as NIPS and ICML). We also encourage submission of relevant work in progress.

Submitted abstracts may be a shortened version of a longer paper or technical report, in which case the longer paper should be referenced in the submission. Reviewers will be asked to judge the submission solely on the basis of the submitted extended abstract.

All submissions must be in PDF format, and we encourage authors to follow the style guidelines of NIPS 2015 at: 

https://nips.cc/Conferences/2015/PaperInformation/AuthorSubmissionInstructions

Submissions must be made through:

https://cmt.research.microsoft.com/MMML2015/

Submissions will be reviewed for relevance, quality and novelty. They will be presented as posters during the poster session (before the lunch break). A handful of submissions will be selected for short talks.

ORGANIZERS

- Louis-Philippe Morency (morency@cs.cmu.edu)
- Tadas Baltrušaitis (tbaltrus@cs.cmu.edu)
- Aaron Courville (aaron.courville@umontreal.ca)
- KyungHyun Cho (kyunghyun.cho@nyu.edu)

The last post in the series on neural machine translation is now available, this time with pointers to the actual working code: http://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-3/ The timing is good, since ACL has just started. If anyone's attending ACL and wants to discuss this, feel free to grab me at the conference venue.

The panel discussion of the century will be hosted at the Deep Learning Workshop @ICML 2015 in less than two weeks! The confirmed speakers now include +Yoshua Bengio, +Neil Lawrence, +Juergen Schmidhuber, +Demis Hassabis, +Yann LeCun and +Kevin Murphy (alphabetical order within two categories; can you guess?). The discussion will be moderated by +Max Welling. Please leave any questions you have about the future of deep learning here by next Wednesday; I will collect them and deliver them to Max!
https://sites.google.com/site/deeplearning2015/panel-discussion

