

ICLR 2020
Eighth International Conference on Learning Representations
Event in series ICLR
Dates: 2020/04/26 - 2020/04/30
Homepage: https://iclr.cc/Conferences/2020/
Twitter account: @ICLR_conf_
Submitting link: https://openreview.net/group?id=ICLR.cc/2020/Conference
Location
Location: Addis Ababa, Ethiopia

Important dates
Submissions: 2019/09/25
Notification: 2019/12/19
Papers: Submitted 2594 / Accepted 687 (26.5 %)
Committees
General chairs: Alexander Rush, Shakir Mohamed
PC chairs: Dawn Song, Kyunghyun Cho, Martha White
Workshop chairs: Gabriel Synnaeve, Asja Fischer
PC members: Abhishek Kumar, Adam White, Aleksander Madry, Alexandra Birch


Virtual Conference (formerly planned for Addis Ababa, Ethiopia)

The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning.

ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.

Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

TOPICS

A non-exhaustive list of relevant topics explored at the conference includes:

 * unsupervised, semi-supervised, and supervised representation learning
 * representation learning for planning and reinforcement learning
 * metric learning and kernel learning
 * sparse coding and dimensionality expansion
 * hierarchical models
 * optimization for representation learning
 * learning representations of outputs or states
 * implementation issues, parallelization, software platforms, hardware
 * applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field