Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces
University of Twente, Faculty of Electrical Engineering, Mathematics and Computer Science, Enschede, Netherlands.
IAV GmbH (Volkswagen Group), Intelligent Driving Functions RD Center, Berlin, Germany.
University of Twente, Faculty of Electrical Engineering, Mathematics and Computer Science, Enschede, Netherlands.
Jönköping University, School of Engineering (JTH), Department of Computer Science and Informatics. Jönköping University, School of Engineering (JTH), Department of Computer Science, Jönköping AI Lab (JAIL). ORCID iD: 0000-0002-0343-5072
2021 (English). In: IEEE International Conference on Intelligent Robots and Systems, Institute of Electrical and Electronics Engineers (IEEE), 2021, pp. 190-197. Conference paper, published paper (refereed).
Abstract [en]

Reinforcement learning algorithms have proven capable of solving complicated robotics tasks in an end-to-end fashion, without any need for hand-crafted features or policies. Especially in robotics, where the cost of real-world data is usually extremely high, reinforcement learning solutions with high sample efficiency are needed. In this paper, we propose a framework that combines the learning of a low-dimensional state representation, from high-dimensional observations coming from the robot's raw sensory readings, with the learning of the optimal policy given the learned state representation. We evaluate our framework in the context of mobile robot navigation with continuous state and action spaces. Moreover, we study the problem of transferring what is learned in the simulated virtual environment to the real robot, without further retraining on real-world data, in the presence of visual and depth distractors such as lighting changes and moving obstacles. A video of our experiments can be found at: https://youtu.be/rUdGPKr2Wuo.
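The "robotics priors" approach the abstract refers to shapes the learned state space with loss terms derived from physical assumptions. As a rough illustration only (not the authors' implementation), the sketch below computes two such priors on toy random data, in the spirit of Jonschkowski and Brock's formulation: temporal coherence (states change slowly between consecutive steps) and causality (steps with similar actions but different rewards should map to distant states). All names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for encoded low-dimensional states, actions and rewards.
# In the paper, states come from an encoder learned jointly with the policy.
states = rng.normal(size=(100, 5))    # s_t: 100 time steps, 5-dim state
actions = rng.normal(size=(100, 2))   # a_t: continuous 2-dim actions
rewards = rng.normal(size=(100,))     # r_t

deltas = states[1:] - states[:-1]     # state changes Δs_t

# Temporal coherence prior: penalize large state changes between steps.
temporal_loss = np.mean(np.sum(deltas**2, axis=1))

# Causality prior: sample random step pairs; pairs whose actions are
# similar (Gaussian similarity) but whose rewards differ should not be
# close in state space, so penalize their state similarity.
i, j = rng.integers(0, 100, size=(2, 50))
action_sim = np.exp(-np.sum((actions[i] - actions[j])**2, axis=1))
reward_diff = np.abs(rewards[i] - rewards[j])
state_sim = np.exp(-np.sum((states[i] - states[j])**2, axis=1))
causality_loss = np.mean(action_sim * reward_diff * state_sim)

prior_loss = temporal_loss + causality_loss
print(float(prior_loss))
```

In an actual training loop, gradients of such a combined prior loss would be backpropagated into the observation encoder, while the policy is trained on the resulting low-dimensional states.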

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021. pp. 190-197
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keywords [en]
Mobile robots, Reinforcement learning, Robotics, Virtual reality, Action spaces, Continuous actions, End to end, High-dimensional, Low dimensional, Real-world, Reinforcement learning algorithms, Reinforcement learning solution, Robotic tasks, State representation, Learning algorithms
National subject category
Robotics and automation
Identifiers
URN: urn:nbn:se:hj:diva-55968
DOI: 10.1109/IROS51168.2021.9635936
Scopus ID: 2-s2.0-85124346914
ISBN: 9781665417150 (print)
OAI: oai:DiVA.org:hj-55968
DiVA, id: diva2:1641655
Conference
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021, 27 September 2021 through 1 October 2021
Available from: 2022-03-02. Created: 2022-03-02. Last updated: 2025-02-09. Bibliographically reviewed.

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text
Scopus

Person

Sirmacek, Beril
