<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ostap Okhrin</string-name>
          <email>ostap.okhrin@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Transport and Economics</institution>, <addr-line>TU Dresden</addr-line>, <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Reinforcement learning (RL) has emerged as a powerful method for solving complex control tasks across various domains, from autonomous driving to maritime navigation. Our team's work in RL, particularly in value-based algorithms, addresses critical issues such as overestimation bias, proposing solutions like the T-Estimator (TE) and K-Estimator (KE) for bias control and algorithmic robustness. These advancements are validated through modifications to Q-Learning and the Bootstrapped Deep Q-Network (BDQN), demonstrating superior performance and convergence. Additionally, we have developed a spatial-temporal recurrent neural network architecture for autonomous ships, enhancing robustness under partial observability and compliance with maritime traffic rules. Our recent endeavors also include a modular framework for autonomous surface vehicles on inland waterways, using DRL agents for path planning and path following that significantly outperform traditional control methods. Moreover, our work on dynamic obstacle avoidance environments for mobile robots and drones emphasizes the importance of controlled training difficulty for better generalization and robustness. This approach has been successfully applied across different platforms, reducing the simulation-to-reality (Sim2Real) gap and improving performance in real-world scenarios. Through these contributions, we aim to advance the practical application and reliability of reinforcement learning in diverse and dynamic environments.</p>
      </abstract>
      <kwd-group>
        <kwd>ITAT'24</kwd>
        <kwd>Information technologies - Applications and Theory</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
  </body>
  <back>
    <ref-list />
  </back>
</article>