A
ABB
  • 4937

    The challenge is to process multi-modal data simultaneously and train only one model/agent that generates the desired insight.


    Knowing more, doing better – A “better” decision is one in which every relevant fact and possible outcome has been evaluated and included. This requires a combination of knowledge, situational awareness, and a profound understanding of the processes by which a broad array of variables can be transformed from data into insights.
    In industrial applications, we are often dealing with multi-modal data: speech, text, images, video, joint torques, or any other discrete and continuous observations. Aiming towards autonomy in industry, the challenge is to process this multi-modal data simultaneously and train only one model/agent that generates the desired insight.
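    A minimal sketch of what “one model for all modalities” could look like, assuming PyTorch, images as tensors, text as pre-computed embeddings, and joint torque as a fixed-length vector; the architecture, dimensions, and output size are illustrative assumptions, not ABB's actual setup:

    import torch
    import torch.nn as nn

    class MultiModalModel(nn.Module):
        def __init__(self, text_dim=768, torque_dim=6, hidden=128, n_outputs=4):
            super().__init__()
            # Per-modality encoders projecting into a common hidden space.
            self.image_enc = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, hidden))
            self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
            self.torque_enc = nn.Sequential(nn.Linear(torque_dim, hidden), nn.ReLU())
            # A single fused head produces the desired output ("insight").
            self.head = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, n_outputs))

        def forward(self, image, text_emb, torque):
            z = torch.cat([self.image_enc(image),
                           self.text_enc(text_emb),
                           self.torque_enc(torque)], dim=-1)
            return self.head(z)

    model = MultiModalModel()
    out = model(torch.randn(2, 3, 64, 64),   # image batch
                torch.randn(2, 768),          # text embeddings
                torch.randn(2, 6))            # joint torques
    print(out.shape)                          # torch.Size([2, 4])

    Each modality keeps its own encoder, but the fused head is shared, so the whole pipeline is trained end to end as one model rather than as separate per-modality systems.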


P
Parsd AB
  • 4931

    How to generate summaries of chapters, individual documents and sets of documents, and be able to tune the output based on domain-specific feedback from users?


    Many analysts and knowledge workers are overwhelmed by unstructured content today. It could be investigative journalists, lawyers, risk analysts, cyber security threat analysts, police investigators, etc. In the end, people hope for a tool that magically tells them what they need out of thousands of pages of text. However, before we get there, we believe there is big potential to augment the analysts and optimize their work by providing summaries at the chapter level, the document level, and across many similar documents together. That way they can process just enough data to learn something in the initial phases without having to read through all of it. It would be great if the auto-generated summaries could be tuned by the domain expert, so that we can generate increasingly better summaries based on input from a user or a group of users working on the same issues.
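    As a toy illustration of the chapter-level / document-level hierarchy and of how user feedback could steer it, here is a sketch in which a naive extractive scorer stands in for a real summarization model and user-supplied keywords bias sentence selection; all function names and scoring weights are hypothetical:

    import re
    from collections import Counter

    def summarize(text, feedback_keywords=(), n_sentences=2):
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        word_freq = Counter(w.lower() for w in re.findall(r'\w+', text))
        def score(s):
            words = re.findall(r'\w+', s.lower())
            base = sum(word_freq[w] for w in words) / (len(words) + 1)
            boost = sum(3 for w in words if w in feedback_keywords)  # domain feedback
            return base + boost
        top = sorted(sentences, key=score, reverse=True)[:n_sentences]
        return ' '.join(top)

    def summarize_document(chapters, feedback_keywords=()):
        # Chapter summaries first, then a summary of the summaries.
        chapter_summaries = [summarize(c, feedback_keywords) for c in chapters]
        return summarize(' '.join(chapter_summaries), feedback_keywords)

    chapters = [
        "The contract was signed in 2019. Payments were routed through a shell company. The board was not informed.",
        "The auditors flagged the transfers. No invoices matched the payments. The case was referred to the police.",
    ]
    print(summarize_document(chapters, feedback_keywords={"payments", "invoices"}))

    The same pattern extends upward: document summaries can in turn be summarized across a set of similar documents, with the feedback keywords shared by the group of users working on the same issue.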

  • 4934

    How to tune NLP and named entity recognition (NER) continuously based on user input and feedback in the form of research questions, domain-significant keywords, and feedback on suggested entities?


    Many analysts and knowledge workers are overwhelmed by unstructured content today. It could be investigative journalists, lawyers, risk analysts, cyber security threat analysts, police investigators, etc. In the end, people hope for a tool that magically tells them what they need out of thousands of pages of text.

    In addition to auto-generated summaries, we believe there is big potential in generating “content stats” about each document based on NLP. That will give the user a quick overview of the character of the document (or audio/video). If the user collects a set of entities that expands as a new subject area is being explored, how can we use these lists of entities, together with user feedback on automatic NER, to gradually tune the experience in a given subject area?
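    One possible shape for such a feedback loop, sketched below with hypothetical class and method names: candidate entities from any off-the-shelf NER model are re-ranked per subject area using the entities the user has accepted or rejected and the keywords from their research questions, without retraining the underlying model.

    from collections import defaultdict

    class EntityFeedbackStore:
        def __init__(self):
            self.accepted = defaultdict(set)   # subject area -> confirmed entity strings
            self.rejected = defaultdict(set)   # subject area -> dismissed entity strings
            self.keywords = defaultdict(set)   # research-question keywords per area

        def record(self, area, entity, accepted):
            (self.accepted if accepted else self.rejected)[area].add(entity.lower())

        def add_keywords(self, area, words):
            self.keywords[area].update(w.lower() for w in words)

        def rerank(self, area, candidates):
            """candidates: list of (entity_text, model_score) from the NER model."""
            def adjusted(item):
                text, score = item
                t = text.lower()
                if t in self.rejected[area]:
                    return -1.0                   # user said "not relevant"
                if t in self.accepted[area]:
                    score += 0.5                  # confirmed before
                if any(k in t for k in self.keywords[area]):
                    score += 0.3                  # matches a research question
                return score
            ranked = sorted(candidates, key=adjusted, reverse=True)
            return [c for c in ranked if adjusted(c) >= 0]

    store = EntityFeedbackStore()
    store.add_keywords("procurement-fraud", ["invoice", "tender"])
    store.record("procurement-fraud", "Acme Ltd", accepted=True)
    print(store.rerank("procurement-fraud",
                       [("Acme Ltd", 0.4), ("Stockholm", 0.6), ("invoice 4711", 0.2)]))

    Periodically, the accumulated accepted/rejected examples could also be used as training data to fine-tune the NER model itself for that subject area.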


S
SAAB
  • 4943

    We are interested in collaboration systems in mixed environments: air – sea – underwater


    The environments are very different and pose different challenges:
    Air – high pace, high bandwidth, long-range comms., tough environment
    Sea – lower pace, high bandwidth, shorter-range comms., tough environment
    Underwater – very low pace, very low bandwidth, delays, very short-range comms., rather predictable environment

    Please elaborate on the challenges this scenario poses, such as: different speeds of the OODA loops, latency and delays in communication, how the human would interact with this mixed system of systems, which system should “be in charge” in different situations, the need for “autonomy” for the different systems, etc.
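    As a rough illustration of why loop speed and link latency matter here, the following sketch (with purely assumed, illustrative numbers) estimates how stale an underwater observation is by the time an airborne node can act on it:

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        ooda_period_s: float      # time for one Observe-Orient-Decide-Act cycle
        uplink_latency_s: float   # one-way comms delay toward the next node

    chain = [
        Node("underwater", ooda_period_s=30.0, uplink_latency_s=20.0),  # acoustic link
        Node("sea",        ooda_period_s=5.0,  uplink_latency_s=1.0),
        Node("air",        ooda_period_s=0.5,  uplink_latency_s=0.1),
    ]

    def information_age(chain):
        """Worst-case age of an observation after relaying through the chain."""
        age = 0.0
        for node in chain:
            age += node.ooda_period_s     # wait for the node's next decision cycle
            age += node.uplink_latency_s  # then the transmission delay
        return age

    print(f"Observation age at the air node: {information_age(chain):.1f} s")
    # With these assumed numbers the air node acts on data that is close to a
    # minute old, i.e. over a hundred of its own OODA cycles; one argument for
    # giving the slower systems more local autonomy.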


V
Veoneer
  • 4940

    The challenge is to decide where an algorithm (e.g., for computer vision, visualization or optimization) should be placed in the device-edge-cloud compute continuum, and what resources it should be allocated, in order to maximize utility (speed, performance, comfort, …) while minimizing the overall energy consumption.
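    A toy sketch of the placement decision, in which all latency and energy numbers are illustrative assumptions: enumerate device, edge, and cloud options, keep those that meet a latency requirement, and choose the lowest-energy feasible placement.

    placements = {
        # name: (compute_latency_ms, transfer_latency_ms, compute_energy_J, transfer_energy_J)
        "device": (80.0,  0.0, 0.6, 0.0),
        "edge":   (15.0, 10.0, 0.3, 0.4),
        "cloud":  ( 5.0, 60.0, 0.1, 1.2),
    }

    LATENCY_BUDGET_MS = 50.0  # e.g. a perception deadline (assumed value)

    def evaluate(entry):
        compute_ms, transfer_ms, compute_j, transfer_j = entry
        return compute_ms + transfer_ms, compute_j + transfer_j  # (latency, energy)

    feasible = {name: evaluate(v) for name, v in placements.items()
                if evaluate(v)[0] <= LATENCY_BUDGET_MS}
    best = min(feasible, key=lambda n: feasible[n][1])  # lowest energy among feasible
    print(best, feasible[best])  # -> edge (25.0, 0.7): 25 ms, 0.7 J under these assumptions

    In practice the same trade-off would be evaluated online for many algorithms at once, with utility models per algorithm and with placements reconsidered as network conditions and load change.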