Trustworthy AI and Requirements Engineering: Opportunities, Risks, and Legal Implications

Short description

ChatGPT brings AI to everyone, but how effective is it for requirements engineering? Is it legal to use a tool from OpenAI? What are the other options, and how accessible are they?

Through this talk, we aim to provide a brief background on Large Language Models (LLMs) and illustrate their current limitations. We will provide practical examples, including detecting epics and personas in text, creating user stories and acceptance criteria from free and structured text, creating models and relations, and evaluating user stories against the IREB criteria.
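
To give a flavour of the kind of practical example shown in the talk, the snippet below sketches how a chat-style LLM could be asked to derive user stories with acceptance criteria from a short requirements text. It is a minimal sketch, assuming the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable; the sample text, prompt wording, and model name are illustrative only, and the same idea applies to other hosted or locally run models.

```python
# Minimal sketch: asking an LLM to derive user stories from a requirements text.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in the environment;
# prompt, sample text, and model name are illustrative, not part of the talk material.
from openai import OpenAI

client = OpenAI()

requirements_text = """
Customers browse the catalogue, place items in a basket and pay by credit card.
Warehouse staff receive a picking list for each confirmed order.
"""

prompt = (
    "Extract user stories from the following text. "
    "Use the format: As a <persona>, I want <goal>, so that <benefit>. "
    "Add acceptance criteria for each story.\n\n" + requirements_text
)

response = client.chat.completions.create(
    model="gpt-4o",              # any chat-capable model could be substituted here
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,             # keep the output reasonably deterministic
)

print(response.choices[0].message.content)
```

The same prompt pattern can be pointed at an openly available model instead of a hosted one, which is exactly the trade-off the talk discusses in light of the EU AI Act.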

We will demonstrate how other models can be used by everyone and discuss why they should be considered in light of the upcoming European AI Act.

Value for the audience:
After the talk, listeners will know how to apply state-of-the-art AI to requirements engineering and understand when it should or should not be used.

Problems addressed:
Can an LLM (Large Language Model) create a specification from text? Is the job of the requirements engineer in danger? (Spoiler: no.) How can a requirements engineer use the results?

Is it even legal to use an LLM to create requirements? If so, what do I have to do to make it legal?

Can I use an LLM to evaluate a software specification? Are the results usable, and how could I use them?

Talk language: English
Level: Advanced
Target group: Requirements Engineers

Company:
ireo GmbH

Presented by:
DI, MA Simon Jimenez