Publications (10 of 64)
Hettiarachchi, H., Dridi, A., Gaber, M. M., Parsafard, P., Bocaneala, N., Breitenfelder, K., . . . Vakaj, E. (2025). CODE-ACCORD: A Corpus of building regulatory data for rule generation towards automatic compliance checking. Scientific Data, 12(1), Article ID 170.
2025 (English) In: Scientific Data, E-ISSN 2052-4463, Vol. 12, no 1, article id 170. Article in journal (Refereed). Published.
Abstract [en]

Automatic Compliance Checking (ACC) within the Architecture, Engineering, and Construction (AEC) sector necessitates automating the interpretation of building regulations to achieve its full potential. Converting textual rules into machine-readable formats is challenging due to the complexities of natural language and the scarcity of resources for advanced Machine Learning (ML). Addressing these challenges, we introduce CODE-ACCORD, a dataset of 862 sentences from the building regulations of England and Finland. Only the self-contained sentences, which express complete rules without needing additional context, were considered as they are essential for ACC. Each sentence was manually annotated with entities and relations by a team of 12 annotators to facilitate machine-readable rule generation, followed by careful curation to ensure accuracy. The final dataset comprises 4,297 entities and 4,329 relations across various categories, serving as a robust ground truth. CODE-ACCORD supports a range of ML and Natural Language Processing (NLP) tasks, including text classification, entity recognition, and relation extraction. It enables applying recent trends, such as deep neural networks and large language models, to ACC.

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Computer Sciences; Natural Language Processing
Identifiers
urn:nbn:se:hj:diva-67230 (URN)
10.1038/s41597-024-04320-x (DOI)
001410897400006 ()
39880815 (PubMedID)
2-s2.0-85217356919 (Scopus ID)
GOA;intsam;998117 (Local ID)
GOA;intsam;998117 (Archive number)
GOA;intsam;998117 (OAI)
Funder
EU, Horizon Europe, 101056973, 10040207, 10038999, 10049977
Available from: 2025-02-04. Created: 2025-02-04. Last updated: 2025-02-17. Bibliographically approved.
Ringe, R., Pomarlan, M., Tsiogkas, N., De Giorgis, S., Hedblom, M. M. & Malaka, R. (2025). The Wilhelm Tell Dataset of Affordance Demonstrations. In: HRI '25: Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at 2025 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, March 4-6, 2025, Melbourne, Australia (pp. 1078-1082). IEEE Press
2025 (English) In: HRI '25: Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, IEEE Press, 2025, p. 1078-1082. Conference paper, Published paper (Refereed).
Abstract [en]

Perceiving affordances - i.e. the possibilities for action that an environment or the objects in it provide - is important for robots operating in human environments. Existing approaches train such capabilities on annotated static images or shapes. This work presents a novel dataset for affordance learning of common household tasks. Unlike previous approaches, our dataset consists of video sequences demonstrating the tasks from first- and third-person perspectives, along with metadata about the affordances that are manifested in the task, and is aimed at training perception systems to recognize affordance manifestations. The demonstrations were collected from several participants and record about seven hours of human activity in total. The variety of task performances also allows studying preparatory maneuvers that people may perform for a task, such as how they arrange their task space, which is also relevant for collaborative service robots.

Place, publisher, year, edition, pages
IEEE Press, 2025
Keywords
affordance demonstrations, affordance recognition, domestic service robotics
National Category
Computer Sciences; Robotics and automation; Human Computer Interaction
Identifiers
urn:nbn:se:hj:diva-67408 (URN)
979-8-3503-7893-1 (ISBN)
Conference
2025 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, March 4-6, 2025, Melbourne, Australia
Available from: 2025-03-07. Created: 2025-03-07. Last updated: 2025-03-07. Bibliographically approved.
Hedblom, M. M. (2024). Beyond Space and Time: An Initial Sketch of Formal Accounts to Non-Spatiotemporal Conceptual Sensory Primitives. In: Maria M. Hedblom & Oliver Kutz (Ed.), Proceedings of The Eighth Image Schema Day co-located with The 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024): . Paper presented at The Eighth Image Schema Day co-located with The 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024), Bozen-Bolzano, Italy, November 27-28, 2024. CEUR-WS.org
2024 (English) In: Proceedings of The Eighth Image Schema Day co-located with The 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024) / [ed] Maria M. Hedblom & Oliver Kutz, CEUR-WS.org, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

Derived from the embodied cognition hypothesis, image schemas are conceptual primitives thought to capture the spatiotemporal relationships underlying human conceptualisations. However, many embodied experiences are not spatiotemporal but rather based on the different sensory modalities. In order to provide a more comprehensive perspective of conceptual primitives, this paper looks at the traditional five senses and sketches an initial perspective of how some conceptual primitives could be systematically approached.

Place, publisher, year, edition, pages
CEUR-WS.org, 2024
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 3888
Keywords
Image schemas, Conceptual primitives, Sensory-perception, Conceptual spaces
National Category
Computer Sciences; General Language Studies and Linguistics
Identifiers
urn:nbn:se:hj:diva-66904 (URN)
Conference
The Eighth Image Schema Day co-located with The 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024), Bozen-Bolzano, Italy, November 27-28, 2024
Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-01-07. Bibliographically approved.
Schaap, G. & Hedblom, M. M. (2024). Discussing the creativity of AUTOMATONE: an interactive music generator based on Conway’s Game of Life. In: Kazjon Grace, Maria Teresa Llano, Pedro Martins & Maria M. Hedblom (Ed.), Proceedings of the 15th International Conference on Computational Creativity: . Paper presented at 15th International Conference on Computational Creativity, ICCC'24, June 17-21, 2024, Jönköping, Sweden (pp. 456-460). Association for Computational Creativity (ACC)
2024 (English) In: Proceedings of the 15th International Conference on Computational Creativity / [ed] Kazjon Grace, Maria Teresa Llano, Pedro Martins & Maria M. Hedblom, Association for Computational Creativity (ACC), 2024, p. 456-460. Conference paper, Published paper (Refereed).
Abstract [en]

In recent years, generative AI systems for music composition have transformed not only music generation but the field of computational creativity as a whole. In contrast to the black-boxes of deep learning techniques, classic algorithms offer a transparent alternative to music generation that does not require training data and, due to the autonomous process, such systems could be argued to reflect a more genuine creative process. One such algorithmic system is cellular automata. Designed as grids of binary nodes, cellular automata use mathematical rules to transition between different states that can be used to generate music. The initial state and the particular transition rules allow different patterns to emerge which can be translated into musical compositions. In this paper, we introduce AUTOMATONE, a semi-interactive music generator based on the cellular automaton Conway’s Game of Life. To ensure the quality of the music output, AUTOMATONE is based on pentatonic scales and uses four different state-transition systems to generate beats of different tempos.
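The mechanism the abstract describes - a grid of binary cells evolving under Conway's Game of Life, with emerging patterns translated into notes - can be sketched in a few lines. The grid size, seed pattern, and column-to-scale-degree mapping below are illustrative assumptions only; AUTOMATONE's actual four state-transition systems and its pentatonic mapping are described in the paper.

```python
# Illustrative sketch: one Game of Life step on a toroidal grid, with live
# cells mapped to a pentatonic scale. Not AUTOMATONE's actual design.

PENTATONIC = ["C", "D", "E", "G", "A"]  # C major pentatonic (assumed mapping)

def step(grid):
    """Apply Conway's Game of Life rules to a 2D grid of 0/1 cells.

    A live cell survives with 2 or 3 live neighbours; a dead cell becomes
    live with exactly 3. Edges wrap around (toroidal topology).
    """
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    return [
        [
            1
            if (grid[r][c] and live_neighbours(r, c) in (2, 3))
            or (not grid[r][c] and live_neighbours(r, c) == 3)
            else 0
            for c in range(cols)
        ]
        for r in range(rows)
    ]

def to_notes(grid):
    """Translate each live cell into a note: column index -> scale degree."""
    return [
        [PENTATONIC[c % len(PENTATONIC)] for c, cell in enumerate(row) if cell]
        for row in grid
    ]
```

For example, seeding a 5x5 grid with a vertical "blinker" (a period-2 oscillator) and stepping it produces a horizontal bar whose live cells map to a three-note chord, and a second step restores the original pattern - illustrating how the initial state and transition rules determine the musical output.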

Place, publisher, year, edition, pages
Association for Computational Creativity (ACC), 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-66773 (URN)
978-989-54160-6-6 (ISBN)
Conference
15th International Conference on Computational Creativity, ICCC'24, June 17-21, 2024, Jönköping, Sweden
Available from: 2024-12-17. Created: 2024-12-17. Last updated: 2024-12-17. Bibliographically approved.
Hedblom, M. M. (2024). Every dog has its day: An in-depth analysis of the creative ability of visual generative AI. Cosmos + Taxis, 12(5 + 6), 88-103
2024 (English) In: Cosmos + Taxis, E-ISSN 2291-5079, Vol. 12, no 5 + 6, p. 88-103. Article in journal (Refereed). Published.
Abstract [en]

The recent remarkable success of generative AI models in creating text and images has already started altering our perspective of intelligence and the “uniqueness” of humanity in this world. Simultaneously, arguments on why AI will never exceed human intelligence are ever-present, as seen in Landgrebe and Smith (2022). To address whether machines may rule the world after all, this paper zooms in on one of the aspects of intelligence Landgrebe and Smith (2022) neglected to consider: creativity. Using Rhodes' four Ps of creativity as a starting point, this paper evaluates the creative ability of visual generative AI models with respect to the state of the art in creativity theory. Most of the reflective evaluation is performed through a case study in generating illustrations of dogs using the generative AI tool Midjourney.

Place, publisher, year, edition, pages
Cosmos + Taxis, 2024
National Category
Computer Sciences; Arts
Identifiers
urn:nbn:se:hj:diva-64306 (URN)
POA;;64306 (Local ID)
POA;;64306 (Archive number)
POA;;64306 (OAI)
Available from: 2024-05-27. Created: 2024-05-27. Last updated: 2024-05-27. Bibliographically approved.
Pomarlan, M., De Giorgis, S., Ringe, R., Hedblom, M. M. & Tsiogkas, N. (2024). Hanging around: Cognitive inspired reasoning for reactive robotics. In: Cassia Trojahn, Daniele Porello & Pedro Paulo Favato Barcelos (Ed.), Formal Ontology in Information Systems: Proceedings of the 14th International Conference (FOIS 2024). Paper presented at 14th International Conference on Formal Ontology in Information Systems (FOIS 2024), 08-09 July 2024 (online) and 15-19 July 2024 (Enschede, Netherlands) (pp. 2-15). Amsterdam: IOS Press, 394
2024 (English) In: Formal Ontology in Information Systems: Proceedings of the 14th International Conference (FOIS 2024) / [ed] Cassia Trojahn, Daniele Porello & Pedro Paulo Favato Barcelos, Amsterdam: IOS Press, 2024, Vol. 394, p. 2-15. Conference paper, Published paper (Refereed).
Abstract [en]

Situationally-aware artificial agents operating with competence in natural environments face several challenges: spatial awareness, object affordance detection, dynamic changes and unpredictability. A critical challenge is the agent’s ability to identify and monitor environmental elements pertinent to its objectives. Our research introduces a neurosymbolic modular architecture for reactive robotics. Our system combines a neural component performing object recognition over the environment and image processing techniques such as optical flow, with symbolic representation and reasoning. The reasoning system is grounded in the embodied cognition paradigm, via integrating image schematic knowledge in an ontological structure. The ontology is operatively used to create queries for the perception system, decide on actions, and infer entities’ capabilities derived from perceptual data. The combination of reasoning and image processing allows the agent to focus its perception for normal operation as well as discover new concepts for parts of objects involved in particular interactions. The discovered concepts allow the robot to autonomously acquire training data and adjust its subsymbolic perception to recognize the parts, as well as making planning for more complex tasks feasible by focusing search on those relevant object parts. We demonstrate our approach in a simulated world, in which an agent learns to recognize parts of objects involved in support relations. While the agent has no concept of handle initially, by observing examples of supported objects hanging from a hook it learns to recognize the parts involved in establishing support and becomes able to plan the establishment/destruction of the support relation. This underscores the agent’s capability to expand its knowledge through observation in a systematic way, and illustrates the potential of combining deep reasoning with reactive robotics in dynamic settings.

Place, publisher, year, edition, pages
Amsterdam: IOS Press, 2024
Series
Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389, E-ISSN 1879-8314 ; 394
Keywords
Neurosymbolic Approaches, Image Schemas, Situated Robotics
National Category
Computer Sciences; Computer graphics and computer vision
Identifiers
urn:nbn:se:hj:diva-66770 (URN)
10.3233/FAIA241288 (DOI)
2-s2.0-85217059373 (Scopus ID)
978-1-64368-561-8 (ISBN)
Conference
14th International Conference on Formal Ontology in Information Systems (FOIS 2024), 08-09 July 2024 (online) and 15-19 July 2024 (Enschede, Netherlands)
Note

This work was supported by the Future Artificial Intelligence Research (FAIR) project, code PE00000013 CUP 53C22003630006, the German Research Foundation DFG, as part of Collaborative Research Center (Sonderforschungsbereich) 1320 Project-ID 329551904 EASE - Everyday Activity Science and Engineering, subproject “P01 - Embodied semantics for the language of action and change: Combining analysis, reasoning and simulation”, and by the FET-Open Project #951846 “MUHAI Meaning and Understanding for Human-centric AI” by the EU Pathfinder and Horizon 2020 Program.

Available from: 2024-12-17. Created: 2024-12-17. Last updated: 2025-02-19. Bibliographically approved.
Grace, K., Llano, M. T., Martins, P. & Hedblom, M. M. (Eds.). (2024). Proceedings of the 15th International Conference on Computational Creativity. Paper presented at 15th International Conference on Computational Creativity, ICCC'24, June 17-21, 2024, Jönköping, Sweden. Association for Computational Creativity (ACC)
2024 (English). Conference proceedings (editor) (Refereed).
Abstract [en]

From conference website: 

The International Conference on Computational Creativity is the premier forum for disseminating research on computational and AI creativity, bringing together researchers interested in exploring the ever increasing capacities of technology in creative domains such as writing, visual arts and music. State-of-the-art AI-algorithms can now create works that to a casual observer are comparable to those of human professionals, but many have questioned whether this is sufficient (or even necessary) for a computer to be considered to be “acting creatively”. This conference and its associated workshops exist to discuss the how and what of computational participation in creativity.

Come join us in the beautiful Swedish town of Jönköping ([ˈjœ̂nːˌɕøːpɪŋ]) at Midsummer and discuss questions like: by which metrics should we judge the creativity of AI output? Can an AI tool augment the creativity of its human users? What are the ethical implications of algorithms taking on creative roles? And, of course: can an AI be creative at all? These questions and more are at the heart of the sub-field of AI known as Computational Creativity (CC), defined as “the art, science, philosophy, and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviors that unbiased observers would deem to be creative”.

Place, publisher, year, edition, pages
Association for Computational Creativity (ACC), 2024. p. 465
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-66772 (URN)
978-989-54160-6-6 (ISBN)
Conference
15th International Conference on Computational Creativity, ICCC'24, June 17-21, 2024, Jönköping, Sweden
Available from: 2024-12-17. Created: 2024-12-17. Last updated: 2024-12-17. Bibliographically approved.
Hedblom, M. M. & Kutz, O. (Eds.). (2024). Proceedings of The Eighth Image Schema Day co-located with The 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024). Paper presented at The Eighth Image Schema Day co-located with The 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024), Bozen-Bolzano, Italy, November 27-28, 2024. CEUR-WS, 3804
2024 (English). Conference proceedings (editor) (Refereed).
Place, publisher, year, edition, pages
CEUR-WS, 2024
Series
CEUR Workshop Proceedings, E-ISSN 1613-0073 ; 3888
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hj:diva-66905 (URN)
Conference
The Eighth Image Schema Day co-located with The 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024), Bozen-Bolzano, Italy, November 27-28, 2024
Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-01-07. Bibliographically approved.
Pomarlan, M., Hedblom, M. M., Spillner, L. & Porzel, R. (2024). Revising defeasible theories via instructions. In: Sabrina Kirrane, Mantas Šimkus, Ahmet Soylu & Dumitru Roman (Ed.), Rules and Reasoning: 8th International Joint Conference, RuleML+RR 2024, Bucharest, Romania, September 16–18, 2024, Proceedings. Paper presented at 8th International Joint Conference, RuleML+RR 2024, Bucharest, Romania, September 16–18, 2024 (pp. 176-190). Cham: Springer
2024 (English) In: Rules and Reasoning: 8th International Joint Conference, RuleML+RR 2024, Bucharest, Romania, September 16–18, 2024, Proceedings / [ed] Sabrina Kirrane, Mantas Šimkus, Ahmet Soylu & Dumitru Roman, Cham: Springer, 2024, p. 176-190. Conference paper, Published paper (Refereed).
Abstract [en]

Progress in AI raises agent alignment problems. In this paper, we look at the problem of instructing an agent, i.e. informing it about a regularity in the world it did not previously know. We study an idealized case: agents reasoning with logical theories. The idealization helps to understand the space of possibilities of the problem, and illustrates potential pitfalls and solutions. We believe non-monotonic theories more plausibly approximate human practical and commonsense reasoning so our agents here also use non-monotonic inference. However, instructing a non-monotonic theory does not always result in better alignment. One main cause of this phenomenon is humans omitting the kind of information used by a non-monotonic inference system to resolve conflicts between its parts. We illustrate this with theories induced from a dataset consisting of situated objects. We argue that obtaining non-monotonic theories that respond better to instruction requires additional restrictions on the formalism and theory update procedure.

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15183
Keywords
Defeasible Logic, Model Reconciliation, Belief Revision
National Category
Computer Sciences
Identifiers
urn:nbn:se:hj:diva-66233 (URN)
10.1007/978-3-031-72407-7_13 (DOI)
2-s2.0-85205122289 (Scopus ID)
978-3-031-72406-0 (ISBN)
978-3-031-72407-7 (ISBN)
Conference
8th International Joint Conference, RuleML+RR 2024, Bucharest, Romania, September 16–18, 2024
Available from: 2024-09-17. Created: 2024-09-17. Last updated: 2024-10-07. Bibliographically approved.
Hedblom, M. M., Neuhaus, F. & Mossakowski, T. (2024). The Diagrammatic Image Schema Language (DISL). Spatial Cognition and Computation
2024 (English) In: Spatial Cognition and Computation, ISSN 1387-5868, E-ISSN 1573-9252. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Image schemas are mental patterns learned from perceptual experiences capturing conceptual constructions in expressions. In linguistic analysis, their visualizations are often context-dependent without a generalizable structure. Addressing this, we introduce The Diagrammatic Image Schema Language: a formal representation language that systematizes a set of visual combination rules for different conceptual primitives from the cognitive science literature. These primitives are distinguished from a formal point of view to allow for more general application. DISL also contains a logical exchange format in which the diagrams may be made machine-readable. Using DISL, the semantic structure of complex scenarios can be represented and computed. 

Place, publisher, year, edition, pages
Taylor & Francis, 2024
Keywords
conceptual mapping, conceptual visualization, diagrammatic representation, Image schemas, logical formalism, Semantics, Visual languages, Context dependent, Diagrammatic representations, Formal representations, Image schemata, Linguistic analysis, Perceptual experience, Schema language, Visualization
National Category
Computer Sciences; Natural Language Processing
Identifiers
urn:nbn:se:hj:diva-65724 (URN)
10.1080/13875868.2024.2377284 (DOI)
001270404700001 ()
2-s2.0-85198707025 (Scopus ID)
HOA;intsam;963185 (Local ID)
HOA;intsam;963185 (Archive number)
HOA;intsam;963185 (OAI)
Available from: 2024-07-22. Created: 2024-07-22. Last updated: 2025-02-01.
Identifiers
ORCID iD: orcid.org/0000-0001-8308-8906
