Coming Soon
Women in Computing: Empowerment, Career Development, and Overcoming Barriers in STEM
[Visit external site for more information](https://sites.google.com/view/wic-icaps25/home)
Despite the widespread adoption of heuristic search and learning-based approaches in automated planning, Constraint and Satisfiability-based Planning (CSP/SAT-based Planning) remains a viable and promising paradigm. Historically, CSP and SAT approaches have demonstrated strong theoretical foundations and practical success in various domains. However, their usage has diminished in recent years due to the rise of alternative methods. The CASP:ER workshop aims to revisit and reinforce the relevance of CSP/SAT-based planning by showcasing recent advancements, discussing its applicability, and identifying opportunities for renewed adoption.
ICAPS 2025 Tutorial, Melbourne, Australia, Date: 10-11 November, 2025
This tutorial provides an accessible account of recent results in epistemic planning based on Dynamic Epistemic Logic (DEL), a formalism for reasoning about knowledge and belief in multi-agent systems. The tutorial is structured in two parts. In the first part, we introduce the foundational concepts of epistemic logic and DEL, assuming no prior background. We start with an informal discussion of the notions of knowledge and belief, and then move towards the full DEL-based planning framework by gradually extending the classical planning formalism with features such as uncertainty, partial observability, non-determinism, and higher-order reasoning. We illustrate each feature with incrementally complex examples based on a generalization of the well-known Blocks World problem of classical planning.
ICAPS 2025 Workshop
Room 6, Melbourne Connect,
Melbourne, Victoria, Australia
November 10, 2025
The program is available here.
As artificial intelligence (AI) is increasingly adopted into application solutions, the challenge of supporting effective interactions with humans is becoming more apparent. This is partly to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building and calibrating trust as humans delegate greater competence and responsibility to such systems. The International Workshop on Human-Aware and Explainable Planning (HAXP), formerly known as the Explainable AI Planning (XAIP) workshop, brings together the latest and best in human-AI interaction and explainability in the context of planning, scheduling, RL, and other forms of sequential decision-making. The workshop is collocated with ICAPS, the premier conference on automated planning and scheduling. Learn more: HAXP
ICAPS 2025 Tutorial, Melbourne, Australia, Date: 10-11 November, 2025
This tutorial will cover recent advances in Learning for Planning (L4P). L4P is the subfield of AI planning which focuses on learning knowledge that generalises and helps planners solve planning problems more efficiently by leveraging symbolic models and training data. The overall goal of L4P is to scale up planning technology to solve problems of greater difficulty and with more objects. Indeed, by no-free-lunch arguments, one cannot expect traditional PDDL planners, which plan for each problem individually, to handle all domains well. L4P is a rapidly growing subfield of AI planning which has garnered the attention of learning and planning researchers alike. For example, a semantic analysis of published ICAPS papers shows that planning papers using learning rank 3rd by total count as of 2024, up from 7th in 2019.
ICAPS 2025 Tutorial, Melbourne, Australia, Date: 10-11 November, 2025
We revisit the problem of realizing strategies for Linear Temporal Logic on finite traces (LTLf) specifications in nondeterministic environments. In particular, we consider the case in which certain inputs (responses) from the environment are not fully reliable. One way to address this unreliability is to disregard the unreliable input completely and not consider it in choosing the next output; this is related to planning/synthesis under partial observability. However, this might be too radical and could drastically reduce the agent's ability to operate, considering that the unreliability in question is only occasional. Our objective instead is to ensure that the system maintains functionality and adheres to critical specifications despite uncertain and unreliable inputs.

If the uncertainty is quantifiable, we could rely on probabilities, turning to MDPs or stochastic games. Yet, "software does not necessarily conform neatly to probabilistic distributions, making it difficult to apply statistical models or predictions commonly used in other scientific disciplines." Here, we explore a novel synthesis method that manages the potential unreliability of input variables without relying on probabilities. The crux of our approach is not to give up on input variables that might be unreliable, but to complement them with the guarantee that, even when they behave badly, some safeguard conditions are maintained. Specifically, we consider two models simultaneously: a brave model, where all input variables are considered reliable (as usual in synthesis/planning), and a cautious one, where unreliable input is projected out and discarded. Using these two models, we devise a strategy that fulfils the task objectives completely if the input variables behave correctly, and maintains certain fallback conditions even if the unreliable input variables behave wrongly.
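The brave/cautious combination can be sketched in code. The following is a toy illustration of our own, not the construction from the work itself: the positions, actions, and the "clear" sensor are all made up. A brave strategy trusts a possibly unreliable input, while a cautious filter restricts it to actions that preserve the safeguard regardless of what the input does.

```python
# Toy sketch of combining a "brave" strategy (trusts the sensor) with a
# "cautious" safety filter (ignores the sensor). All names are illustrative.

GOAL = 3          # task objective: reach position 3
SAFE_LIMIT = 3    # safeguard condition: never move past position 3

def brave(clear: bool) -> str:
    """Brave strategy: trust the (possibly unreliable) 'clear' input."""
    return "advance" if clear else "wait"

def cautious_safe_actions(pos: int) -> set:
    """Actions that preserve the safeguard regardless of any input."""
    return {"advance", "wait"} if pos < SAFE_LIMIT else {"wait"}

def combined(pos: int, clear: bool) -> str:
    """Follow the brave strategy when it is cautiously safe; else fall back."""
    action = brave(clear)
    return action if action in cautious_safe_actions(pos) else "wait"

def run(observations):
    pos = 0
    for clear in observations:
        if combined(pos, clear) == "advance":
            pos += 1
        assert pos <= SAFE_LIMIT  # safeguard holds on every trace
    return pos

reliable = run([True, True, True])          # truthful inputs: goal reached
unreliable = run([True, True, True, True])  # spurious final input: still safe
```

With truthful inputs the combined strategy reaches the goal exactly as the brave strategy would; when the sensor lies at the boundary, the cautious filter blocks the unsafe move, so the safeguard holds on both traces.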
MiniZinc is a high-level, solver-independent modelling language for constraint programming. It allows users to express complex combinatorial problems declaratively and succinctly. MiniZinc models can be solved using a wide range of backends, including constraint programming, mixed integer programming, and SAT-based solvers, making it an ideal tool for both research and practical applications.
This tutorial will introduce participants to the fundamentals of constraint modelling with MiniZinc. We will begin with the basics of writing and solving MiniZinc models, using simple hands-on examples that participants can follow directly in the web-based MiniZinc Playground. We will then demonstrate how to represent simple planning problems, showing how formulations in PDDL can be mapped into MiniZinc models. While MiniZinc (and its underlying solvers) may not be the most efficient choice for such problems, we will explore how extending these examples with richer side constraints and optimization objectives can make constraint-based approaches highly effective. Finally, we will look at modelling and solving scheduling problems, including scenarios involving optional tasks and resource constraints.
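To give a flavour of the modelling style, here is a minimal MiniZinc model (a toy sketch of our own, not taken from the tutorial materials) that schedules two tasks with a precedence constraint and minimizes the makespan; it can be pasted directly into the MiniZinc Playground:

```minizinc
% Toy two-task scheduling model (illustrative only).
int: horizon = 10;
int: dur_a = 3;
int: dur_b = 2;

var 0..horizon: start_a;
var 0..horizon: start_b;

% Task b may only start once task a has finished.
constraint start_b >= start_a + dur_a;

var 0..horizon: makespan = start_b + dur_b;
solve minimize makespan;

output ["start_a=\(start_a), start_b=\(start_b), makespan=\(makespan)\n"];
```

Any of the supported backends will report the optimal schedule (here, starting both tasks as early as possible); richer side constraints and resource usage can be layered on without changing the overall model structure.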
Planning as satisfiability allows for quickly solving even complex planning problems, from classical to numerical domains. Though it is an efficient and powerful tool, a major bottleneck remains: modelling the planning domain. While domain models are usually presumed to exist, this assumption rarely holds for real-world planning problems, posing a major limitation to real-world application. Although data-driven approaches, such as action model learning, have been proposed to alleviate the modelling overhead, their adoption remains limited and largely confined to research settings.
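To make the planning-as-satisfiability idea concrete, here is a self-contained toy encoding of our own (not from any specific planner): a single fluent over two time steps and one toggle action are compiled into CNF clauses for the initial state, goal, effects, and frame axiom, then solved by brute-force enumeration. Real SAT-based planners produce encodings in this style and hand them to an efficient SAT solver.

```python
from itertools import product

# Toy "planning as satisfiability" encoding (illustrative sketch): fluent
# `on` at time steps 0 and 1, and one `toggle` action at step 0.
VARS = ["on@0", "on@1", "toggle@0"]

# A literal is (variable, polarity); a clause is a disjunction of literals.
CLAUSES = [
    [("on@0", False)],                                        # init: not on
    [("on@1", True)],                                         # goal: on
    # effect: toggle@0 -> (on@1 <-> not on@0)
    [("toggle@0", False), ("on@1", False), ("on@0", False)],
    [("toggle@0", False), ("on@1", True), ("on@0", True)],
    # frame axiom: not toggle@0 -> (on@1 <-> on@0)
    [("toggle@0", True), ("on@1", False), ("on@0", True)],
    [("toggle@0", True), ("on@1", True), ("on@0", False)],
]

def solve(variables, clauses):
    """Return a satisfying assignment as a dict, or None if unsatisfiable."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(any(model[v] == pol for v, pol in clause) for clause in clauses):
            return model
    return None

plan_model = solve(VARS, CLAUSES)
```

The satisfying assignment sets `toggle@0` to true, i.e. the extracted plan is to execute the toggle action, which carries the fluent from false to true.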
ICAPS 2025 Workshop, Melbourne, Australia, Date: 10-11 November, 2025
Visit https://llmforplanning.github.io/ for up-to-date information.
Large Language Models (LLMs) are a disruptive force, changing how research is done in many sub-areas of AI. Planning is one of the last bastions still standing. The focus of this workshop is on questions at the intersection of these areas. Some specific areas in which we would like to gain a better understanding include: what LLMs can contribute to planning, how LLMs can and should be used, what the pitfalls of using LLMs are, and what guarantees can be obtained.
ICAPS'25 Workshop on Reliability In Planning and Learning (RIPL), Melbourne, Victoria, Australia
Tuesday, November 11, 2025 from 8:30 to 15:00 in Room 3
Learning is the dominant trend in AI at this time. From a planning and scheduling perspective – and for sequential decision making in general – this manifests in two major kinds of technical artifacts that are rapidly gaining importance. First, planning models generated by large language models, or otherwise learned or partially learned from data (such as a weather forecast in a model of flight actions). Second, planning/search information learned from data, in particular action policies or planning-control knowledge for making decisions in dynamic environments (reinforcement learning, or per-domain generalizing knowledge in PDDL). Reliability is a key concern for such artifacts, prominently including safety, robustness, and fairness in various forms, but possibly other concerns as well. Arguably, this is indeed one of the grand challenges in AI for the foreseeable future.