- 7024
How can we detect the electrification of residential gas demand and then forecast future electrification?
Detecting the electrification of gas demand is difficult because consumers' behaviour and energy choices are not immediately apparent in gas demand data. Consumers often choose to partially electrify rather than fully disconnect from the gas network: e.g. a consumer may keep their gas connection for hot water or cooking while electrifying their heating. Consumers may also use less gas for other reasons, such as higher prices or more efficient appliances. Compounding the problem is that, unlike electricity networks, there is no “smart meter” equivalent for residential gas connections in Australia.
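One illustrative detection approach, assuming daily per-connection gas consumption and heating degree day (HDD) data are available: weather-normalise demand and scan for a sustained drop in the heating response, which partial electrification of heating would produce. All names and thresholds below are hypothetical, not any established method:

```python
import numpy as np

def detect_level_shift(daily_gas, hdd, min_drop=0.3):
    """Flag a sustained drop in weather-normalised gas demand.

    daily_gas : array of daily gas consumption (MJ)
    hdd       : array of daily heating degree days (same length)
    min_drop  : fractional drop in the HDD coefficient that we
                treat as evidence of partial electrification
    """
    daily_gas, hdd = np.asarray(daily_gas), np.asarray(hdd)
    n = len(daily_gas)
    best_split, best_drop = None, 0.0
    # Scan candidate change points, leaving a year of data either side
    for split in range(365, n - 365):
        # Fit gas = a + b * HDD before and after the candidate split
        b_pre = np.polyfit(hdd[:split], daily_gas[:split], 1)[0]
        b_post = np.polyfit(hdd[split:], daily_gas[split:], 1)[0]
        if b_pre <= 0:
            continue
        drop = (b_pre - b_post) / b_pre
        if drop > best_drop:
            best_split, best_drop = split, drop
    return (best_split, best_drop) if best_drop >= min_drop else (None, best_drop)
```

A drop in the fitted heating coefficient with the connection still active would distinguish partial electrification from full disconnection; price- or efficiency-driven reductions would need additional covariates to separate out.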
What are some ways that we could detect the electrification of gas demand? Are there particular patterns in data that could be identified that are not immediately obvious?
Looking into the future, could an electrification model take in factors such as inflation, household income and energy prices to project the likely trajectory of electrification?
- 7027
Can AI be used to generate personalised scenarios for use in AEMO’s new gas training simulator?
AEMO is currently developing a gas transmission and scheduling Training Simulator. The Simulator will be used by AEMO’s gas operation engineers and support teams to undertake training, test extreme situations and practice operating the Victorian transmission system and Declared Wholesale Gas Market in a controlled environment. The Simulator will be configured to feed input data from AEMO’s gas system modelling system as well as forecast gas demand, supply source information, weather and maintenance activities.
Could Artificial Intelligence be incorporated into the Simulator package for scenario creation, personalisation and adaptive learning to identify strengths and weaknesses in training transmission and market systems?
- 7012
How can machine learning techniques be leveraged to reduce the computational cost of implementing Model Predictive Control strategies for batteries participating in wholesale electricity markets?
Model predictive control (MPC) can be used to optimise large and small scale batteries participating in wholesale electricity markets. This approach generally involves solving a mathematical program to identify the sequence of actions a battery should take in order to minimise the cost of operation over a given planning horizon. The first action within the optimised plan determines the control signal sent to the battery, with the process repeated when new price, solar, and load forecasts are available.
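As a minimal sketch of this receding-horizon loop, a toy price-arbitrage battery with hourly intervals, perfect efficiency, and illustrative capacity limits (not any particular operator's formulation) can be written as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def plan(prices, soc0, e_max=10.0, p_max=5.0):
    """Solve one planning problem: choose charge/discharge power for each
    interval of the horizon to minimise energy cost, subject to state-of-
    charge limits. Returns (charge, discharge) arrays in kW."""
    prices = np.asarray(prices, dtype=float)
    h = len(prices)
    tril = np.tril(np.ones((h, h)))        # cumulative-sum operator for SoC
    # Decision vector x = [charge_0..charge_{h-1}, discharge_0..discharge_{h-1}]
    c = np.concatenate([prices, -prices])  # buy at price, earn price on export
    a_ub = np.block([[tril, -tril],        # soc never exceeds e_max
                     [-tril, tril]])       # soc never drops below zero
    b_ub = np.concatenate([np.full(h, e_max - soc0), np.full(h, soc0)])
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, bounds=(0, p_max))
    return res.x[:h], res.x[h:]

def mpc_step(price_forecast, soc0):
    """Receding horizon: optimise the full plan but act only on the first
    interval; the loop repeats when fresh forecasts arrive."""
    charge, discharge = plan(price_forecast, soc0)
    return charge[0] - discharge[0]        # signed setpoint for the battery
```

In practice the program would also carry efficiency losses, network tariffs, and solar/load forecasts, but the structure (optimise, act on the first interval, re-plan) is the same.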
Plans showing the optimal trajectory for battery energy, which are generated as part of the MPC control loop, help us reason about the decisions that are being made now, and also communicate upcoming control actions to customers. However, one drawback of this approach is that it’s computationally expensive to generate customised plans for many sites.
Recent advances in machine learning have shown promising results when it comes to emulating models used to simulate complex systems (e.g. weather forecasting). This raises the question: can similar techniques be used to reduce the computational cost of implementing an MPC strategy for a battery?
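One sketch of such an emulator: train a regression model on recorded (input, optimised plan) pairs, then project its predictions back onto the feasible set so the output respects power and energy limits. The data, feature dimensions, and limits below are synthetic stand-ins for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: each row of X holds the inputs to one
# optimisation run (price forecast, initial state of charge, ...) and the
# matching row of Y holds the optimised power trajectory that was recorded.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))              # 12 input features per run
Y = np.clip(X[:, :4] * 3.0, -5.0, 5.0)      # stand-in for recorded plans

emulator = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, Y)

def emulate_plan(inputs, p_max=5.0, soc0=5.0, e_max=10.0):
    """Predict a plan, then project it to the feasible set: clip power
    limits, then clip the implied state-of-charge trajectory."""
    raw = emulator.predict(np.asarray(inputs).reshape(1, -1))[0]
    power = np.clip(raw, -p_max, p_max)
    soc = np.clip(soc0 + np.cumsum(power), 0.0, e_max)
    return np.diff(np.concatenate([[soc0], soc]))  # feasible trajectory
```

Because clipping the cumulative state of charge is 1-Lipschitz, the projected power steps can only shrink, so the output stays within both power and energy limits.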
If a mathematical program is run for many scenarios and both the inputs and outputs are recorded, what machine learning techniques can be used to emulate the behaviour of the program? What strategies can be used to ensure that the solutions output by these models are physically feasible? Furthermore, are there any heuristics that can be used to avoid re-optimising when the inputs look sufficiently similar to a previous run?
- 7015
How can we incorporate the implicit cost of sending control signals via API when generating optimised plans for household batteries?
The literature often frames MPC battery optimisation strategies as the task of periodically calculating a power target which the battery energy management system (BEMS) then follows. In practice, household batteries use a hierarchical control architecture whereby the BEMS maximises self-consumption in its default state: if solar generation is greater than household load then the battery charges, and if load exceeds generation then the battery discharges. Battery manufacturers may make an API available which allows third parties to override control signals produced by the BEMS.
A control signal sent via API consists of three components: a power target the battery should follow, a duration defining the length of the control action, and the time at which the control action should start. Ideally we want to reduce the number of API calls and also spread them out over time, as APIs are subject to rate limits, and the latency between when a target is generated and when it is sent to a device can be influenced by the total number of calls that need to be made for the fleet. One approach to smoothing out the number of API calls involves using optimised plans to schedule future control actions (instead of only deriving control actions from the first interval in the plan). However, if there is a material change to the forecasts, these scheduled events may need to be cancelled, which will also result in API calls.
More importantly, battery owners don’t want their devices following external control actions when there is limited economic upside. For these reasons we only want to direct a battery (and send an API call) when there is a material economic benefit to doing so.
What are some strategies that could be used to embed the implicit cost of sending a control into the mathematical program used to generate optimised plans? Ideally this mechanism would impose a cost on deviating away from the battery’s default state and filter out control actions that are of limited economic value.
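One such strategy, sketched here under simplifying assumptions (signed net battery power, perfect efficiency, hourly intervals, illustrative limits), is to add an L1 penalty on deviation from the default plan; the penalty weight `lam` plays the role of the implicit API/control cost and filters out low-value actions:

```python
import numpy as np
from scipy.optimize import linprog

def plan_with_switch_cost(prices, default, soc0, lam,
                          e_max=10.0, p_max=5.0):
    """LP: minimise energy cost + lam * |deviation from default plan|.
    A large enough lam leaves the battery in its self-consumption
    default state; only materially profitable actions survive."""
    prices = np.asarray(prices, dtype=float)
    default = np.asarray(default, dtype=float)
    h = len(prices)
    # x = [g_0..g_{h-1}, u_0..u_{h-1}]: g is signed battery power and
    # u >= |g - default| is the penalised deviation (standard L1 trick)
    cost = np.concatenate([prices, np.full(h, lam)])
    eye, tril, zero = np.eye(h), np.tril(np.ones((h, h))), np.zeros((h, h))
    a_ub = np.block([[eye, -eye],      # g - u <= default
                     [-eye, -eye],     # -g - u <= -default
                     [tril, zero],     # state of charge upper bound
                     [-tril, zero]])   # state of charge lower bound
    b_ub = np.concatenate([default, -default,
                           np.full(h, e_max - soc0), np.full(h, soc0)])
    bounds = [(-p_max, p_max)] * h + [(0, None)] * h
    res = linprog(cost, A_ub=a_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:h]
```

A fixed per-action cost (rather than a cost proportional to deviation) would instead need binary variables and a mixed-integer formulation, which is the usual trade-off with this kind of penalty.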
- 7036
How can we statistically align extreme weather metrics derived from CMIP6 data with those derived from current weather observations?
The goal of this research is to develop a method for producing continuous graphs of weather metrics over time, from 1950 to 2100, at a given point in Australia. This involves integrating two datasets of gridded weather data: the Bureau of Meteorology (BoM) Australian Gridded Climate Dataset (AGCD) and the CMIP6 climate model data. The AGCD has a spatial resolution of 0.05 degrees and is derived from interpolating weather observations from 1950 to present.
The CMIP6 dataset, on the other hand, simulates the climate under various emissions scenarios from 2015 to 2100 and has a spatial resolution of approximately 0.25 degrees.
The nature of CMIP6 climate models means that while there is data for 2015 onward, this data does not match the observed day-to-day weather. It is only attempting to represent the general climate. That is to say the CMIP6 climate models allow us to peer into the future, albeit with extremely blurry vision. This picture of the future is blurred not just spatially, but also temporally. To explain by way of example: while a CMIP6 temperature at a given point on a given day in 2015 should not be expected to match what was observed, the average temperature for a region over the 2015 to 2024 period should match what was observed.
We are not attempting to plot average temperature, but rather extreme weather metrics that can be derived from temperature (and precipitation). These metrics could be as simple as the average number of days over 35°C per year, or as complex as the average length of moderate-severity heatwaves as per the BoM definition (a somewhat complex calculation based on the Excess Heat Factor). We have found that regardless of how well a weather variable like temperature lines up between AGCD and CMIP6, these derived metrics usually have significantly different values for the same period, even when averaging over many years (2015-2024). So essentially our goal is to “downscale” data from the large CMIP6 pixels to match the spatial detail of the smaller AGCD pixels in terms of both average absolute value and statistical behaviour.
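A common starting point for this kind of statistical alignment is empirical quantile mapping, applied per AGCD grid cell over the overlap period; derived metrics such as days over 35°C are then computed from the corrected series. A minimal sketch, not a full bias-correction scheme:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: transform model output so that its
    distribution over the calibration period matches observations, then
    apply the same transform to (future) model output."""
    model_hist = np.sort(np.asarray(model_hist))
    # Rank each value within the model's calibration distribution
    q = np.searchsorted(model_hist, model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # Read the same quantiles off the observed distribution
    return np.quantile(obs_hist, q)
```

Plain quantile mapping corrects each variable's marginal distribution but not temporal structure, which is why metrics like heatwave length (which depend on day-to-day persistence) may still disagree and motivate the more careful alignment this question asks for.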
See the following link for a more detailed research statement:
https://tch521.github.io/AIMday-2024/climatics-research-problem
- 7045
Can we use AI to predict the magnitude of the model error on each market data point?
In options pricing, mathematical models are calibrated to market price data. In general, the models do not exactly reproduce all market prices, because far fewer model parameters are used to fit many more market data points (prices) by minimising the total error across all market data points.
The question is: can we use AI to predict the magnitude of the model error on each market data point?
As an example, the SABR volatility model is used to fit the implied volatility surface. The SABR model has four parameters, whereas for each tenor there are at least five market implied volatilities. Can we use historical data of the implied volatility surface and the calibrated SABR model parameters (possibly plus spot movement and interest rates) to predict the likely errors on the five market data points?
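A hedged sketch of the prediction task: fit a regression model that maps calibrated SABR parameters and point-level features (e.g. the quoted strike's moneyness) to the historical calibration residuals. The features, residual model, and data below are synthetic stand-ins for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical feature set for one (tenor, strike) market point: the
# calibrated SABR parameters plus the point's log-moneyness; the target
# is the signed calibration residual (market vol minus SABR model vol).
rng = np.random.default_rng(0)
n = 2000
features = np.column_stack([
    rng.uniform(0.1, 0.6, n),    # alpha
    rng.uniform(0.0, 1.0, n),    # beta
    rng.uniform(-0.9, 0.9, n),   # rho
    rng.uniform(0.1, 1.5, n),    # nu
    rng.uniform(-0.3, 0.3, n),   # log-moneyness of the quoted strike
])
# Stand-in residuals: in practice these come from historical calibrations
residual = 0.01 * features[:, 4] ** 2 * features[:, 3] + rng.normal(0, 1e-4, n)

model = GradientBoostingRegressor(random_state=0).fit(features, residual)
pred = model.predict(features)
```

With real data the residuals would come from stored daily calibrations, and the evaluation would need out-of-sample (and out-of-time) validation rather than in-sample fit.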
- 7000
Could the identification of an electric vehicle’s presence within a household be feasibly achieved through analysis of the household’s electricity consumption data?
Electrification of vehicles is expected to rapidly accelerate over the coming decade. As an energy distributor, Jemena has a role to play in empowering Victorians to deliver Net Zero commitments by providing a stable and robust electricity grid. In the EV space, we are exploring the impact that residential and shared charging, commercial fleet electrification, and more will have upon our network. Understanding the current uptake of EVs across the different parts of our network and monitoring the change over time is a vital part of supporting the customer experience.
On the surface, we expect the answer to this question to be a resounding yes. After all, the charging pattern of an electric vehicle (EV) should be immediately apparent in the consumption data. However, initial observations suggest that customers with EVs often also have solar panels, which may obfuscate charging patterns. Furthermore, early adopters of EVs, who are also likely to be early adopters of battery technology, may further complicate analysis.
Other possible confounding factors include:
– Slow-charging versus fast-charging
– Different EV models having different charging patterns
– Communal chargers being used
- 7003
What are some ways (or tools) we can use to optimise a day of work for an electric field crew to minimise travel time while allowing flexibility to work on unexpected faults?
While this question may appear less technical than the others, it is an optimisation problem of vital importance to the energy transition. Over the past four years there has been a significant shortage of skilled workers qualified to maintain and upgrade the electricity network. As the demand for electricity grows, so too does the amount of physical work that needs to be carried out across the grid.
A typical day for a linesperson may be a mixture of planned appointments with customers, opportunistic maintenance in an area, and unplanned fault work.
Appointments are generally known at least three days in advance and have a small time window in which they can be done along with an exact location and likely time to complete.
Opportunistic maintenance is work where the location is known and there is a large time window in which the job can be done (often spanning months). These tend to be filler jobs that can be slotted between more critical work.
Faults can occur anywhere and generally require the closest crew to attend as quickly as possible. Their duration is difficult to predict but in most cases is less than three hours. Oftentimes crews are required to leave maintenance or appointment work to attend these faults as a matter of public safety.
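The three job types above can be captured in a simple greedy sketch: schedule fixed-window appointments first, then fill remaining time with opportunistic work, and handle faults by re-running the scheduler after dropping pre-empted jobs. This is illustrative only; a production tool would use a full travel-time matrix and a vehicle-routing formulation:

```python
from dataclasses import dataclass

@dataclass
class Job:
    travel: float     # travel time from the previous location, hours (simplified)
    duration: float   # expected time on site, hours
    earliest: float   # start of the allowed window, hours from shift start
    latest: float     # latest allowed start, hours from shift start

def build_day(appointments, fillers, day_end=8.0):
    """Greedy sketch: place appointments in window order, then append
    short opportunistic jobs while time remains in the shift."""
    schedule, t = [], 0.0
    for job in sorted(appointments, key=lambda j: j.earliest):
        start = max(t + job.travel, job.earliest)
        if start > job.latest:
            continue                       # infeasible today; defer the job
        schedule.append((start, job))
        t = start + job.duration
    for job in sorted(fillers, key=lambda j: j.duration):
        if t + job.travel + job.duration <= day_end:
            schedule.append((t + job.travel, job))
            t += job.travel + job.duration
    return schedule
```

Fault response then becomes: drop the interrupted job back into the pool, dispatch the nearest crew, and call `build_day` again for the remainder of each affected crew's shift.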
- 7051
How can we leverage session level EV charging data to detect the presence of an EV in household consumption data?
Origin is committed to playing a pivotal role in the energy transition.
As part of this, we have developed a series of products targeted toward EV owners that orchestrate cheaper charging during off peak periods and periods of high solar:
* https://techau.com.au/hands-on-with-origin-energys-ev-power-up-beta-recharging-for-just-8c-kwh/
* https://www.originenergy.com.au/blog/take-charge-with-ev-smart-scheduling/
When marketing these products to our customer base, we wish to focus our efforts on customers that are likely to have an EV already.
For customers already on these products, we are data rich with data at the EV charge session level including charging times, energy consumed by the EV and charging rates (in kW).
The only data common to these customers and those that may have an EV but are not on these products yet is household consumption data from their electricity meter.
Discerning an EV from this data alone is possible but difficult and other factors such as the presence of solar can make the task even more difficult.
How can we leverage the EV charging and energy consumption data for thousands of customers on these products to better detect an EV from only the consumption data of millions of customers?
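One way to frame this is as transfer of labels: customers on the products provide confirmed EV positives (their session data proves an EV is present), a control group provides negatives, and a classifier trained on meter-derived features is applied to the wider base. A sketch on synthetic data, with all thresholds and feature choices illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def longest_run(mask):
    """Length of the longest consecutive run of True values."""
    best = cur = 0
    for flag in mask:
        cur = cur + 1 if flag else 0
        best = max(best, cur)
    return best

def charge_features(half_hourly_kw):
    """Features aimed at EV charging signatures: long, flat runs of high
    power draw (the 3 kW threshold is illustrative only)."""
    high = half_hourly_kw > 3.0
    return [half_hourly_kw.max(), longest_run(high), high.mean()]

# Synthetic stand-in: 200 meters with an EV charge block (label 1)
# versus 200 without (label 0), one day of half-hourly data each
rng = np.random.default_rng(0)
base = rng.uniform(0.2, 1.5, size=(400, 48))   # background load, kW
labels = np.zeros(400, dtype=int)
labels[:200] = 1
for i in range(200):                           # add a 3 h, 7 kW charge
    start = rng.integers(0, 42)
    base[i, start:start + 6] += 7.0

clf = LogisticRegression().fit([charge_features(row) for row in base], labels)
```

The session-level data is what makes this tractable at scale: charge times, rates, and energy per session let you derive realistic labels and features (and even overlay real charge profiles onto non-EV meters as training augmentation) instead of hand-labelling meter traces.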
- 7046
How can we use big data and machine learning tools to improve the speed and accuracy of power outage validation?
When a fault takes place in a distribution network, a large amount of information is generated in the distribution management system, the outage management system and metering databases. At this stage we analyse these data manually to validate large-scale power outages, but there should be an opportunity to combine big data and machine learning techniques to validate outages with less effort and more accuracy.
- 7039
Recognizing that the majority of LLMs are tuned by humans to reduce bias and manage problematic responses, how can we apply such tools to help design pre-survey methodologies and/or to work with and extend traditional surveys to extract richer data that can assist in energy system planning and optimization?
The emergence of large language models (LLMs) provides a rich set of embedded data describing the online behaviour of distinct cohorts, which could be leveraged to provide reasonable predictions on a range of topics. Recognizing that the majority of models are tuned by humans to reduce bias and manage problematic responses, how can we apply such tools to help design pre-survey methodologies and/or to work with and extend traditional surveys to extract richer data that can assist in energy system planning and optimization?
https://www.cambridge.org/core/journals/political-analysis/article/abs/out-of-one-many-using-language-models-to-simulate-human-samples/035D7C8A55B237942FB6DBAD7CAA4E49#article
- 7042
How could the application of different methodologies, such as tailored small language models, help designers to unlock full value chain benefits including potential co-benefits (better productivity and health outcomes for occupants, for instance)?
Both new and retrofit building design is often focused on selecting from a range of individual actions based on potential benefits such as energy savings. Such actions might include installing building insulation, applying various sensor technologies, and replacing traditional heating and ventilation systems with heat pump solutions.
A well-recognized issue in this predominantly supply-side approach is the potential to receive much better outcomes through fully integrative design beginning with end-use needs.
How could the application of different methodologies, such as tailored small language models, help designers to unlock full value chain benefits including potential co-benefits (better productivity and health outcomes for occupants, for instance)?
Furthermore, design solutions are often limited by standard practices that miss potential integrative savings. One example is fluid handling, where pipes that are too small and have too many bends produce large frictional losses that are then compensated for by larger supply-side solutions (bigger pumps, greater electrical loads, etc.).
How can we help designers see plausible alternatives that unlock greater system value through the application of tools such as physics-informed neural networks, which could explore a wider degree of optionality while maintaining physical integrity constraints?
- 7048
How can Bayesian Networks augment Large Language Model performance?
Large Language Model performance is improved when augmented by semantic information, e.g. from a knowledge graph. How could the understanding of causal information, probability and utility in Bayesian Networks improve the agentic behaviour of Large Language Models?
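As a toy illustration of the direction: exact inference in a small Bayesian network yields calibrated posteriors that could be injected into an LLM prompt as structured context, much as knowledge-graph facts are. The network, variables, and probabilities below are invented for illustration:

```python
from itertools import product

# Toy Bayesian network over binary variables:
# HotDay -> HighDemand -> PriceSpike
p_hot = 0.3
p_demand = {True: 0.8, False: 0.2}    # P(HighDemand | HotDay)
p_spike = {True: 0.6, False: 0.05}    # P(PriceSpike | HighDemand)

def posterior_hot_given_spike():
    """Exact inference by enumeration: P(HotDay | PriceSpike=True)."""
    joint = {}
    for hot, demand in product([True, False], repeat=2):
        p = ((p_hot if hot else 1 - p_hot)
             * (p_demand[hot] if demand else 1 - p_demand[hot])
             * p_spike[demand])        # condition on the observed spike
        joint[hot] = joint.get(hot, 0.0) + p
    return joint[True] / sum(joint.values())

# The calibrated posterior can then be handed to an LLM as context, e.g.:
context = (f"P(hot day | observed price spike) = "
           f"{posterior_hot_given_spike():.2f}")
```

The open question is the reverse direction too: whether an agentic LLM can consult such a network for probabilities and expected utilities before committing to an action, rather than relying on its own implicit estimates.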
- 6994
Commercialisation of energy systems research
What would be the ideal use cases for commercialisation of energy systems research? For example, developing a solution to maximise the price for solar energy systems?
- 6997
Industry-academia collaboration – go-to-market!
What would be the best collaboration model to achieve a faster go-to-market for industry partners? That is, a fast go-to-market pathway, rather than waiting for months or years in research labs.