Note
You can download this example as a Jupyter notebook or try it out directly in Google Colab.
10. Introduction to Demand-Side Agents and Flexibility in the ASSUME Framework#
In this tutorial, we will explore the concept of Demand-Side Agents (DSA) and their role in managing electricity consumption in response to price signals. The focus will be on modeling flexibility and performing Power Flow Analysis after clearing the day-ahead market.
In modern electricity markets, Demand-Side Management (DSM) plays an essential role in ensuring efficient energy consumption, maintaining grid stability, and optimizing costs. Demand-side agents, such as industrial plants or residential units, can adjust their energy consumption in response to price signals from the electricity market. This capability to adapt energy usage in real-time provides flexibility that is crucial for balancing supply and demand.
In the ASSUME framework, demand-side agents are modeled as entities with specific energy requirements, flexibility capabilities, and interaction mechanisms in the market. These agents are able to:
- Participate in electricity markets by bidding to buy electricity based on demand.
- Adjust their consumption in response to price signals (flexibility).
- Perform load shifting to optimize costs while supporting grid stability.
Integration of the agent's characteristics:
- Each demand-side agent is modeled with specific attributes such as rated power, minimum operating power, and flexibility measures. These characteristics dictate how the agent behaves in the electricity market.
- Agents can also be equipped with technologies such as an electrolyser, a DRI plant, and an electric arc furnace, each contributing to the agent's overall power demand profile.
In this tutorial, we will:
- Introduce how demand-side agents are modeled in the ASSUME framework.
- Explore how flexibility is implemented so that agents can shift energy usage.
- Perform power flow analysis and grid congestion management.
- Simulate a steel plant as a demand-side agent participating under various market conditions.
- Analyze three use cases of the steel plant's participation in the electricity market:
  - Use Case 1: Simulation without flexibility.
  - Use Case 2: Simulation with flexibility.
  - Use Case 3: Simulation with power plants participating in redispatch markets.
Setting Up the ASSUME Framework#
Before diving into DSA and Flexibility, ensure that you have the ASSUME framework installed and set up correctly. If you haven’t done so already, follow the steps below to install the ASSUME core package.
[ ]:
# Install the ASSUME framework with the PyPSA library for network optimization
!pip install assume-framework[network]
#Install some additional packages for plotting
!pip install plotly
!pip install cartopy
!pip install seaborn
Let’s also import some basic libraries that we will use throughout the tutorial.
[ ]:
import os
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly.graph_objects as go
import pyomo.environ as pyo
import pypsa
import seaborn as sns
import yaml
# Function to display DataFrame in Jupyter
from IPython.display import display
3. Modeling Demand-Side Agents (DSAs) and Flexibility#
The Steel Plant is an example of an industrial demand-side agent in the ASSUME framework. This agent has specific characteristics such as rated power, flexibility, and the ability to adjust its electricity consumption based on price signals. We will now explore how to define the Steel Plant in the framework.
Let’s first import the necessary classes and functions from the ASSUME framework to create the Steel Plant agent.
[ ]:
from assume import World
from assume.common.base import (
BaseStrategy,
MarketConfig,
Orderbook,
Product,
SupportsMinMax,
)
from assume.scenario.loader_csv import load_scenario_folder
from assume.units.dsm_load_shift import DSMFlex
3.1 Creating a Steel Plant as a Demand-Side Agent#
The Steel Plant is an industrial agent that consists of several key components (e.g., DRI plant, electrolyser, EAF, etc.). These components consume electricity and can be modeled to react dynamically to market conditions.
In the ASSUME framework, the steel plant agent is created by defining its characteristics, components, and objectives. Let’s start by defining the Steel Plant with its core characteristics:
[ ]:
class SteelPlant(SupportsMinMax, DSMFlex):
"""
Class to define the Steel Plant as a demand-side agent in the ASSUME framework.
"""
def __init__(
self,
id: str,
unit_operator: str,
bidding_strategies: dict,
technology: str = "steel_plant",
node: str = "node0",
index: pd.DatetimeIndex = None,
location: tuple[float, float] = (0.0, 0.0),
components: dict[str, dict] = None,
objective: str = None,
flexibility_measure: str = "max_load_shift",
demand: float = 0,
cost_tolerance: float = 10,
**kwargs,
):
super().__init__(
id=id,
unit_operator=unit_operator,
technology=technology,
bidding_strategies=bidding_strategies,
index=index,
node=node,
location=location,
**kwargs,
)
Defining Characteristics:
id: The unique identifier for the steel plant agent.
unit_operator: The entity operating the steel plant.
bidding_strategies: Defines the market bidding behavior (such as NaiveDASteelplantStrategy).
technology: Represents the type of technology used, here set to "steel_plant".
node: Specifies the grid connection point for the steel plant.
flexibility_measure: Indicates the flexibility strategy, such as load-shifting capabilities.
demand: The energy demand of the steel plant.
cost_tolerance: The tolerance level for cost variations.
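To make these attributes concrete, here is a minimal sketch of how such an agent could be instantiated. The parameter values below are illustrative (loosely based on the DSM unit table used later in this tutorial), and the string form of the bidding strategy is an assumption; internally, the framework maps strategy names to strategy classes:
[ ]:
# Hypothetical instantiation of the SteelPlant agent (illustrative values only)
steel_plant = SteelPlant(
    id="A360",
    unit_operator="dsm_operator_1",
    bidding_strategies={"EOM": "naive_da_steelplant"},  # assumption: name-based mapping
    node="south",
    index=pd.date_range("2019-01-01", periods=72, freq="h"),
    components={"electrolyser": {"max_power": 100, "min_power": 0, "efficiency": 0.8}},
    objective="min_variable_cost",
    flexibility_measure="max_load_shift",
    demand=2000,
    cost_tolerance=20,
)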
Bringing ``dst_components`` into the Steel Plant Process
In the steel plant, we use components like the electrolyser and hydrogen storage to model the production and storage of hydrogen, which plays a critical role in decarbonized steel production. These components are imported from dst_components.py and integrated into the plant's process.
In this section, we will showcase how to model these components, define their characteristics, and integrate them into the overall process of the steel plant.
1. Electrolyser
The electrolyser produces hydrogen through electrolysis, using electricity as the input. The electrolyser's constraints ensure that it operates within its rated power capacity, follows ramp rates for power changes, and respects operational limits such as minimum operating time and efficiency.
We will use the ``create_electrolyser`` function from dst_components.py to add the electrolyser to the steel plant's Pyomo model.
2. Hydrogen Storage
The hydrogen storage component stores hydrogen produced by the electrolyser. It helps manage the supply and ensures that hydrogen is available when needed. The hydrogen storage has parameters like maximum capacity, storage loss rate, and charge/discharge rates.
We will use the ``create_hydrogen_storage`` function from dst_components.py to integrate hydrogen storage into the model.
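The ``create_hydrogen_storage`` function itself is not reproduced in this tutorial. As a rough sketch of what such a factory could look like, following the same pattern as ``create_electrolyser`` below (the parameter names mirror the storage columns of the DSM units table, but the actual implementation lives in dst_components.py):
[ ]:
# Illustrative sketch of a hydrogen storage factory (not the ASSUME implementation)
def create_hydrogen_storage(
    model,
    max_capacity,
    storage_loss_rate,
    charge_loss_rate,
    discharge_loss_rate,
    time_steps,
    **kwargs,
):
    model_part = pyo.Block()

    # Charging, discharging, and state of charge of the storage
    model_part.charge = pyo.Var(time_steps, domain=pyo.NonNegativeReals)
    model_part.discharge = pyo.Var(time_steps, domain=pyo.NonNegativeReals)
    model_part.soc = pyo.Var(time_steps, bounds=(0, max_capacity))

    @model_part.Constraint(time_steps)
    def soc_balance_constraint(b, t):
        # State of charge evolves with lossy charging/discharging and standing
        # losses; the storage is assumed empty before the first time step
        inflow = b.charge[t] * (1 - charge_loss_rate)
        outflow = b.discharge[t] / (1 - discharge_loss_rate)
        if t == 0:
            return b.soc[t] == inflow - outflow
        return b.soc[t] == b.soc[t - 1] * (1 - storage_loss_rate) + inflow - outflow

    return model_part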
3.2 Modeling Components in the ASSUME Framework#
In the ASSUME framework, components like the electrolyser and hydrogen storage are modeled using Pyomo, a Python-based optimization modeling tool. The framework supports detailed modeling of each component by specifying their characteristics and operational constraints.
For each component, attributes such as rated power, ramp rates, and efficiency are defined. These attributes are essential for simulating the component’s behavior in the energy system.
Example: Electrolyser Model The electrolyser is a crucial component in hydrogen production. In this framework, the electrolyser is modeled with various characteristics, including power limits, operational efficiency, and ramp rates. These attributes ensure that the electrolyser operates within its technical and economic boundaries.
Here’s how we define the electrolyser’s model:
[ ]:
def create_electrolyser(
    model,
    max_power,
    min_power,
    ramp_up,
    ramp_down,
    min_operating_time,
    min_down_time,
    efficiency,
    time_steps,
    **kwargs,
):
    """
    Creates the electrolyser component in the Pyomo model with defined characteristics.
    """
    # Define a Pyomo Block for the electrolyser component
    model_part = pyo.Block()

    # Define the attributes of the electrolyser component
    model_part.max_power = pyo.Param(initialize=max_power)
    model_part.min_power = pyo.Param(initialize=min_power)
    model_part.ramp_up = pyo.Param(initialize=ramp_up)
    model_part.ramp_down = pyo.Param(initialize=ramp_down)
    model_part.efficiency = pyo.Param(initialize=efficiency)

    # Define the Pyomo variables for operational behavior
    model_part.power = pyo.Var(time_steps, bounds=(0, max_power))
    model_part.hydrogen_out = pyo.Var(time_steps, domain=pyo.NonNegativeReals)

    # Define the Pyomo constraints
    @model_part.Constraint(time_steps)
    def min_power_constraint(b, t):
        """
        Ensures that the electrolyser operates at or above its minimum power.
        """
        return b.power[t] >= b.min_power

    # A Pyomo constraint rule must return a single relation, so the ramp-up and
    # ramp-down limits are expressed as two separate constraints; the first time
    # step has no predecessor and is skipped.
    @model_part.Constraint(time_steps)
    def ramp_up_constraint(b, t):
        """
        Ensures that the electrolyser respects its ramp-up limit.
        """
        if t == 0:
            return pyo.Constraint.Skip
        return b.power[t] - b.power[t - 1] <= b.ramp_up

    @model_part.Constraint(time_steps)
    def ramp_down_constraint(b, t):
        """
        Ensures that the electrolyser respects its ramp-down limit.
        """
        if t == 0:
            return pyo.Constraint.Skip
        return b.power[t - 1] - b.power[t] <= b.ramp_down

    # Define hydrogen output based on the efficiency
    @model_part.Constraint(time_steps)
    def hydrogen_production_constraint(b, t):
        """
        Ensures that hydrogen output is proportional to the power input based on efficiency.
        """
        return b.hydrogen_out[t] == b.power[t] * b.efficiency

    # Return the block so it can be transferred into the plant model
    return model_part
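As a quick stand-alone check of this factory (outside the framework, with made-up numbers), one could build a small model and attach the returned block directly; the framework instead transfers the block's attributes into the plant model, as shown in the next section:
[ ]:
# Stand-alone usage sketch with illustrative values
model = pyo.ConcreteModel()
model.time_steps = pyo.Set(initialize=range(4), ordered=True)

model.electrolyser = create_electrolyser(
    model,
    max_power=100,
    min_power=0,
    ramp_up=50,
    ramp_down=50,
    min_operating_time=0,
    min_down_time=0,
    efficiency=0.8,
    time_steps=model.time_steps,
)
model.electrolyser.pprint()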
3.3. Initializing the Components and Process Sequence of the Steel Plant#
The steel plant consists of multiple components, such as the DRI plant, electrolyser, Electric Arc Furnace (EAF), and hydrogen storage. These components are initialized and connected into a process sequence to define how the steel plant operates.
Mapping of Component Type Identifiers to Their Respective Factory Functions
Before initializing the components and process sequences, the steel plant needs to map the component types to their respective creation functions. This is done using the following dictionary, which links the component type identifiers (e.g., ``"electrolyser"``, ``"eaf"``) to the respective creation functions in the dst_components.py file:
[ ]:
from assume.units import demand_side_technologies
print(demand_side_technologies)
# you can also create your own DST component class and add it to the world like follows
# world = World()
# world.dst_components["YourComponent"] = YourComponent
The ``initialize_components(components)`` function calls the Pyomo models of the respective components. The ``initialize_process_sequence()`` function handles the connection of the entire process, ensuring that the plant operates in a defined sequence.
To initialize the components of the steel plant, we use the ``initialize_components()`` function. This function iterates through the list of components, calls the corresponding factory method for each component (as mapped in ``demand_side_technologies``), and transfers the necessary attributes to the Pyomo model.
[ ]:
# Initialize components of the steel plant
# Initialize components of the steel plant
def initialize_components(self, components: dict[str, dict]):
    """
    Initializes the components of the steel plant.

    Args:
        components (dict[str, dict]): The components of the steel plant
            and their technical parameters.
    """
    self.model.dsm_blocks = pyo.Block(list(components.keys()))
    for technology, component_data in components.items():
        if technology in demand_side_technologies:
            factory_method = demand_side_technologies[technology]
            self.model.dsm_blocks[technology].transfer_attributes_from(
                factory_method(
                    self.model, time_steps=self.model.time_steps, **component_data
                )
            )
In this function:
- The ``components`` argument is a dictionary where the keys are the component types (e.g., ``"electrolyser"``, ``"eaf"``) and the values are dictionaries of component-specific parameters.
- For each component, the factory method corresponding to the component type is called (e.g., ``create_electrolyser`` for an electrolyser).
- The attributes of the initialized component are then transferred to the Pyomo model using the ``transfer_attributes_from()`` function.
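For intuition, the ``components`` dictionary passed to this function might look as follows. The values are illustrative (taken loosely from the DSM unit table created later in this tutorial); in a real run they are read from the industrial_dsm_units.csv input file:
[ ]:
# Illustrative components dictionary; keys must match demand_side_technologies
components = {
    "electrolyser": {
        "max_power": 100,
        "min_power": 0,
        "ramp_up": 100,
        "ramp_down": 100,
        "min_operating_time": 0,
        "min_down_time": 0,
        "efficiency": 0.8,
    },
    "dri_plant": {"specific_hydrogen_consumption": 1.83},
    "eaf": {"specific_dri_demand": 1.09},
}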
3.4 Initializing the Process Sequence for the Steel Plant#
The ``initialize_process_sequence()`` function is responsible for defining how the different components of the steel plant are connected to form a complete process. This function ensures that the flow of materials (such as hydrogen and direct reduced iron (DRI)) between components, such as the electrolyser, hydrogen storage, DRI plant, and Electric Arc Furnace (EAF), is properly constrained.
[ ]:
# Initialize the process sequence of the steel plant
def initialize_process_sequence(self):
    """
    Initializes the process sequence and constraints for the steel plant.
    Here, the components/technologies are connected to establish a process
    for steel production.
    """
    # The presence of 'hydrogen_storage' indicates the desire for dynamic flow management
    has_h2storage = "hydrogen_storage" in self.model.dsm_blocks.keys()

    # Constraint for direct hydrogen flow from the electrolyser to the DRI plant
    @self.model.Constraint(self.model.time_steps)
    def direct_hydrogen_flow_constraint(m, t):
        """
        Ensures the direct hydrogen flow from the electrolyser to the DRI plant or storage.
        """
        # This constraint allows part of the hydrogen produced by the electrolyser
        # to go directly to the DRI plant, with the remainder charged to storage
        if has_h2storage:
            return (
                self.model.dsm_blocks["electrolyser"].hydrogen_out[t]
                + self.model.dsm_blocks["hydrogen_storage"].discharge[t]
                == self.model.dsm_blocks["dri_plant"].hydrogen_in[t]
                + self.model.dsm_blocks["hydrogen_storage"].charge[t]
            )
        else:
            return (
                self.model.dsm_blocks["electrolyser"].hydrogen_out[t]
                >= self.model.dsm_blocks["dri_plant"].hydrogen_in[t]
            )

    # The presence of 'dri_storage' indicates the desire for dynamic flow management
    has_dristorage = "dri_storage" in self.model.dsm_blocks.keys()

    # Constraint for direct DRI flow from the DRI plant to the EAF or DRI storage
    @self.model.Constraint(self.model.time_steps)
    def direct_dri_flow_constraint(m, t):
        """
        Ensures the direct DRI flow from the DRI plant to the EAF or DRI storage.
        """
        # This constraint allows part of the DRI produced by the DRI plant to go
        # directly to the EAF, with the remainder charged to DRI storage
        if has_dristorage:
            return (
                self.model.dsm_blocks["dri_plant"].dri_output[t]
                + self.model.dsm_blocks["dri_storage"].discharge[t]
                == self.model.dsm_blocks["eaf"].dri_input[t]
                + self.model.dsm_blocks["dri_storage"].charge[t]
            )
        else:
            return (
                self.model.dsm_blocks["dri_plant"].dri_output[t]
                == self.model.dsm_blocks["eaf"].dri_input[t]
            )

    # Constraint for material flow from the DRI plant to the Electric Arc Furnace
    @self.model.Constraint(self.model.time_steps)
    def shaft_to_arc_furnace_material_flow_constraint(m, t):
        """
        Ensures the material flow from the DRI plant to the Electric Arc Furnace.
        """
        return (
            self.model.dsm_blocks["dri_plant"].dri_output[t]
            == self.model.dsm_blocks["eaf"].dri_input[t]
        )
Key Process Sequence Steps:
1. Hydrogen Flow: The ``direct_hydrogen_flow_constraint()`` ensures that hydrogen produced by the electrolyser flows to the DRI plant or hydrogen storage. If hydrogen storage is present, it can charge or discharge hydrogen, ensuring that the electrolyser's output is properly distributed.
2. DRI Flow: The ``direct_dri_flow_constraint()`` manages the flow of direct reduced iron (DRI) from the DRI plant to the EAF or DRI storage. If DRI storage is present, it can charge or discharge DRI, ensuring that the DRI output from the plant is managed efficiently.
3. Material Flow to EAF: The ``shaft_to_arc_furnace_material_flow_constraint()`` ensures that the material produced by the DRI plant flows directly into the Electric Arc Furnace (EAF) for further steel production.
These constraints ensure the proper flow of materials between the steel plant’s components, supporting efficient steel production while maintaining the operational constraints of each technology.
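In equation form, the hydrogen balance enforced by ``direct_hydrogen_flow_constraint()`` when storage is present reads:

\[\text{hydrogen\_out}^{\text{electrolyser}}(t) + \text{discharge}^{\text{storage}}(t) = \text{hydrogen\_in}^{\text{DRI}}(t) + \text{charge}^{\text{storage}}(t)\]

Without storage, it relaxes to the inequality \(\text{hydrogen\_out}^{\text{electrolyser}}(t) \geq \text{hydrogen\_in}^{\text{DRI}}(t)\).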
3.5. Executing the Optimization#
To execute the optimization process, the function ``determine_optimal_operation_without_flex()`` is called. This function computes the optimal operation of the steel plant under the defined objectives.
[ ]:
def determine_optimal_operation_without_flex(self):
"""
Execute the optimization for the steel plant based on the current market conditions.
This function will simulate the plant's operation without considering flexibility.
"""
4. Flexibility in the ASSUME Framework#
In the ASSUME framework, Demand-Side Flexibility allows agents, such as the steel plant, to adjust their energy consumption in response to price signals or grid conditions. This flexibility is achieved by shifting loads or adjusting operations based on the agent’s cost tolerance and predefined flexibility measures.
Flexibility is especially important for industrial agents, as it enables them to optimize operational costs by reacting to market signals. Two important functions are used to implement and determine the impact of flexibility: ``flexibility_cost_tolerance()`` and ``determine_optimal_operation_with_flex()``.
4.1 Defining Cost Tolerance for Flexibility#
The ``flexibility_cost_tolerance`` function defines how much additional cost the agent is willing to tolerate for the flexibility it can provide. This tolerance allows the agent to shift its load or adjust operations without exceeding the cost limitations.
[ ]:
def flexibility_cost_tolerance(self, model):
    """
    Defines the cost tolerance for demand-side flexibility.

    Args:
        model (Pyomo model): The Pyomo model for the steel plant.
    """
    # Set up the cost tolerance parameter for flexibility
    model.flexibility_cost_tolerance = pyo.Param(initialize=self.cost_tolerance)

    @model.Constraint(model.time_steps)
    def cost_tolerance_constraint(m, t):
        """
        Ensures that the cost of shifting does not exceed the allowed tolerance.
        This constraint limits the cost associated with shifting loads or adjusting operations.
        """
        # 'shifted_load' and 'energy_price' are assumed to be defined elsewhere on the model
        return (
            m.shifted_load[t] * m.energy_price[t]
            <= m.flexibility_cost_tolerance
        )
[ ]:
def determine_optimal_operation_with_flex(self):
"""
Determines the optimal operation of the agent while considering flexibility.
This function calculates how the agent can adjust its load and operations within
the defined flexibility constraints to minimize costs.
"""
# Implementation of the optimization algorithm considering flexibility constraints
In the ``flexibility_cost_tolerance`` function:
``model.flexibility_cost_tolerance``: This parameter defines the maximum cost the agent is willing to accept when adjusting its operations through load shifting.
``cost_tolerance_constraint``: A constraint ensures that the cost of shifting the load remains within the predefined tolerance.
4.2 Determining Optimal Operation with Flexibility#
Once flexibility is implemented, the ``determine_optimal_operation_with_flex`` function is called to compute the optimal operation of the agent while considering flexibility. This function ensures that the agent adjusts its load and operation to minimize costs while respecting the flexibility constraints.
In this function:
The optimal operation of the agent is computed by considering the flexibility constraints, such as load shifting and cost tolerance.
The agent adjusts its consumption based on the flexibility allowed, ensuring that the operational costs are minimized.
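Putting the two stages together, the calling order during a simulation with flexibility enabled looks roughly like this (a sketch using the ``steel_plant`` instance from section 3.1; in practice the framework triggers these calls internally):
[ ]:
# Sketch of the two-stage flexibility workflow
steel_plant.determine_optimal_operation_without_flex()  # stage 1: cost-optimal baseline
# stage 2: re-optimize, allowing load shifting within the cost tolerance
steel_plant.determine_optimal_operation_with_flex()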
Use Case 1: Steel Plant without Flexibility - Bidding in a Day-Ahead Market#
In this use case, we will simulate the energy system without demand-side flexibility. This involves setting up the power plant units, demand units, and DSM units. We will then run the simulation to understand how the market behaves without the flexibility mechanism.
1. Setting Up Power Plant Units#
Power plant units represent energy generation sources. These units are connected to specific nodes in the grid and have characteristics such as maximum power, efficiency, and operational costs. Below is the creation of the Power Plant Units DataFrame.
[ ]:
# @title Setting Up Power Plant Units
# Define the list of power plants with their characteristics
powerplant_units_data = {
"name": [
"Wind onshore",
"Wind offshore",
"Solar",
"Hydro",
"Biomass",
"KKW ISAR 2",
"KKW BROKDORF",
"KKW PHILIPPSBURG 2",
],
"technology": [
"wind_onshore",
"wind_offshore",
"solar",
"hydro",
"biomass",
"nuclear",
"nuclear",
"nuclear",
],
"bidding_EOM": [
"naive_eom",
"naive_eom",
"naive_eom",
"naive_eom",
"naive_eom",
"naive_eom",
"naive_eom",
"naive_eom",
],
"fuel_type": [
"renewable",
"renewable",
"renewable",
"renewable",
"renewable",
"uranium",
"uranium",
"uranium",
],
"emission_factor": [0, 0, 0, 0, 0, 0, 0, 0],
"max_power": [40, 10, 50, 30, 50, 100, 100, 100],
"min_power": [0, 0, 0, 0, 0, 10, 10, 10],
"efficiency": [1, 1, 1, 1, 1, 0.33, 0.33, 0.33],
"ramp_up": [None, None, None, None, 30, 50, 50, 50],
"ramp_down": [None, None, None, None, 30, 50, 50, 50],
"additional_cost": [0, 0, 0, 0, 6, 10.3, 10.3, 10.3],
"node": ["north", "north", "north", "north", "north", "south", "south", "south"],
"unit_operator": [
"renewables_operator",
"renewables_operator",
"renewables_operator",
"renewables_operator",
"renewables_operator",
"UNIPER",
"UNIPER",
"ENBW ENERGIE BADEN-WURTTEMBERG",
],
}
# Create the DataFrame
powerplant_units = pd.DataFrame(powerplant_units_data)
# Display the Power Plant Units DataFrame
print("Power Plant Units DataFrame:")
display(powerplant_units.head())
Explanation of the Above Code
In the above code:
We have defined several power plants with technologies such as wind, solar, nuclear, and biomass.
Each power plant has attributes like max power, efficiency, ramp rates, and operating costs.
The power plants are distributed across two zones: north and south, and are operated by different companies.
This DataFrame serves as the input for the power plants, and these characteristics will influence the market simulation.
2. Setting Up Demand Units#
Demand units represent the consumers of electricity in the system. These units have specific characteristics such as maximum power demand, node (location), and operator. These characteristics influence how the demand is distributed across the grid and how it interacts with the market.
Below is the creation of the Demand Units DataFrame.
[ ]:
# @title Setting Up Demand Units
# Define the list of demand units with their characteristics
demand_units_data = {
"name": ["demand_EOM1", "demand_EOM2"],
"technology": ["inflex_demand", "inflex_demand"],
"bidding_EOM": ["naive_eom", "naive_eom"],
"max_power": [500, 650], # Max demand in MW
"min_power": [0, 0], # Min demand in MW
"node": ["north", "south"], # Node or zone where the demand is located
"unit_operator": ["eom_de1", "eom_de1"],
}
# Create the DataFrame
demand_units = pd.DataFrame(demand_units_data)
# Display the Demand Units DataFrame
print("Demand Units DataFrame:")
display(demand_units.head())
Explanation of the Above Code
In the above code:
We have defined two demand units, demand_EOM1 and demand_EOM2, with the technology type inflex_demand.
Both demand units participate in the market using the naive_eom bidding strategy.
demand_EOM1 is located in the north zone with a maximum power demand of 500 MW, and demand_EOM2 in the south zone with a maximum of 650 MW.
This DataFrame serves as the input for the demand units, defining how much electricity is consumed at each node and influencing the market dynamics.
3. Preparing Demand Data#
The demand_df DataFrame provides the expected electricity demand for each demand unit over a period of time. In this use case, we will create a demand dataset for 3 days, with demand recorded every 15 minutes. This data will drive the consumption behavior of the demand units and impact the market simulation.
Below is the creation of the demand_df for 3 days.
[ ]:
# @title Prepare demand data
# Define the time range for 3 days, with demand recorded every 15 minutes
time_index = pd.date_range(
start="2019-01-01 00:00", end="2019-01-03 23:45", freq="15min"
)
# Simulate demand data for 'demand_EOM1' and 'demand_EOM2' (example demand pattern)
# For simplicity, we'll create a fluctuating demand pattern using a sinusoidal function
demand_values1 = 450 + 100 * np.sin(np.linspace(0, 6 * np.pi, len(time_index)))
demand_values2 = 500 + 80 * np.sin(np.linspace(0, 6 * np.pi, len(time_index)))
# Create the DataFrame with both 'demand_EOM1' and 'demand_EOM2'
demand_df = pd.DataFrame(
{
"datetime": time_index,
"demand_EOM1": demand_values1,
"demand_EOM2": demand_values2,
}
)
# Set 'datetime' as the index
demand_df.set_index("datetime", inplace=True)
# Display the first few rows of the demand DataFrame
print("Demand DataFrame for 3 Days:")
display(demand_df.head())
Explanation of the Above Code
In the above code:
We generate demand data for 3 days, with demand recorded every 15 minutes.
The demand pattern fluctuates sinusoidally to simulate variations in electricity consumption over time: between 350 MW and 550 MW for demand_EOM1 and between 420 MW and 580 MW for demand_EOM2.
The demand_df DataFrame will be used as an input for the demand units in the market simulation, determining how much electricity is required at each time step.
This DataFrame serves as the input for the demand profile.
4. Setting Up Industrial DSM Units#
Industrial DSM Units represent the demand-side agents, such as the steel plant, which can adjust their electricity consumption and participate in the market using different flexibility strategies. Below, we will create the industrial_dsm_units DataFrame with relevant attributes for the steel plant.
[ ]:
# @title Setting Up DSM Units
# Define the list of industrial DSM units (for steel plant) with their characteristics
industrial_dsm_units_data = {
"name": ["A360", "A360", "A360"],
"unit_type": ["steel_plant", "steel_plant", "steel_plant"],
"technology": ["electrolyser", "dri_plant", "eaf"],
"node": ["south", "south", "south"],
"bidding_EOM": ["naive_da_steelplant", "", ""],
"unit_operator": ["dsm_operator_1", "dsm_operator_1", "dsm_operator_1"],
"objective": ["min_variable_cost", "", ""],
"flexibility_measure": [None, "", ""], # max_load_shift
"cost_tolerance": [20, None, None],
"demand": [2000, None, None], # MW
"fuel_type": ["hydrogen", "hydrogen", "hydrogen"],
"max_power": [100, 100, 100], # MW
"min_power": [0, 0, 0], # MW
"ramp_up": [100, 100, 100], # MW/hr
"ramp_down": [100, 100, 100], # MW/hr
"min_operating_time": [0, 1, 0],
"min_down_time": [0, 0, 0],
"efficiency": [0.8, None, None],
"specific_dri_demand": [None, None, 1.09],
"specific_electricity_consumption": [None, 0.3, 0.44],
"specific_natural_gas_consumption": [None, 1.9, None],
"specific_hydrogen_consumption": [None, 1.83, None],
"specific_iron_ore_consumption": [None, 1.43, None],
"specific_lime_demand": [None, None, 0.046],
"lime_co2_factor": [None, None, 0.1],
"start_price": [5, None, None],
"max_capacity": [None, None, None],
"min_capacity": [None, None, None],
"initial_soc": [None, None, None],
"storage_loss_rate": [None, None, None],
"charge_loss_rate": [None, None, None],
"discharge_loss_rate": [None, None, None],
}
# Create the DataFrame
industrial_dsm_units = pd.DataFrame(industrial_dsm_units_data)
# Display the Industrial DSM Units DataFrame
print("Industrial DSM Units DataFrame:")
display(industrial_dsm_units.head())
Explanation of the Above Code
In the above code:
The steel plant is represented by three technologies: electrolyser, DRI plant, and EAF (Electric Arc Furnace).
Each unit is associated with specific operational parameters such as rated power, minimum power, ramp rates, and fuel type.
The bidding_EOM column specifies the bidding strategy used by the steel plant in the market.
Additional operational characteristics include specific hydrogen consumption, specific electricity consumption, and specific natural gas consumption for each of the units.
This DataFrame serves as the input for the steel plant, and these characteristics will influence the market simulation.
Bidding Strategy for the Steel Plant
In the ASSUME framework, each demand-side or supply-side agent participates in the market using a predefined bidding strategy. For the steel plant, we assign the ``NaiveDASteelplantStrategy``, which is a basic strategy for submitting bids in the day-ahead market.
Class: NaiveDASteelplantStrategy
The ``NaiveDASteelplantStrategy`` class defines how the steel plant agent interacts with the day-ahead market. It calculates the optimal operation of the steel plant and submits bids based on the plant’s power requirements and marginal costs.
Below is the breakdown of the bidding strategy:
[ ]:
class NaiveDASteelplantStrategy(BaseStrategy):
    def calculate_bids(
        self,
        unit: SupportsMinMax,
        market_config: MarketConfig,
        product_tuples: list[Product],
        **kwargs,
    ) -> Orderbook:
        bids = []
        start = product_tuples[0][0]  # start time of the first product

        # Calculate the optimal operation for the steel plant
        unit.calculate_optimal_operation_if_needed()

        for product in product_tuples:
            # For each product, calculate the marginal cost of the unit at the
            # start time of the product and the volume of the product, then
            # dispatch the order to the market.
            start = product[0]
            volume = unit.opt_power_requirement.loc[
                start
            ]  # Power requirement for the unit at the start time
            marginal_price = unit.calculate_marginal_cost(
                start, volume
            )  # Calculate the marginal cost
            bids.append(
                {
                    "start_time": product[0],  # Start time of the bid
                    "end_time": product[1],  # End time of the bid
                    "only_hours": product[2],  # Relevant hours for the bid
                    "price": marginal_price,  # Marginal price for the bid
                    "volume": -volume,  # Volume of electricity (negative = consumption)
                }
            )
        return bids
Explanation:
Optimal Operation:
The strategy first calculates the optimal operation of the steel plant using ``unit.calculate_optimal_operation_if_needed()``. This ensures that the plant's operational requirements are determined before submitting bids.
Bids Calculation:
For each time product (which represents a time period in the market), the strategy calculates:
Power requirement: The volume of electricity the steel plant requires at the start of the time product, retrieved from ``unit.opt_power_requirement.loc[start]``.
Marginal cost: The marginal cost of consuming this volume of electricity, calculated using ``unit.calculate_marginal_cost(start, volume)``.
Bid Structure:
The bid for each product includes:
Start time and end time of the bid (the time window for which the bid applies).
Only hours: The specific hours during which the bid applies.
Price: The marginal cost of the electricity.
Volume: The volume of electricity required, represented as a negative value since the steel plant is bidding to consume power.
Role of the Bidding Strategy
This bidding strategy allows the steel plant to participate in the day-ahead electricity market by submitting bids based on its operational needs and marginal costs.
The strategy ensures that the plant is optimizing its power consumption by submitting bids that reflect the actual cost of production and the power requirements for the upcoming time periods.
This strategy forms a core part of how the steel plant interacts with the market, balancing its electricity needs with the prices in the market. In the context of our simulation, this will be the default behavior for the steel plant when flexibility is not enabled.
Function: ``calculate_marginal_cost``
In the bidding process, the marginal cost of the steel plant is calculated using the ``calculate_marginal_cost`` function. This function determines the marginal cost of consuming electricity based on the required power at a specific time. The marginal cost is then used to submit bids in the day-ahead market.
[ ]:
def calculate_marginal_cost(self, start: pd.Timestamp, power: float) -> float:
"""
Calculate the marginal cost of the unit based on the provided time and power.
Args:
start (pandas.Timestamp): The start time of the dispatch.
power (float): The power output of the unit.
Returns:
float: the marginal cost of the unit for the given power.
"""
# Initialize marginal cost
marginal_cost = 0
# If the unit requires power at the given time, calculate the marginal cost
if self.opt_power_requirement[start] > 0:
marginal_cost = (
self.variable_cost_series[start] / self.opt_power_requirement[start]
)
return marginal_cost
Explanation of ``calculate_marginal_cost``
Marginal Cost Calculation:
The marginal cost is determined by dividing the variable cost for the given time by the optimal power requirement at that time:
\[\text{marginal\_cost}(t) = \frac{\text{variable\_cost}(t)}{\text{opt\_power\_requirement}(t)}\]

This function ensures that the marginal cost is only calculated when there is a non-zero power requirement at the given time.
Bidding:
The steel plant uses this marginal cost to submit bids in the day-ahead market. The bid will consist of the price (the calculated marginal cost) and the volume (the required power at that time).
This ensures that the steel plant’s bids reflect its actual production costs, allowing it to operate efficiently in the market.
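As a quick numeric illustration with made-up values: if the variable cost in a given hour is 800 EUR and the optimal power requirement is 40 MWh, the resulting bid price is 20 EUR/MWh:
[ ]:
# Illustrative marginal cost calculation for a single hour (made-up values)
variable_cost = 800  # EUR, total variable cost in this hour
opt_power_requirement = 40  # MWh, optimal power requirement in this hour
marginal_cost = variable_cost / opt_power_requirement
print(marginal_cost)  # 20.0 EUR/MWh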
5. Defining Buses#
Buses represent nodes in the network where energy can be injected or withdrawn. Each bus is assigned to a zone, which groups buses into market areas.
Below is the buses DataFrame, which defines the two main nodes in our system: north and south.
[ ]:
# @title Setting Up Buses
# Define the buses DataFrame with two nodes: north and south
buses = pd.DataFrame(
{
"name": ["north", "south"], # Bus names (zones)
"v_nom": [380.0, 380.0], # Nominal voltage levels (kV)
"carrier": ["AC", "AC"], # Carrier type (AC)
"x": [9.598, 13.607], # Geographical coordinates (x-coordinate)
"y": [53.5585, 51.0769], # Geographical coordinates (y-coordinate)
}
)
# Display the buses DataFrame
print("Buses DataFrame:")
display(buses)
Explanation
``name``: Identifier for each bus. In this case, we have two buses named north and south representing zones in the network.
``v_nom``: The nominal voltage level (in kV) of each bus. Here, both buses have a voltage of 380 kV.
``carrier``: The type of carrier associated with the bus. In this case, the carrier is AC (alternating current).
``x``, ``y``: Geographical coordinates that can be used for mapping or spatial analysis. These coordinates define the location of each bus on a map (optional but useful for visualization).
6. Configuring Transmission Lines#
Transmission Lines connect buses, allowing energy to flow between them. Each line has a specified capacity and electrical parameters, such as reactance and resistance.
Below is the lines DataFrame, which defines the line between the north and south buses.
[ ]:
# @title Setting Up Transmission lines
# Define the transmission lines DataFrame
lines = pd.DataFrame(
{
"name": ["Line_N_S"], # Name of the transmission line
"bus0": ["north"], # Starting bus (north)
"bus1": ["south"], # Ending bus (south)
"s_nom": [200.0], # Nominal power capacity (MVA)
"x": [0.01], # Reactance (in per unit)
"r": [0.001], # Resistance (in per unit)
}
)
# Display the transmission lines DataFrame
print("Transmission Lines DataFrame:")
display(lines)
Explanation
``name``: The identifier for the transmission line, in this case, Line_N_S, which connects the north and south buses.
``bus0``, ``bus1``: The two buses that the transmission line connects. Here, bus0 is north and bus1 is south.
``s_nom``: The nominal apparent power capacity of the transmission line, given in MVA (megavolt-amperes). In this case, the capacity is 200 MVA.
``x``: The reactance of the transmission line (in per unit). This defines the impedance of the line, and we set it to 0.01.
``r``: The resistance of the transmission line (in per unit), which represents the electrical resistance. This is set to 0.001 for the line.
7. Integrating with ASSUME Framework#
In a real-world scenario, the ASSUME framework handles the reading of CSV files and the configuration of the simulation through configuration files. For the purpose of this tutorial, we’ll integrate our prepared data and configuration into ASSUME to execute the simulation seamlessly.
7.1 Saving Input Files#
We will save the generated input DataFrames to the inputs/tutorial_09 folder. The required files are:
demand_units.csv
demand_df.csv
powerplant_units.csv
industrial_dsm_units.csv
Additionally, we will create a new file for fuel prices (fuel_prices_df.csv).
Create the Inputs Directory and Save CSV Files
First, we need to create the directory for the input files if it does not already exist. Then, we will save the DataFrames as CSV files in this directory.
[ ]:
# Define the input directory
input_dir = "inputs/tutorial_09"
# Create the directory if it doesn't exist
os.makedirs(input_dir, exist_ok=True)
# Repeat the 3-day demand profile so that demand data extends beyond the simulation end date
demand_df_extended = pd.concat([demand_df] * 3)
demand_df_extended.index = pd.date_range(
start="2019-01-01", periods=len(demand_df_extended), freq="15min"
)
# Save the DataFrames to CSV files
powerplant_units.to_csv(os.path.join(input_dir, "powerplant_units.csv"), index=False)
demand_units.to_csv(os.path.join(input_dir, "demand_units.csv"), index=False)
demand_df_extended.to_csv(os.path.join(input_dir, "demand_df.csv"))
industrial_dsm_units.to_csv(
os.path.join(input_dir, "industrial_dsm_units.csv"), index=False
)
buses.to_csv(os.path.join(input_dir, "buses.csv"), index=False)
lines.to_csv(os.path.join(input_dir, "lines.csv"), index=False)
print("Input CSV files have been saved to 'inputs/tutorial_09'.")
Create fuel_prices_df.csv
Next, we need to create a CSV file for fuel prices, which will define the price for each type of fuel used by the power plant units.
[ ]:
# @title Setting Up Fuel Prices
# Define fuel prices for the power plant units
fuel_prices = {
"fuel": ["uranium", "co2"],
"price": [5, 25], # Example prices for uranium and CO2
}
# Convert the dictionary to a DataFrame and save as CSV
fuel_prices_df = pd.DataFrame(fuel_prices).T
fuel_prices_df.to_csv(
os.path.join(input_dir, "fuel_prices_df.csv"), index=True, header=False
)
print("Fuel Prices CSV file has been saved to 'inputs/tutorial_09/fuel_prices.csv'.")
Checking Fuel Types for Power Plants
Now, let’s check which fuels are required for our power plant units, ensuring that the correct prices are applied:
[ ]:
# Display the unique fuel types used in the powerplant_units DataFrame
unique_fuel_types = powerplant_units["fuel_type"].unique()
print(f"Fuel types required for power plants: {unique_fuel_types}")
7.2 Creating ``forecasts_df`` for the Steel Plant#
The steel plant requires forecasts for various prices and demand values, including:
- Fuel prices (natural gas, hydrogen, etc.)
- Electricity prices (EOM price)
- Iron ore prices
- Steel demand and prices
- Lime prices
- CO2 prices
These forecasts will be provided in the forecasts_df.csv file.
Below is the code to generate the forecasts_df DataFrame based on the required parameters.
[ ]:
# @title Setting Up Forecasts
# Define the hourly time range for the forecast (the 3-day simulation horizon plus a one-day buffer)
time_index = pd.date_range(start="2019-01-01 00:00", end="2019-01-04 23:45", freq="h")
# Define the base price for the diurnal electricity price and amplitude for fluctuations
base_price = 10
price_amplitude = 60 # Max fluctuation range
hours_in_day = 24
# Use a sine wave to simulate the diurnal price cycle
price_EOM = base_price + price_amplitude * np.sin(
2 * np.pi * (time_index.hour / hours_in_day)
)
# Define the forecast data for required parameters (example values)
forecasts_data = {
"datetime": time_index,
"fuel_price_natural_gas": [25] * len(time_index), # Fixed price for natural gas
"price_EOM": price_EOM, # Diurnal electricity price based on the sine wave
"iron_ore_price": [80] * len(time_index), # Varying iron ore price
"lime_price": [15] * len(time_index), # Varying lime price
"co2_price": [50] * len(time_index), # Varying CO2 price
}
# Create the DataFrame
forecasts_df = pd.DataFrame(forecasts_data)
# Set 'datetime' as the index
forecasts_df.set_index("datetime", inplace=True)
# Save the DataFrame as CSV
forecasts_df.to_csv(os.path.join(input_dir, "forecasts_df.csv"))
print("Forecasts DataFrame has been created and saved as 'forecasts_df.csv'.")
7.3 Creating the Configuration YAML File#
For our simulation, we will define the configuration in a YAML format, which specifies the time range, market setup, and other parameters. This configuration will be saved as a config.yaml file.
Below is the creation of the configuration dictionary and saving it to a YAML file.
[ ]:
# @title Configuring Market
# Define the configuration dictionary
config = {
"steel_plant_Day_Ahead": {
"start_date": "2019-01-01 00:00",
"end_date": "2019-01-03 23:00",
"time_step": "1h",
"save_frequency_hours": 24,
"markets_config": {
"EOM": {
"operator": "EOM_operator",
"product_type": "energy",
"products": [
{
"duration": "1h", # Each product lasts for 1 hour
"count": 24, # Number of products per day (24 hours)
"first_delivery": "1h", # First delivery is 1 hour after the market opens
}
],
"opening_frequency": "24h", # Market opens once every 24 hours
"opening_duration": "1h", # Market stays open for 1 hour
"volume_unit": "MWh", # Market volume is measured in MWh
"maximum_bid_volume": 100000, # Maximum bid volume allowed
"maximum_bid_price": 3000, # Maximum allowed bid price
"minimum_bid_price": -500, # Minimum allowed bid price
"price_unit": "EUR/MWh", # Bid price unit is EUR per MWh
"market_mechanism": "pay_as_clear", # Market clears with pay-as-clear mechanism
}
},
}
}
# Define the path for the config file
config_path = os.path.join(input_dir, "config.yaml")
# Save the configuration to a YAML file
with open(config_path, "w") as file:
yaml.dump(config, file, sort_keys=False)
print(f"Configuration YAML file has been saved to '{config_path}'.")
Explanation
The config.yaml file plays a key role in defining the simulation parameters. Below is a detailed explanation of each configuration parameter:
``steel_plant_Day_Ahead``:
- start_date: The start date and time for the simulation (2019-01-01 00:00).
- end_date: The end date and time for the simulation (2019-01-03 23:00).
- time_step: The simulation time step (1h), indicating hourly intervals.
- save_frequency_hours: How frequently the simulation results are saved (24 hours).

``markets_config``:
- EOM: The name of the market. Our power plant and demand units have a column named bidding_EOM, which associates them with this market. This configuration defines how the market operates.
- operator: The market operator (EOM_operator), responsible for managing the day-ahead market operations.
- product_type: The type of product traded in the market (energy), indicating that energy is traded in MWh.
- products:
  - duration: Duration of the product (1h), meaning each product represents 1 hour of energy delivery.
  - count: Number of products per day (24), representing the 24 hours in a day for which energy is traded.
  - first_delivery: Specifies when the first energy delivery will occur (1h after the market opens).
- opening_frequency: How often the market opens (24h), indicating that the market opens once every day.
- opening_duration: The market stays open for 1 hour every day, allowing participants to submit bids during this time.
- volume_unit: The unit of volume measurement in the market (MWh), representing megawatt-hours of electricity.
- maximum_bid_volume: The maximum volume allowed per bid (100,000 MWh).
- maximum_bid_price: The highest price allowed per bid (3,000 EUR/MWh).
- minimum_bid_price: The lowest price allowed per bid (-500 EUR/MWh).
- price_unit: The unit of price measurement (EUR/MWh), indicating that prices are quoted in euros per megawatt-hour.
- market_mechanism: The clearing mechanism for the market (pay_as_clear), meaning all accepted bids clear at the marginal clearing price.
This configuration ensures that the simulation accurately represents the energy-only market (EOM) dynamics, including bid restrictions, price limits, and market operations. Each configuration parameter is designed to ensure that the market functions in a structured manner, reflecting real-world market behavior where energy is traded on an hourly basis.
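For reference, the saved config.yaml will look roughly as follows (PyYAML may additionally quote the date strings):
steel_plant_Day_Ahead:
  start_date: 2019-01-01 00:00
  end_date: 2019-01-03 23:00
  time_step: 1h
  save_frequency_hours: 24
  markets_config:
    EOM:
      operator: EOM_operator
      product_type: energy
      products:
      - duration: 1h
        count: 24
        first_delivery: 1h
      opening_frequency: 24h
      opening_duration: 1h
      volume_unit: MWh
      maximum_bid_volume: 100000
      maximum_bid_price: 3000
      minimum_bid_price: -500
      price_unit: EUR/MWh
      market_mechanism: pay_as_clear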
7.4 Running the Simulation#
Now that we have prepared the input files and configuration, we can proceed to run the simulation using the ASSUME framework. In this step, we will load the scenario and execute the simulation.
[ ]:
# Define paths for input and output data
csv_path = "outputs"
# Define the data format and database URI
# Use "local_db" for SQLite database or "timescale" for TimescaleDB in Docker
# Create directories if they don't exist
os.makedirs(csv_path, exist_ok=True)
os.makedirs("local_db", exist_ok=True)
# Choose the data format: either local SQLite database or TimescaleDB
data_format = "local_db" # Options: "local_db" or "timescale"
# Set the database URI based on the selected data format
if data_format == "local_db":
db_uri = "sqlite:///local_db/assume_db.db" # SQLite database
elif data_format == "timescale":
db_uri = "postgresql://assume:assume@localhost:5432/assume" # TimescaleDB
# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)
# Load the scenario by providing the world instance
# The path to the inputs folder and the scenario name (subfolder in inputs)
# and the study case name (which config to use for the simulation)
load_scenario_folder(
world,
inputs_path="inputs",
scenario="tutorial_09", # Scenario folder for our case
study_case="steel_plant_Day_Ahead", # The config we defined earlier
)
# Run the simulation
world.run()
print("Simulation has completed.")
7.5 Visualizing the Unit Dispatch and Marginal Cost of the Steel Plant#
We will now create a plot showing the electricity consumption profile of the steel plant based on its accepted bids in the market_orders.csv file. This will help us understand how the steel plant consumed electricity during the simulation.
Explanation
Saving the DataFrame:
The code below also saves the steel plant's power consumption data, with missing time steps filled in, to a CSV file. This allows the data to be reused later for comparing Use Case 1 and Use Case 2 results.
[ ]:
# ---- Load the market_orders CSV file ----
market_orders = pd.read_csv(
"outputs/tutorial_09_steel_plant_Day_Ahead/market_orders.csv"
)
# Filter the data for the steel plant (unit_id 'A360')
steel_plant_orders = market_orders[market_orders["unit_id"] == "A360"].copy()  # copy to avoid SettingWithCopyWarning
# Convert 'start_time' column to datetime
steel_plant_orders["time"] = pd.to_datetime(steel_plant_orders["start_time"])
# Convert 'volume' to numeric and coerce errors
steel_plant_orders["volume"] = pd.to_numeric(
steel_plant_orders["volume"], errors="coerce"
)
# Make the volume values positive (assuming negative values represent consumption)
steel_plant_orders["volume"] = steel_plant_orders["volume"].abs()
# ---- Handle missing time steps ----
# Define the full time range based on the simulation step (e.g., 1-hour intervals)
full_time_range = pd.date_range(
start="2019-01-01 00:00:00", end="2019-01-03 23:00:00", freq="H"
)
# Reindex the DataFrame to have a row for every time step
steel_plant_orders = steel_plant_orders.set_index("time").reindex(full_time_range)
# Fill NaN values in 'volume' and 'price' with 0 for the missing time steps
steel_plant_orders["volume"].fillna(0, inplace=True)
steel_plant_orders["price"].fillna(0, inplace=True)
# Reset the index back to 'time'
steel_plant_orders = steel_plant_orders.reset_index().rename(columns={"index": "time"})
# ---- Save the DataFrame for future use ----
# Save the updated DataFrame with complete time steps
steel_plant_orders.to_csv(
"outputs/steel_plant_market_orders_use_case_1.csv",
index=False,
)
print(
"Saved the electricity consumption and marginal cost profile with complete time steps to CSV."
)
[ ]:
# ---- Plotting ----
# Use Seaborn to set plot style
sns.set(style="whitegrid")
# Create the figure and axis for plotting
fig, ax1 = plt.subplots(figsize=(10, 6))
# Plot electricity consumption (volume) on the first y-axis
sns.lineplot(
x="time",
y="volume",
data=steel_plant_orders,
label="Power Consumption",
color="b",
ax=ax1,
)
# Set labels for the first y-axis
ax1.set_xlabel("Time", fontsize=12)
ax1.set_ylabel("Power (MW)", fontsize=12, color="b")
# Create the second y-axis for marginal cost with transparency and dashed line
ax2 = ax1.twinx()
# Make marginal cost plot semi-transparent and use a different linestyle
sns.lineplot(
x="time",
y="price",
data=steel_plant_orders,
label="Marginal Cost",
color="g",
ax=ax2,
linestyle="--", # Dashed line for better distinction
alpha=0.7, # Set transparency to make the lines less overwhelming
)
# Set labels for the second y-axis
ax2.set_ylabel("Price (EUR/MWh)", fontsize=12, color="g")
# Set plot title
plt.title("Electricity Consumption and Marginal Cost of Steel Plant", fontsize=14)
# Rotate x-axis ticks for better readability
ax1.tick_params(axis="x", rotation=45)
# Add grid for better readability
ax1.grid(True)
# Add legends for both plots
ax1.legend(loc="upper left")
ax2.legend(loc="upper right")
# Save the plot as a PNG file
plt.savefig(
"outputs/tutorial_09_steel_plant_Day_Ahead/steel_plant_consumption_marginal_cost.png"
)
# Show the plot
plt.tight_layout() # Adjust layout to avoid clipping of labels
plt.show()
[ ]:
# Define the path to the simulation output
output_dir = "outputs/tutorial_09_steel_plant_Day_Ahead"
market_meta_path = os.path.join(output_dir, "market_meta.csv")
# Load the market_meta.csv file
market_meta = pd.read_csv(market_meta_path, index_col="time", parse_dates=True)
# drop the first column
market_meta = market_meta.drop(columns=market_meta.columns[0])
# Display a sample of the data
print("Sample of market_meta.csv:")
display(market_meta.head())
[ ]:
# Load the market_meta CSV file (for Use Case 1)
market_meta_uc1 = pd.read_csv(
"outputs/tutorial_09_steel_plant_Day_Ahead/market_meta.csv"
)
# Convert 'time' column to datetime if it's not already in the correct format
market_meta_uc1["time"] = pd.to_datetime(market_meta_uc1["time"])
# ---- Save the DataFrame for future use ----
# Save the clearing price data as a CSV for future comparison
market_meta_uc1[["time", "price"]].to_csv(
"outputs/tutorial_09_steel_plant_Day_Ahead/clearing_price_use_case_1.csv",
index=False,
)
print("Saved the clearing price data of Use Case 1 to CSV.")
# ---- Plotting ----
# Create a Plotly figure for the clearing price
fig = go.Figure()
# Add a scatter plot trace for the price over time
fig.add_trace(
go.Scatter(
x=market_meta_uc1["time"],
y=market_meta_uc1["price"],
mode="lines",
name="Clearing Price",
line=dict(color="blue"),
)
)
# Set the layout for the figure
fig.update_layout(
title="Clearing Price Over Time",
xaxis_title="Time",
yaxis_title="Price (EUR/MWh)",
xaxis=dict(showgrid=True),
yaxis=dict(showgrid=True),
)
# Display the figure
fig.show()
Explanation
Saving the DataFrame:
The clearing price data is saved to a CSV file with the following line:
[ ]:
market_meta_uc1[["time", "price"]].to_csv(
"outputs/clearing_price_use_case_1.csv", index=False
)
7.6 Setting up the grid network with infrastructure#
The grid model includes three main components:
Generators: Inject power into the grid at their bus.
Loads: Withdraw power from the grid at their bus.
Transmission lines: Carry power between buses.
Here the components are defined with their operational attributes (such as power ratings and electrical parameters).
[ ]:
# Create a new PyPSA network
network = pypsa.Network()
snapshots = pd.date_range(start="2019-01-01 00:00", end="2019-01-02 23:00", freq="h")
network.set_snapshots(snapshots)
solver = "highs"
7.6.1 Add buses to the network#
[ ]:
# @title adding buses to the network
network.madd(
"Bus",
names=buses["name"].to_list(), # Bus names
v_nom=buses["v_nom"].to_list(), # Nominal voltage levels
carrier=buses["carrier"].to_list(), # Carrier types
x=buses["x"].to_list(), # x-coordinates
y=buses["y"].to_list(), # y-coordinates
)
network.buses
7.6.2 Add generators to the network#
[ ]:
# @title adding generators to the network
# Adding generators in the same format as the bus-adding code
market_dispatch = pd.read_csv(
"outputs/tutorial_09_steel_plant_Day_Ahead/market_dispatch.csv", parse_dates=True
)
market_dispatch.head()
[ ]:
gen_p_set = market_dispatch.pivot(index="datetime", columns="unit_id", values="power")
gen_p_set = gen_p_set.rename_axis("snapshot", axis="index")
power_p_set = gen_p_set.drop(columns=["A360", "demand_EOM1", "demand_EOM2"])
power_p_set.index = pd.to_datetime(power_p_set.index)
power_p_set.index = power_p_set.index - pd.Timedelta(hours=1)  # align delivery times (first_delivery = 1h) with the network snapshots
power_p_set = power_p_set.fillna(0)
power_p_set.head()
[ ]:
# @title adding powerplants to the network
# Power plants are added as fixed "Load" components with sign=1 so that their
# accepted market volumes are injected into the grid as a fixed dispatch
network.madd(
"Load",
powerplant_units.name,
bus=list(powerplant_units["node"]),
carrier=list(powerplant_units["technology"]),
p_set=power_p_set, # accepted volume of power
    sign=1,  # positive sign: power is injected at the bus
)
network.loads
7.6.3 Add DSM Units to the network#
[ ]:
dsm_p_set = gen_p_set[["A360"]]
dsm_p_set.index = pd.to_datetime(dsm_p_set.index)
dsm_p_set = dsm_p_set.rename_axis("snapshot", axis="index")
dsm_p_set.index = dsm_p_set.index - pd.Timedelta(hours=1)
dsm_p_set = dsm_p_set.fillna(0)
dsm_p_set = np.abs(dsm_p_set)
dsm_p_set.head(3)
[ ]:
# @title adding DSU to the network
# Add DSM units as Load components with sign=-1 so their accepted volumes are withdrawn from the bus
ind_dsm_units = industrial_dsm_units[
industrial_dsm_units["technology"] == "electrolyser"
]
network.madd(
"Load",
names=list(ind_dsm_units["name"]),
bus=list(ind_dsm_units["node"]),
carrier=list(ind_dsm_units["technology"]),
p_set=dsm_p_set, # accepted volume of power
sign=-1,
)
network.loads
7.6.4 Add Loads to the network#
[ ]:
demand_p_set = gen_p_set[["demand_EOM1", "demand_EOM2"]]
demand_p_set = demand_p_set.rename_axis("snapshot", axis="index")
demand_p_set.index = pd.to_datetime(demand_p_set.index)
demand_p_set.index = demand_p_set.index - pd.Timedelta(hours=1)
demand_p_set = demand_p_set.fillna(0)
demand_p_set = demand_p_set.abs()
demand_p_set.head(10)
[ ]:
# @title adding demand load to the network
network.madd(
"Load",
demand_units.name,
bus=list(demand_units["node"]),
carrier=list(demand_units["technology"]),
p_set=demand_p_set, # accepted volume of power
sign=-1,
)
network.loads_t.p_set.head(3)
7.6.5 Add Lines to the network#
[ ]:
# @title adding transmission line to the network
network.madd(
"Line",
lines.name,
bus0=list(lines.bus0),
bus1=list(lines.bus1),
s_nom=list(lines.s_nom),
s_nom_extendable=False,
x=list(lines.x),
s_max_pu=1,
r=list(lines.r),
)
7.6.6 Congestion identification#
[ ]:
network.lpf()
[ ]:
congestion_df = pd.DataFrame(
    {
        "line_loading": np.abs(network.lines_t.p0.values.flatten()),
        # p0 flattens snapshot-major, so line names are tiled per snapshot
        # and timestamps are repeated per line
        "line_name": np.tile(network.lines.index, len(network.snapshots)),
        "timestamp": network.snapshots.repeat(len(network.lines)),
    }
)
[ ]:
s_nom_values = network.lines.s_nom
congestion_df["s_nom"] = congestion_df["line_name"].map(s_nom_values)
congestion_df["congestion_status"] = (
congestion_df["line_loading"] > congestion_df["s_nom"]
)
congested_lines = congestion_df[congestion_df["congestion_status"]]
congested_lines
[ ]:
now = network.snapshots[2]
loading = network.lines_t.p0.loc[now] / network.lines.s_nom
congestion_threshold = 1
line_colors = np.where(abs(loading) > congestion_threshold, "red", "blue")
# Create the figure and the axis using Cartopy's PlateCarree projection
fig, ax = plt.subplots(figsize=(6, 6), subplot_kw={"projection": ccrs.PlateCarree()})
# Plot the network using the built-in network plot function
network.plot(
ax=ax,
line_colors=line_colors,
line_cmap=None,
title="Line Loading",
bus_sizes=1e-2, # Size of bus markers
bus_alpha=0.7, # Transparency of bus markers
)
# Adjust layout and display the plot
fig.tight_layout()
plt.show()
Use Case 2: Steel Plant with Flexibility - Bidding in a Day-Ahead Market#
In this use case, we will perform a simulation where the steel plant participates in the day-ahead market with flexibility bidding enabled. The setup remains the same as in Use Case 1, except for one key change: we will enable flexibility for the steel plant by updating the industrial_dsm_units input file.
1. Modifying Input Files#
We will update the ``industrial_dsm_units`` file, specifically the ``flexibility_measure`` column, to include the value ``max_load_shift``. This enables flexibility for the steel plant, allowing it to shift its load in response to market conditions.
Explanation
Modifying Input Files:
The main change is in the industrial_dsm_units file: we will update the ``flexibility_measure`` column to include ``max_load_shift``.
How the Flexibility is Applied:
First Iteration: The system optimizes the operational cost using the standard objective function, minimizing the variable cost.
Second Iteration: The system uses the ``cost_tolerance`` parameter to slightly increase the operational cost, allowing for load shifting. The ``determine_optimal_operation_with_flex()`` function is responsible for handling this flexibility optimization. It adjusts the load based on market signals and the defined cost tolerance.
[ ]:
# @title setting up DSM units with a bidding strategy
# Load the industrial_dsm_units CSV file
industrial_dsm_units = pd.read_csv("inputs/tutorial_09/industrial_dsm_units.csv")
# Update the 'flexibility_measure'
industrial_dsm_units.loc[
industrial_dsm_units["technology"] == "electrolyser", "flexibility_measure"
] = "max_load_shift"
# Update the 'bidding strategy'
industrial_dsm_units.loc[
industrial_dsm_units["technology"] == "electrolyser", "bidding_EOM"
] = "naive_steel_redispatch"
# Save the updated industrial_dsm_units file
industrial_dsm_units.to_csv("inputs/tutorial_09/industrial_dsm_units.csv", index=False)
print("Updated industrial_dsm_units.csv with 'max_load_shift' for flexibility.")
2. Running the Simulation with Flexibility#
Once the input file has been updated, you can run the simulation just as in Use Case 1, but now with the flexibility enabled for the steel plant.
[ ]:
# Create the World instance (as per the previous simulation)
csv_path = "outputs"
db_uri = "sqlite:///local_db/assume_db.db" # Use the same database as before
world = World(database_uri=db_uri, export_csv_path=csv_path)
# Load the scenario folder with the updated inputs
load_scenario_folder(
world,
inputs_path="inputs", # Path to the updated inputs folder
scenario="tutorial_09", # Scenario folder remains the same
study_case="steel_plant_Day_Ahead", # Study case remains the same
)
# Run the simulation (which now includes flexibility bidding)
world.run()
print("Simulation with flexibility completed.")
3. Visualizing the Unit Dispatch of the Steel Plant#
We will now create a plot showing the electricity consumption profile of the steel plant based on the accepted orders in the market_orders.csv file. This will help us understand how the steel plant consumed electricity during the simulation.
[ ]:
# ---- Load the market_orders CSV file for Use Case 2 ----
market_orders_uc2 = pd.read_csv(
"outputs/tutorial_09_steel_plant_Day_Ahead/market_orders.csv"
)
# Filter the data for the steel plant (unit_id 'A360') and copy to avoid
# SettingWithCopyWarning on the assignments below
steel_plant_orders_uc2 = market_orders_uc2[market_orders_uc2["unit_id"] == "A360"].copy()
# Convert 'start_time' column to datetime
steel_plant_orders_uc2["time"] = pd.to_datetime(steel_plant_orders_uc2["start_time"])
# Convert 'volume' to numeric and coerce errors
steel_plant_orders_uc2["volume"] = pd.to_numeric(
steel_plant_orders_uc2["volume"], errors="coerce"
)
# Make the volume values positive (assuming negative values represent consumption)
steel_plant_orders_uc2["volume"] = steel_plant_orders_uc2["volume"].abs()
# ---- Handle missing time steps ----
# Define the full time range based on the simulation step (e.g., 1-hour intervals)
full_time_range = pd.date_range(
start="2019-01-01 00:00:00", end="2019-01-03 23:00:00", freq="h"
)
# Reindex the DataFrame to have a row for every time step
steel_plant_orders_uc2 = steel_plant_orders_uc2.set_index("time").reindex(
full_time_range
)
# Fill NaN values in 'volume' and 'price' with 0 for the missing time steps
steel_plant_orders_uc2["volume"] = steel_plant_orders_uc2["volume"].fillna(0)
steel_plant_orders_uc2["price"] = steel_plant_orders_uc2["price"].fillna(0)
# Reset the index back to 'time'
steel_plant_orders_uc2 = steel_plant_orders_uc2.reset_index().rename(
columns={"index": "time"}
)
# ---- Save the DataFrame for future use ----
# Save the updated DataFrame with complete time steps
steel_plant_orders_uc2.to_csv(
"outputs/steel_plant_market_orders_use_case_2.csv",
index=False,
)
print(
"Saved the electricity consumption and marginal cost profile of Use Case 2 with complete time steps to CSV."
)
[ ]:
# ---- Plotting ----
# Use Seaborn to set plot style
sns.set(style="whitegrid")
# Create the figure and axis for plotting
fig, ax1 = plt.subplots(figsize=(10, 6))
# Plot electricity consumption (volume) on the first y-axis
sns.lineplot(
x="time",
y="volume",
data=steel_plant_orders_uc2,
label="Power Consumption",
color="b",
ax=ax1,
)
# Set labels for the first y-axis
ax1.set_xlabel("Time", fontsize=12)
ax1.set_ylabel("Power (MW)", fontsize=12, color="b")
# Create the second y-axis for marginal cost with transparency and dashed line
ax2 = ax1.twinx()
# Make marginal cost plot semi-transparent and use a different linestyle
sns.lineplot(
x="time",
y="price",
data=steel_plant_orders_uc2,
label="Marginal Cost",
color="g",
ax=ax2,
linestyle="--", # Dashed line for better distinction
alpha=0.7, # Set transparency to make the lines less overwhelming
)
# Set labels for the second y-axis
ax2.set_ylabel("Price (EUR/MWh)", fontsize=12, color="g")
# Set plot title
plt.title(
"Electricity Consumption and Marginal Cost of Steel Plant - Flexibility",
fontsize=14,
)
# Rotate x-axis ticks for better readability
ax1.tick_params(axis="x", rotation=45)
# Add grid for better readability
ax1.grid(True)
# Add legends for both plots
ax1.legend(loc="upper left")
ax2.legend(loc="upper right")
# Save the plot as a PNG file
plt.savefig(
"outputs/tutorial_09_steel_plant_Day_Ahead/steel_plant_consumption_marginal_cost_flexibility.png"
)
# Show the plot
plt.tight_layout() # Adjust layout to avoid clipping of labels
plt.show()
Next, we load the market clearing results from the market_meta.csv file, save the clearing price data for a later comparison with Use Case 1, and plot the clearing price over time.
[ ]:
# Load the market_meta CSV file for Use Case 2
market_meta_uc2 = pd.read_csv(
"outputs/tutorial_09_steel_plant_Day_Ahead/market_meta.csv"
)
# Convert 'time' column to datetime if it's not already in the correct format
market_meta_uc2["time"] = pd.to_datetime(market_meta_uc2["time"])
# ---- Save the DataFrame for future use ----
# Save the clearing price data as a CSV for future comparison
market_meta_uc2[["time", "price"]].to_csv(
"outputs/clearing_price_use_case_2.csv", index=False
)
print("Saved the clearing price data of Use Case 2 to CSV.")
# ---- Plotting ----
# Create a Plotly figure for the clearing price
fig = go.Figure()
# Add a scatter plot trace for the price over time
fig.add_trace(
go.Scatter(
x=market_meta_uc2["time"],
y=market_meta_uc2["price"],
mode="lines",
name="Clearing Price - Flexibility",
line=dict(color="blue"),
)
)
# Set the layout for the figure
fig.update_layout(
title="Clearing Price with Flexibility Over Time",
xaxis_title="Time",
yaxis_title="Price (EUR/MWh)",
xaxis=dict(showgrid=True),
yaxis=dict(showgrid=True),
)
# Display the figure
fig.show()
Explanation
Saving the DataFrame:
The clearing price data is saved to a CSV file with the following line:
[ ]:
market_meta_uc2[["time", "price"]].to_csv(
"outputs/clearing_price_use_case_2.csv", index=False
)
4. Setting up the grid network with infrastructure#
Since all generators, demand units, and DSM units were added to the PyPSA network as loads, we can replace their load time series in a single step with the combined dispatch results (demand, DSM consumption, and generator dispatch) from this simulation.
[ ]:
# Reading the dispatch data from the output results
market_dispatch = pd.read_csv(
"outputs/tutorial_09_steel_plant_Day_Ahead/market_dispatch.csv", parse_dates=True
)
market_dispatch.head()
[ ]:
# cleaning dispatch data from the simulation to feed it to the PyPSA network
p_set = market_dispatch.pivot(index="datetime", columns="unit_id", values="power")
p_set = p_set.rename_axis("snapshot", axis="index")
p_set.index = pd.to_datetime(p_set.index)
p_set.index = p_set.index - pd.Timedelta(hours=1)
p_set = p_set.fillna(0)
p_set = np.abs(p_set)
p_set.head(3)
[ ]:
# Replacing p_set values in the same network which was created for Use case 1
network.loads_t.p_set = p_set
network.loads_t.p_set.head()
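Replacing loads_t.p_set wholesale only works if the dispatch columns match the Load component names exactly. The following quick check is a suggested safeguard, not part of the original workflow:
[ ]:
# Suggested sanity check: wholesale replacement of loads_t.p_set assumes
# the dispatch columns match the Load component names one-to-one
missing = set(network.loads.index) - set(p_set.columns)
extra = set(p_set.columns) - set(network.loads.index)
print("loads without a time series:", missing)
print("dispatch columns without a load:", extra)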
5. Congestion identification#
Solving a linear power flow to identify congested lines
[ ]:
network.lpf()
[ ]:
congestion_df = pd.DataFrame(
{
"line_loading": np.abs(network.lines_t.p0.values.flatten()),
# same ordering fix as in Use Case 1: p0.values flattens snapshot-major,
# so line names are tiled and snapshots repeated to match
"line_name": np.tile(network.lines.index, len(network.snapshots)),
"timestamp": network.snapshots.repeat(len(network.lines)),
}
)
[ ]:
s_nom_values = network.lines.s_nom
congestion_df["s_nom"] = congestion_df["line_name"].map(s_nom_values)
congestion_df["congestion_status"] = (
congestion_df["line_loading"] > congestion_df["s_nom"]
)
congested_lines = congestion_df[congestion_df["congestion_status"]]
congested_lines
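As a compact complement (again a suggested addition rather than part of the original workflow), the maximum loading of each line relative to its rating shows at a glance which lines ever exceed their limit:
[ ]:
# Maximum loading of each line relative to its rating; values above 1.0
# mean the line exceeds its thermal limit in at least one snapshot
max_rel_loading = (network.lines_t.p0.abs() / network.lines.s_nom).max()
print(max_rel_loading.sort_values(ascending=False))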
[ ]:
# Creating a plot to see the nodes and congestion in the lines
now = network.snapshots[10]
loading = network.lines_t.p0.loc[now] / network.lines.s_nom
congestion_threshold = 1
line_colors = np.where(abs(loading) > congestion_threshold, "red", "blue")
# Create the figure and the axis using Cartopy's PlateCarree projection
fig, ax = plt.subplots(figsize=(4, 4), subplot_kw={"projection": ccrs.PlateCarree()})
# Plot the network using the built-in network plot function
network.plot(
ax=ax,
line_colors=line_colors,
line_cmap=None,
title="Line Loading",
bus_sizes=5e-2, # Size of bus markers
bus_alpha=1, # Transparency of bus markers
)
# Adjust layout and display the plot
fig.tight_layout()
plt.show()
6. Comparison of Electricity Consumption and Clearing Prices: Use Case 1 vs Use Case 2#
In this section, we will compare the results of the electricity consumption profile and clearing prices between Use Case 1 (without flexibility) and Use Case 2 (with flexibility). This comparison will provide insights into how enabling flexibility affects the power consumption and market-clearing prices for the steel plant.
Electricity Consumption Profile Comparison
We first compare the electricity consumption profile of the steel plant (A360) for both use cases. The goal is to observe the changes in the plant’s consumption when flexibility is enabled.
[ ]:
# ---- Load the CSV files for both use cases ----
# Use Case 1
uc1_orders = pd.read_csv("outputs/steel_plant_market_orders_use_case_1.csv")
# Use Case 2
uc2_orders = pd.read_csv("outputs/steel_plant_market_orders_use_case_2.csv")
# Convert 'time' column to datetime for both use cases
uc1_orders["time"] = pd.to_datetime(uc1_orders["time"])
uc2_orders["time"] = pd.to_datetime(uc2_orders["time"])
# Make the volume values positive for both use cases
uc1_orders["volume"] = uc1_orders["volume"].abs()
uc2_orders["volume"] = uc2_orders["volume"].abs()
# ---- Plotting Comparison ----
# Use Seaborn to set plot style
sns.set(style="whitegrid")
# Create the plot for comparison with Seaborn and Matplotlib
plt.figure(figsize=(10, 6))
# Plot Use Case 1 data (solid line)
sns.lineplot(
x="time",
y="volume",
data=uc1_orders,
label="Power Consumption - Use Case 1",
color="b",
linestyle="-", # Solid line for Use Case 1
)
# Plot Use Case 2 data (dotted line)
sns.lineplot(
x="time",
y="volume",
data=uc2_orders,
label="Power Consumption - Use Case 2 (Flexibility)",
color="g",
linestyle="--", # Dotted line for Use Case 2
)
# Set plot title and labels
plt.title(
"Comparison of Electricity Consumption Profiles of the Steel Plant (A360)",
fontsize=14,
)
plt.xlabel("Time", fontsize=12)
plt.ylabel("Power (MW)", fontsize=12)
# Reduce the number of x-axis ticks to avoid crowding
plt.gca().xaxis.set_major_locator(plt.MaxNLocator(nbins=10))
# Rotate x-axis ticks for better readability
plt.xticks(rotation=45)
# Add a grid for better readability
plt.grid(True)
# Save the comparison plot as a PNG file
plt.savefig(
"outputs/tutorial_09_steel_plant_Day_Ahead/steel_plant_comparison_consumption.png"
)
# Show the plot
plt.tight_layout() # Adjust layout to avoid clipping of labels
plt.show()
Explanation
Electricity Consumption Comparison: - Use Case 1 (without flexibility) shows how the steel plant consumes electricity under normal operation, without adjusting its load based on market prices. - Use Case 2 (with flexibility) illustrates how the plant modifies its consumption profile by shifting loads, optimizing its costs by reacting to price signals. - The plot allows us to compare the impact of enabling flexibility on the steel plant’s electricity consumption, revealing how the plant adapts its load to market conditions.
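Beyond the visual comparison, a couple of aggregate numbers help quantify the shift. The snippet below is a suggested addition; it assumes both saved CSVs contain the volume and price columns written earlier in the tutorial.
[ ]:
# Suggested aggregate comparison: total consumed energy and
# volume-weighted average price per use case
for label, df in [("Use Case 1", uc1_orders), ("Use Case 2", uc2_orders)]:
    energy = df["volume"].sum()
    wavg_price = (df["volume"] * df["price"]).sum() / energy if energy else float("nan")
    print(f"{label}: {energy:.1f} MWh at {wavg_price:.2f} EUR/MWh (volume-weighted)")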
[ ]:
# Load the saved clearing price data for both use cases
df_price_uc1 = pd.read_csv("outputs/clearing_price_use_case_1.csv")
df_price_uc2 = pd.read_csv("outputs/clearing_price_use_case_2.csv")
# Convert 'time' to datetime for both datasets
df_price_uc1["time"] = pd.to_datetime(df_price_uc1["time"])
df_price_uc2["time"] = pd.to_datetime(df_price_uc2["time"])
# Plot both Use Case 1 and Use Case 2 clearing prices on the same plot
plt.figure(figsize=(10, 6))
# Plot Use Case 1
plt.plot(
df_price_uc1["time"],
df_price_uc1["price"],
label="Use Case 1 - Clearing Price",
color="b",
)
# Plot Use Case 2 (with flexibility)
plt.plot(
df_price_uc2["time"],
df_price_uc2["price"],
label="Use Case 2 - Clearing Price (Flexibility)",
color="g",
)
# Set plot title and labels
plt.title("Clearing Price Comparison: Use Case 1 vs Use Case 2", fontsize=14)
plt.xlabel("Time", fontsize=12)
plt.ylabel("Price (EUR/MWh)", fontsize=12)
# Rotate x-axis ticks for better readability
plt.xticks(rotation=45)
# Add a grid for better readability
plt.grid(True)
# Add a legend to differentiate between Use Case 1 and Use Case 2
plt.legend()
# Save the comparison plot as a PNG file
plt.savefig("outputs/tutorial_09_steel_plant_Day_Ahead/clearing_price_comparison.png")
# Show the plot
plt.tight_layout() # Adjust layout to avoid clipping of labels
plt.show()
Clearing Price Comparison: - Use Case 1 (without flexibility) represents the market-clearing price under normal conditions, where the steel plant follows its original consumption pattern. - Use Case 2 (with flexibility) reflects how enabling flexibility in the plant’s operation can affect market-clearing prices, particularly during periods when the plant shifts its demand. - The plot provides a clear comparison between the two use cases, showing how flexibility can reduce or increase price volatility based on the plant’s adjusted demand patterns.
This comparison provides valuable insights into the benefits of demand-side flexibility in electricity markets, helping us understand how plants like the steel plant can optimize costs and influence market dynamics through load shifting.
Use Case 3: Redispatch#
In this use case, we introduce a Redispatch market simulation, where power plants participate in both the Energy Only Market (EOM) and the Redispatch market.
Redispatch is a crucial mechanism for ensuring grid stability by managing network congestion. Once the day-ahead market has cleared and the initial dispatch is known, redispatch is used to adjust the power generation to ensure the safe operation of the electricity grid.
In this simulation: - Power plants first participate in the day-ahead market (EOM) to optimize their electricity production. - After the day-ahead market clears, a redispatch market is executed to manage network constraints and ensure that electricity flows in a safe and reliable manner. - The simulation includes power plants bidding into both the EOM and Redispatch markets, allowing us to observe how the redispatch mechanism adjusts generation based on network conditions.
This use case will cover: 1. Creating a new market configuration that includes both the day-ahead market (EOM) and the Redispatch market. 2. Simulating power plants participating in both markets. 3. Analyzing the results of redispatch on power plants and grid stability.
This use case demonstrates how the Redispatch market ensures grid stability by adjusting generation after the day-ahead market clears, ensuring that grid constraints are respected and energy is delivered reliably.
1. Prepare Input Files#
We’ll reuse the DataFrames for powerplant units, demand units, and demand_df that were already created in Use Case 1. These files will be saved in the inputs/tutorial_10 folder for Use Case 3.
[ ]:
# Define the input directory
input_dir = "inputs/tutorial_10"
# Create the directory if it doesn't exist
os.makedirs(input_dir, exist_ok=True)
# ---- Power Plant Units ----
# Load the powerplant_units DataFrame from Use Case 1
powerplant_units = pd.read_csv("inputs/tutorial_09/powerplant_units.csv")
# Add the new 'bidding_redispatch' column based on the 'technology' column
# For simplicity, we're assigning 'naive_redispatch' for all rows (can be customized)
powerplant_units["bidding_redispatch"] = "naive_redispatch"
# Save it to the new directory for Use Case 3
powerplant_units.to_csv(os.path.join(input_dir, "powerplant_units.csv"), index=False)
# ---- Demand Units ----
# Load the demand_units DataFrame from Use Case 1
demand_units = pd.read_csv("inputs/tutorial_09/demand_units.csv")
demand_units["bidding_redispatch"] = "naive_redispatch"
# Save it to the new directory for Use Case 3
demand_units.to_csv(os.path.join(input_dir, "demand_units.csv"), index=False)
# ---- Buses ----
# Load the Buses DataFrame from Use Case 1
buses = pd.read_csv("inputs/tutorial_09/buses.csv")
# Save it to the new directory for Use Case 3
buses.to_csv(os.path.join(input_dir, "buses.csv"), index=False)
# ---- Lines ----
# Load the Lines DataFrame from Use Case 1
lines = pd.read_csv("inputs/tutorial_09/lines.csv")
# Save it to the new directory for Use Case 3
lines.to_csv(os.path.join(input_dir, "lines.csv"), index=False)
# ---- Demand Data (demand_df) ----
# Load the demand_df DataFrame from Use Case 1
demand_df = pd.read_csv("inputs/tutorial_09/demand_df.csv")
# Save it to the new directory for Use Case 3
demand_df.to_csv(os.path.join(input_dir, "demand_df.csv"), index=False)
# ---- Forecasts DataFrame ----
# Load the forecasts_df DataFrame from Use Case 1
forecasts_df = pd.read_csv("inputs/tutorial_09/forecasts_df.csv")
# Save it to the new directory for Use Case 3
forecasts_df.to_csv(os.path.join(input_dir, "forecasts_df.csv"), index=False)
# ---- Fuel Prices DataFrame ----
# Load the fuel_prices_df DataFrame from Use Case 1
fuel_prices_df = pd.read_csv("inputs/tutorial_09/fuel_prices_df.csv")
# Save it to the new directory for Use Case 3
fuel_prices_df.to_csv(os.path.join(input_dir, "fuel_prices_df.csv"), index=False)
# ---- Fuel Prices (second copy) ----
# Save the same fuel price data again under the file name fuel_prices.csv
fuel_prices = pd.read_csv("inputs/tutorial_09/fuel_prices_df.csv")
# Save it to the new directory for Use Case 3
fuel_prices.to_csv(os.path.join(input_dir, "fuel_prices.csv"), index=False)
print(
f"Input CSV files for powerplant units, demand units, and demand_df have been saved to '{input_dir}'."
)
2. Update Industrial DSM Unit#
[ ]:
# Load industrial_dsm_units CSV
industrial_dsm_units = pd.read_csv("inputs/tutorial_09/industrial_dsm_units.csv")
# Update the 'flexibility_measure'
industrial_dsm_units.loc[
industrial_dsm_units["technology"] == "electrolyser", "flexibility_measure"
] = "None"
# Update the 'bidding strategy'
industrial_dsm_units.loc[
industrial_dsm_units["technology"] == "electrolyser", "bidding_EOM"
] = "naive_da_steelplant"
# Save the updated industrial_dsm_units CSV to the new folder
industrial_dsm_units.to_csv(
os.path.join(input_dir, "industrial_dsm_units.csv"), index=False
)
print(f"Updated 'industrial_dsm_units.csv' saved to '{input_dir}'.")
3. Creating a New Market Configuration with Redispatch#
Below is the configuration where we define both the EOM (Energy Only Market) and Redispatch markets.
[ ]:
# Define the configuration dictionary
config = {
"redispatch_case": {
"start_date": "2019-01-01 00:00",
"end_date": "2019-01-03 23:00",
"time_step": "1h",
"save_frequency_hours": 24,
"markets_config": {
"EOM": {
"start_date": "2019-01-01 00:00",
"operator": "EOM_operator",
"product_type": "energy",
"products": [
{
"duration": "1h", # Each product lasts for 1 hour
"count": 24, # Number of products per day (24 hours)
"first_delivery": "24h", # First delivery is 1 hour after the market opens
}
],
"opening_frequency": "24h", # Market opens once every 24 hours
"opening_duration": "20h", # Market stays open for 1 hour
"volume_unit": "MWh", # Market volume is measured in MWh
"maximum_bid_volume": 100000, # Maximum bid volume allowed
"maximum_bid_price": 3000, # Maximum allowed bid price
"minimum_bid_price": -500, # Minimum allowed bid price
"price_unit": "EUR/MWh", # Bid price unit is EUR per MWh
"market_mechanism": "pay_as_clear", # Pay-as-clear mechanism
},
"redispatch": {
"start_date": "2019-01-01 21:00",
"operator": "network_operator",
"product_type": "energy",
"products": [
{
"duration": "1h",
"count": 24,
"first_delivery": "3h",
}
],
"opening_frequency": "24h",
"opening_duration": "2h",
"volume_unit": "MWh",
"maximum_bid_volume": 100000,
"maximum_bid_price": 3000,
"minimum_bid_price": -500,
"price_unit": "EUR/MWh",
"market_mechanism": "redispatch",
"additional_fields": ["node", "min_power", "max_power"],
"param_dict": {
"network_path": ".",
"solver": "gurobi",
"payment_mechanism": "pay_as_bid",
"backup_marginal_cost": 10000,
},
},
},
}
}
# Define the path for the config file
input_dir = "inputs/tutorial_10"
config_path = os.path.join(input_dir, "config.yaml")
# Create the input directory if it doesn't exist
os.makedirs(input_dir, exist_ok=True)
# Save the configuration to a YAML file
with open(config_path, "w") as file:
yaml.dump(config, file, sort_keys=False)
print(f"Configuration YAML file has been saved to '{config_path}'.")
4. Setting up a New Simulation with Redispatch#
Now, we will set up the new simulation that includes the Redispatch market.
[ ]:
# Define paths for input and output data
csv_path = "outputs"
os.makedirs(csv_path, exist_ok=True)
# Database connection for the simulation
db_uri = "sqlite:///local_db/assume_db.db"
# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)
# Load the scenario with Redispatch setup
load_scenario_folder(
world,
inputs_path="inputs",
scenario="tutorial_10", # Folder where our scenario inputs are located
study_case="redispatch_case", # Config for Redispatch we defined earlier
)
# Run the simulation
world.run()
print("Simulation with redipatch completed.")
5. Setting up a grid network with Infrastructure for Redispatch#
Now we read the dispatch results of the redispatch simulation, extract the redispatch volumes, and prepare them for the PyPSA network.
[ ]:
market_dispatch_total = pd.read_csv(
"outputs/tutorial_10_redispatch_case/market_dispatch.csv", parse_dates=True
)
market_dispatch_total.head()
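As an optional step, it is worth confirming that the dispatch file contains records from both markets before filtering:
[ ]:
# Optional check: which markets produced dispatch records; both the EOM
# and the redispatch market should appear here
print(market_dispatch_total["market_id"].unique())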
[ ]:
# cleaning dispatch data from the simulation to feed it to the PyPSA network
redispatches = market_dispatch_total[
market_dispatch_total["market_id"].str.contains("redispatch", case=False, na=False)
]
redisp_p_set = redispatches.pivot(index="datetime", columns="unit_id", values="power")
redisp_p_set = redisp_p_set.drop(columns=["demand_EOM1", "demand_EOM2"])
redisp_p_set = redisp_p_set.rename_axis("snapshot", axis="index")
redisp_p_set.index = pd.to_datetime(redisp_p_set.index)
redisp_p_set.index = redisp_p_set.index - pd.Timedelta(hours=1)
redisp_p_set = redisp_p_set.fillna(0)
redisp_p_set.sort_index(inplace=True)
redisp_p_set.head(3)
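Redispatch is meant to shift generation between units rather than change the overall balance. Assuming upward and downward adjustments carry opposite signs in the results, the per-snapshot net of all redispatch volumes should therefore be close to zero; the following quick check is a suggested addition:
[ ]:
# Suggested check: net redispatch per snapshot (assumes upward and
# downward adjustments carry opposite signs); values near zero mean
# redispatch only shifts generation between units
print(redisp_p_set.sum(axis=1).describe())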
[ ]:
# redisp_p_set was already prepared in the previous cell;
# here we only plot the redispatch volumes per unit over time
# Plotting the power values for each energy source over time
plt.figure(figsize=(9, 5))
# Plot each energy source's power over time
for column in redisp_p_set.columns:  # 'datetime' is the index, so every column is a unit
    plt.plot(redisp_p_set.index, redisp_p_set[column], label=column)
# Add labels and title
plt.title("Redispatch Power Output Over Time by Energy Source", fontsize=16)
plt.xlabel("Datetime", fontsize=14)
plt.ylabel("Power (MW)", fontsize=14)
plt.xticks(rotation=45)
plt.grid(True)
# Add legend
plt.legend(title="Energy Source", bbox_to_anchor=(1.05, 1), loc="upper left")
# Adjust layout to avoid overlap
plt.tight_layout()
# Display the chart
plt.show()