Note

You can download this example as a Jupyter notebook or try it out directly in Google Colab.

7. Interoperability and Input-Output#

This tutorial describes how ASSUME can be used to create market simulations from energy system simulations as well as from other market simulations like AMIRIS. A broad comparison with AMIRIS has been submitted to EEM2024.

It also shows how to create scenarios from different input formats and how to reuse existing scenarios with them.

As a whole, this tutorial covers the following:

  1. running a small scenario from a CSV folder with the CLI

  2. creating a small simulation from scratch, as shown in tutorial 01

  3. loading a scenario from an AMIRIS scenario.yaml

  4. loading a scenario from a PyPSA network

1. Scenario from CLI#

First, we need to install ASSUME:

[ ]:
# Install Pyomo, GLPK solver, and PyPSA
# This is needed only when running on Google Colab
# When running on your local machine, you can simply run pip install assume-framework[optimization]
!pip install pyomo
!apt-get install -y -qq glpk-utils
!pip install pypsa==0.30.3

!pip install assume-framework

If we are running in Google Colab, we first need to clone the ASSUME repository to access the tutorial data:

[ ]:
!git clone --depth=1 https://github.com/assume-framework/assume.git assume-repo

Now we can use the CLI script to run a simulation, with paths relative to the examples folder:

[ ]:
!cd assume-repo && assume -s example_01a -c tiny -db "sqlite:///local_db/assume_db.db"
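
To see all available command line options, you can ask the CLI for help (the --help flag is assumed here; it is the standard help option of the underlying CLI parser):

[ ]:
!assume --help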

Pro tip: with argcomplete, you can get very nice tab completion for Python scripts.

You just have to run eval "$(register-python-argcomplete assume)" once in your environment beforehand.

We did not use the PostgreSQL database, so we cannot use our visualization - let's fix this. You need to have PostgreSQL and Grafana installed (both are available through Docker).

[ ]:
!assume -s example_01a -c base -db "postgresql://assume:assume@localhost:5432/assume"

If you are running locally and have our Docker setup with the database and the Grafana dashboards installed, you can now look at the results here:

http://localhost:3000/?orgId=1&var-simulation=example_01a_base&from=1546300800000&to=1548892800000&refresh=5s

2. Run from a script to customize the scenario yourself#

This is a more advanced option, but it gives you full control over what we are doing here:

[ ]:
import logging
import os
from datetime import datetime, timedelta

import pandas as pd
from dateutil import rrule as rr

from assume import World
from assume.common.forecasts import NaiveForecast
from assume.common.market_objects import MarketConfig, MarketProduct

log = logging.getLogger(__name__)

os.makedirs("./local_db", exist_ok=True)

db_uri = "sqlite:///./local_db/assume_db.db"

world = World(database_uri=db_uri)

start = datetime(2023, 1, 1)
end = datetime(2023, 3, 31)
index = pd.date_range(
    start=start,
    end=end + timedelta(hours=24),
    freq="h",
)
sim_id = "world_script_simulation"

world.loop.run_until_complete(
    world.setup(
        start=start,
        end=end,
        save_frequency_hours=48,
        simulation_id=sim_id,
        index=index,
    )
)

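# a single energy-only market (EOM): opens every 24 hours and clears 24 hourly products with pay-as-clear pricing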
marketdesign = [
    MarketConfig(
        market_id="EOM",
        opening_hours=rr.rrule(rr.HOURLY, interval=24, dtstart=start, until=end),
        opening_duration=timedelta(hours=1),
        market_mechanism="pay_as_clear",
        market_products=[MarketProduct(timedelta(hours=1), 24, timedelta(hours=1))],
        additional_fields=["block_id", "link", "exclusive_id"],
    )
]

mo_id = "market_operator"
world.add_market_operator(id=mo_id)

for market_config in marketdesign:
    world.add_market(market_operator_id=mo_id, market_config=market_config)

world.add_unit_operator("demand_operator")

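# a constant demand forecast of 100 for every hour, used by the demand unit below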
demand_forecast = NaiveForecast(index, demand=100)

world.add_unit(
    id="demand_unit",
    unit_type="demand",
    unit_operator_id="demand_operator",
    unit_params={
        "min_power": 0,
        "max_power": 1000,
        "bidding_strategies": {"EOM": "naive_eom"},
        "technology": "demand",
    },
    forecaster=demand_forecast,
)

world.add_unit_operator("unit_operator")

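# a nuclear plant that is always available, with constant fuel and CO2 prices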
nuclear_forecast = NaiveForecast(index, availability=1, fuel_price=3, co2_price=0.1)

world.add_unit(
    id="nuclear_unit",
    unit_type="power_plant",
    unit_operator_id="unit_operator",
    unit_params={
        "min_power": 200,
        "max_power": 1000,
        "bidding_strategies": {"EOM": "naive_eom"},
        "technology": "nuclear",
    },
    forecaster=nuclear_forecast,
)

world.run()
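
After the run, the results are stored in the sqlite file configured above. As a minimal sketch for inspecting the output without Grafana, we can list the tables ASSUME created and read one of them with pandas (the exact table names depend on the ASSUME version, so we do not hard-code them here):

[ ]:
import pandas as pd
from sqlalchemy import create_engine, inspect

# connect to the sqlite file written by the simulation above
engine = create_engine("sqlite:///./local_db/assume_db.db")

# list the tables ASSUME wrote during the run
tables = inspect(engine).get_table_names()
print(tables)

# read one of them into a DataFrame for a quick look
if tables:
    print(pd.read_sql(f"SELECT * FROM {tables[0]} LIMIT 5", engine))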

3. Load AMIRIS scenario#

First, we need to download the examples repository from AMIRIS:

[ ]:
!cd .. && git clone https://gitlab.com/dlr-ve/esy/amiris/examples.git amiris-examples
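
Before running it, we can take a quick look at the downloaded scenario definition. This is just a sketch that prints the top-level sections of the scenario.yaml; it assumes the clone above succeeded and placed the examples repository next to the current working directory.

[ ]:
import yaml

# top-level sections of the AMIRIS scenario definition
with open("../amiris-examples/Simple/scenario.yaml") as f:
    amiris_scenario = yaml.safe_load(f)
print(list(amiris_scenario.keys()))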

Now that we have the repository in the right place, we can run the AMIRIS scenario:

[ ]:
from assume import World
from assume.scenario.loader_amiris import load_amiris_async

scenario = "Simple"  # Germany20{15-19}, Austria2019 or Simple
base_path = f"../amiris-examples/{scenario}/"

# make sure that you have a database server up and running - preferably in docker
# DB_URI = "postgresql://assume:assume@localhost:5432/assume"
# but you can use a file-based sqlite database too:
data_format = "local_db"  # "local_db" or "timescale"

if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"

world = World(database_uri=db_uri)
world.loop.run_until_complete(
    load_amiris_async(
        world,
        "amiris",
        scenario.lower(),
        base_path,
    )
)
print(f"did load {scenario} - now simulating")
world.run()

If you are running locally and have our Docker setup with the database and the Grafana dashboards installed, you can now look at the results here:

http://localhost:3000/d/mQ3Lvkr4k/assume3a-main-overview?orgId=1&var-simulation=amiris_simple&from=1609459200000&to=1609545600000&refresh=5s

4. Load PyPSA scenario#

[ ]:
from collections import defaultdict
from datetime import timedelta

import pypsa

# python-dateutil
from dateutil import rrule as rr

from assume import MarketConfig, MarketProduct, World
from assume.scenario.loader_pypsa import load_pypsa_async

# make sure that you have a database server up and running - preferably in docker
# DB_URI = "postgresql://assume:assume@localhost:5432/assume"
# but you can use a file-based sqlite database too:
data_format = "local_db"  # "local_db" or "timescale"

if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"


world = World(database_uri=db_uri)

scenario = "world_pypsa"
study_case = "ac_dc_meshed"
# "pay_as_clear", "redispatch" or "nodal"
market_mechanism = "pay_as_clear"

network = pypsa.examples.ac_dc_meshed(from_master=True)
# network = pypsa.examples.storage_hvdc(True)
# network = pypsa.examples.scigrid_de(True, from_master=True)

start = network.snapshots[0]
end = network.snapshots[-1]
marketdesign = [
    MarketConfig(
        "EOM",
        rr.rrule(rr.HOURLY, interval=1, dtstart=start, until=end),
        timedelta(hours=1),
        market_mechanism,
        [MarketProduct(timedelta(hours=1), 1, timedelta(hours=1))],
        additional_fields=["node", "max_power", "min_power"],
        maximum_bid_volume=1e9,
        maximum_bid_price=1e9,
    )
]
default_strategies = {
    mc.market_id: (
        "naive_redispatch" if mc.market_mechanism == "redispatch" else "naive_eom"
    )
    for mc in marketdesign
}

bidding_strategies = defaultdict(lambda: default_strategies)

world.loop.run_until_complete(
    load_pypsa_async(
        world, scenario, study_case, network, marketdesign, bidding_strategies
    )
)
world.run()
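
If you want to check what the example network contains, you can inspect it directly via the standard PyPSA component tables:

[ ]:
# overview of the PyPSA example network that was just simulated
print(network.buses.index.tolist())  # node names
print(network.generators[["bus", "p_nom", "marginal_cost"]])  # generation units
print(network.loads[["bus"]])  # demand units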

If you are running locally and have our Docker setup with the database and the Grafana dashboards installed, you can now look at the results here:

http://localhost:3000/d/nodalview/assume-nodal-view?orgId=1&var-simulation=world_pypsa_ac_dc_meshed&var-market=EOM

This also shows a visualization of the grid.

Conclusion#

In this tutorial, we have shown how different input formats can be used with ASSUME to create interoperability between different energy market simulations. The same loaders can also be used to import data from your own existing simulations created with one of these tools.