Commit

Merge pull request #474 from assume-framework/fix_spelling
Add codespell to pre-commit
maurerle authored Nov 10, 2024
2 parents d62d18a + c0bce15 commit 0aec637
Showing 51 changed files with 138 additions and 131 deletions.
6 changes: 6 additions & 0 deletions .pre-commit-config.yaml
@@ -24,3 +24,9 @@ repos:
       - id: end-of-file-fixer
       - id: trailing-whitespace
       - id: check-illegal-windows-names
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.3.0
+    hooks:
+      - id: codespell
+        types_or: [python, rst, markdown]
+        files: ^(assume|docs|tests)/
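With this hook in place, the same check can typically be reproduced locally with `pre-commit run codespell --all-files` (after `pre-commit install`); the `files` pattern limits the check to the assume, docs and tests trees.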
2 changes: 1 addition & 1 deletion CODE_OF_CONDUCT.md
@@ -11,7 +11,7 @@ SPDX-License-Identifier: AGPL-3.0-or-later
 We as members, contributors, and leaders pledge to make participation in our
 community a harassment-free experience for everyone, regardless of age, body
 size, visible or invisible disability, ethnicity, sex characteristics, gender
-identity and expression, level of experience, education, socio-economic status,
+identity and expression, level of experience, education, socioeconomic status,
 nationality, personal appearance, race, caste, color, religion, or sexual
 identity and orientation.
2 changes: 1 addition & 1 deletion Dockerfile
@@ -10,7 +10,7 @@ RUN mkdir /src
 WORKDIR /src
 COPY README.md pyproject.toml .
 #RUN python -m pip install --upgrade pip
-# thats needed to use create the requirements.txt only
+# that's needed to use create the requirements.txt only
 RUN pip install pip-tools
 RUN mkdir assume assume_cli
 RUN touch assume/__init__.py
2 changes: 1 addition & 1 deletion README.md
@@ -94,7 +94,7 @@ To ease your way into ASSUME we provided some examples and tutorials. The former

 ### The Tutorials

-The tutorials work completly detached from your own machine on google colab. They provide code snippets and task that show you, how you can work with the software package one your own. We have two tutorials prepared, one for introducing a new unit and one for getting reinforcement learning ready on ASSUME.
+The tutorials work completely detached from your own machine on google colab. They provide code snippets and task that show you, how you can work with the software package one your own. We have two tutorials prepared, one for introducing a new unit and one for getting reinforcement learning ready on ASSUME.

 How to configure a new unit in ASSUME?
 [![Open Learning Tutorial in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/assume-framework/assume/blob/main/examples/notebooks/03_custom_unit_example.ipynb)
16 changes: 8 additions & 8 deletions assume/common/base.py
@@ -417,12 +417,12 @@ def get_operation_time(self, start: datetime) -> int:
             # before start of index
             return max_time
         is_off = not arr.iloc[0]
-        runn = 0
+        run = 0
         for val in arr:
             if val == is_off:
                 break
-            runn += 1
-        return (-1) ** is_off * runn
+            run += 1
+        return (-1) ** is_off * run

     def get_average_operation_times(self, start: datetime) -> tuple[float, float]:
         """
@@ -448,15 +448,15 @@ def get_average_operation_times(self, start: datetime) -> tuple[float, float]:

         op_series = []
         status = arr.iloc[0]
-        runn = 0
+        run = 0
         for val in arr:
             if val == status:
-                runn += 1
+                run += 1
             else:
-                op_series.append(-((-1) ** status) * runn)
-                runn = 1
+                op_series.append(-((-1) ** status) * run)
+                run = 1
                 status = val
-        op_series.append(-((-1) ** status) * runn)
+        op_series.append(-((-1) ** status) * run)

         op_times = [operation for operation in op_series if operation > 0]
         if op_times == []:
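For reference, the renamed counter in both hunks implements a run-length count over the unit's on/off status series. A minimal, self-contained sketch of that counting logic on plain Python lists (hypothetical inputs, not code from this diff):

    # Count how long the unit has been in its current on/off state,
    # returning a negative count when the unit is currently off.
    def operation_time(status: list[bool]) -> int:
        is_off = not status[0]
        run = 0
        for val in status:
            if val == is_off:
                break
            run += 1
        return (-1) ** is_off * run

    print(operation_time([True, True, True, False]))  # 3 -> on for 3 steps
    print(operation_time([False, False, True]))        # -2 -> off for 2 steps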
4 changes: 2 additions & 2 deletions assume/common/forecasts.py
@@ -183,7 +183,7 @@ def calc_forecast_if_needed(self):
         Calculates the forecasts if they are not already calculated.

         This method calculates price forecast and residual load forecast for available markets, if
-        thise don't already exist.
+        these don't already exist.
         """

         cols = []
@@ -293,7 +293,7 @@ def calculate_market_price_forecast(self, market_id):
         # calculate infeed of renewables and residual demand_df
         # check if max_power is a series or a float

-        # select only those power plant units, which have a bidding strategy for the specifi market_id
+        # select only those power plant units, which have a bidding strategy for the specific market_id
         powerplants_units = self.powerplants_units[
             self.powerplants_units[f"bidding_{market_id}"].notnull()
         ]
2 changes: 1 addition & 1 deletion assume/common/grid_utils.py
@@ -256,7 +256,7 @@ def read_pypsa_grid(
 ):
     """
     Generates the pypsa grid from a grid dictionary.
-    Does not add the generators, as they are added in different ways, depending on wether redispatch is used.
+    Does not add the generators, as they are added in different ways, depending on whether redispatch is used.

     Args:
         network (pypsa.Network): the pypsa network to which the components will be added
8 changes: 4 additions & 4 deletions assume/common/outputs.py
@@ -91,10 +91,10 @@ def __init__(
         if self.episode == 0:
             self.del_similar_runs()

-        # contruct all timeframe under which hourly values are written to excel and db
+        # construct all timeframe under which hourly values are written to excel and db
         self.start = start
         self.end = end
-        # initalizes dfs for storing and writing asynchron
+        # initializes dfs for storing and writing asynchronous
         self.write_dfs: dict = defaultdict(list)
         self.locks = defaultdict(lambda: Lock())

@@ -422,7 +422,7 @@ def write_market_orders(self, market_orders: any, market_id: str):
             market_orders (any): The market orders.
             market_id (str): The id of the market.
         """
-        # check if market results list is empty and skip the funktion and raise a warning
+        # check if market results list is empty and skip the function and raise a warning
         if not market_orders:
             return

@@ -585,7 +585,7 @@ def write_flows(self, data: dict[tuple[datetime, str], float]):
         Args:
             data: The records to be put into the table. Formatted like, "(datetime, line), flow" if generated by pyomo or df if it comes from pypsa.
         """
-        # Daten in ein DataFrame umwandeln depending on the data format which differes when different solver are used
+        # Daten in ein DataFrame umwandeln depending on the data format which differs when different solver are used
         # transformation done here to avoid adapting format during clearing

         # if data is dataframe
4 changes: 2 additions & 2 deletions assume/common/units_operator.py
@@ -228,7 +228,7 @@ def handle_registration_feedback(
                     break
             if not found:
                 logger.error(
-                    "Market %s sent registation but is unknown", content["market_id"]
+                    "Market %s sent registration but is unknown", content["market_id"]
                 )
         else:
             logger.error("Market %s did not accept registration", meta["sender_id"])
@@ -386,7 +386,7 @@ async def submit_bids(self, opening: OpeningMessage, meta: MetaDict) -> None:
            meta (MetaDict): The meta data of the market.

        Note:
-            This function will accomodate the portfolio optimization in the future.
+            This function will accommodate the portfolio optimization in the future.
        """

        products = opening["products"]
8 changes: 4 additions & 4 deletions assume/common/utils.py
@@ -408,13 +408,13 @@ def separate_orders(orderbook: Orderbook):

 def get_products_index(orderbook: Orderbook) -> pd.DatetimeIndex:
     """
-    Creates an index containing all start times of orders in orderbook and all inbetween.
+    Creates an index containing all start times of orders in orderbook and all between.

     Args:
         orderbook (Orderbook): The orderbook.

     Returns:
-        pd.DatetimeIndex: The index containing all start times of orders in orderbook and all inbetween.
+        pd.DatetimeIndex: The index containing all start times of orders in orderbook and all between.
     """
     if orderbook == []:
         return []
@@ -503,7 +503,7 @@ def adjust_unit_operator_for_learning(
         unit_operator_id = "Operator-RL"
         logger.debug(
             "Your chosen unit operator %s for the learning unit %s was overwritten with 'Operator-RL', "
-            "since all learning units need to be handeled by one unit operator.",
+            "since all learning units need to be handled by one unit operator.",
             unit_operator_id,
             id,
         )
@@ -587,7 +587,7 @@ def rename_study_case(path: str, old_key: str, new_key: str):
     Args:
         path (str): The path to the config file.
-        old_key (str): The orginal name of the key without adjustments. E.g. study_case from available_examples: "base".
+        old_key (str): The original name of the key without adjustments. E.g. study_case from available_examples: "base".
         new_key (str): The name of the key with adjustments. E.g. added run number: "base_run_1".
     """

     # Read the YAML file
2 changes: 1 addition & 1 deletion assume/markets/clearing_algorithms/all_or_nothing.py
@@ -169,7 +169,7 @@ def clear(
             supply_orders[i]["accepted_volume"] = supply_orders[i]["volume"]
             demand_orders[i]["accepted_volume"] = demand_orders[i]["volume"]

-            # pay as bid - so the generator gets payed more than he needed to operate
+            # pay as bid - so the generator gets paid more than he needed to operate
             supply_orders[i]["accepted_price"] = demand_orders[i]["price"]
             demand_orders[i]["accepted_price"] = demand_orders[i]["price"]

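For context, the pay-as-bid comment above refers to the matched demand order's price being used as the accepted price for both sides. A small illustrative sketch with hypothetical numbers, not repository code:

    # The matched demand order's price becomes the accepted price for both orders,
    # so a supplier bidding its own cost can be paid more than it needed to operate.
    supply_order = {"price": 20.0, "volume": 100.0}   # generator bids its marginal cost
    demand_order = {"price": 50.0, "volume": -100.0}  # consumer bids its willingness to pay

    supply_order["accepted_volume"] = supply_order["volume"]
    demand_order["accepted_volume"] = demand_order["volume"]
    supply_order["accepted_price"] = demand_order["price"]
    demand_order["accepted_price"] = demand_order["price"]

    profit_margin = supply_order["accepted_price"] - supply_order["price"]
    print(profit_margin)  # 30.0 above the generator's own bid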
2 changes: 1 addition & 1 deletion assume/markets/clearing_algorithms/contracts.py
@@ -441,7 +441,7 @@ def swingcontract(
     end: datetime,
 ):
     """
-    The swing contract is used to provide a band in which one price is payed, while the second (higher) price is paid, when the band is left.
+    The swing contract is used to provide a band in which one price is paid, while the second (higher) price is paid, when the band is left.

     Args:
         contract (dict): the contract which is executed
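For context, a hedged sketch of the band pricing the docstring describes, with hypothetical prices and band limits rather than the repository implementation:

    # Inside the agreed band one price applies; once the band is left, a higher price applies.
    def swing_price(volume: float, band_low: float, band_high: float,
                    inside_price: float, outside_price: float) -> float:
        if band_low <= volume <= band_high:
            return inside_price
        return outside_price

    print(swing_price(80.0, 50.0, 100.0, 30.0, 45.0))   # 30.0 -> within the band
    print(swing_price(120.0, 50.0, 100.0, 30.0, 45.0))  # 45.0 -> band is left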
2 changes: 1 addition & 1 deletion assume/markets/clearing_algorithms/redispatch.py
@@ -180,7 +180,7 @@ def clear(
         # run linear powerflow
         redispatch_network.lpf()

-        # check lines for congestion where power flow is larget than s_nom
+        # check lines for congestion where power flow is larger than s_nom
         line_loading = (
             redispatch_network.lines_t.p0.abs() / redispatch_network.lines.s_nom
         )
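For reference, a short sketch of the congestion check described in the fixed comment, using hypothetical flows and line ratings rather than repository data: a line counts as congested when its absolute power flow exceeds its rating s_nom.

    import pandas as pd

    p0 = pd.DataFrame({"line1": [95.0, -120.0], "line2": [30.0, 40.0]})  # flows per snapshot
    s_nom = pd.Series({"line1": 100.0, "line2": 50.0})                   # thermal ratings

    line_loading = p0.abs() / s_nom
    congested = line_loading > 1.0
    print(congested)
    # line1 is congested in the second snapshot (120 > 100); line2 never is.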
2 changes: 1 addition & 1 deletion assume/reinforcement_learning/algorithms/matd3.py
@@ -403,7 +403,7 @@ def update_policy(self):
                 actor_target = self.learning_role.rl_strats[u_id].actor_target

                 if i % 100 == 0:
-                    # only update target netwroks every 100 steps, to have delayed network update
+                    # only update target networks every 100 steps, to have delayed network update
                     transitions = self.learning_role.buffer.sample(self.batch_size)
                     states = transitions.observations
                     actions = transitions.actions
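For context, the corrected comment refers to delayed target-network updates. A hedged sketch of one common way to do this, refreshing target parameters only every `delay` gradient steps with a soft (polyak) update; the helper and constants are hypothetical, not the repository implementation:

    import torch as th

    def maybe_update_target(step: int, net: th.nn.Module, target: th.nn.Module,
                            delay: int = 100, tau: float = 0.005) -> None:
        # Skip most steps; only refresh the target every `delay` gradient steps.
        if step % delay != 0:
            return
        with th.no_grad():
            for p, p_targ in zip(net.parameters(), target.parameters()):
                p_targ.mul_(1.0 - tau)   # p_targ = (1 - tau) * p_targ
                p_targ.add_(tau * p)     #          + tau * p

    actor = th.nn.Linear(4, 2)
    actor_target = th.nn.Linear(4, 2)
    for i in range(1, 301):
        maybe_update_target(i, actor, actor_target)  # only acts at i = 100, 200, 300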
2 changes: 1 addition & 1 deletion assume/reinforcement_learning/buffer.py
@@ -147,7 +147,7 @@ def add(

     def sample(self, batch_size: int) -> ReplayBufferSamples:
         """
-        Samples a randome batch of experiences from the replay buffer.
+        Samples a random batch of experiences from the replay buffer.

         Args:
             batch_size (int): The number of experiences to sample.
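For reference, a minimal sketch of drawing a uniform random batch from a fixed-size buffer, as the corrected docstring describes (hypothetical buffer layout, not the repository implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    observations = np.arange(1000).reshape(500, 2)  # 500 stored transitions, 2 features each
    buffer_size = 500

    def sample(batch_size: int) -> np.ndarray:
        # draw uniform random indices over the filled part of the buffer
        idx = rng.integers(0, buffer_size, size=batch_size)
        return observations[idx]

    batch = sample(32)
    print(batch.shape)  # (32, 2)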
6 changes: 3 additions & 3 deletions assume/reinforcement_learning/learning_role.py
@@ -71,8 +71,8 @@ def __init__(

         self.learning_rate = learning_config.get("learning_rate", 1e-4)

-        # if we do not have initital experience collected we will get an error as no samples are avaiable on the
-        # buffer from which we can draw exprience to adapt the strategy, hence we set it to minium one episode
+        # if we do not have initial experience collected we will get an error as no samples are available on the
+        # buffer from which we can draw experience to adapt the strategy, hence we set it to minimum one episode

         self.episodes_collecting_initial_experience = max(
             learning_config.get("episodes_collecting_initial_experience", 5), 1
@@ -312,7 +312,7 @@ def compare_and_save_policies(self, metrics: dict) -> bool:
                        f"New best policy saved, episode: {self.eval_episodes_done + 1}, {metric=}, value={value:.2f}"
                    )

-                # if we do not see any improvment in the last x evaluation runs we stop the training
+                # if we do not see any improvement in the last x evaluation runs we stop the training
                 if len(self.rl_eval[metric]) >= self.early_stopping_steps:
                     self.avg_rewards.append(
                         sum(self.rl_eval[metric][-self.early_stopping_steps :])
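For context, the corrected comment refers to early stopping when evaluation metrics stop improving. A hedged sketch of that idea with hypothetical thresholds, not the repository logic:

    # Stop when the average of the last `early_stopping_steps` evaluation values
    # no longer improves on the previous window by at least `min_improvement`.
    def should_stop(history: list[float], early_stopping_steps: int, min_improvement: float) -> bool:
        if len(history) < 2 * early_stopping_steps:
            return False
        recent = sum(history[-early_stopping_steps:]) / early_stopping_steps
        previous = sum(history[-2 * early_stopping_steps:-early_stopping_steps]) / early_stopping_steps
        return recent - previous < min_improvement

    rewards = [1.3, 1.31, 1.32, 1.32, 1.33, 1.33]
    print(should_stop(rewards, early_stopping_steps=3, min_improvement=0.05))  # True -> plateau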
2 changes: 1 addition & 1 deletion assume/scenario/loader_amiris.py
@@ -458,7 +458,7 @@ def load_amiris(
     """
     Loads an Amiris scenario.
     Markups and markdowns are handled by linearly interpolating the agents volume.
-    This mimicks the behavior of the way it is done in AMIRIS.
+    This mimics the behavior of the way it is done in AMIRIS.

     Args:
         world (World): the ASSUME world
10 changes: 5 additions & 5 deletions assume/scenario/loader_csv.py
@@ -548,7 +548,7 @@ def setup_world(
         scenario_data (dict): A dictionary containing the configuration and loaded files for the scenario and study case.
         study_case (str): The specific study case within the scenario to be loaded.
         perform_evaluation (bool, optional): A flag indicating whether evaluation should be performed. Defaults to False.
-        terminate_learning (bool, optional): An automatically set flag indicating that we terminated the learning process now, either because we reach the end of the episode itteration or because we triggered an early stopping.
+        terminate_learning (bool, optional): An automatically set flag indicating that we terminated the learning process now, either because we reach the end of the episode iteration or because we triggered an early stopping.
         episode (int, optional): The episode number for learning. Defaults to 0.
         eval_episode (int, optional): The episode number for evaluation. Defaults to 0.

@@ -695,7 +695,7 @@ def setup_world(
             units[op].extend(op_units)

     # if distributed_role is true - there is a manager available
-    # and we cann add each units_operator as a separate process
+    # and we can add each units_operator as a separate process
     if world.distributed_role is True:
         logger.info("Adding unit operators and units - with subprocesses")
         for op, op_units in units.items():
@@ -742,7 +742,7 @@ def load_scenario_folder(
         scenario (str): The name of the scenario to be loaded.
         study_case (str): The specific study case within the scenario to be loaded.
         perform_evaluation (bool, optional): A flag indicating whether evaluation should be performed. Defaults to False.
-        terminate_learning (bool, optional): An automatically set flag indicating that we terminated the learning process now, either because we reach the end of the episode itteration or because we triggered an early stopping.
+        terminate_learning (bool, optional): An automatically set flag indicating that we terminated the learning process now, either because we reach the end of the episode iteration or because we triggered an early stopping.
         episode (int, optional): The episode number for learning. Defaults to 0.
         eval_episode (int, optional): The episode number for evaluation. Defaults to 0.

@@ -880,7 +880,7 @@ def run_learning(
     world.learning_role.initialize_policy(actors_and_critics=actors_and_critics)
     world.output_role.del_similar_runs()

-    # check if we already stored policies for this simualtion
+    # check if we already stored policies for this simulation
     save_path = world.learning_config["trained_policies_save_path"]

     if Path(save_path).is_dir():
@@ -939,7 +939,7 @@ def run_learning(
         )

         # -----------------------------------------
-        # Give the newly initliazed learning role the needed information across episodes
+        # Give the newly initialized learning role the needed information across episodes
         world.learning_role.load_inter_episodic_data(inter_episodic_data)

         world.run()
4 changes: 2 additions & 2 deletions assume/scenario/oeds/infrastructure.py
@@ -440,9 +440,9 @@ def get_wind_turbines_in_area(self, area=520, wind_type="on_shore"):
         # If the response Dataframe is not empty set technical parameter
         if df.empty:
             return df
-        # all WEA with nan set hight to mean value
+        # all WEA with nan set height to mean value
         df["height"] = df["height"].fillna(df["height"].mean())
-        # all WEA with nan set hight to mean diameter
+        # all WEA with nan set height to mean diameter
         df["diameter"] = df["diameter"].fillna(df["diameter"].mean())
         # all WEA with na are on shore and not allocated to a sea cluster
         df["nordicSea"] = df["nordicSea"].astype(float).fillna(0)
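For reference, a short sketch of the mean imputation used in this hunk, on hypothetical turbine data rather than the OEDS query result:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "height": [100.0, np.nan, 140.0],
        "diameter": [np.nan, 110.0, 130.0],
    })
    # missing hub heights and rotor diameters are replaced with the column mean
    df["height"] = df["height"].fillna(df["height"].mean())        # NaN -> 120.0
    df["diameter"] = df["diameter"].fillna(df["diameter"].mean())  # NaN -> 120.0
    print(df)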
10 changes: 5 additions & 5 deletions assume/strategies/learning_advanced_orders.py
@@ -47,7 +47,7 @@ class RLAdvancedOrderStrategy(RLStrategy):
     If SB and linked orders (LB) are allowed, the strategy will use SB for the inflexible power and LB for the flexible power.
     If SB and block orders (BB) are allowed, the strategy will use BB for the inflexible power and SB for the flexible power.
     If all three order types (SB, BB, LB) are allowed, the strategy will use BB for the inflexible power
-    and LB for the flexible power, exept the inflexible power is 0,
+    and LB for the flexible power, except the inflexible power is 0,
     then it will use SB for the flexible power (as for VREs).
     """

@@ -133,7 +133,7 @@ def calculate_bids(
             bid_quantity_inflex = min_power[start]

         # 3.1 formulate the bids for Pmax - Pmin
-        # Pmin, the minium run capacity is the inflexible part of the bid, which should always be accepted
+        # Pmin, the minimum run capacity is the inflexible part of the bid, which should always be accepted

         if op_time <= -unit.min_down_time or op_time > 0:
             bid_quantity_flex = max_power[start] - bid_quantity_inflex
@@ -317,7 +317,7 @@ def create_observation(

         current_costs = unit.calculate_marginal_cost(start, current_volume)

-        # scale unit outpus
+        # scale unit outputs
         scaled_max_power = current_volume / scaling_factor_total_capacity
         scaled_marginal_cost = current_costs / scaling_factor_marginal_cost

@@ -342,7 +342,7 @@ def create_observation(
             ]
         )

-        # transfer arry to GPU for NN processing
+        # transfer array to GPU for NN processing
         observation = (
             th.tensor(observation, dtype=self.float_type)
             .to(self.device, non_blocking=True)
@@ -445,7 +445,7 @@ def calculate_reward(

         # ---------------------------
         # 4.1 Calculate Reward
-        # The straight forward implemntation would be reward = profit, yet we would like to give the agent more guidance
+        # The straight forward implementation would be reward = profit, yet we would like to give the agent more guidance
         # in the learning process, so we add a regret term to the reward, which is the opportunity cost
         # define the reward and scale it

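For context, the corrected comment in the last hunk describes shaping the reward as profit minus a regret (opportunity-cost) term. A hedged sketch with hypothetical scaling factors, not the repository formula:

    # Reward = profit minus a weighted regret term, then scaled to a range
    # that is easier for the learning agent to handle.
    def shaped_reward(profit: float, regret: float,
                      regret_scale: float = 0.2, reward_scale: float = 1e-4) -> float:
        return (profit - regret_scale * regret) * reward_scale

    print(shaped_reward(profit=50_000.0, regret=10_000.0))  # 4.8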