-
I think it's difficult to read only part of a model. Maybe the speed of reading a model can be improved, though. The code below measures how much time asttokens spends parsing all the source files in a model:

```python
import time
from pathlib import Path

import asttokens


def read_init_files(directory):
    init_files_content = {}  # Dictionary to store file paths and their contents
    # Convert the input directory to a Path object
    directory = Path(directory)
    # Walk through the directory structure
    for init_file_path in directory.rglob('__init__.py'):
        try:
            # Read the contents of the __init__.py file
            init_files_content[init_file_path] = txt = init_file_path.read_text()
            atok = asttokens.ASTTokens(txt, parse=True)
            for stmt in atok.tree.body:
                atok.get_text(stmt)  # dummy operation to force token marking
        except Exception as e:
            print(f"Error reading file {init_file_path}: {e}")
    return init_files_content


if __name__ == "__main__":
    directory = input("Enter the model directory path to scan: ")
    start_time = time.time()
    init_files_content = read_init_files(directory)
    end_time = time.time()
    print(f"Script run time: {end_time - start_time:.4f} seconds")
    # for file_path, content in init_files_content.items():
    #     print(f"{file_path}: {len(content)} characters")
```
-
I think that by optimizing modelx's code, your model could be loaded in 120-150 seconds. If you only want to change the formula of a single Cells, you can just edit it in the model's source files.
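For instance, if I understand modelx's API correctly, a single formula can also be replaced in an already-loaded model by assigning to the Cells' `formula` property, without rebuilding anything else. The model path, space, and cells names below are placeholders:

```python
import modelx as mx

# Load the saved model (path is a placeholder)
model = mx.read_model("model")

# Assigning a new source string (or function) to Cells.formula
# replaces just that one formula
model.spaces["Space1"].cells["premium"].formula = (
    "def premium(x): return rate(x) * sum_assured(x)"
)

model.write("model")  # save the model back to disk
```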
-
In #153, I mentioned that reading a model consumes quite a long time and a large amount of memory (almost 7 GB of memory and 600 seconds to open the model).
So I measured the memory and time spent reading the data.
The modules that consumed the most memory were:

```
C:\Users\LG\AppData\Local\Programs\Python\Python312\Lib\tokenize.py:537
C:\Users\LG\AppData\Local\Programs\Python\Python312\Lib\ast.py
C:\Users\LG\AppData\Local\Programs\Python\Python312\Lib\site-packages\asttokens\line_numbers.py
```

The modules that consumed the most time were:

```
C:\Users\LG\AppData\Local\Programs\Python\Python312\Lib\site-packages\asttokens\mark_tokens.py
C:\Users\LG\AppData\Local\Programs\Python\Python312\Lib\site-packages\asttokens\asttokens.py
```
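For reference, per-line statistics like the ones above can be reproduced with the standard library's `tracemalloc` and `cProfile` modules. A minimal sketch, assuming `read_init_files` and `directory` are defined as in the first comment's script:

```python
import cProfile
import tracemalloc

# Trace allocations while profiling the read loop
tracemalloc.start()
cProfile.run("read_init_files(directory)", sort="cumtime")

# Top memory-allocating source lines, similar to the list above
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```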
I didn't use external data saved as pickle files.
It seems that if a model contains a lot of complex functions, a lot of resources are consumed converting the source code to a byte stream and tokenizing it (I'm not sure, though).
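To check that suspicion, the tokenizing pass can be timed separately from plain parsing: `ast.parse` builds the tree alone, while `asttokens.ASTTokens(..., parse=True)` additionally runs Python's tokenizer and marks every node. A small sketch:

```python
import time
import ast
from pathlib import Path

import asttokens

src = Path("model/__init__.py").read_text()  # any large source file

t0 = time.time()
ast.parse(src)
t1 = time.time()
asttokens.ASTTokens(src, parse=True)  # parse + tokenize + mark tokens
t2 = time.time()

print(f"ast.parse only:       {t1 - t0:.4f} s")
print(f"asttokens full pass:  {t2 - t1:.4f} s")
```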
Thinking about actual practice, for maintenance purposes we open specific spaces, not all the spaces in the model, at the same time.
So I think it would be great if we could open specific spaces, work on them, and save them.
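As a rough illustration of what partial loading could look like, a loader might register every space's source file up front (which is cheap) but defer the expensive asttokens pass until a space is actually accessed. All names here are hypothetical; modelx has no such option as far as I know:

```python
from pathlib import Path

import asttokens


class LazySpaceSource:
    """Hypothetical sketch: parse a space's source only on first access."""

    def __init__(self, init_file_path):
        self.path = Path(init_file_path)
        self._atok = None  # parsed lazily

    @property
    def atok(self):
        if self._atok is None:
            # Expensive tokenize/mark-tokens pass, deferred until needed
            self._atok = asttokens.ASTTokens(self.path.read_text(), parse=True)
        return self._atok


# Registering all spaces is cheap; only an accessed space pays the parse cost.
# In a saved modelx model, each space directory holds an __init__.py.
spaces = {p.parent.name: LazySpaceSource(p)
          for p in Path("model").rglob("__init__.py")}
# spaces["Space1"].atok  # parsed here, on first use
```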