New user experience #916
Replies: 1 comment
@IsaiMaganTNO This is great feedback!
One way users can work is to do everything "by hand" in Excel and send CSVs to the model. This is safe and familiar, but not recommended for large analyses (poor analyst; hopefully they at least use scripts a little). What we want to promote instead is scripting to manipulate their data using DuckDB (preferably in a Pluto notebook).
To support this, @suvayu has been building functions in TulipaIO that should make it easier. The functions won't be "put this table in Tulipa format" but more like "merge these tables based on this column", "filter by this and replace with this other data", or "transform this table from wide to long". This is where we need your help: Suvayu has made some general functions he thinks are useful, but he needs feedback on what people actually want to do with their data. He's busy right now, but in the future (a month?) you'll be working together on this type of pipeline. We'll also work on this in the TLC next year, but it'd be nice to improve it before introducing users who are less comfortable with scripting. So keep that in mind while taking notes on your experience! :)
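To make the scripting idea concrete, here is a minimal sketch of that kind of data manipulation, written directly against DuckDB from Julia rather than through TulipaIO (whose actual API may differ). The file names, table names, and columns below are invented for illustration only.

```julia
# Minimal sketch: "merge these tables based on this column" and "filter by this",
# done with plain DuckDB SQL from Julia. All names here are hypothetical.
using DuckDB
using DataFrames

con = DBInterface.connect(DuckDB.DB, ":memory:")  # in-memory DuckDB database

# Load two hypothetical CSV files as DuckDB tables
DBInterface.execute(con, "CREATE TABLE assets     AS SELECT * FROM read_csv_auto('my_assets.csv')")
DBInterface.execute(con, "CREATE TABLE capacities AS SELECT * FROM read_csv_auto('my_capacities.csv')")

# Merge on a shared column and filter, materializing the result as a DataFrame
merged = DataFrame(DBInterface.execute(con, """
    SELECT a.name, a.type, c.capacity
    FROM assets AS a
    JOIN capacities AS c ON a.name = c.name
    WHERE a.type = 'producer'
"""))
```

In a Pluto notebook, each such query can live in its own cell, so the whole pipeline from raw CSVs to model input stays reproducible and re-runnable.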
-
Hi everyone,
Thanks for all the hard work you've been putting into TulipaEnergyModel. Here is a little reflection from my side on my first time using the model. With Diego's help, we are trying to run a case study on offshore bidding zone configurations. I started off with zero knowledge of the model, so I decided the best course of action would be to read through the documentation of the model and the file structure first. After a small hiccup in the installation (the newest DuckDB version wouldn't work), I could get started.
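For anyone hitting the same installation hiccup, one possible workaround is to request an older DuckDB.jl release explicitly in the project environment. This is just a sketch; the version string below is a placeholder, not a known-good version.

```julia
# Illustrative workaround if the latest DuckDB.jl release causes trouble:
# ask Pkg for an older version. The version string is a placeholder.
using Pkg
Pkg.add(name = "DuckDB", version = "0.10.3")
```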
After the setup, I proceeded to read the documentation and run the tutorials (Tiny/Norse). Here are a few points a beginner (like me) might have questions about (not that these things are right or wrong, just perhaps a bit counterintuitive):
File structure
Graph-assets-data.csv: What is the difference between the investment methods (simple/compact), and what does each entail?
Graph-flows-data.csv: This is supposed to represent flows (defined as a pair of assets), so why does it have columns with capacity information? I don't think this is transport capacity, for example. And why does it have economic information (economic lifetime/discount rate)? Is this purely for investments?
Assets-data.csv: Why have both year and commission_year? And why, for example, does this not provide information on building time (if that is considered), i.e. the time between investment and operation?
Also, how should unit_commitment_method be interpreted? As I understand it:
false (no unit commitment) -> missing
true (unit commitment) -> basic/advanced
So why not join these columns into a single one with the values false OR basic OR advanced?
Also, for consumer_balance_sense: why have the options empty, ==, or >=? If the default (empty) refers to ==, why not have just empty, with >= only when it happens to be different?
Additionally, is construction time included in investments?
Rep-periods-mapping.csv: It took a little sparring with Diego to understand how the weights should sum up, but in the end the description is clear!
Groups-data.csv: How do I find what is included in the different groups? I assumed it was in assets-data, but it's not; it turns out it is in graph-assets-data!
Assets-rep-periods-partitions.csv: no comment, very clear explanation! :)
Assets-timeframe-partitions.csv: This could do with an explanation like the one for assets-rep-periods-partitions, ideally with a small scheme or something to highlight the difference! It should still be understandable after reading the sections about the timeframe and representative periods, but an explicit comparison would be nice. For me, a chat with Diego cleared up my uncertainties.
Workflow
Currently, a lot of restructuring of the input data is needed to get it into the required input format. As this is an ongoing process, I will reflect on it further in due time, either here or in the separate discussion on workflow. Most importantly for now, I understand what the required structure is, and ideally Diego and I can start developing ideas on what a desirable data (structure) pipeline would look like. For example, how is information on variable renewable energy production profiles usually presented, and how could it follow a standardized transformation into the required model input structure? A rough sketch of one such transformation step is given below.
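As a starting point for that discussion, here is a hypothetical sketch of one such standardized step: reshaping a "wide" availability profile table (one column per asset) into a long format with one row per asset and time step, using DuckDB's UNPIVOT. The file and column names are invented and do not reflect any agreed Tulipa input format.

```julia
# Hypothetical wide-to-long reshaping of a VRE availability profile with DuckDB.
# File, table, and column names are placeholders, not the Tulipa input schema.
using DuckDB
using DataFrames

con = DBInterface.connect(DuckDB.DB, ":memory:")

# Wide table: one 'timestep' column plus one column per asset (e.g. wind_A, solar_B, ...)
DBInterface.execute(con, """
    CREATE TABLE wide_profiles AS
    SELECT * FROM read_csv_auto('vre_profiles_wide.csv')
""")

# Long table: one row per (asset, timestep) pair with its availability value
long_profiles = DataFrame(DBInterface.execute(con, """
    UNPIVOT wide_profiles
    ON COLUMNS(* EXCLUDE (timestep))
    INTO NAME asset VALUE availability
"""))
```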
General thoughts
Overall, I was very interested in working with the model, and as I started running one of the EU test cases together with Diego, I was also surprised by the speed of the model (and the free solver) in solving the problem. Handling the data is certainly not the most user-friendly part right now, but I am sure it will become easier in the future :).
As I said, not too many points for now, but as I work more with the model I will be sure to reflect again!
All the best,
Isaï