File management

Good day folks,

I'm looking for a way to store data files, i.e., plots, tables, etc., in one "master folder" and be able to call said files from an external experiment file.

First, welcome to the forum.

Data and their representation (plots, tables etc.) are two very different things in Igor. What is the use case? I used to have a 'master file' with all the data (from a month etc.) sorted and labeled in an experiment file, which I then could draw from for further processing. What do you want to do?

You can also draw data from any other experiment via the Browse Experiment function. If you want to drag in plots or tables (which, again, are just a representation of said data) it gets a bit more difficult, but there are options too.

Execute this for background information:

DisplayHelpTopic "Experiments, Files and Folders"

Also, if you have not already done it, do the first half of the Igor Guided Tour. Choose Help->Getting Started.

You do not clarify where you need the file management: at the OS level, or within Igor Pro itself? Reading between the lines, it seems that you want one master Igor Pro experiment that serves as a database for all other Igor Pro experiments. Alternatively, you want a method to segregate one Igor Pro experiment into different types of analysis approaches.

Please give a better description, perhaps with a specific example.

I have a program that requires generating 5 or more new data sets a day (5 new sets of wave files). These waves then need to be analyzed and compared to each other, as well as compared to historical data. In order to make it simpler to generate these comparative plots, I have been keeping all waves (new data and all historical data) in the same experiment file along with the comparative plots. This means I'm working with an experiment file that contains 100-200 waves and graphs, which is extremely memory intensive to load and then manipulate. What is the preferred Igor method for handling this type of data workflow? I simply need a way to access new data and then compare with old data (though I can't specify ahead of time which previous data sets will be most relevant for comparison).

What I had envisioned is having one master Igor Pro experiment file that contains waves (or potentially waves and graphs). Then I could create sub-experiment files that would call waves (or, even better, just the graphs) from the master Igor experiment file to do a comparative analysis. I'm unsure whether that master file should have only the waves, or whether it's better to have the graphs also created there. The main point is to avoid generating and working with such a large, single experiment file that contains every bit of analysis that has been done.

Load all your data into the master file and preferably sort them into data folders (e.g., according to measurement date). I would not recommend creating graphs there, since it is not easy to transfer graphs between experiment files. Then, to compare data (i.e., in a graph), open a new experiment file and use the Browse Experiment function in the Data Browser to access your master file. Load the relevant waves and create the plots in the new experiment. How big is your data? Unless it is gigabytes, it should be no problem to have hundreds of waves in one experiment file.
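If you end up doing this often, the wave transfer can also be scripted: LoadData can read waves directly out of a packed experiment file. A minimal sketch — the file path and wave names below are made-up placeholders:

```igor
// Copy two named waves from the master experiment into the current one,
// keeping the copies in their own data folder.
NewDataFolder/O/S root:comparison2023_10_13
LoadData/O/Q/J="signal2023_10_12;signal2023_10_13" "HD:Data:Master.pxp"
SetDataFolder root:
```

By default LoadData loads into the current data folder, so the NewDataFolder/S call above determines where the copies land.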

Thank you for the details; they help in appreciating what you are doing. The best approach might depend on what you need while and after you process the daily inputs.

Case 1

You might need just one summary report from the daily input. This might be one average wave over all inputs, or one set of average + standard uncertainty values from analysis of all input waves. I would create a template to process and store each daily input set in its own experiment, with the experiment named by date stamp (e.g. DailyExperiment2023-10-13). The processing would generate the comparative analysis only within that data set. It would store the summary in a data folder called summary. I would have a second experiment (e.g. SummaryAnalysis) to process all summaries. The workflow would be:

a) open the next DailyExperiment,
b) load the daily inputs,
c) process,
d) open the SummaryAnalysis experiment,
e) browse to the DailyExperiment for the given day,
f) copy the summary data folder from the DailyExperiment into SummaryAnalysis,
g) rename the newly copied folder by the date (e.g. summary2023-10-13), and
h) continue processing over all data with the new data summary.
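Steps e) through g) can also be done from the command line rather than by dragging in the Data Browser. A hedged sketch, assuming LoadData's /S flag to pull just the summary subfolder out of the daily file; the path and names are examples:

```igor
// From within SummaryAnalysis: load the "summary" data folder out of the
// daily experiment, then rename it by date.
LoadData/O/Q/R/S="summary" "HD:Experiments:DailyExperiment2023-10-13.pxp"
RenameDataFolder root:summary, summary2023_10_13   // underscores keep a standard folder name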

Case 2

You might need to process each daily input on its own and also include ALL of the daily inputs in a global processing scheme. Here I would stay within one experiment. I would store daily inputs in their own data folders (e.g. dailyinput2023-10-13), process within the daily folder, and store the summary processing in its own data folder (e.g. summary). I would limit the size of the master experiment by a time frame (weekly, monthly, or quarterly).
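Creating the dated daily folder can be automated so the name never has to be typed. A small sketch; the naming convention is just one possibility:

```igor
// Create (or reuse) today's data folder, e.g. root:dailyinput2023_10_13,
// and make it the current folder. Dashes are replaced because standard
// Igor object names cannot contain "-".
Function GoToTodaysFolder()
	String df = "dailyinput" + ReplaceString("-", Secs2Date(DateTime, -2), "_")
	NewDataFolder/O/S $("root:" + df)
	// ... load and process today's waves here ...
End
```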

Instead of loading data into a "master" Igor experiment, consider writing code in Igor to create a control panel (GUI) that lets the user load data from disk files, process as needed, and create plots/tables. In addition, this code could produce summary reports. You only need to decide how to organize the data on the computer drive.
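As a rough illustration of that idea, here is a minimal panel with a single button that prompts for a file and loads it. All names are hypothetical, and LoadWave/G assumes general text data files:

```igor
Function MakeLoaderPanel()
	NewPanel/N=DataLoader/W=(100,100,320,180)
	Button loadBtn, pos={40,25}, size={140,30}, title="Load Data", proc=LoadBtnProc
End

Function LoadBtnProc(ba) : ButtonControl
	STRUCT WMButtonAction &ba
	if (ba.eventCode == 2)       // mouse up on the button
		LoadWave/G/D/O           // general text load; no file name, so Igor shows a dialog
	endif
	return 0
End
```

A real panel would add controls for the processing and plotting steps, but the pattern is the same.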

It's difficult to give definitive advice without knowing more about your workflow over time.  If you've programmed before, it will not be difficult to pick up Igor programming.  It will take some time and effort to learn the syntax and explore the breadth of available functions.  There are many very skilled Igor coders on this list who generously answer questions.

The description does not state where the data originate.

If these data are created by Igor itself, then saving them inside the Igor experiment (properly backed up), organized into data folders, is likely the easiest solution. Depending on the size of the waves, 100-200 waves is trivial, provided they are organized in folders. Some of my Igor experiments have thousands of waves and are nearly a gigabyte in size, and Igor runs perfectly fine.

If these data are generated by something else (e.g., a Python program, a measurement device, etc.), then storing the data outside Igor in some kind of structure is more convenient. The structure could be a disk folder tree, an HDF5 file, or anything else properly organized that Igor can easily read. Your Igor experiment would then interface with the external storage and fetch data only when needed.
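For the HDF5 route, Igor can read individual datasets on demand via the HDF5 operations shipped with Igor (built in to recent versions; available as the HDF5 XOP in older ones). A sketch with made-up file and dataset names:

```igor
// Pull one dataset out of an external HDF5 file into a wave of the same name.
Function LoadOneDataset()
	Variable fileID
	HDF5OpenFile/R fileID as "C:Data:measurements.h5"   // /R = read-only
	HDF5LoadData/O/Q fileID, "run_2023_10_13"
	HDF5CloseFile fileID
End
```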

I personally prefer data in non-app-specific structures, i.e., outside Igor, so that if someone else needs access to them they can use other tools. But the optimum choice depends on your specific case.

In Igor 9 you can have the best of both worlds if you generate data in Igor and save it as an HDF5 Igor experiment (extension .h5xp). That is an HDF5 file, readable by other tools (e.g., Python), so your data are open to the world without the need for Igor.