Blender on HPC (NASA Discover)

These notes are now specific to LIS-related 1 km runs.

Project folder: /discover/nobackup/projects/coressd

Everything below is relative to this main folder on Discover.

General structure of project directory on Discover

/discover/nobackup/projects/coressd
├── OSU
│   ├── MOD10A1F.061/MODIS_Proc
│   └── MOD10A1F.061/MODIS_Proc/download_snow
├── mwrzesie
│   └── runOL/out.OL/SURFACEMODEL
├── Blender
│   ├── Inputs
│   ├── Modis
│   └── Runs
├── Github
│   ├── verse
│   ├── Slurm_Blender
│   ├── slurm_jobs
│   └── wshed
└── installs
    ├── miniconda3
    ├── julia-1-11.4
    └── .julia

Important subfolder paths and descriptions

OSU: MODIS snow-cover data is downloaded here.
mwrzesie: LIS data generated by Melisa.

Github: code related to input generation, Blender runs, and post-processing.

  1. verse : scripts (both Python and Julia).
  2. Slurm_Blender : slurm jobs related to generating input files and post-processing Blender runs.
  3. slurm_jobs : created automatically by the Blender run as required.
  4. Blender_notebooks : Jupyter notebooks, mostly for prototyping and data analysis. Two important ones for the Blender workflow:
    • Modis/Forest/05_MOD44B_on_Discover.ipynb : run directly from JupyterHub on Discover; used to process annual tree-cover data. Since the data are already processed, there is no need to run it again.
    • paper_outputs : not required to check per se; output notebooks created by papermill are saved here.
    • The rest of the notebooks were created during various phases of the project, mostly for prototyping ideas and analysis.

Blender: Blender-related data processed by various scripts.

  1. Inputs: LIS and MODIS files staged by water year; input files for the Blender run.
  2. Modis: MODIS intermediate files only.
    • MOD44B : MODIS tree cover reprojected and matched to the LIS data.
    • MOD10A1F : MODIS snow-cover data corrected for tree cover and clipped to match the LIS data.
  3. Runs: Blender runs saved here, organized by water year.
    • WY2015/temp_nc: temporary NetCDF files for each slice of the run; delete after the final outputs are created.
    • WY2015/outputs: final Pan-American outputs created by merging files from the "temp_nc" folder.

Verse Structure (Julia and Python code)

../Github/verse
├── Julia
│   ├── Estimate_v61.jl
│   ├── call_Blender_v19b.jl
│   ├── combine_nc_files.jl
│   └── submit_slurm.jl
├── Python
│   ├── Extract_LIS.py
│   ├── process_modis_cgf.py
│   └── tictoc.py
├── README.md
├── Updates.md
└── data
    ├── README.txt
    ├── pixels.csv
    └── wshed.csv

Order of Script Execution for the coressd Project

I. Generate Input Files for Blender Run

Code Location: ../coressd/Github

  1. Slurm_Blender/a_lis_process.sh : process LIS to generate input files for the Blender run.
    Uses multiple cores.
    Outputs: all daily LIS NetCDF files, saved separately as follows:

    • ../coressd/Blender/Inputs/WY2015
      • Snowf_tavg.nc
      • SWE_tavg.nc
      • Tair_f_tavg.nc
      • Qg_tavg.nc
      • SCF.nc (created by the next slurm job, b_process_modis_cgf.sh)
  2. Slurm_Blender/b_process_modis_cgf.sh: extract North America (NA) scale daily MODIS CGF matching the LIS rasters.

    • uses process_modis_cgf.py
    • uses ../OSU/MOD10A1F.061/MODIS_Proc/download_snow/.. for snow-cover data
    • uses MOD44B for tree-cover correction
    • Hard dependency on Qg_tavg.nc as template and mask.
    • intermediate outputs: ../coressd/Blender/Modis/MOD10A1F/clipped2015_010 : clipped and matched to the SEUP resolution.
    • ../Blender/Inputs/WY2015/SCF.nc : i.e., the same location as the LIS outputs above (a quick check on these staged files is sketched below).
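
As a quick sanity check after these jobs finish, the staged inputs can be opened with NCDatasets (one of the required Julia packages listed below). A minimal sketch, assuming the WY2015 layout above:

```julia
using NCDatasets

# Open one staged input (path from the layout above) and list its
# variables to confirm the file was written correctly.
NCDataset("/discover/nobackup/projects/coressd/Blender/Inputs/WY2015/SWE_tavg.nc") do ds
    println(keys(ds))
end
```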

II. Run Blender (Julia) Code

Required Julia packages: JuMP, Ipopt, Rasters, NCDatasets, CSV, LoggingExtras, Distributions.
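
These can be installed into the Julia depot (the installs/.julia folder above) with Pkg; a minimal sketch:

```julia
using Pkg

# Install the packages required by the Blender Julia scripts.
Pkg.add(["JuMP", "Ipopt", "Rasters", "NCDatasets", "CSV",
         "LoggingExtras", "Distributions"])
```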

  1. On HPC (Discover): the following Julia script generates slurm jobs for a water year.

    • cd to the Github folder
      • julia verse/Julia/submit_slurm.jl 2015 1 # generate and submit slurm jobs for WY2015 with adaptive slice = 1 row
      • generates ~500 slurm jobs, each calling the following Julia script (the general pattern is sketched after this list):
      • julia verse/Julia/call_Blender_v19b.jl WY2015 1 17 : example Blender run for WY2015, slices 1 to 17
      • slurm job folder:
        • ../slurm_jobs/2015/
          • .out/ : slurm outputs saved here
    • Blender Outputs: ../Blender/Runs/WY2015/temp_nc/ : these per-slice NetCDF files are combined by the next script, after which this folder can be deleted.
  2. Post-processing (required for the continental map): combine the individual NetCDF files into a Pan-American NetCDF file.

    • uses: ../Github/Slurm_Blender/c_combine_out_nc_files.sh
    • calls: julia verse/Julia/combine_nc_files.jl WY$water_year (the merge idea is sketched below)
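
For orientation, a minimal sketch of the job-generation pattern from step 1. This is not the actual submit_slurm.jl: the slice count, resource requests, and file names here are illustrative assumptions.

```julia
# Hypothetical sketch of a submit_slurm.jl-style driver: write one slurm
# script per slice of rows and submit it with sbatch.
water_year = 2015
nrows = 500   # assumed number of row-slices; the real script derives this
step  = 1     # adaptive slice size, as passed on the command line

for start in 1:step:nrows
    stop = min(start + step - 1, nrows)
    script = """
    #!/bin/bash
    #SBATCH --job-name=blender_$(start)_$(stop)
    #SBATCH --output=.out/%x_%j.out
    julia verse/Julia/call_Blender_v19b.jl WY$(water_year) $(start) $(stop)
    """
    path = "slurm_jobs/$(water_year)/job_$(start)_$(stop).sh"
    mkpath(dirname(path))
    write(path, script)
    run(`sbatch $path`)  # submit to the scheduler
end
```

And a hedged sketch of the merge idea from step 2, assuming each slice file holds the same variable and the slices stack along the row dimension. The variable and dimension names are hypothetical; combine_nc_files.jl is the authoritative implementation.

```julia
using NCDatasets

temp_dir = "Blender/Runs/WY2015/temp_nc"   # per-slice outputs from step 1
files = sort(readdir(temp_dir; join=true))

# Read a variable (hypothetical name "SWE") from every slice file and
# stack the slices along the row dimension.
merged = cat((Array(NCDataset(f)["SWE"]) for f in files)...; dims=1)

# Write the merged array to a single Pan-American NetCDF file
# (assumes no missing values; real data may need a _FillValue).
NCDataset("Blender/Runs/WY2015/outputs/SWE_merged.nc", "c") do ds
    defDim(ds, "row", size(merged, 1))
    defDim(ds, "col", size(merged, 2))
    v = defVar(ds, "SWE", Float64, ("row", "col"))
    v[:, :] = merged
end
```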

Edge Cases

These configurations are used for testing and prototyping. Two such configurations are "pixel" and "watershed" runs.
Pixel run:

  • Use the "pixel" prefix with WY (example: pixel_WY2015) when calling the call_Blender Julia script.
  • Folders and sub-folders to save outputs are created within the script.
  • usage: julia verse/Julia/call_Blender_v19.jl "pixel/pixel_WY2015" 200 200 2 1

Watershed run:

  • Needs a CSV file of watershed bounding boxes (../coressd/Blender/coordinates/wshed.csv).
  • Select a watershed by passing its index (1-based, as in Julia), as sketched below.
  • usage: julia verse/Julia/call_Blender_v19.jl "wshed/Tuolumne/wshed_WY2015" 100 100 2 1
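
A minimal sketch of selecting a watershed from that CSV with CSV.jl. The column layout is an assumption; check the actual wshed.csv for the real schema.

```julia
using CSV

# Load the watershed bounding-box table (path from the note above).
wsheds = CSV.File("/discover/nobackup/projects/coressd/Blender/coordinates/wshed.csv")

idx = 1                # 1-based watershed index, as passed on the command line
wshed = wsheds[idx]    # one row: watershed name and bounding-box fields
println(wshed)
```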
