Directory: ./
File: read_csv_pp_strings.pro
Routines
result = read_csv_fieldnames(fieldCount)
result = read_csv_pp_strings(Filename, COUNT=COUNT, HEADER=HEADER, MISSING_VALUE=MISSING_VALUE, NUM_RECORDS=NUM_RECORDS, RECORD_START=RECORD_START, N_TABLE_HEADER=N_TABLE_HEADER, TABLE_HEADER=TABLE_HEADER, _EXTRA=_EXTRA, types=types, nan=nan, infinity=infinity, integer=integer, trim=trim, blank=blank, rows_for_testing=rows_for_testing)
:Description: The READ_CSV function reads data from a "comma-separated value" (comma-delimited) text file into an IDL structure variable.
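As a hedged usage sketch of the main routine (the file name 'data.csv' and the variable names are illustrative, not part of this distribution):

    ; read the whole file; each column comes back as a structure tag
    data = read_csv_pp_strings('data.csv', COUNT=nrecords, HEADER=hdr)
    help, data          ; anonymous structure, one tag per column
    print, nrecords     ; number of records read
    print, hdr          ; column headers, if the file has a header row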
Routine details
read_csv_fieldnames
result = read_csv_fieldnames(fieldCount)
Parameters
- fieldCount
Statistics
Lines: 10
McCabe complexity: 1
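The body of read_csv_fieldnames is not reproduced here. As an illustration only, a default field-name generator of this kind could look like the sketch below; the function name example_fieldnames and the 'field' prefix are assumptions, not the actual implementation:

    ; hypothetical sketch: build default column names field1..fieldN
    function example_fieldnames, fieldCount
      compile_opt idl2, hidden
      return, 'field' + strtrim(indgen(fieldCount) + 1, 2)
    end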
read_csv_pp_strings
result = read_csv_pp_strings(Filename, COUNT=COUNT, HEADER=HEADER, MISSING_VALUE=MISSING_VALUE, NUM_RECORDS=NUM_RECORDS, RECORD_START=RECORD_START, N_TABLE_HEADER=N_TABLE_HEADER, TABLE_HEADER=TABLE_HEADER, _EXTRA=_EXTRA, types=types, nan=nan, infinity=infinity, integer=integer, trim=trim, blank=blank, rows_for_testing=rows_for_testing)
:Description: The READ_CSV function reads data from a "comma-separated value" (comma-delimited) text file into an IDL structure variable. This routine handles CSV files consisting of an optional line of column headers, followed by columnar data, with commas separating each field. Each row is assumed to be a new record.

The READ_CSV routine automatically returns each column (or field) in the correct IDL variable type using the following rules:
- Long: all data within the column consists of integers, all of which are smaller than the maximum 32-bit integer.
- Long64: all data within the column consists of integers, at least one of which is greater than the maximum 32-bit integer.
- Double: all data within the column consists of numbers, at least one of which has either a decimal point or an exponent.
- String: all data which does not fit into one of the above types.

This routine is written in the IDL language. Its source code can be found in the file read_csv.pro in the lib subdirectory of the IDL distribution.

:Syntax: Result = READ_CSV( Filename [, COUNT=variable] [, HEADER=variable] [, MISSING_VALUE=value] [, NUM_RECORDS=value] [, RECORD_START=value] [, N_TABLE_HEADER=value] [, TABLE_HEADER=variable] )

:Params:
- Filename: A string containing the name of a CSV file to read into an IDL variable.

:Keywords:
- COUNT: Set this keyword equal to a named variable that will contain the number of records read.
- HEADER: Set this keyword equal to a named variable that will contain the column headers as a vector of strings. If no header exists, an empty scalar string is returned.
- MISSING_VALUE: Set this keyword equal to a value used to replace any missing floating-point or integer data. The default value is 0.
- NUM_RECORDS: Set this keyword equal to the number of records to read. The default is to read all records in the file.
- RECORD_START: Set this keyword equal to the index of the first record to read. The default is the first record of the file (record 0).
- N_TABLE_HEADER: Set this keyword to the number of lines to skip at the beginning of the file, not including the HEADER line. These extra lines may be retrieved using the TABLE_HEADER keyword.
- TABLE_HEADER: Set this keyword to a named variable in which to return an array of strings containing the extra table headers at the beginning of the file, as specified by N_TABLE_HEADER.

:History:
- Written, CT, VIS, Oct 2008
- MP, VIS, Oct 2009: Added keywords NSKIP and SKIP_HEADER
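As an illustration of the type rules above (the file contents below are invented for the example):

    ; hypothetical CSV:
    ;   id,big_id,value,label
    ;   1,3000000000,2.5,alpha
    ;   2,3000000001,7.0,beta
    ; resulting column types under the rules above:
    ;   id      -> Long    (integers that fit in 32 bits)
    ;   big_id  -> Long64  (integers, at least one above the 32-bit maximum)
    ;   value   -> Double  (contains a decimal point)
    ;   label   -> String  (non-numeric data)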
Parameters
- Filename
Keywords
- COUNT
- HEADER
- MISSING_VALUE
- NUM_RECORDS
- RECORD_START
- N_TABLE_HEADER
- TABLE_HEADER
- _EXTRA
- types
- nan
- infinity
- integer
- trim
- blank
- rows_for_testing
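The keywords types, nan, infinity, integer, trim, blank, and rows_for_testing are not described in the routine header, so the hedged sketch below sticks to the documented keywords; the file name 'table.csv' and the two-line preamble are assumptions:

    ; skip two preamble lines, capture them and the header row,
    ; then read 100 records starting at record 10
    data = read_csv_pp_strings('table.csv', $
                               N_TABLE_HEADER=2, TABLE_HEADER=preamble, $
                               HEADER=cols, MISSING_VALUE=-999, $
                               RECORD_START=10, NUM_RECORDS=100, COUNT=n)
    print, preamble     ; the two skipped preamble lines
    print, cols         ; column names from the header row
    print, n            ; number of records actually read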
Statistics
Lines: 257
McCabe complexity: 43
File attributes
Modification date: Tue Sep 16 14:02:13 2014
Lines: 345