silx.io.specfile module

This module is a cython binding to wrap the C SpecFile library.

Documentation of the original C SpecFile library can be found on the ESRF website: The manual for the SpecFile Library.

Examples

Start by importing SpecFile and instantiating it:

from silx.io.specfile import SpecFile

sf = SpecFile("test.dat")

A SpecFile instance can be accessed like a dictionary to obtain a Scan instance.

If the key is a string representing two values separated by a dot (e.g. "1.2"), they will be treated as the scan number (#S header line) and the scan order:

# get second occurrence of scan "#S 1"
myscan = sf["1.2"]

# access scan data as a numpy array
nlines, ncolumns = myscan.data.shape

If the key is an integer, it will be treated as a 0-based index:

first_scan = sf[0]
second_scan = sf[1]

It is also possible to browse through all scans using SpecFile as an iterator:

for scan in sf:
    print(scan.scan_header['S'])

MCA data can be selectively loaded using an instance of MCA provided by Scan:

# Only one MCA line is loaded in memory
second_mca = first_scan.mca[1]

# Iterating through all MCA spectra in a scan:
for mca_data in first_scan.mca:
    print(sum(mca_data))

Classes

class silx.io.specfile.SpecFile

SpecFile(filename)

Parameters:filename (string) – Path of the SpecFile to read

This class wraps the main data and header access functions of the C SpecFile library.

__len__()

Return the number of scans in the SpecFile

__iter__()

Return the next Scan in a SpecFile each time this method is called.

This usually happens when the python built-in function next() is called with a SpecFile instance as a parameter, or when a SpecFile instance is used as an iterator (e.g. in a for loop).

__getitem__(key)

Return a Scan object.

This special method is called when a SpecFile instance is accessed as a dictionary (e.g. sf[key]).

Parameters:key (int or str) – 0-based scan index or "n.m" key, where n is the scan number defined on the #S header line and m is the order
Returns:Scan defined by its 0-based index or its "n.m" key
Return type:Scan
columns(scan_index)

Return number of columns in a scan from the #N header line (without #N and scan number)

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:Number of columns in scan from #N line
Return type:int
command(scan_index)

Return #S line (without #S and scan number)

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:S line
Return type:str
data(scan_index)

Returns data for the specified scan index.

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:Complete scan data as a 2D array of doubles
Return type:numpy.ndarray
data_column_by_name(scan_index, label)

Returns data column for the specified scan index and column label.

Parameters:
  • scan_index (int) – Unique scan index between 0 and len(self)-1.
  • label (str) – Label of data column, as defined in the #L line of the scan header.
Returns:Data column as a 1D array of doubles
Return type:numpy.ndarray
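
A minimal sketch of retrieving one column by its label; the "Energy" label is hypothetical and should be replaced by a label actually present on the #L line (see labels() below):

from silx.io.specfile import SpecFile

sf = SpecFile("test.dat")
# list the available column labels of the first scan
print(sf.labels(0))
# retrieve one column of the first scan as a 1D numpy array
energy = sf.data_column_by_name(0, "Energy")   # "Energy" is a hypothetical label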

date(scan_index)

Return date from #D line

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:Date from #D line
Return type:str
file_header(scan_index)

Return list of file header lines.

A file header contains all lines between a #F header line and a #S header line (start of scan). We need to specify a scan number because there can be more than one file header in a given file. A file header applies to all subsequent scans, until a new file header is defined.

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:List of raw file header lines
Return type:list of str
get_mca(scan_index, mca_index)

Return one MCA spectrum

Parameters:
  • scan_index (int) – Unique scan index between 0 and len(self)-1.
  • mca_index (int) – Index of MCA in the scan
Returns:MCA spectrum
Return type:1D numpy array

index(scan_number, scan_order=1)

Returns scan index from scan number and order.

Parameters:
  • scan_number (int) – Scan number (possibly non-unique).
  • scan_order (int default 1) – Scan order.
Returns:Unique scan index
Return type:int

Scan indices are increasing from 0 to len(self)-1 in the order in which they appear in the file. Scan numbers are defined by users and are not necessarily unique. The scan order for a given scan number increments each time the scan number appears in a given file.
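
For example, the "1.2" key used earlier maps to a unique scan index as follows (a sketch, assuming "test.dat" contains at least two scans numbered 1):

from silx.io.specfile import SpecFile

sf = SpecFile("test.dat")
# index of the second occurrence of "#S 1"
idx = sf.index(1, 2)      # scan number 1, scan order 2
# the reverse mapping
assert sf.number(idx) == 1
assert sf.order(idx) == 2
# sf[idx] and sf["1.2"] refer to the same scan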

keys()

Returns list of scan keys (e.g. ['1.1', '2.1', ...]).

Returns:list of scan keys
Return type:list of strings
labels(scan_index)

Return all labels from #L line

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:All labels from #L line
Return type:list of strings
list()

Returns list (1D numpy array) of scan numbers in SpecFile.

Returns:List of scan numbers (from #S lines) in the same order as in the original SpecFile (e.g. [1, 1, 2, 3, …]).
Return type:numpy array
mca_calibration(scan_index)

Return MCA calibration in the form a + b x + c x²

Raise a KeyError if there is no @CALIB line in the scan header.

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:MCA calibration as a list of 3 values (a, b, c)
Return type:list of floats
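
These coefficients can be used to build a calibrated axis for a spectrum (a sketch, assuming the first scan contains MCA data and an @CALIB header line):

import numpy
from silx.io.specfile import SpecFile

sf = SpecFile("test.dat")
scan = sf[0]
a, b, c = sf.mca_calibration(0)        # raises KeyError if no @CALIB line
spectrum = scan.mca[0]                 # first MCA spectrum of the scan
channels = numpy.arange(len(spectrum))
calibrated_axis = a + b * channels + c * channels ** 2
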
motor_names(scan_index)

Return all motor names from #O lines

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:All motor names
Return type:list of strings
motor_position_by_name(scan_index, name)

Return motor position

Parameters:
  • scan_index (int) – Unique scan index between 0 and len(self)-1.
  • name (str) – Name of motor, as defined on the #O line of the file header.
Returns:Specified motor position
Return type:double
motor_positions(scan_index)

Return all motor positions

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:All motor positions
Return type:list of double
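
The lists returned by motor_names() and motor_positions() are expected to be aligned (names from #O lines, positions from #P lines), so they can be combined into a mapping (a sketch):

from silx.io.specfile import SpecFile

sf = SpecFile("test.dat")
motors = dict(zip(sf.motor_names(0), sf.motor_positions(0)))
for name, position in motors.items():
    print(name, position)
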
number(scan_index)

Returns scan number from scan index.

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:User defined scan number.
Return type:int
number_of_mca(scan_index)

Return the number of MCA spectra in a scan.

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:Number of MCA spectra.
Return type:int
order(scan_index)

Returns scan order from scan index.

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:Scan order (sequential number incrementing each time a non-unique occurrence of a scan number is encountered).
Return type:int
scan_header(scan_index)

Return list of scan header lines.

Parameters:scan_index (int) – Unique scan index between 0 and len(self)-1.
Returns:List of raw scan header lines
Return type:list of str
class silx.io.specfile.Scan

Scan(specfile, scan_index)

Parameters:
  • specfile (SpecFile) – Parent SpecFile from which this scan is extracted.
  • scan_index (int) – Unique index defining the scan in the SpecFile

Interface to access a SpecFile scan

A scan is a block of descriptive header lines followed by a 2D data array.

The following three ways of accessing a scan are equivalent:

sf = SpecFile("/path/to/specfile.dat")

# Explicit class instantiation
scan2 = Scan(sf, scan_index=2)

# 0-based index on a SpecFile object
scan2 = sf[2]

# Using a "n.m" key (scan number starting with 1, scan order)
scan2 = sf["3.1"]
data

Scan data as a numpy.ndarray with the usual attributes (e.g. data.shape).

data_column_by_name(label)

Returns a data column

Parameters:label (str) – Label of data column to retrieve, as defined on the #L line of the scan header.
Returns:Column data as a 1D array of doubles
Return type:numpy.ndarray
data_line(line_index)

Returns data for a given line of this scan.

Parameters:line_index (int) – Index of data line to retrieve (starting with 0)
Returns:Line data as a 1D array of doubles
Return type:numpy.ndarray
file_header

List of raw file header lines (as a list of strings).

file_header_dict

Dictionary of file header strings, keys without the leading # (e.g. file_header_dict["F"]).

header

List of raw header lines (as a list of strings).

This includes the file header, the scan header and possibly an MCA header.

index

Unique scan index, from 0 to len(specfile)-1.

This attribute is implemented as a read-only property as changing its value may cause nasty side-effects (such as loading data from a different scan without updating the header accordingly).

labels

List of data column headers from #L scan header

mca

MCA data in this scan.

Each multichannel analysis is a 1D numpy array. Metadata about MCA data is to be found in mca_header_dict.

Return type:MCA
mca_header_dict

Dictionary of MCA header strings, keys without the leading #@ (e.g. mca_header_dict["CALIB"]).

motor_names

List of motor names from the #O file header line.

motor_position_by_name(name)

Returns the position for a given motor

Parameters:name (str) – Name of motor, as defined on the #O line of the file header.
Returns:Motor position
Return type:float
motor_positions

List of motor positions as floats from the #P scan header line.

number

First value on #S line (as int)

order

Order can be > 1 if the same number is repeated in a specfile

record_exists_in_hdr(record)

Check whether a scan header line exists.

This should be used before attempting to retrieve header information using a C function that may crash with a segmentation fault if the header isn’t defined in the SpecFile.

Parameters:record (str) – single upper case letter corresponding to the header you want to test (e.g. L for labels)
Returns:True or False
Return type:boolean
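
A typical use is to guard access to an optional header line (a sketch):

from silx.io.specfile import SpecFile

sf = SpecFile("test.dat")
scan = sf[0]
# only ask for column labels if the scan actually has a #L header line
column_labels = scan.labels if scan.record_exists_in_hdr('L') else []
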
scan_header

List of raw scan header lines (as a list of strings).

scan_header_dict

Dictionary of scan header strings, keys without the leading # (e.g. scan_header_dict["S"]). Note: this does not include MCA header lines starting with #@.

class silx.io.specfile.MCA

MCA(scan)

Parameters:scan (Scan) – Parent Scan instance

Variables:
  • calibration – MCA calibration \((a, b, c)\) (as in \(a + b x + c x²\)) from #@CALIB scan header.
  • channels – MCA channels list from #@CHANN scan header. In the absence of a #@CHANN header, this attribute is a list [0, …, N-1] where N is the length of the first spectrum. In the absence of MCA spectra, this attribute defaults to None.

This class provides access to Multi-Channel Analysis data stored in a SpecFile scan.

To create a MCA instance, you must provide a parent Scan instance, which in turn will provide a reference to the original SpecFile instance:

sf = SpecFile("/path/to/specfile.dat")
scan2 = Scan(sf, scan_index=2)
mcas_in_scan2 = MCA(scan2)
for i in range(len(mcas_in_scan2)):
    mca_data = mcas_in_scan2[i]
    ...  # do something with mca_data (1D numpy array)

A more pythonic way to do the same work, without having to explicitly instantiate scan and mcas_in_scan, would be:

sf = SpecFile("specfilename.dat")
# scan2 from previous example can be referred to as sf[2]
# mcas_in_scan2 from previous example can be referred to as scan2.mca
for mca_data in sf[2].mca:
    ...  # do something with mca_data (1D numpy array)
__len__()
Returns:Number of MCA spectra in the Scan
Return type:int
__iter__()

Return the next MCA data line each time this method is called.

Returns:Single MCA
Return type:1D numpy array
__getitem__(key)

Return a single MCA data line

Parameters:key (int) – 0-based index of MCA within Scan
Returns:Single MCA
Return type:1D numpy array

silx.io.spech5 module

h5py-like API to SpecFile

API description

Specfile data structure exposed by this API:

/
    1.1/
        title = "…"
        start_time = "…"
        instrument/
            specfile/
                file_header = ["…", "…", …]
                scan_header = ["…", "…", …]
            positioners/
                motor_name = value
                …
            mca_0/
                data = …
                calibration = …
                channels = …
                preset_time = …
                elapsed_time = …
                live_time = …

            mca_1/
                …
            …
        measurement/
            colname0 = …
            colname1 = …
            …
            mca_0/
                 data -> /1.1/instrument/mca_0/data
                 info -> /1.1/instrument/mca_0/
            …
    2.1/
        …

file_header and scan_header are numpy arrays of fixed-length strings containing raw header lines relevant to the scan.

The title is the content of the #S scan header line without the leading #S (e.g. "1  ascan  ss1vo -4.55687 -0.556875  40 0.2").

The start time is in ISO8601 format ("2016-02-23T22:49:05Z")

All datasets that are not strings store values as float32.

Motor positions (e.g. /1.1/instrument/positioners/motor_name) can be 1D numpy arrays if they are measured as scan data, or else scalars as defined on #P scan header lines. A simple test is done to check if the motor name is also a data column header defined in the #L scan header line.

Scan data (e.g. /1.1/measurement/colname0) is accessed by column, the dataset name colname0 being the column label as defined in the #L scan header line.

MCA data is exposed as a 2D numpy array containing all spectra for a given analyser. The number of analysers is calculated as the number of MCA spectra per scan data line. Demultiplexing is then performed to assign the correct spectra to a given analyser.

MCA calibration is an array of 3 scalars, from the #@CALIB header line. It is identical for all MCA analysers, as there can be only one #@CALIB line per scan.

MCA channels is an array containing all channel numbers. This information is computed from the #@CHANN scan header line (if present), or from the shape of the first spectrum in a scan ([0, …, len(first_spectrum) - 1]).

Accessing data

Data and groups are accessed in h5py fashion:

from silx.io.spech5 import SpecH5

# Open a SpecFile
sfh5 = SpecH5("test.dat")

# using SpecH5 as a regular group to access scans
scan1group = sfh5["1.1"]
instrument_group = scan1group["instrument"]

# alternative: full path access
measurement_group = sfh5["/1.1/measurement"]

# accessing a scan data column by name as a 1D numpy array
data_array = measurement_group["Pslit HGap"]

# accessing all mca-spectra for one MCA device
mca_0_spectra = measurement_group["mca_0/data"]

SpecH5 and SpecH5Group provide a SpecH5Group.keys() method:

>>> sfh5.keys()
['96.1', '97.1', '98.1']
>>> sfh5['96.1'].keys()
['title', 'start_time', 'instrument', 'measurement']

They can also be treated as iterators:

for scan_group in SpecH5("test.dat"):
    dataset_names = [item.name for item in scan_group["measurement"]
                     if isinstance(item, SpecH5Dataset)]
    print("Found data columns in scan " + scan_group.name)
    print(", ".join(dataset_names))

You can test for existence of data or groups:

>>> "/1.1/measurement/Pslit HGap" in sfh5
True
>>> "positioners" in sfh5["/2.1/instrument"]
True
>>> "spam" in sfh5["1.1"]
False
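
Positioner values and MCA metadata follow the same access pattern as the examples above; the motor name used here is hypothetical:

from silx.io.spech5 import SpecH5

sfh5 = SpecH5("test.dat")

# motor position (scalar, or 1D array if the motor is also a data column)
motor_value = sfh5["/1.1/instrument/positioners/some_motor"]  # hypothetical motor name

# MCA calibration and channels of the first analyser
calib = sfh5["/1.1/instrument/mca_0/calibration"]
channels = sfh5["/1.1/instrument/mca_0/channels"]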

Classes

class silx.io.spech5.SpecH5(filename)[source]

Bases: silx.io.spech5.SpecH5Group

Special SpecH5Group representing the root of a SpecFile.

Parameters:filename (str) – Path to SpecFile in filesystem

In addition to all generic SpecH5Group attributes, this class also keeps a reference to the original SpecFile object and has a filename attribute.

Its immediate children are scans, but it also gives access to any group or dataset in the entire SpecFile tree by specifying the full path.

keys()[source]
Returns:List of all scan keys in this SpecFile (e.g. ["1.1", "2.1"…])
class silx.io.spech5.SpecH5Dataset[source]

Bases: numpy.ndarray

Emulate h5py.Dataset for a SpecFile object

Parameters:
  • array_like – Input dataset in a type that can be digested by numpy.array() (str, list, numpy.ndarray…)
  • name (str) – Dataset full name (posix path format, starting with /)
  • file – Parent SpecH5
  • parent – Parent SpecH5Group which contains this dataset

This class inherits from numpy.ndarray and adds name and value attributes for HDF5 compatibility. value is a reference to the class instance (value = self).

Data is stored in float32 format, unless it is a string.
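
Since a SpecH5Dataset is a numpy.ndarray subclass, it can be used directly in numerical code while still exposing h5py-like attributes (a sketch):

from silx.io.spech5 import SpecH5

sfh5 = SpecH5("test.dat")
dataset = sfh5["/1.1/measurement/mca_0/data"]

print(dataset.name)                # h5py-like full path of the dataset
print(dataset.shape)               # regular numpy attribute
spectra_sum = dataset.sum(axis=0)  # numpy operations work as usual
# dataset.value is a reference to the dataset itself (value = self)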

class silx.io.spech5.SpecH5Group(name, specfileh5)[source]

Bases: object

Emulate h5py.Group for a SpecFile object

Parameters:
  • name (str) – Group full name (posix path format, starting with /)
  • specfileh5 – parent SpecH5 instance
__contains__(key)[source]
Parameters:key – Path to child element (e.g. "mca_0/info") or full name of group or dataset (e.g. "/2.1/instrument/positioners")
Returns:True if key refers to a valid member of this group, else False
__getitem__(key)[source]

Return a SpecH5Group or a SpecH5Dataset if key is a valid name of a group or dataset.

key can be a member of self.keys(), i.e. an immediate child of the group, or a path reaching into subgroups (e.g. "instrument/positioners")

In the special case where this group is the root group, key can start with a / character.

Parameters:key (str) – Name of member
Raise:KeyError if key is not a known member of this group.
__len__()[source]

Return the number of members (subgroups and datasets) attached to this group.

attrs = None

Attributes dictionary

file = None

Parent SpecH5 object

keys()[source]
Returns:List of all names of members attached to this group
name = None

Full name/path of group

parent[source]

Parent group (group that contains this group)

visit(func)[source]

Recursively visit all names in this group and subgroups.

Parameters:func (function) – Callable (function, method or callable object)

You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:

func(<member name>) => <None or return value>

Returning None continues iteration, returning anything else stops and immediately returns that value from the visit method. No particular order of iteration within groups is guaranteed.

Example:

# Get a list of all contents (groups and datasets) in a SpecFile
mylist = []
f = SpecH5('foo.dat')
f.visit(mylist.append)
visititems(func)[source]

Recursively visit names and objects in this group.

Parameters:func (function) – Callable (function, method or callable object)

You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:

func(<member name>, <object>) => <None or return value>

Returning None continues iteration, returning anything else stops and immediately returns that value from the visit method. No particular order of iteration within groups is guaranteed.

Example:

# Get a list of all datasets in a specific scan
mylist = []
def func(name, obj):
    if isinstance(obj, SpecH5Dataset):
        mylist.append(name)

f = SpecH5('foo.dat')
f["1.1"].visititems(func)
class silx.io.spech5.SpecH5LinkToDataset[source]

Bases: silx.io.spech5.SpecH5Dataset

Special SpecH5Dataset representing a link to a dataset. It works exactly like a regular dataset, but SpecH5Group.visit() and SpecH5Group.visititems() methods will recognize that it is a link and will ignore it.

class silx.io.spech5.SpecH5LinkToGroup(name, specfileh5)[source]

Bases: silx.io.spech5.SpecH5Group

Special SpecH5Group representing a link to a group.

It works exactly like a regular group but SpecH5Group.visit() and SpecH5Group.visititems() methods will recognize it as a link and will ignore it.

keys()[source]
Returns:List of all names of members attached to the target group
silx.io.spech5.is_dataset(name)[source]

Check if name matches a valid dataset name pattern in a SpecH5.

Parameters:name (str) – Full name of member
silx.io.spech5.is_group(name)[source]

Check if name matches a valid group name pattern in a SpecH5.

Parameters:name (str) – Full name of member

silx.io.spech5.is_link_to_dataset(name)[source]

Check if name is a valid link to a dataset in a SpecH5. Return True or False.

Parameters:name (str) – Full name of member

silx.io.spech5.is_link_to_group(name)[source]

Check if name is a valid link to a group in a SpecH5. Return True or False.

Parameters:name (str) – Full name of member
silx.io.spech5.spec_date_to_iso8601(date, zone=None)[source]

Convert a SpecFile date to ISO8601 format.

Parameters:
  • date (str) – Date (see supported formats below)
  • zone – Time zone as it appears in an ISO8601 date

Supported formats:

  • DDD MMM dd hh:mm:ss YYYY
  • DDD YYYY/MM/dd hh:mm:ss

where DDD is the abbreviated weekday name, MMM is the abbreviated month name, MM is the month number (zero padded), dd is the day of the month (zero padded), YYYY is the year, hh the hour (zero padded), mm the minute (zero padded) and ss the second (zero padded). All names are expected to be in English.

Examples:

>>> spec_date_to_iso8601("Thu Feb 11 09:54:35 2016")
'2016-02-11T09:54:35'

>>> spec_date_to_iso8601("Sat 2015/03/14 03:53:50")
'2015-03-14T03:53:50'

silx.io.spectoh5 module

This module provides functions to convert a SpecFile into an HDF5 file.

silx.io.spectoh5.convert(specfile, h5file, mode='w-', create_dataset_args=None)[source]
Convert a SpecFile into an HDF5 file, write scans into the root (/)
group.
Parameters:
  • specfile – Path of input SpecFile or SpecH5 instance
  • h5file – Path of output HDF5 file or HDF5 file handle
  • mode – Can be "w" (write, existing file is lost), "w-" (write, fail if exists). This is ignored if h5file is a file handle.
  • create_dataset_args – Dictionary of args you want to pass to h5f.create_dataset. This allows you to specify filters and compression parameters. Don’t specify name and data. These arguments don’t apply to scalar datasets.

This is a convenience shortcut to call:

write_spec_to_h5(specfile, h5file, h5path='/',
                 mode="w-", link_type="hard")
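
A minimal usage sketch (file names are placeholders):

from silx.io.spectoh5 import convert

# write all scans of the SpecFile into the root group of a new HDF5 file;
# the default mode "w-" makes this fail if "out.h5" already exists
convert("input.dat", "out.h5")
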
silx.io.spectoh5.write_spec_to_h5(specfile, h5file, h5path='/', mode='a', overwrite_data=False, link_type='hard', create_dataset_args=None)[source]

Write content of a SpecFile in a HDF5 file.

Parameters:
  • specfile – Path of input SpecFile or SpecH5 instance
  • h5file – Path of output HDF5 file or HDF5 file handle
  • h5path – Target path in HDF5 file in which scan groups are created. Default is root ("/")
  • mode – Can be "r+" (read/write, file must exist), "w" (write, existing file is lost), "w-" (write, fail if exists) or "a" (read/write if exists, create otherwise). This parameter is ignored if h5file is a file handle.
  • overwrite_data – If True, existing groups and datasets can be overwritten, if False they are skipped. This parameter is only relevant if mode is "r+" or "a".
  • link_type – "hard" (default) or "soft"
  • create_dataset_args – Dictionary of args you want to pass to h5f.create_dataset. This allows you to specify filters and compression parameters. Don’t specify name and data. These arguments don’t apply to scalar datasets.

The structure of the spec data in an HDF5 file is described in the documentation of silx.io.spech5.
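
For instance, to append the scans of a SpecFile under a dedicated group of an existing HDF5 file with compressed datasets (file names and group name are placeholders):

from silx.io.spectoh5 import write_spec_to_h5

write_spec_to_h5("input.dat", "archive.h5", h5path="/spec_data",
                 mode="a", overwrite_data=False,
                 create_dataset_args={"compression": "gzip"})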

silx.io.dicttoh5 module

Nested python dictionary to HDF5 file conversion

silx.io.dicttoh5.dicttoh5(treedict, h5file, h5path='/', mode='a', overwrite_data=False, create_dataset_args=None)[source]

Write a nested dictionary to an HDF5 file, using keys as member names.

Parameters:
  • treedict – Nested dictionary/tree structure with strings as keys and array-like objects as leaves. The "/" character is not allowed in keys.
  • h5file – HDF5 file name or handle. If a file name is provided, the function opens the file in the specified mode and closes it again before completing.
  • h5path – Target path in HDF5 file in which scan groups are created. Default is root ("/")
  • mode – Can be "r+" (read/write, file must exist), "w" (write, existing file is lost), "w-" (write, fail if exists) or "a" (read/write if exists, create otherwise). This parameter is ignored if h5file is a file handle.
  • overwrite_data – If True, existing groups and datasets can be overwritten, if False they are skipped. This parameter is only relevant if mode is "r+" or "a".
  • create_dataset_args – Dictionary of args you want to pass to h5f.create_dataset. This allows you to specify filters and compression parameters. Don’t specify name and data.

Example:

from silx.io.dicttoh5 import dicttoh5

city_area = {
    "Europe": {
        "France": {
            "Isère": {
                "Grenoble": "18.44 km2"
            },
            "Nord": {
                "Tourcoing": "15.19 km2"
            },
        },
    },
}

create_ds_args = {'compression': "gzip",
                  'shuffle': True,
                  'fletcher32': True}

dicttoh5(city_area, "cities.h5", h5path="/area",
         create_dataset_args=create_ds_args)

silx.io.utils module

I/O utility functions

silx.io.utils.h5ls(h5group, lvl=0)[source]

Return a simple string representation of a HDF5 tree structure.

Parameters:
  • h5group – Any h5py.Group or h5py.File instance, or a HDF5 file name
  • lvl – Number of tabulations added to the group. lvl is incremented as we recursively process sub-groups.
Returns:String representation of an HDF5 tree structure

Group names and dataset representation are printed preceded by a number of tabulations corresponding to their depth in the tree structure. Datasets are represented as h5py.Dataset objects.

Example:

>>> print(h5ls("Downloads/sample.h5"))
+fields
    +fieldB
        <HDF5 dataset "z": shape (256, 256), type "<f4">
    +fieldE
        <HDF5 dataset "x": shape (256, 256), type "<f4">
        <HDF5 dataset "y": shape (256, 256), type "<f4">
silx.io.utils.save1D(fname, x, y, xlabel=None, ylabels=None, filetype=None, fmt='%.7g', csvdelim=';', newline='\n', header='', footer='', comments='#', autoheader=False)[source]

Saves any number of curves to various formats: Specfile, CSV, txt or npy. All curves must have the same number of points and share the same x values.

Parameters:
  • fname – Output file path, or file handle open in write mode. If fname is a path, the file is opened in w mode. An existing file with the same name will be overwritten.
  • x – 1D-Array (or list) of abscissa values.
  • y – 2D-array (or list of lists) of ordinates values. First index is the curve index, second index is the sample index. The length of the second dimension (number of samples) must be equal to len(x). y can be a 1D-array in case there is only one curve to be saved.
  • filetype – Filetype: "spec", "csv", "txt", "ndarray". If None, filetype is detected from file name extension (.dat, .csv, .txt, .npy)
  • xlabel – Abscissa label
  • ylabels – List of y labels
  • fmt – Format string for data. You can specify a short format string that defines a single format for both x and y values, or a list of two different format strings (e.g. ["%d", "%.7g"]). Default is "%.7g". This parameter does not apply to the npy format.
  • csvdelim – String or character separating columns in txt and CSV formats. The user is responsible for ensuring that this delimiter is not used in data labels when writing a CSV file.
  • newline – String or character separating lines/records in txt format (default is line break character \n).
  • header – String that will be written at the beginning of the file in txt format.
  • footer – String that will be written at the end of the file in txt format.
  • comments – String that will be prepended to the header and footer strings, to mark them as comments. Default: #.
  • autoheader – In CSV or txt, True causes the first header line to be written as a standard CSV header line with column labels separated by the specified CSV delimiter.

When saving to Specfile format, each curve is saved as a separate scan with two data columns (x and y).

CSV and txt formats are similar, except that the txt format allows user defined header and footer text blocks, whereas the CSV format has only a single header line with columns labels separated by field delimiters and no footer. The txt format also allows defining a record separator different from a line break.

The npy format is written with numpy.save and can be read back with numpy.load. If xlabel and ylabels are undefined, data is saved as a regular 2D numpy.ndarray (concatenation of x and y). If both xlabel and ylabels are defined, the data is saved as a numpy.recarray after being transposed and having labels assigned to columns.
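
A short sketch saving two curves sharing the same x values to a CSV file (file name and labels are placeholders):

import numpy
from silx.io.utils import save1D

x = numpy.linspace(0., 10., 50)
y = [numpy.sin(x), numpy.cos(x)]   # two curves, same x values

save1D("curves.csv", x, y,
       xlabel="x", ylabels=["sin", "cos"],
       filetype="csv", autoheader=True)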

silx.io.utils.savespec(specfile, x, y, xlabel='X', ylabel='Y', fmt='%.7g', scan_number=1, mode='w', write_file_header=True, close_file=False)[source]

Saves one curve to a SpecFile.

The curve is saved as a scan with two data columns. To save multiple curves to a single SpecFile, call this function for each curve by providing the same file handle each time.

Parameters:
  • specfile – Output SpecFile name, or file handle open in write or append mode. If a file name is provided, a new file is open in write mode (existing file with the same name will be lost)
  • x – 1D-Array (or list) of abscissa values
  • y – 1D-array (or list) of ordinates values
  • xlabel – Abscissa label (default "X")
  • ylabel – Ordinate label
  • fmt – Format string for data. You can specify a short format string that defines a single format for both x and y values, or a list of two different format strings (e.g. ["%d", "%.7g"]). Default is "%.7g".
  • scan_number – Scan number (default 1).
  • mode – Mode for opening file: w (default), a, r+, w+, a+. This parameter is only relevant if specfile is a path.
  • write_file_header – If True, write a file header before writing the scan (#F and #D lines).
  • close_file – If True, close the file after saving curve.
Returns:None if close_file is True, else the file handle.
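
The pattern described above for saving several curves to one SpecFile could look like this (a sketch; the file name is a placeholder):

import numpy
from silx.io.utils import savespec

x = numpy.linspace(0., 10., 50)

# first call opens the file, writes the file header and returns the handle
f = savespec("curves.dat", x, numpy.sin(x), xlabel="x", ylabel="sin",
             scan_number=1, mode="w", write_file_header=True,
             close_file=False)

# subsequent calls reuse the handle; no new file header is written
savespec(f, x, numpy.cos(x), xlabel="x", ylabel="cos",
         scan_number=2, write_file_header=False, close_file=True)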

silx.io.utils.savetxt(fname, X, fmt='%.7g', delimiter=';', newline='\n', header='', footer='', comments='#')[source]

Backport of numpy.savetxt with the header and footer arguments introduced in numpy 1.7.0.

Replace with numpy.savetxt when dropping support of numpy < 1.7.0.

See numpy.savetxt help: http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.savetxt.html