ncsa.hdf.object.nc2
Class NC2Dataset

java.lang.Object
  ncsa.hdf.object.HObject
    ncsa.hdf.object.Dataset
      ncsa.hdf.object.ScalarDS
        ncsa.hdf.object.nc2.NC2Dataset
public class NC2Dataset extends ScalarDS

NC2Dataset describes a multi-dimensional array of HDF5 scalar or atomic data types (such as byte, int, short, long, float, double and String) and the operations performed on the scalar dataset.

The library predefines a modest number of datatypes. For details, read The Datatype Interface (H5T).
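The following is a minimal sketch of how an application typically obtains and reads such a dataset through the generic FileFormat layer. The file name sample.nc and the variable path /temperature are assumptions for illustration, and the sketch assumes the netCDF file format module is registered with FileFormat.

```java
import ncsa.hdf.object.Dataset;
import ncsa.hdf.object.FileFormat;
import ncsa.hdf.object.HObject;

public class ReadNC2Example {
    public static void main(String[] args) throws Exception {
        // Assumed file name and variable path; replace with real ones.
        FileFormat ncFile = FileFormat.getInstance("sample.nc");
        ncFile.open(); // open the file and build the object tree

        HObject obj = ncFile.get("/temperature"); // retrieve the variable as an HObject
        if (obj instanceof Dataset) {
            Dataset dset = (Dataset) obj;
            dset.init();               // initialize rank, dims and the default selection
            Object data = dset.read(); // read the selected data into a memory buffer
            System.out.println("read " + dset.getName() + ", rank = " + dset.getRank());
        }

        ncFile.close(); // release the file resources
    }
}
```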
Field Summary

| Modifier and Type | Field and Description |
|---|---|
| static long | serialVersionUID |
Fields inherited from class ncsa.hdf.object.ScalarDS

INTERLACE_LINE, INTERLACE_PIXEL, INTERLACE_PLANE, isFillValueConverted

Fields inherited from class ncsa.hdf.object.HObject

separator
Constructor Summary

| Constructor and Description |
|---|
| NC2Dataset(FileFormat fileFormat, ucar.nc2.Variable ncDataset, long[] oid) - Constructs an NC2Dataset object with a specific netCDF variable. |
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| void | close(int did) - Closes access to the object. |
| Dataset | copy(Group pgroup, java.lang.String dstName, long[] dims, java.lang.Object buff) - Creates a new dataset and writes the data buffer to the new dataset. |
| static NC2Dataset | create(java.lang.String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, java.lang.Object data) - Creates a new dataset. |
| Datatype | getDatatype() - Returns the datatype object of the dataset. |
| java.util.List | getMetadata() - Retrieves the metadata, such as attributes, from file. |
| java.util.List | getMetadata(int... attrPropList) |
| byte[][] | getPalette() - Returns the palette of this scalar dataset, or null if no palette exists. |
| byte[] | getPaletteRefs() - Returns the byte array of palette references. |
| boolean | hasAttribute() - Checks if the object has any attributes attached. |
| void | init() - Retrieves and initializes dimensions and member information. |
| int | open() - Opens an existing object, such as a dataset or group, for access. |
| java.lang.Object | read() - Reads the data from file. |
| byte[] | readBytes() - Reads the raw data of the dataset from file into a byte array. |
| byte[][] | readPalette(int idx) - Reads a specific image palette from file. |
| void | removeMetadata(java.lang.Object info) - Deletes an existing piece of metadata from this data object. |
| void | setName(java.lang.String newName) - Sets the name of the data object. |
| void | write(java.lang.Object buf) - Writes a memory buffer to the dataset in file. |
| void | writeMetadata(java.lang.Object info) - Writes a specific piece of metadata (such as an attribute) into file. |
Methods inherited from class ncsa.hdf.object.ScalarDS

clearData, convertFromUnsignedC, convertToUnsignedC, getFillValue, getImageDataRange, getInterlace, getPaletteName, isDefaultImageOrder, isImage, isImageDisplay, isText, isTrueColor, isUnsigned, setIsImage, setIsImageDisplay, setPalette

Methods inherited from class ncsa.hdf.object.Dataset

byteToString, clear, convertFromUnsignedC, convertFromUnsignedC, convertToUnsignedC, convertToUnsignedC, getChunkSize, getCompression, getConvertByteToString, getData, getDimNames, getDims, getHeight, getMaxDims, getRank, getSelectedDims, getSelectedIndex, getSize, getStartDims, getStride, getWidth, isEnumConverted, isString, setConvertByteToString, setData, setEnumConverted, stringToByte, write

Methods inherited from class ncsa.hdf.object.HObject

equalsOID, getFID, getFile, getFileFormat, getFullName, getLinkTargetObjName, getName, getOID, getPath, setLinkTargetObjName, setPath, toString

Methods inherited from class java.lang.Object

equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Field Detail

public static final long serialVersionUID
Constructor Detail

public NC2Dataset(FileFormat fileFormat, ucar.nc2.Variable ncDataset, long[] oid)

Constructs an NC2Dataset object with a specific netCDF variable.

Parameters:
fileFormat - the netCDF file.
ncDataset - the netCDF variable.
oid - the unique identifier for this dataset.

Method Detail
public boolean hasAttribute()

Checks if the object has any attributes attached.

Specified by: hasAttribute in interface DataFormat
public Dataset copy(Group pgroup, java.lang.String dstName, long[] dims, java.lang.Object buff) throws java.lang.Exception

Description copied from class: Dataset

Creates a new dataset and writes the data buffer to the new dataset.

This function allows applications to create a new dataset for a given data buffer. For example, users can select a specific, interesting part of a large image and create a new image from that selection.

The new dataset retains the datatype and dataset creation properties of this dataset.

Specified by: copy in class Dataset

Parameters:
pgroup - the group to which the dataset is copied.
dstName - the name of the new dataset.
dims - the dimension sizes of the new dataset.
buff - the data values of the subset to be copied.

Throws: java.lang.Exception
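Below is a hedged sketch of the copy pattern just described: the currently selected subset of a source dataset is written into a new dataset under a destination group. The names srcDset, dstGroup and "subset" are placeholders, and the destination group must live in a file format that supports dataset creation.

```java
import ncsa.hdf.object.Dataset;
import ncsa.hdf.object.Group;

class CopySubsetSketch {
    // Copies the current selection of 'srcDset' into a new dataset named "subset"
    // under 'dstGroup'. Both objects are assumed to be initialized and valid.
    static Dataset copySelection(Dataset srcDset, Group dstGroup) throws Exception {
        long[] selected = srcDset.getSelectedDims(); // dimension sizes of the selection
        Object buff = srcDset.read();                // data values of the selected subset
        return srcDset.copy(dstGroup, "subset", selected, buff);
    }
}
```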
public byte[] readBytes() throws java.lang.Exception

Description copied from class: Dataset

readBytes() reads the raw data into an array of bytes instead of an array of the dataset's datatype. For example, for a one-dimensional 32-bit integer dataset of size 5, readBytes() returns a byte array of size 20 instead of an int array of size 5.

readBytes() can be used to copy data from one dataset to another efficiently, because the raw data is not converted to its native type; this saves memory space and CPU time.

Specified by: readBytes in class Dataset

Throws: java.lang.Exception
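A minimal sketch of reading raw bytes, assuming an already opened dataset; for a one-dimensional 32-bit integer dataset of size 5 the printed length would be 20.

```java
import ncsa.hdf.object.Dataset;

class RawReadSketch {
    // Reads the selected data as raw bytes without converting to the native type.
    static void printRawSize(Dataset dset) throws Exception {
        dset.init();                   // make sure rank, dims and selection are set
        byte[] raw = dset.readBytes(); // raw bytes of the selection, e.g. 4 bytes per int
        System.out.println("raw byte count: " + raw.length);
    }
}
```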
public java.lang.Object read() throws java.lang.Exception

Description copied from class: Dataset

read() reads the data from file into a memory buffer and returns the memory buffer. The dataset object does not hold the memory buffer. To store the memory buffer in the dataset object, one must call getData().

By default, the whole dataset is read into memory. Users can also select a subset to read. Subsetting is done in an implicit way.

How to Select a Subset

A selection is specified by three arrays: start, stride and count. The following example shows how to make a subset. In the example, the dataset is a 4-dimensional array of [200][100][50][10], i.e. dims[0]=200, dims[1]=100, dims[2]=50, dims[3]=10. We want to select every other data point in dims[1] and dims[2]:
    int rank = dataset.getRank();                  // number of dimensions of the dataset
    long[] dims = dataset.getDims();               // the dimension sizes of the dataset
    long[] selected = dataset.getSelectedDims();   // the selected size of the dataset
    long[] start = dataset.getStartDims();         // the offset of the selection
    long[] stride = dataset.getStride();           // the stride of the dataset
    int[] selectedIndex = dataset.getSelectedIndex(); // the selected dimensions for display

    // select dim1 and dim2 as 2D data for display, and slice through dim0
    selectedIndex[0] = 1;
    selectedIndex[1] = 2;
    selectedIndex[2] = 0;

    // reset the selection arrays
    for (int i = 0; i < rank; i++) {
        start[i] = 0;
        selected[i] = 1;
        stride[i] = 1;
    }

    // set stride to 2 on dim1 and dim2 so that every other data point is selected
    stride[1] = 2;
    stride[2] = 2;

    // set the selection size of dim1 and dim2
    selected[1] = dims[1] / stride[1];
    selected[2] = dims[2] / stride[2];

    // when dataset.getData() is called, the selection above will be used, since
    // the dimension arrays are passed by reference. Changes to these arrays
    // outside the dataset object directly change the values of these arrays
    // in the dataset object.
For ScalarDS, the memory data buffer is a one-dimensional array of byte, short, int, float, double or String, based on the datatype of the dataset.

For CompoundDS, the memory data object is a java.util.List object. Each element of the list is a data array that corresponds to a compound field.

For example, if compound dataset "comp" has the following nested structure and member datatypes:

    comp --> m01 (int)
    comp --> m02 (float)
    comp --> nest1 --> m11 (char)
    comp --> nest1 --> m12 (String)
    comp --> nest1 --> nest2 --> m21 (long)
    comp --> nest1 --> nest2 --> m22 (double)

then getData() returns a list of six arrays: {int[], float[], char[], String[], long[] and double[]}.
Specified by: read in class Dataset

Throws: java.lang.Exception

See Also: Dataset.getData()
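The following is a hedged sketch of handling the buffer returned by read() for a scalar dataset, following the type rules above; only a few of the possible primitive array types are checked.

```java
import ncsa.hdf.object.ScalarDS;

class ReadBufferSketch {
    // Reads a scalar dataset and dispatches on the runtime type of the memory buffer.
    static void printFirstValue(ScalarDS dset) throws Exception {
        dset.init();
        Object buf = dset.read(); // one-dimensional primitive or String array

        if (buf instanceof int[]) {
            System.out.println("first int: " + ((int[]) buf)[0]);
        } else if (buf instanceof float[]) {
            System.out.println("first float: " + ((float[]) buf)[0]);
        } else if (buf instanceof String[]) {
            System.out.println("first string: " + ((String[]) buf)[0]);
        } else {
            System.out.println("buffer type: " + buf.getClass().getName());
        }
    }
}
```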
public void write(java.lang.Object buf) throws java.lang.Exception

Description copied from class: Dataset

Writes a memory buffer to the dataset in file.

Specified by: write in class Dataset

Parameters:
buf - the data to write.

Throws: java.lang.Exception
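A hedged read-modify-write sketch of the Dataset.write(Object) call shape; whether the write actually succeeds depends on the underlying file format implementation (the netCDF module is mainly used for reading), and the dataset is assumed to hold 32-bit integers.

```java
import ncsa.hdf.object.Dataset;

class WriteBackSketch {
    // Reads an integer dataset, doubles the values, and writes the buffer back.
    static void scaleByTwo(Dataset dset) throws Exception {
        dset.init();
        int[] values = (int[]) dset.read(); // memory buffer of the selected data
        for (int i = 0; i < values.length; i++) {
            values[i] *= 2;
        }
        dset.write(values); // write the modified buffer back to the dataset in file
    }
}
```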
public java.util.List getMetadata() throws java.lang.Exception

Description copied from interface: DataFormat

Retrieves the metadata, such as attributes, from file. Metadata such as attributes are stored in a List.

Throws: java.lang.Exception
public void writeMetadata(java.lang.Object info) throws java.lang.Exception

Description copied from interface: DataFormat

Writes a specific piece of metadata (such as an attribute) into file. If an HDF(4&5) attribute exists in file, this method updates its value. If the attribute does not exist in file, it creates the attribute in file and attaches it to the object. It will fail to write a new attribute to the object if an attribute with the same name already exists. To update the value of an existing attribute in file, one needs to get the instance of the attribute by calling getMetadata(), change its values, and then use writeMetadata() to write the value back.

Parameters:
info - the metadata to write.

Throws: java.lang.Exception
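A hedged sketch of the update flow described above: fetch the attribute with getMetadata(), change its value, and write it back with writeMetadata(). The attribute name "units" and its String type are assumptions, and support for the write depends on the file format implementation.

```java
import java.util.List;

import ncsa.hdf.object.Attribute;
import ncsa.hdf.object.Dataset;

class UpdateAttributeSketch {
    // Updates the value of an existing String attribute named "units", if present.
    static void setUnits(Dataset dset, String newUnits) throws Exception {
        List metadata = dset.getMetadata(); // attributes attached to this object
        for (Object item : metadata) {
            if (item instanceof Attribute) {
                Attribute attr = (Attribute) item;
                if ("units".equals(attr.getName())) {
                    attr.setValue(new String[] { newUnits }); // new attribute value
                    dset.writeMetadata(attr);                 // write the change to file
                    return;
                }
            }
        }
    }
}
```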
public void removeMetadata(java.lang.Object info) throws java.lang.Exception

Description copied from interface: DataFormat

Deletes an existing piece of metadata from this data object.

Parameters:
info - the metadata to delete.

Throws: java.lang.Exception
public int open()

Description copied from class: HObject

Opens an existing object, such as a dataset or group, for access.

Specified by: open in class HObject

See Also: HObject.close(int)
public void close(int did)

Description copied from class: HObject

Closes access to the object. Sub-classes must implement this method because different data objects have their own ways of closing their data resources.

For example, H5Group.close() calls the ncsa.hdf.hdf5lib.H5.H5Gclose() method and closes the group resource specified by the group id.

Specified by: close in class HObject

Parameters:
did - the object identifier.

public void init()

Retrieves and initializes dimensions and member information.

Specified by: init in class Dataset
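A hedged sketch of the usual open/init/read/close life cycle around the identifier returned by open(); error handling is reduced to a try/finally for brevity.

```java
import ncsa.hdf.object.Dataset;

class OpenCloseSketch {
    // Pairs open() with close(did) around data access.
    static Object readWithExplicitOpen(Dataset dset) throws Exception {
        int did = dset.open();  // obtain the low-level object identifier
        try {
            dset.init();        // initialize rank, dims and selection
            return dset.read(); // read the selected data into memory
        } finally {
            dset.close(did);    // always release the object resource
        }
    }
}
```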
public byte[][] getPalette()

Description copied from class: ScalarDS

Returns the palette of this scalar dataset, or null if the palette does not exist.

A scalar dataset can be displayed as spreadsheet data or as an image. When a scalar dataset is chosen to be displayed as an image, the palette or color table may be needed to translate a pixel value into color components (for example, red, green, and blue). Some scalar datasets have no palette, and some datasets have one or more palettes. If an associated palette exists but is not loaded, this method retrieves the palette from the file and returns it. If the palette is already loaded, it returns the palette. It returns null if there is no palette associated with the dataset.

The current implementation only supports the palette model of indexed RGB with 256 colors. Other models, such as YUV, CMY, CMYK, YCbCr and HSV, will be supported in the future.

The palette values are stored in a two-dimensional byte array and arranged by color components of red, green and blue. palette[][] = byte[3][256], where palette[0][], palette[1][] and palette[2][] are the red, green and blue components, respectively.

Sub-classes have to implement this method. HDF4 and HDF5 images use different libraries to retrieve the associated palette.

Specified by: getPalette in class ScalarDS
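A hedged sketch of translating indexed pixel values to RGB components using the byte[3][256] palette layout described above; the dataset is assumed to be an 8-bit indexed image.

```java
import ncsa.hdf.object.ScalarDS;

class PaletteSketch {
    // Converts the first pixel of an 8-bit indexed image dataset to its RGB color.
    static void printFirstPixelColor(ScalarDS image) throws Exception {
        image.init();
        byte[] pixels = (byte[]) image.read(); // 8-bit pixel indices
        byte[][] palette = image.getPalette(); // byte[3][256]: red, green, blue rows
        if (palette == null || pixels.length == 0) {
            return; // no palette associated with the dataset, or nothing to convert
        }
        int index = pixels[0] & 0xFF; // treat the palette index as unsigned
        int r = palette[0][index] & 0xFF;
        int g = palette[1][index] & 0xFF;
        int b = palette[2][index] & 0xFF;
        System.out.println("first pixel RGB = (" + r + ", " + g + ", " + b + ")");
    }
}
```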
public byte[][] readPalette(int idx)

Reads a specific image palette from file.

Specified by: readPalette in class ScalarDS

Parameters:
idx - the index of the palette to read.
public static NC2Dataset create(java.lang.String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, java.lang.Object data) throws java.lang.Exception

Creates a new dataset.

Parameters:
name - the name of the dataset to create.
pgroup - the parent group of the new dataset.
type - the datatype of the dataset.
dims - the dimension sizes of the dataset.
maxdims - the maximum dimension sizes of the dataset.
chunks - the chunk sizes of the dataset.
gzip - the level of gzip compression.
data - the array of data values.

Throws: java.lang.Exception
public byte[] getPaletteRefs()

Returns the byte array of palette references.

Specified by: getPaletteRefs in class ScalarDS
public Datatype getDatatype()

Description copied from class: Dataset

Returns the datatype object of the dataset.

Specified by: getDatatype in class Dataset
public void setName(java.lang.String newName) throws java.lang.Exception

Sets the name of the data object.

Overrides: setName in class HObject

Parameters:
newName - the new name of the object.

Throws: java.lang.Exception
public java.util.List getMetadata(int... attrPropList) throws java.lang.Exception

Throws: java.lang.Exception