public interface FsVolumeSpi extends org.apache.hadoop.hdfs.server.datanode.checker.Checkable<FsVolumeSpi.VolumeCheckContext,org.apache.hadoop.hdfs.server.datanode.checker.VolumeCheckResult>
| Modifier and Type | Interface and Description |
|---|---|
| static interface | FsVolumeSpi.BlockIterator: BlockIterator will return ExtendedBlock entries from a block pool in this volume. |
| static class | FsVolumeSpi.ScanInfo: Tracks the files and other information related to a block on the disk. A missing file is indicated by setting the corresponding member to null. |
| static class | FsVolumeSpi.VolumeCheckContext: Context for the Checkable.check(K) call. |
| Modifier and Type | Method and Description |
|---|---|
| void | compileReport(String bpid, Collection<FsVolumeSpi.ScanInfo> report, DirectoryScanner.ReportCompiler reportCompiler): Compile a list of FsVolumeSpi.ScanInfo for the blocks in the block pool with id bpid. |
| long | getAvailable() |
| URI | getBaseURI() |
| String[] | getBlockPoolList() |
| org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi | getDataset(): Get the FsDatasetSpi which this volume is a part of. |
| org.apache.hadoop.hdfs.server.datanode.FileIoProvider | getFileIoProvider() |
| org.apache.hadoop.hdfs.server.datanode.fsdataset.DataNodeVolumeMetrics | getMetrics() |
| String | getStorageID() |
| org.apache.hadoop.hdfs.server.datanode.StorageLocation | getStorageLocation() |
| org.apache.hadoop.fs.StorageType | getStorageType() |
| org.apache.hadoop.fs.DF | getUsageStats(org.apache.hadoop.conf.Configuration conf) |
| boolean | isTransientStorage(): Returns true if the volume is NOT backed by persistent storage. |
| FsVolumeSpi.BlockIterator | loadBlockIterator(String bpid, String name): Load a saved block iterator. |
| byte[] | loadLastPartialChunkChecksum(File blockFile, File metaFile): Load the last partial chunk checksum from the checksum file. |
| FsVolumeSpi.BlockIterator | newBlockIterator(String bpid, String name): Create a new block iterator. |
| FsVolumeReference | obtainReference(): Obtain a reference object that increases the volume's reference count by 1. |
| void | releaseLockedMemory(long bytesToRelease): Release reserved memory for an RBW block written to transient storage, i.e. RAM. |
| void | releaseReservedSpace(long bytesToRelease): Release disk space previously reserved for a block opened for write. |
| void | reserveSpaceForReplica(long bytesToReserve): Reserve disk space for a block (RBW or Re-replicating) so a writer does not run out of space before the block is full. |

FsVolumeReference obtainReference() throws ClosedChannelException

Obtain a reference object that increases the volume's reference count by 1. It is the caller's responsibility to close the returned FsVolumeReference to decrease the reference count on the volume.

Throws:
ClosedChannelException
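Since FsVolumeReference is Closeable, callers typically pair obtainReference() with try-with-resources so the count is always decremented. A minimal sketch, assuming a volume obtained elsewhere; the class and method names below (VolumeReferenceSketch, withVolume) are illustrative only, not part of the API:

```java
import java.io.IOException;
import java.nio.channels.ClosedChannelException;

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

public class VolumeReferenceSketch {
  // Hold a reference for the duration of the work so the volume cannot be
  // removed out from under us; closing the reference decrements the count.
  static void withVolume(FsVolumeSpi volume) {
    try (FsVolumeReference ref = volume.obtainReference()) {
      System.out.println("working against volume " + volume.getStorageID());
    } catch (ClosedChannelException e) {
      // The volume has already been closed or removed; skip it.
    } catch (IOException e) {
      // FsVolumeReference.close() is declared to throw IOException.
    }
  }
}
```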
String getStorageID()

String[] getBlockPoolList()
long getAvailable() throws IOException

Throws:
IOException

URI getBaseURI()
org.apache.hadoop.fs.DF getUsageStats(org.apache.hadoop.conf.Configuration conf)

org.apache.hadoop.hdfs.server.datanode.StorageLocation getStorageLocation()

Returns:
the StorageLocation to the volume

org.apache.hadoop.fs.StorageType getStorageType()

Returns:
the StorageType of the volume

boolean isTransientStorage()
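A hedged sketch of how the accessors above might be combined to summarize a volume; VolumeInfoSketch and describe are illustrative names, not part of the API:

```java
import java.io.IOException;

import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

public class VolumeInfoSketch {
  // Print a one-line summary of a volume's identity, type and free space.
  static void describe(FsVolumeSpi volume) throws IOException {
    StorageType type = volume.getStorageType();
    boolean transientStorage = volume.isTransientStorage();
    long availableBytes = volume.getAvailable(); // may throw IOException

    System.out.printf("volume %s at %s: type=%s, transient=%b, available=%d bytes%n",
        volume.getStorageID(), volume.getBaseURI(), type, transientStorage, availableBytes);
  }
}
```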
void reserveSpaceForReplica(long bytesToReserve)

void releaseReservedSpace(long bytesToRelease)

void releaseLockedMemory(long bytesToRelease)
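Reservation and release are paired: space is reserved before writing a replica, and any unused portion is released afterwards. A rough sketch of that pattern, assuming the caller tracks the byte counts itself; SpaceReservationSketch and writeReplica are placeholders, not real Hadoop code:

```java
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

public class SpaceReservationSketch {
  // Reserve space before writing an RBW replica, then release whatever part
  // of the reservation was not actually consumed by the write.
  static void writeWithReservation(FsVolumeSpi volume, long expectedBlockBytes) {
    volume.reserveSpaceForReplica(expectedBlockBytes);
    long bytesWritten = 0;
    try {
      bytesWritten = writeReplica(volume); // placeholder for the real replica write
    } finally {
      volume.releaseReservedSpace(expectedBlockBytes - bytesWritten);
    }
  }

  private static long writeReplica(FsVolumeSpi volume) {
    return 0L; // placeholder
  }
}
```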
FsVolumeSpi.BlockIterator newBlockIterator(String bpid, String name)

Create a new block iterator.

Parameters:
bpid - The block pool id to iterate over.
name - The name of the block iterator to create.

FsVolumeSpi.BlockIterator loadBlockIterator(String bpid, String name) throws IOException

Load a saved block iterator.

Parameters:
bpid - The block pool id to iterate over.
name - The name of the block iterator to load.
Throws:
IOException - If there was an IO error loading the saved block iterator.
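A sketch of walking one block pool with a named iterator. The BlockIterator methods used here (nextBlock(), atEnd(), save(), close()) are not listed on this page, so treat them as assumptions about the nested interface; the iterator name "scanner" is arbitrary and is what a later loadBlockIterator(bpid, "scanner") call would resume from:

```java
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

public class BlockIterationSketch {
  // Walk every block of one block pool on this volume, then persist the
  // cursor so a later run can resume via loadBlockIterator(bpid, "scanner").
  static void scanBlockPool(FsVolumeSpi volume, String bpid) throws IOException {
    try (FsVolumeSpi.BlockIterator iter = volume.newBlockIterator(bpid, "scanner")) {
      while (!iter.atEnd()) {
        ExtendedBlock block = iter.nextBlock(); // may return null for stale entries
        if (block != null) {
          System.out.println("found block " + block.getBlockId());
        }
      }
      iter.save(); // assumed: writes the iterator position to the volume
    }
  }
}
```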
org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi getDataset()

Get the FsDatasetSpi which this volume is a part of.

byte[] loadLastPartialChunkChecksum(File blockFile, File metaFile) throws IOException

Load the last partial chunk checksum from the checksum file.

Parameters:
blockFile -
metaFile -
Throws:
IOException

void compileReport(String bpid, Collection<FsVolumeSpi.ScanInfo> report, DirectoryScanner.ReportCompiler reportCompiler) throws InterruptedException, IOException

Compile a list of FsVolumeSpi.ScanInfo for the blocks in the block pool with id bpid.

Parameters:
bpid - block pool id to scan
report - the list onto which block reports are placed
reportCompiler -
Throws:
InterruptedException
IOException
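compileReport is normally driven by the DirectoryScanner. The sketch below only illustrates the call shape and takes an already-built ReportCompiler as a parameter, since constructing one is outside this interface; ScanReportSketch and scan are illustrative names:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;

import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

public class ScanReportSketch {
  // Collect ScanInfo entries for one block pool on one volume; compileReport
  // appends its findings onto the collection we pass in.
  static Collection<FsVolumeSpi.ScanInfo> scan(FsVolumeSpi volume, String bpid,
      DirectoryScanner.ReportCompiler reportCompiler)
      throws IOException, InterruptedException {
    Collection<FsVolumeSpi.ScanInfo> report = new ArrayList<>();
    volume.compileReport(bpid, report, reportCompiler);
    return report;
  }
}
```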
org.apache.hadoop.hdfs.server.datanode.FileIoProvider getFileIoProvider()

org.apache.hadoop.hdfs.server.datanode.fsdataset.DataNodeVolumeMetrics getMetrics()