An HDFS (Hadoop Distributed File System) block is the basic unit of storage in HDFS. When a file is written to HDFS, it is split into fixed-size blocks, 128 MB by default (configurable through the dfs.blocksize property). Each block is replicated to multiple nodes in the Hadoop cluster, three copies by default, so the data remains available even if a node or disk fails. This design lets HDFS handle very large files efficiently and scale storage and processing of big data across a distributed environment.
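You can inspect a stored file's block size, replication factor, and block placement through the Hadoop FileSystem API. Here is a minimal Java sketch, assuming a reachable HDFS cluster, the Hadoop client libraries on the classpath, and a hypothetical file path /data/example.txt:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInfo {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path; replace with a file that exists in your cluster.
        Path file = new Path("/data/example.txt");
        FileStatus status = fs.getFileStatus(file);

        // Block size used for this file (128 MB unless configured otherwise).
        System.out.println("Block size: " + status.getBlockSize() + " bytes");
        System.out.println("Replication: " + status.getReplication());

        // One BlockLocation per block; each lists the DataNodes
        // that hold a replica of that block.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation b : blocks) {
            System.out.println("offset=" + b.getOffset()
                    + " length=" + b.getLength()
                    + " hosts=" + String.join(",", b.getHosts()));
        }
        fs.close();
    }
}

Running this against a file larger than one block prints one line per block, showing how the file's data is physically spread across different DataNodes in the cluster.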
B. It is the physical representation of data.