How does a distributed file system work?
One part of implementing a DFS is providing access control and storage management to client systems in a centralized way. The servers involved have to be able to serve data to many clients quickly and reliably.
Transparency is also one of the core principles of a DFS. The system should be designed so that files appear to be accessed, stored, and managed on the local client machine, while the actual storage and processing take place on the servers.
Transparency brings convenience to the end user on a client machine, while the network file system manages all the underlying processes. Compared with the alternatives, a DFS offers efficient and well-managed data and storage sharing on a network. Another option for network-based computing is a shared disk file system, but a DFS is fault-tolerant: data remains accessible even if some of the network nodes are offline.
A DFS also makes it possible to restrict access to the file system using access lists or capabilities on the servers, the clients, or both, depending on how the protocol is designed. And although the server provides a single central point of access for data requests, the system as a whole is considered fault-tolerant, as mentioned above, because it continues to function even if some of the nodes are taken offline.
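As a rough illustration of the access-list idea, the sketch below shows a server-side check that consults a per-file access list before serving a read. The ACCESS_LISTS table, the read_file function, and the example principals are invented for this sketch and do not correspond to any particular DFS protocol.

```python
# Illustrative server-side access-list check; not an actual DFS protocol.
# ACCESS_LISTS, read_file, and the principals below are assumptions for this sketch.

ACCESS_LISTS = {
    # exported path on the server -> principals allowed to read it
    "/exports/finance/q3-report.xlsx": {"alice", "finance-team"},
    "/exports/public/readme.txt": {"*"},  # "*" means anyone, in this sketch
}


def read_file(path: str, principal: str) -> bytes:
    """Serve a read only if the requesting principal is on the file's access list."""
    allowed = ACCESS_LISTS.get(path, set())
    if "*" not in allowed and principal not in allowed:
        raise PermissionError(f"{principal} is not allowed to read {path}")
    # On a real file server this would read the exported file and stream it back.
    with open(path, "rb") as handle:
        return handle.read()


# read_file("/exports/public/readme.txt", "bob")  # would be allowed by the "*" entry
```

In a real DFS, the same kind of check can run on the client, on the server, or on both, depending on how the protocol distributes trust.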
That fault tolerance dovetails with one of the reasons DFS was developed in the first place: the system keeps its integrity even if a few workstations are moved around. And although a DFS server is prized for being a single central point of access, a second server may also be in play; that does not mean the single central access point goes away.
The second server acts as a backup: because businesses invest in a single central DFS server, they worry that the server could be compromised or fail. As networks have evolved, technologies such as DFS have been developed to bring convenience and efficiency to sharing resources and files on the network.
What is DFS?
DFS stands for distributed file system, a file system that stores data on a server while letting clients access and process that data as if it were stored on a local computer. Through DFS, users on a network can easily share information and files in a controlled and authorized manner.
The server allows client users to share files and store data as if they were storing the information locally. However, the server has full control over the data and grants access to the clients. DFS supports stand-alone namespaces, which have a single host server, and domain-based namespaces, which have multiple host servers and offer high availability. Each DFS tree structure has one or more root targets; a root target is a host server that runs the DFS service. Each DFS link points to one or more shared folders on the network.
In earlier documentation, DFS links were called junction points. The shared folders that a link points to are called targets, and the client accesses the first available target in the set.
This helps distribute client requests across the possible targets and keeps data accessible for users even when some servers fail. You can also publish in a DFS namespace any non-Microsoft share for which client redirectors are available. However, unlike shares published on a server running Windows Server, such shares cannot host a DFS root or provide referrals to other DFS targets.
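To make the relationship between roots, links, and targets concrete, here is a minimal Python sketch of a namespace that resolves a link to the first available folder target. The class names, server names, and the online flag are assumptions made for this illustration; they are not part of the actual Windows DFS service or any real API.

```python
# A minimal sketch (not the Windows DFS implementation) of how a namespace maps
# links to folder targets and how a client falls back to the next target.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FolderTarget:
    """A shared folder (UNC path) that a DFS link can point to."""
    unc_path: str          # e.g. r"\\FileServer01\Reports"
    online: bool = True    # stand-in for a real reachability check


@dataclass
class DfsLink:
    """A DFS link (formerly called a junction point) with one or more targets."""
    name: str
    targets: List[FolderTarget] = field(default_factory=list)


@dataclass
class DfsNamespace:
    r"""A DFS root. A stand-alone root lives on one host server (\\Server\Root);
    a domain-based root can have several (\\Domain\Root)."""
    root_path: str
    links: Dict[str, DfsLink] = field(default_factory=dict)

    def resolve(self, link_name: str) -> str:
        """Return the first available target for a link, mimicking a client
        walking the referral list until it reaches a responsive server."""
        for target in self.links[link_name].targets:
            if target.online:
                return target.unc_path
        raise ConnectionError(f"no reachable target for link '{link_name}'")


# Usage: two targets back the same link, so taking one server offline only
# changes which server the client is referred to.
ns = DfsNamespace(root_path=r"\\corp.example.com\Public")
ns.links["Reports"] = DfsLink(
    name="Reports",
    targets=[
        FolderTarget(r"\\FileServer01\Reports", online=False),  # this server is down
        FolderTarget(r"\\FileServer02\Reports"),                 # failover target
    ],
)
print(ns.resolve("Reports"))  # prints \\FileServer02\Reports
```

Because the Reports link has two targets in this sketch, the failed first server only changes which server the client is referred to, which is the continued accessibility described above.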