docker build fastdfs

Introduction

FastDFS is an open-source, lightweight distributed file system. It manages files and provides file storage, file synchronization, and file access (upload and download), solving the problems of large-capacity storage and load balancing. It is especially suitable for online services built around files, such as photo album and video websites. FastDFS is tailor-made for the Internet: it takes full account of mechanisms such as redundant backup, load balancing, and linear scaling, and emphasizes high availability and high performance. With FastDFS it is easy to build a high-performance file server cluster that provides services such as file upload and download.

Advantages

i. Reduces system complexity and improves processing efficiency

ii. Supports online expansion, enhancing the scalability of the system

iii. Implements software RAID, enhancing concurrent processing capability and data fault-tolerance and recovery

iv. Supports master/slave files and custom extensions

v. Active/standby tracker services enhance the availability of the system

Disadvantages

i. Does not support resumable transfer, so it is not suitable for large file storage

ii. Does not support the general POSIX (Portable Operating System Interface) access interface, so generality is low

iii. File synchronization across public networks has high latency, and a suitable fault-tolerance strategy must be applied

iv. Downloading through the API has a single-point performance bottleneck

v. Does not provide access control out of the box; you need to implement it yourself

Comparison of major file systems

FTP: passwords can be configured, giving reasonable security (at the cost of more cumbersome access); supports resumable transfer; cannot be scaled out; no disaster recovery

HDFS: suited to large-file storage and cloud computing; not efficient for simple small-file storage

TFS: from Taobao; suited to massive numbers of files under 1 MB; relatively troublesome to use, with relatively little documentation

GlusterFS: powerful and flexible; suited to large files; supports a variety of data types; troublesome to use, with high hardware requirements (at least two nodes) and little Chinese documentation

MogileFS: similar in architecture to FastDFS; FastDFS drew on MogileFS and is more efficient

FastDFS: from Taobao; suited to large numbers of small files (recommended range: 4 KB < file_size < 500 MB); relatively simple to use. FastDFS is recommended for small-file storage.

fastdfs roles

A FastDFS deployment has three roles: the tracker server, the storage server, and the client.

a) Tracker server: mainly responsible for scheduling, with a load-balancing role. It records the state of all storage groups and storage servers in the cluster in memory and acts as the hub between clients and data servers. Compared with the master in GFS it is more streamlined: it does not record file index information, so it uses very little memory

b) Storage server: the storage node (also called the data server). Files and file attributes (metadata) are all saved on storage servers, which call into and manage files directly through the OS file system

c) Client: the initiator of business requests; it interacts with the tracker server or storage nodes over TCP/IP through a proprietary interface. FastDFS exposes basic file access operations such as upload, download, append, and delete to users in the form of a client library

docker+nginx+fastdfs stand-alone mode

Environment and software version

System: CentOS 7.7

nginx: nginx/1.12.2

Host: 192.168.0.191

Docker installation and configuration

Docker installation and configuration are not covered here; just remember to configure the Alibaba Cloud registry mirror (accelerator) and you are good to go.

Nginx installation and configuration

  • Pull nginx image

    docker pull nginx

  • View mirror

    docker images

  • Run nginx

    docker run --name nginx -d -p 80:80 nginx

    Note: at this point none of the nginx container's directories are mounted, so every future configuration change would require entering the container, which is extremely inconvenient. After starting the container as above, use the command

    docker exec -it nginx nginx -t

    to check the directory where the nginx configuration file is located; the configuration file is /etc/nginx/nginx.conf

  • Copy the configuration file from the container to the host so that the directory can be mounted when the container is run (docker does not seem to allow mounting a host file directly over the nginx configuration file; copy the file out of the container first, otherwise the run will fail)

    docker cp -a nginx:/etc/nginx /home/nginx/conf/

    After copying, stop and delete the running nginx container, and then start it in the following way

    docker stop nginx
    docker rm nginx

  • Run nginx again

    docker run --name nginx -d -p 80:80 --restart always -v /home/nginx/conf/nginx/:/etc/nginx/ -v /home/nginx/log/:/var/log/nginx/ nginx

    1. -v: mounts a host directory into the container so that data stays in sync and the configuration can be modified from the host; changes in the mounted nginx directories are reflected inside the container

    2. -p: maps the port

    3. --restart always: start the container on boot (optional)

  • access

    http://192.168.0.191/

    Because port 80 was specified, no port needs to be written in the URL here.

    With that, the Nginx installation is complete; the configuration specific to fastdfs is covered below.

fastdfs installation and configuration

  • Pull fastdfs image

    docker pull delron/fastdfs

    This pulls the latest version.

  • View mirror

    docker images

  • Use the docker image to build the tracker container (the tracking server, responsible for scheduling)

    docker run -dti --network=host --name tracker -v /var/fdfs/tracker:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs tracker

    -v: mounts a host directory into the container

  • Use the docker image to build the storage container (the storage server, providing capacity and backup)

    docker run -dti --network=host --name storage -e TRACKER_SERVER=192.168.0.191:22122 -v /var/fdfs/storage:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs storage

    -v: mounts a host directory into the container

    TRACKER_SERVER=<local ip>:22122; do not use 127.0.0.1 as the local IP address

  • Enter the storage container and configure the http access port in the storage configuration file, storage.conf in the /etc/fdfs/ directory

    docker exec -it storage bash

    The default port is 8888 and does not need to be modified; it is left unchanged here.

  • Configure nginx: enter the storage container and modify the nginx.conf file in the /usr/local/nginx/conf/ directory

    docker exec -it storage bash
    cd /usr/local/nginx/conf/
    vi nginx.conf

    The default port is 8888; the default configuration is used here without modification.

    Note: if the storage.conf port was changed in the previous step, the nginx configuration here must be changed to match.

  • Test file upload

    Upload a file into the fastdfs file system to verify the setup

    1. First copy a photo into the /var/fdfs/storage/ directory on the host

    2. Enter the storage container and execute the following command

      /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /var/fdfs/1.jpg

      The picture is now stored in the file system, and its file ID (url) is returned: group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg

    3. Open http://192.168.0.191:8888/group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg in a browser to retrieve the picture

      From the returned ID group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg we can tell that the picture is stored on the server under the /var/fdfs/storage/data/00/00/ directory
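    The mapping from the returned file ID to the on-disk location can be sketched in shell. This is a rough sketch, assuming the default single store path (store_path0), where M00 maps to store_path0's data directory and the next two segments are hashed subdirectories; the host-side prefix /var/fdfs/storage comes from the -v mount used above.

    ```shell
    # Sketch (assumes default store_path0 layout and the host mount above):
    # map a FastDFS file ID to the file's location on the host disk.
    file_id="group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg"
    rel="${file_id#*/}"                          # strip the group name -> M00/00/00/...
    path="/var/fdfs/storage/data/${rel#M00/}"    # M00 maps to the data dir of store_path0
    echo "$path"
    ```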

  • Boot up the container

    docker update --restart=always tracker
    docker update --restart=always storage

  • Common problems

    1. Storage cannot be started

    Delete the fdfs_storage.pid file in the /var/fdfs/storage/data/ directory, then run storage again
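    For convenience, the standalone tracker + storage pair above can also be written as a docker-compose file. The following is an untested sketch that simply mirrors the docker run flags used in this section; the TRACKER_SERVER value must be your own host IP, not 127.0.0.1.

    ```
    version: "3"
    services:
      tracker:
        image: delron/fastdfs
        command: tracker
        network_mode: host          # mirrors --network=host
        restart: always
        volumes:
          - /var/fdfs/tracker:/var/fdfs
          - /etc/localtime:/etc/localtime
      storage:
        image: delron/fastdfs
        command: storage
        network_mode: host
        restart: always
        environment:
          - TRACKER_SERVER=192.168.0.191:22122   # your host IP, not 127.0.0.1
        volumes:
          - /var/fdfs/storage:/var/fdfs
          - /etc/localtime:/etc/localtime
        depends_on:
          - tracker
    ```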

springboot integrated fastdfs

  • pom.xml

    <!-- https://mvnrepository.com/artifact/com.github.tobato/fastdfs-client -->
    <dependency>
        <groupId>com.github.tobato</groupId>
        <artifactId>fastdfs-client</artifactId>
        <version>1.26.7</version>
    </dependency>
  • yml configuration

    # fastdfs service configuration
    fdfs:
      so-timeout: 1500
      connect-timeout: 600
      # tracker-list: 192.168.0.191:22122                      # single-node connection configuration
      tracker-list: 192.168.0.192:22122,192.168.0.193:22122    # cluster connection configuration
      # visit-host: 192.168.0.191:8888                         # access host configuration (direct storage access)
      visit-host: 192.168.0.191                                # access host configuration
  • java code

    package com.xy.controller.fastdfs;

    import com.xy.entity.FastdfsFile;
    import com.xy.service.IFastdfsFileService;
    import com.github.tobato.fastdfs.domain.fdfs.StorePath;
    import com.github.tobato.fastdfs.exception.FdfsUnsupportStorePathException;
    import com.github.tobato.fastdfs.service.FastFileStorageClient;
    import io.swagger.annotations.ApiOperation;
    import io.swagger.annotations.ApiParam;
    import org.apache.commons.io.FilenameUtils;
    import org.apache.commons.lang3.StringUtils;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.multipart.MultipartFile;

    import java.io.IOException;

    import static com.xy.utils.common.GenerateUniqueCode.getTimeAddRandom7;
    import static com.xy.utils.date.DateUtils.getCurDateTimeFull;

    /**
     * @Description fastdfs file operations
     * @Author xy
     * @Date 2019/12/6 17:25
     */
    @RestController
    @RequestMapping(value = "/fastdfs")
    public class FastdfsFileTestController {

        @Autowired
        private FastFileStorageClient storageClient;

        @Autowired
        IFastdfsFileService fastdfsFileService;

        @Value("${fdfs.visit-host}")
        private String hostIP;

        /**
         * @param multipartFile file
         * @return java.lang.String
         * @Description fastdfs file upload
         * @Author xy
         * @Date 2019/12/6 17:49
         **/
        @ApiOperation(value = "File upload")
        @PostMapping(path = "/fileUpload", name = "File upload")
        public String uploadFile(
                @ApiParam(value = "file") @RequestParam(name = "file", required = true) MultipartFile multipartFile
        ) throws IOException {
            String fullPath = "";
            StringBuffer stringBuffer = new StringBuffer();
            try {
                // Upload the file
                StorePath storePath = storageClient.uploadFile(
                        multipartFile.getInputStream(),
                        multipartFile.getSize(),
                        FilenameUtils.getExtension(multipartFile.getOriginalFilename()),
                        null);
                String filePath = storePath.getFullPath();
                // Insert a record into the database
                stringBuffer.append(hostIP).append("/").append(filePath);
                fullPath = new String(stringBuffer);
                FastdfsFile fastdfsFile = new FastdfsFile()
                        .setFdfsFileName(multipartFile.getOriginalFilename())
                        .setFdfsFileUrl(filePath)
                        .setFdfsFileFullUrl(fullPath)
                        .setFdfsCode(getTimeAddRandom7())
                        .setCreeTime(getCurDateTimeFull())
                        .setCreeUser("xy");
                boolean bool = fastdfsFileService.save(fastdfsFile);
                System.out.println(bool);
                System.out.println(filePath);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return fullPath;
        }

        /**
         * @param fileUrl file address
         * @return void
         * @Description delete file
         * @Author xy
         * @Date 2019/12/9 9:08
         **/
        @ApiOperation(value = "Delete file")
        @PostMapping(path = "/deleteFile", name = "Delete file")
        public String deleteFile(
                @ApiParam(value = "File address") @RequestParam(name = "fileUrl", required = true) String fileUrl
        ) {
            if (StringUtils.isEmpty(fileUrl)) {
                return "Parameter cannot be empty";
            }
            try {
                StorePath storePath = StorePath.parseFromUrl(fileUrl);
                storageClient.deleteFile(storePath.getGroup(), storePath.getPath());
            } catch (FdfsUnsupportStorePathException e) {
                e.printStackTrace();
            }
            return "Operation successful";
        }
    }
  • pay attention

    The configuration file settings above correspond to the fastdfs services; note that the two ports are different

    The service connection port is 22122, the TRACKER_SERVER=192.168.0.191:22122 specified when starting storage

    The access port is 8888, the default nginx port configured above
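    To make the port relationship concrete, here is a small shell sketch of the URL the upload handler returns: it is simply fdfs.visit-host + "/" + the store path returned by fastdfs. With the nginx proxy on port 80, visit-host needs no port; when hitting a storage node directly it must carry :8888.

    ```shell
    # Sketch: reproduce the controller's URL assembly (visit-host + "/" + store path)
    visit_host="192.168.0.191"    # behind the nginx proxy on port 80; use 192.168.0.191:8888 without it
    file_path="group1/M00/00/00/wKgAv13qDs-AfJN6ABCG3sAMTlE315.jpg"
    full_url="${visit_host}/${file_path}"
    echo "$full_url"
    ```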

docker+nginx+fastdfs cluster

Environment

System: Same as above

nginx: Same as above

fastdfs: Same as above

Host:

a) 192.168.0.191: nginx

b) 192.168.0.192: tracker1, storage1

c) 192.168.0.193: tracker2, storage2

fastdfs cluster construction

The following articles were referenced during the build:

[Build a fastdfs cluster under docker - CSDN blog]( blog.csdn.net/weixin_4024... )

[Deploy a fastdfs cluster with docker - CSDN blog]( blog.csdn.net/zhanngle/ar... )

  • Pull the fastdfs image on both hosts, 192.168.0.192 and 192.168.0.193

    docker pull delron/fastdfs

  • Start the tracker on both hosts

    1. 192.168.0.192:

      docker run -dti --network=host --restart always --name tracker -v /var/fdfs/tracker:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs tracker

      -v: mounts a host directory into the container

      --restart always: start on boot

    2. 192.168.0.193:

      docker run -dti --network=host --restart always --name tracker -v /var/fdfs/tracker:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs tracker
  • Start storage on both machines

    1. 192.168.0.192:

      docker run -dti --network=host --restart always --name storage -e TRACKER_SERVER=192.168.0.192:22122 -v /var/fdfs/storage:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs storage

    2. 192.168.0.193:

      docker run -dti --network=host --restart always --name storage -e TRACKER_SERVER=192.168.0.193:22122 -v /var/fdfs/storage:/var/fdfs -v /etc/localtime:/etc/localtime delron/fastdfs storage

    3. Enter the storage container on each of the two machines

      docker exec -it storage bash

      Go to the /etc/fdfs/ directory and modify the configuration files there

      In all three files that contain a tracker_server entry, add the tracker_server lines for both machines (note that both must be configured):

      tracker_server=192.168.0.192:22122

      tracker_server=192.168.0.193:22122

  • Finally, modify the nginx.conf configuration file in the storage container on both machines

    1. Go to the /usr/local/nginx/conf directory and check that the group is group1 (this build did not need to change it). If the port was changed, it must also be changed here; the default was used above, so no change is needed here.

  • After configuration, verify the cluster

    1. Restart storage

      fdfs_storaged /etc/fdfs/storage.conf restart

    2. Check the cluster state

      fdfs_monitor /etc/fdfs/storage.conf

      The output includes, among other fields:

      Group count: <number of groups>

      Storage server count: <number of storage servers>

      Active server count: <number of active servers>

      ...
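    When scripting a health check, the monitor output can be filtered with standard tools. The sample output below is invented for illustration; only the ACTIVE state keyword comes from real fdfs_monitor output.

    ```shell
    # Illustrative only: count ACTIVE storage nodes in (sample) monitor output.
    # In practice: fdfs_monitor /etc/fdfs/storage.conf | grep -c ACTIVE
    sample_output="Storage 1: 192.168.0.192 ACTIVE
    Storage 2: 192.168.0.193 ACTIVE"
    active_count=$(printf '%s\n' "$sample_output" | grep -c ACTIVE)
    echo "$active_count"
    ```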

  • Upload a file through a storage node to test synchronization

    1. /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /var/fdfs/3.jpg

      Because of the -v /var/fdfs/storage:/var/fdfs mount, first place 3.jpg in the host's /var/fdfs/storage/ directory; it then appears inside the container under /var/fdfs/

    2. The uploaded file should be accessible from both nodes:

      http://192.168.0.192:8888/group1/M00/00/00/wKgAwV3wkkuAfs7yABWOZCYlEqY065.jpg

      http://192.168.0.193:8888/group1/M00/00/00/wKgAwV3wkkuAfs7yABWOZCYlEqY065.jpg

  • If the storage on 192 is stopped, the file can still be accessed through 193

    Stop 192's storage and the file remains available via 193

    After 192 is brought back up, the data synchronizes again

Nginx load balancing

The fastdfs cluster is reached through two different IPs. To give clients a single entry point, the nginx on 191 is used as a reverse proxy, so that all access goes through 191.

On 191, cd /home/nginx/conf/nginx/ and edit the mounted nginx.conf.

Add the following configuration:

    # fastdfs load balancing; the storage nginx listens on 8888, keep this consistent
    upstream fdfs {
        server 192.168.0.192:8888;
        server 192.168.0.193:8888;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            root html;
            index index.html index.htm;
            # must match the upstream name above; be careful not to append a
            # trailing slash, otherwise requests have to be prefixed with /fdfs/
            proxy_pass http://fdfs;
        }
    }

Save and exit, then restart nginx: docker restart nginx

  • Access nginx's host IP plus the picture path; because nginx was started on port 80, no port is needed in the URL

    http://192.168.0.191/group1/M00/00/00/wKgAwF3wUBWAFW58ABWOZCYlEqY420.jpg

  • By default nginx proxies with round-robin polling, so how can we tell that both 192 and 193 are being proxied?

    Stop the storage service on 192 and check whether nginx can still reach 193: if access still succeeds, both services were being proxied.

    The file is indeed still accessible with 192 down.

    With both storage services stopped, access fails, which proves the nginx proxy is working.

  • Question: since nginx polls by default, after stopping one storage shouldn't access alternate between succeeding and failing on successive refreshes? Yet no matter how many times I refreshed with one node down, access always succeeded.

    On reflection the question answers itself: if round-robin behaved that way, any website behind a polling proxy would flicker between working and failing whenever one backend went down, which is obviously not what happens.

    The correct conclusion is: when a backend crashes, nginx immediately takes that server out of rotation, so requests do not fail.

  • Extension: nginx offers six load-balancing strategies
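    For reference, here are two of those strategies applied to the fdfs upstream, along with the failure-detection knobs behind the behavior observed above. This is an illustrative sketch only, not tested in this setup.

    ```
    upstream fdfs {
        # weighted round robin: 192 receives twice as many requests as 193;
        # after 3 failures within 30s a node is taken out of rotation for 30s
        server 192.168.0.192:8888 weight=2 max_fails=3 fail_timeout=30s;
        server 192.168.0.193:8888 weight=1 max_fails=3 fail_timeout=30s;
    }

    # or pin each client to one node by source IP:
    # upstream fdfs {
    #     ip_hash;
    #     server 192.168.0.192:8888;
    #     server 192.168.0.193:8888;
    # }
    ```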

springboot connection configuration

For the cluster, the connection configuration only needs one more tracker service added to tracker-list in the configuration file.

Access configuration: because nginx acts as the proxy, visit-host only needs the nginx host; since nginx listens on port 80, no port is required.

Same code as above