Problem: an export of a file larger than 1T to a cephfs directory fails with an error
1 Mount the cephfs filesystem locally
mount -t ceph 192.168.x.xx:6789:/ /mnt/mycephfs
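If cephx authentication is enabled on the cluster (the usual default), the mount additionally needs credentials; a sketch, where name=admin and the secretfile path are assumptions to adapt:
mount -t ceph 192.168.x.xx:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret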
2 Write data past the 1T boundary
dd if=/dev/zero of=test.img bs=1M seek=1048K count=512
seek=1048K skips 1,073,152 one-MiB blocks (1048 GiB, just past the 1 TiB mark), so before writing dd truncates test.img to that offset, which already exceeds the limit.
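The failure can also be reproduced without dd, since truncate(1) goes through the same ftruncate path (the path below assumes the mount point from step 1):
truncate -s 2T /mnt/mycephfs/big.img    # fails with "File too large" while the limit is 1T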
3 Trace the syscalls with stap
probe syscall.*.return { if (execname() == "dd") { printf("%s = %d\n", ppfunc(), $return); } }
(Note: $return is only available in .return probes, so the probe point must be syscall.*.return, not syscall.*.call.)
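To actually run the trace (assuming SystemTap with matching kernel debuginfo is installed; start the dd from step 2 in another shell):
stap -e 'probe syscall.*.return { if (execname() == "dd") { printf("%s = %d\n", ppfunc(), $return); } }'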
4 stap output
SyS_execve = 0
...
SyS_open = 3
SyS_dup2 = 1
SyS_close = 0
SyS_ftruncate = -27
SyS_newfstat = 0
SyS_open = 3
SyS_newfstat = 0
SyS_mmap_pgoff = 140023037267968
SyS_read = 2502
SyS_read = 0
...
5 Analysis
SyS_ftruncate failing with -27 means -EFBIG ("File too large"): dd's truncation of test.img to the seek offset exceeds the filesystem's maximum file size.
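Error code 27 can be confirmed as EFBIG from the kernel's uapi headers (the header path is typical but may vary by distro); the grep should print something like:
$ grep EFBIG /usr/include/asm-generic/errno-base.h
#define EFBIG 27 /* File too large */
The check itself lives in the VFS helper inode_newsize_ok: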
int inode_newsize_ok(const struct inode *inode, loff_t offset)
{
    ...
    if (offset > inode->i_sb->s_maxbytes)
        goto out_big;
    ...
out_big:
    return -EFBIG;
}
At mount time, the ceph client initializes s_maxbytes to a temporary value of 1T:
static int ceph_set_super(struct super_block *s, void *data)
{
    ...
    s->s_maxbytes = 1ULL << 40;  /* temp value until we get mdsmap */
    ...
}
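A quick shell check confirms that 1ULL << 40 is exactly the 1099511627776 reported by ceph mds dump in step 7:
$ echo $((1 << 40))
1099511627776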
The temporary value is replaced once the client receives the mdsmap from the MDS:
void ceph_mdsc_handle_map(struct ceph_mds_client *mdsc, struct ceph_msg *msg)
{
    ...
    mdsc->fsc->sb->s_maxbytes = mdsc->mdsmap->m_max_file_size;
    ...
}
m_max_file_size in the mdsmap comes from the configured max_file_size parameter.
6 Raise the limit
In the Ceph conf file, set:
[mds]
max_file_size = xxx
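A conf change typically requires restarting the MDS before it takes effect. Depending on the Ceph release, the limit can also be changed at runtime as a filesystem setting; a sketch, assuming the filesystem is named cephfs and a 2T target (verify the subcommand on your version with ceph fs set --help):
ceph fs set cephfs max_file_size 2199023255552    # value in bytes (2T)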
7 Verify the change
[root@vm181 ~]# ceph mds dump
dumped fsmap epoch 78
fs_name cephfs
epoch 78
flags 0
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776