Syncing data with multiple concurrent rsync processes

#!/bin/sh
#

src='rsync://your.anrip.com:873' # source path
dst='/mnt/backup' # destination path
dir='project' # project to sync

opt="-rltPv --delete" # rsync options

num=32 # number of concurrent rsync processes
depth='4 3 2 1' # directory recursion depths, deepest first

skip=/tmp/`echo $src$dir | md5sum | head -c 16`.skip
[ -f $skip-next ] && cp $skip-next $skip
[ -f $skip ] || touch $skip

# Create the target directory structure
rsync $opt --include "*/" --exclude "*" $src/$dir/ $dst/$dir

# cd into the local destination directory
cd $dst

# Sync directories from deepest to shallowest
for l in $depth; do
    # Launch an rsync worker for each directory at this depth
    for i in `find $dir -maxdepth $l -mindepth $l -type d`; do
        if grep -q "$i$" $skip; then
            echo "skip $i"
            continue
        fi
        while true; do
            now_num=`ps axw | grep rsync | grep $dir | grep -v '\-\-daemon' | wc -l`
            if [ $now_num -lt $num ]; then
                rsync $opt $src/$i/ $dst/$i &
                echo $i >>$skip-next
                sleep 1
                break
            else
                sleep 5
            fi
        done
    done
done

# Final single-process verification pass
while true; do
    sleep 5
    now_num=`ps axw | grep rsync | grep $dir | grep -v '\-\-daemon' | wc -l`
    if [ $now_num -lt 1 ]; then
        rsync $opt $src/$dir/ $dst/$dir
        break
    fi
done
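A usage sketch (the script name rsync-parallel.sh is illustrative): each finished directory is appended to the $skip-next file, so an interrupted run can be resumed simply by re-running the script; deleting the skip files forces a full resync.

./rsync-parallel.sh                  # first run, or resume after an interruption

rm -f /tmp/*.skip /tmp/*.skip-next   # discard the skip bookkeeping to force a full resync
./rsync-parallel.sh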

Creating a macOS bootable USB installer

A good memory really is no match for a written note: a command I used all the time, and it slipped my mind just when I needed it.

sudo /Applications/Install\ macOS\ Mojave.app/Contents/Resources/createinstallmedia --volume /Volumes/your-disk-name /Applications/Install\ macOS\ Mojave.app --nointeraction
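To find the volume name to use for /Volumes/your-disk-name, list the disks and mounted volumes first (a small aside, not part of the original note):

diskutil list   # all disks and their volumes
ls /Volumes     # names of mounted volumes, as used in the command above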

PostgreSQL: changing the owner of a database and all of its tables (OWNER)

  • database is the database to modify; public and schema1 are its schemas
  • username is the user name of the new owner
\c database;

DO $$
DECLARE
    r record;
    i int;
    v_schema text[] := '{public,schema1}';
    v_new_owner varchar := 'username';
BEGIN
    FOR r IN
        SELECT 'ALTER TABLE "' || table_schema || '"."' || table_name || '" OWNER TO ' || v_new_owner || ';' AS a FROM information_schema.tables WHERE table_schema = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER TABLE "' || sequence_schema || '"."' || sequence_name || '" OWNER TO ' || v_new_owner || ';' AS a FROM information_schema.sequences WHERE sequence_schema = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER TABLE "' || table_schema || '"."' || table_name || '" OWNER TO ' || v_new_owner || ';' AS a FROM information_schema.views WHERE table_schema = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER FUNCTION "' || nsp.nspname || '"."' || p.proname || '"(' || pg_get_function_identity_arguments(p.oid) || ') OWNER TO ' || v_new_owner || ';' AS a FROM pg_proc p JOIN pg_namespace nsp ON p.pronamespace = nsp.oid WHERE nsp.nspname = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER DATABASE "' || current_database() || '" OWNER TO ' || v_new_owner
    LOOP
        EXECUTE r.a;
    END LOOP;
    FOR i IN array_lower(v_schema, 1)..array_upper(v_schema, 1)
    LOOP
        EXECUTE 'ALTER SCHEMA "' || v_schema[i] || '" OWNER TO ' || v_new_owner;
    END LOOP;
END
$$;
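Saved to a file, the whole block can be executed in one shot; a minimal sketch, assuming the snippet is stored as change_owner.sql (an illustrative name) and run by a superuser:

psql -U postgres -d database -f change_owner.sql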

Upgrading ESXi 6.7.0 to 6.7.0 Update 3 from the command line

Last time I wrote up the offline-bundle upgrade; today, the basic steps for an online upgrade.

1. List the available update profiles

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.7
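If the query cannot reach the depot, the ESXi firewall may be blocking outbound HTTP; enabling the httpClient ruleset usually resolves this (an optional step, not in the original notes):

esxcli network firewall ruleset set -e true -r httpClient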

2. Run the online update (without VMware Tools, to avoid "No Space Left On Device")

esxcli software profile update -p ESXi-6.7.0-20190802001-no-tools -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
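To preview the VIB-level changes before committing, the same command accepts a dry-run flag (optional):

esxcli software profile update -p ESXi-6.7.0-20190802001-no-tools -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --dry-run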

3. Update VMware Tools separately

esxcli software vib install -v https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_10.3.10.12406962-14141615.vib

4. Reboot the host

reboot

ESXi Version 6.7.0 History

https://esxi-patches.v-front.de/ESXi-6.7.0.html

Upgrading ESXi from the command line (offline bundle)

Download the upgrade zip bundle from the official site, upload it to any directory on the ESXi host's datastore, log in to the SSH console, and run the update command:

esxcli software vib install -d=/full/path/VMware-ESXi-6.7.0-8169922-depot.zip

On success, the output looks like this:

Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.670.0.0.8169922, …, VMware_locker_tools-light_6.5.0-0.0.4564106
VIBs Skipped:
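Depending on the environment, it is often safer to put the host into maintenance mode before installing the bundle and to leave it after the reboot; a sketch (not part of the original steps):

esxcli system maintenanceMode set --enable true    # before the update
# ... install the bundle and reboot ...
esxcli system maintenanceMode set --enable false   # after the host comes back up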

Manually purging Piwik history logs (deleting large amounts of data in MySQL)

Before I knew it, the Piwik data had grown to 500 GB. Although there was no noticeable performance impact, it was time to think about purging the visit-log data that takes up most of the space. I decided to rehearse on a backup copy of the database first:

My first thought was the automatic purge feature in the admin panel, but its rate of 10,000 deletions per run is a drop in the bucket for data at this scale. In the end I dug through the source code and distilled it down to two basic DELETE statements.

DELETE FROM `wk_log_visit` WHERE visit_last_action_time < '2018-11-05 00:00:00';
DELETE FROM `wk_log_link_visit_action` WHERE server_time < '2018-11-05 00:00:00';

As for the wk_log_action table: first, it is only a bit over 10 GB, which does no real harm; second, it can be purged automatically based on whether the two tables above still reference its rows:

https://your.domain.com/index.php?action=executeDataPurge&date=today&format=html&idSite=1&module=PrivacyManager&period=day

At this point we run into a second problem:

If you delete many rows from a large InnoDB table, you may exceed the InnoDB lock table size. Deleting 500 GB of data this way could take around eight hours.

To avoid this problem, or simply to minimize the time the table remains locked, the official manual suggests the following strategy (which does not use DELETE at all):

  • Select the rows you do not want to delete into an empty table that has the same structure as the original table:

    INSERT INTO t_copy SELECT * FROM t WHERE ... ;
  • Use RENAME TABLE to atomically move the original table out of the way and rename the copy to the original name:

    RENAME TABLE t TO t_old, t_copy TO t;
  • Drop the original table:

    DROP TABLE t_old;

In the end, the optimized statements look like this:

CREATE TABLE wk_log_visit_copy LIKE wk_log_visit;
INSERT INTO wk_log_visit_copy SELECT * FROM wk_log_visit WHERE visit_last_action_time >= '2018-11-05 00:00:00';
RENAME TABLE wk_log_visit TO wk_log_visit_old, wk_log_visit_copy TO wk_log_visit;

CREATE TABLE wk_log_link_visit_action_copy LIKE wk_log_link_visit_action;
INSERT INTO wk_log_link_visit_action_copy SELECT * FROM wk_log_link_visit_action WHERE server_time >= '2018-11-05 00:00:00';
RENAME TABLE wk_log_link_visit_action TO wk_log_link_visit_action_old, wk_log_link_visit_action_copy TO wk_log_link_visit_action;

DROP TABLE wk_log_visit_old, wk_log_link_visit_action_old;
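If swapping tables with RENAME is not an option (for example, new rows keep arriving during the purge), a different technique not covered in the original post is to DELETE in small batches so each transaction stays small; a sketch, assuming the database is named piwik, credentials come from ~/.my.cnf, and an illustrative 10000-row batch size:

while true; do
    # Delete one batch, then report how many rows that batch removed
    rows=$(mysql -N piwik -e "DELETE FROM wk_log_visit WHERE visit_last_action_time < '2018-11-05 00:00:00' LIMIT 10000; SELECT ROW_COUNT();")
    [ "$rows" -eq 0 ] && break   # stop once a batch deletes nothing
    sleep 1                      # brief pause to ease lock and replication pressure
done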

Expanding an LVM disk on CentOS 7

Create a new partition with fdisk and set its type (t) to 8e (Linux LVM):

fdisk /dev/sdb
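Inside fdisk, the interactive sequence looks roughly like this (a sketch; exact prompts vary by version):

n     # new partition (accept the defaults for number and sectors)
t     # change the partition type
8e    # Linux LVM
w     # write the partition table and exit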

Extend the LVM volume:

pvcreate /dev/sdb1
vgextend centos /dev/sdb1
lvextend -l +100%FREE /dev/centos/root

If the root (/) filesystem is ext4:

resize2fs /dev/centos/root

If the root (/) filesystem is xfs (the CentOS 7 default):

xfs_growfs /dev/centos/root

Check the disk usage:

df -h
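To confirm the LVM layer itself picked up the new space, the standard status commands help:

pvs   # physical volumes and free extents
vgs   # volume group capacity
lvs   # logical volume sizes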