A PHP wrapper for the Baidu Yunjiasu (百度云加速) API/SDK

<?php

/**
 * Author: anrip <https://www.arnip.com>
 * Update: 2020-11-19
 */

class Yunjiasu
{
    private $api_base = 'https://api.su.baidu.com/';

    private $access_key = '1234567';
    private $secret_key = '1234567';

    public function zones($data)
    {
        $path = 'v31/yjs/zones';
        return $this->api_call('GET', $path, $data);
    }

    public function purge_cache($id, $data)
    {
        $path = 'v31/yjs/zones/' . $id . '/purge_cache';
        return $this->api_call('DELETE', $path, $data);
    }

    private function api_call($method, $path, $data = NULL)
    {
        $url = $this->api_base . $path;
        $header = $this->api_header($path, $data);
        $result = $this->http_repuest($method, $url, $header, $data);

        if (!empty($result['errors'])) {
            $error = array_pop($result['errors']);
            $error['error'] = 1;
            return $error;
        }

        if (!empty($result['result'])) {
            return $result['result'];
        }

        if (!empty($result['success'])) {
            return ['success' => 1];
        }

        return $result;
    }

    private function api_header($path, $data = NULL)
    {
        $params = [
            'X-Auth-Access-Key' => $this->access_key,
            'X-Auth-Nonce' => uniqid(),
            'X-Auth-Path-Info' => $path,
            'X-Auth-Signature-Method' => 'HMAC-SHA1',
            'X-Auth-Timestamp' => time(),
        ];

        if (is_array($data)) {
            $params = array_merge($params, $data);
        }

        ksort($params);

        $header = $signls = [];

        foreach ($params as $k => $v) {
            if (is_bool($v)) {
                $v = $v ? 'true' : 'false';
            }
            if (is_array($v)) {
                $v = str_replace('","', '", "', json_encode($v, JSON_UNESCAPED_SLASHES));
            }
            if (strpos($k, 'X-Auth') === 0) {
                $header[] = $k . ':' . $v;
            }
            if ($v !== '') {
                $signls[] = $k . '=' . $v;
            }
        }

        $header[] = 'X-Auth-Sign:' . base64_encode(
            hash_hmac('sha1', implode('&', $signls), $this->secret_key, true)
        );

        return $header;
    }

    private function http_repuest($method, $url, $header = NULL, $body = NULL)
    {
        $ch = curl_init();

        if ($method == 'GET' && $body) {
            $url .= '?' . http_build_query($body);
            $body = NULL;
        }

        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $method);
        // NOTE: TLS certificate verification is disabled here for convenience;
        // consider enabling both checks in production.
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);

        if ($header) {
            curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
        }

        if ($body) {
            if (is_array($body)) {
                $body = json_encode($body);
            }
            curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
        }

        $result = curl_exec($ch);
        $errno = curl_errno($ch);
        $error = curl_error($ch);

        curl_close($ch);

        if ($errno) {
            return ['error' => $errno, 'message' => $error];
        }

        return json_decode($result, true);
    }
}

#######################################################################

$su = new Yunjiasu();

$rs_zones = $su->zones(array(
    'name' => 'anrip.com'
));

if (isset($rs_zones['error'])) {
    exit(json_encode($rs_zones));
}

print_r($su->purge_cache($rs_zones['id'], array(
    'files' => array('https://www.anrip.com/test.jpg')
)));
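The X-Auth-Sign header built in api_header() is base64(HMAC-SHA1(...)) over the sorted key=value pairs joined by '&'. A minimal shell sketch of the same computation, with made-up key, nonce, and timestamp values:

```shell
# Sketch of the signing step from api_header(); all parameter values are made up.
secret_key='1234567'
# Sorted key=value pairs joined by '&' (same order ksort() would produce).
sign_string='X-Auth-Access-Key=1234567&X-Auth-Nonce=abc123&X-Auth-Path-Info=v31/yjs/zones&X-Auth-Signature-Method=HMAC-SHA1&X-Auth-Timestamp=1600000000&name=anrip.com'
# base64(HMAC-SHA1(sign_string, secret_key)) -> value sent as X-Auth-Sign
sig=$(printf '%s' "$sign_string" | openssl dgst -sha1 -hmac "$secret_key" -binary | base64)
echo "$sig"
```

A SHA-1 HMAC is 20 bytes, so the base64 signature is always 28 characters.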

Syncing data with multiple concurrent rsync processes

The original command

rsync -aPv --delete rsync://1.2.3.4/dp1/ /mnt/backup

Multi-process mode

#!/bin/sh
#

src='rsync://1.2.3.4/dp1' # source path, no trailing slash
dst='/mnt/backup' # destination path, no trailing slash

opt="-aPv --delete" # sync options

num=16 # number of concurrent processes
depth='5 4 3 2 1' # directory recursion depths, deepest first

task=/tmp/`echo $src | md5sum | head -c 16`
[ -f $task-next ] && cp $task-next $task-skip
[ -f $task-skip ] || touch $task-skip

# Create the destination directory structure
rsync $opt --include "*/" --exclude "*" $src/ $dst

# Sync directories from deepest to shallowest
for l in $depth ;do
    # Spawn rsync processes
    for i in `find $dst -maxdepth $l -mindepth $l -type d`; do
        i=`echo $i | sed "s#$dst/##"`
        if grep -q "$i$" $task-skip; then
            echo "skip $i"
            continue
        fi
        while true; do
            now_num=`ps axw | grep rsync | grep $dst | grep -v '\-\-daemon' | wc -l`
            if [ $now_num -lt $num ]; then
                echo "rsync $opt $src/$i/ $dst/$i" >>$task-log
                rsync $opt $src/$i/ $dst/$i &
                echo $i >>$task-next
                sleep 1
                break
            else
                sleep 5
            fi
        done
    done
done

# Final single-process verification pass
while true; do
    sleep 5
    now_num=`ps axw | grep rsync | grep $dst | grep -v '\-\-daemon' | wc -l`
    if [ $now_num -lt 1 ]; then
        echo "rsync $opt $src/ $dst" >>$task-log
        rsync $opt $src/ $dst
        break
    fi
done
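The script's core trick, enumerating first-level directories and fanning out one worker per directory, can also be sketched with xargs -P. A self-contained demo follows (cp stands in for rsync so it runs anywhere; assumes GNU find for -printf):

```shell
# Demo of per-directory parallel fan-out; cp stands in for rsync here.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a" "$src/b" "$src/c"
echo hello > "$src/a/f.txt"; echo world > "$src/b/f.txt"; echo again > "$src/c/f.txt"
# List first-level directories, then run up to 4 copy workers in parallel.
find "$src" -mindepth 1 -maxdepth 1 -type d -printf '%f\n' \
    | xargs -P 4 -I{} cp -r "$src/{}" "$dst/{}"
ls "$dst"
```

Unlike the script above, xargs blocks until a worker slot frees up, so no polling of the process table is needed; on the other hand, the skip-list and depth-ordering logic would still have to be layered on top.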

Creating a macOS bootable USB installer

A good memory really is no match for a written note: a command I used all the time, and for a moment I simply couldn't recall it.

sudo /Applications/Install\ macOS\ Mojave.app/Contents/Resources/createinstallmedia --volume /Volumes/your-disk-name /Applications/Install\ macOS\ Mojave.app --nointeraction

PostgreSQL: changing the owner (OWNER) of a database and all its tables

  • database is the database to modify; public and schema1 are the schemas to update
  • username is the user name of the new owner
\c database;

DO $$
DECLARE
    r record;
    i int;
    v_schema text[] := '{public,schema1}';
    v_new_owner varchar := 'username';
BEGIN
    FOR r IN
        SELECT 'ALTER TABLE "' || table_schema || '"."' || table_name || '" OWNER TO ' || v_new_owner || ';' AS a FROM information_schema.tables WHERE table_schema = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER TABLE "' || sequence_schema || '"."' || sequence_name || '" OWNER TO ' || v_new_owner || ';' AS a FROM information_schema.sequences WHERE sequence_schema = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER TABLE "' || table_schema || '"."' || table_name || '" OWNER TO ' || v_new_owner || ';' AS a FROM information_schema.views WHERE table_schema = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER FUNCTION "' || nsp.nspname || '"."' || p.proname || '"(' || pg_get_function_identity_arguments(p.oid) || ') OWNER TO ' || v_new_owner || ';' AS a FROM pg_proc p JOIN pg_namespace nsp ON p.pronamespace = nsp.oid WHERE nsp.nspname = ANY (v_schema)
        UNION ALL
        SELECT 'ALTER DATABASE "' || current_database() || '" OWNER TO ' || v_new_owner
    LOOP
        EXECUTE r.a;
    END LOOP;
    FOR i IN array_lower(v_schema, 1)..array_upper(v_schema, 1)
    LOOP
        EXECUTE 'ALTER SCHEMA "' || v_schema[i] || '" OWNER TO ' || v_new_owner;
    END LOOP;
END
$$;

Upgrading ESXi 6.7.0 to 6.7.0 Update 3 from the command line

Last time I wrote about upgrading from an offline bundle. Today, the basic steps for an online upgrade.

1. List the available update profiles

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.7

2. Run the online update (without VMware Tools, to avoid "No Space Left On Device")

esxcli software profile update -p ESXi-6.7.0-20190802001-no-tools -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

3. Update VMware Tools separately

esxcli software vib install -v https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_10.3.10.12406962-14141615.vib

4. Reboot the host

reboot

ESXi Version 6.7.0 History

https://esxi-patches.v-front.de/ESXi-6.7.0.html

Upgrading ESXi from the command line

Download the upgrade zip bundle from the official site, upload it to any directory on the ESXi host's datastore, log in to the SSH console, and run the update command

esxcli software vib install -d=/full/path/VMware-ESXi-6.7.0-8169922-depot.zip

On success, it reports something like this

Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.670.0.0.8169922, …, VMware_locker_tools-light_6.5.0-0.0.4564106
VIBs Skipped:

Manually deleting Piwik's historical logs (bulk-deleting data in MySQL)

Before I knew it, Piwik's data had grown to 500GB. I hadn't noticed any impact on performance, but it was time to think about clearing out the visit-log records, which take up the most space. So I decided to rehearse on a backup copy of the database:

My first thought was the automatic purge in the admin backend, but its capacity of 10000 deletions per run is a drop in the bucket at this scale. In the end I dug through the source code and distilled these two essential DELETE statements.

DELETE FROM `wk_log_visit` WHERE visit_last_action_time < '2018-11-05 00:00:00';
DELETE FROM `wk_log_link_visit_action` WHERE server_time < '2018-11-05 00:00:00';

As for the wk_log_action table: for one thing it is only a little over 10GB, which hardly matters; for another, it can be purged automatically based on how the two tables above reference it.

https://your.domain.com/index.php?action=executeDataPurge&date=today&format=html&idSite=1&module=PrivacyManager&period=day

At this point we run into a second problem:

When deleting many rows from a large InnoDB table, you may exceed InnoDB's lock table size. Deleting 500GB of data this way might take around eight hours.

To avoid this problem, or simply to minimize the time the table stays locked, the official documentation suggests the following strategy (which does not use DELETE at all):

  • Select the rows not to be deleted into an empty table that has the same structure as the original table

    INSERT INTO t_copy SELECT * FROM t WHERE ... ;
  • Use RENAME TABLE to atomically move the original table out of the way, and rename the copy to the original name

    RENAME TABLE t TO t_old, t_copy TO t;
  • Drop the original table

    DROP TABLE t_old;

In the end, we optimized the queries as follows

CREATE TABLE wk_log_visit_copy LIKE wk_log_visit;
INSERT INTO wk_log_visit_copy SELECT * FROM wk_log_visit WHERE visit_last_action_time >= '2018-11-05 00:00:00';
RENAME TABLE wk_log_visit TO wk_log_visit_old, wk_log_visit_copy TO wk_log_visit;

CREATE TABLE wk_log_link_visit_action_copy LIKE wk_log_link_visit_action;
INSERT INTO wk_log_link_visit_action_copy SELECT * FROM wk_log_link_visit_action WHERE server_time >= '2018-11-05 00:00:00';
RENAME TABLE wk_log_link_visit_action TO wk_log_link_visit_action_old, wk_log_link_visit_action_copy TO wk_log_link_visit_action;

DROP TABLE wk_log_visit_old, wk_log_link_visit_action_old;