MySQL, Canal, Kafka, Data Synchronization Series (4): Installing and Configuring Canal

过去的,未来的 / 2020-03-29

After all the installation work in the previous posts, we have finally arrived at installing and configuring Canal. As before, we will use docker-compose.

1. Write the docker-compose.yml file

version: '3'
services: 
  canal-server: 
    image: canal/canal-server:v1.1.3
    container_name: canal-server
    ports: 
      - 11111:11111
    environment: 
      - canal.instance.mysql.slaveId=12
      - canal.auto.scan=false
      - canal.destinations=test
      - canal.instance.master.address=192.168.1.9:3306
      - canal.instance.dbUsername=canal
      - canal.instance.dbPassword=canal
      - canal.mq.topic=test
      - canal.instance.filter.regex=esen_approval.apt_approval
    volumes: 
      - ./conf:/canal-server/conf
      - ./logs:/canal-server/logs

2. Notes

canal.instance.mysql.slaveId: the slaveId must not be the same as the MySQL server_id (see the quick check below)
canal.instance.master.address: MySQL address
canal.instance.dbUsername: MySQL username
canal.instance.dbPassword: MySQL password
canal.destinations: the instance name; the test client further down connects to this same destination ("test")
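
If you are not sure what the source server_id is, a quick check on the MySQL side looks roughly like this (a minimal sketch; the 'canal'@'%' account name is an assumption and depends on how the canal account was created):

SHOW VARIABLES LIKE 'server_id';      -- the slaveId above must differ from this value
SHOW VARIABLES LIKE 'binlog_format';  -- canal expects ROW-based binlog
SHOW GRANTS FOR 'canal'@'%';          -- verify the canal account has replication privileges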

3. Start it up

docker-compose up -d

4. At this point, Canal is pulling the MySQL binlog for us.

5. Let's test it

1) Create a Maven project and add the dependency to the pom:
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.0</version>
</dependency>
2) The database already has a schema named test; create a table in it:
CREATE TABLE `user` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;
3) Write a test class:
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;

import java.net.InetSocketAddress;
import java.util.List;

public class CanalClientExample {

    public static void main(String[] args) {
        // create a connection to the canal server (host, port, destination, username, password)
        CanalConnector connector = CanalConnectors.newSingleConnector(new InetSocketAddress("10.10.10.10",
                11111), "test", "xxxx", "xxxx");
        int batchSize = 1000;
        int emptyCount = 0;
        try {
            connector.connect();
            connector.subscribe(".*\\..*");
            connector.rollback();
            int totalEmptyCount = 120;
            while (emptyCount < totalEmptyCount) {
                Message message = connector.getWithoutAck(batchSize); // fetch up to batchSize entries
                long batchId = message.getId();
                int size = message.getEntries().size();
                if (batchId == -1 || size == 0) {
                    emptyCount++;
                    System.out.println("empty count : " + emptyCount);
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                    }
                } else {
                    emptyCount = 0;
                    // System.out.printf("message[batchId=%s,size=%s] \n", batchId, size);
                    printEntry(message.getEntries());
                }

                connector.ack(batchId); // acknowledge the batch
                // connector.rollback(batchId); // on failure, roll back the batch
            }

            System.out.println("empty too many times, exit");
        } finally {
            connector.disconnect();
        }
    }

    private static void printEntry(List<CanalEntry.Entry> entrys) {
        for (CanalEntry.Entry entry : entrys) {
            if (entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONBEGIN || entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONEND) {
                continue;
            }

            CanalEntry.RowChange rowChange = null;
            try {
                rowChange = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
            } catch (Exception e) {
                throw new RuntimeException("ERROR ## parser of eromanga-event has an error , data:" + entry.toString(),
                        e);
            }

            CanalEntry.EventType eventType = rowChange.getEventType();
            System.out.println(String.format("================> binlog[%s:%s] , name[%s,%s] , eventType : %s",
                    entry.getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(),
                    entry.getHeader().getSchemaName(), entry.getHeader().getTableName(),
                    eventType));

            for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                if (eventType == CanalEntry.EventType.DELETE) {
                    printColumn(rowData.getBeforeColumnsList());
                } else if (eventType == CanalEntry.EventType.INSERT) {
                    printColumn(rowData.getAfterColumnsList());
                } else {
                    System.out.println("-------&gt; before");
                    printColumn(rowData.getBeforeColumnsList());
                    System.out.println("-------&gt; after");
                    printColumn(rowData.getAfterColumnsList());
                }
            }
        }
    }

    private static void printColumn(List<CanalEntry.Column> columns) {
        for (CanalEntry.Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
        }
    }

}

4) Run the main method, then insert and update rows in the user table and watch the console output (some example statements are given below).

We will see the binlog events printed in the console. Great, basic synchronization is working.
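
A few statements you might run against the test schema to generate events (the values are just placeholders, and the WHERE clauses assume the first row got id 1):

INSERT INTO test.user (name) VALUES ('alice');   -- prints the after-columns
UPDATE test.user SET name = 'bob' WHERE id = 1;  -- prints before and after columns
DELETE FROM test.user WHERE id = 1;              -- prints the before-columns

Each statement shows up as an event with the binlog file name, offset, schema, table, and event type, followed by the column values printed by printEntry/printColumn above.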
