Category Archives: Microservices

Pinpoint Installation and Deployment

Preface

Pinpoint is an open-source APM monitoring tool on GitHub. It is written in Java and built for monitoring large-scale distributed systems. Its performance impact is minimal (only about 3% extra resource usage), and the agent is non-invasive: just add three lines to the Tomcat under test to plant the probe, and the whole application is monitored. This post records the installation process for future reference.

 

I used two CentOS 6.8 virtual machines: one hosts the main Pinpoint components, the other simulates a test environment. The setup is as follows:

IP               OS          Installed       Description
192.168.245.136  CentOS 6.8  pinpoint        Pinpoint web UI, control logic, and HBase storage
192.168.245.135  CentOS 6.8  pinpoint-agent  Collects monitoring data and sends it to Pinpoint

 

Java 1.7: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

Pinpoint: https://github.com/naver/pinpoint

I have bundled all the required resources and uploaded them to Baidu Pan:

Baidu Pan: <link: http://pan.baidu.com/s/1eRU5RW2 password: u89p>

 

Below are some of the official screenshots; they look great and are very intuitive.

demo0

 

demo1

demo2

demo3

 

 

1. Environment Setup

 

1.1 Fetch the required packages

Go to the home directory and create a resource directory named "pp_res" to hold the packages to install:

mkdir /home/pp_res
cd /home/pp_res/

Use Xshell or a similar tool to upload the required files to the Linux VM. Everything you need is in the Baidu Pan share:

  1. jdk7: the Java runtime
  2. hbase-1.0: the database that stores the monitoring data
  3. tomcat8.0: the web server
  4. pinpoint-collector.war: the Pinpoint collector
  5. pinpoint-web.war: the Pinpoint web UI
  6. pp-collector.init: optional helper for quickly starting pp-collector
  7. pp-web.init: optional helper for quickly starting pp-web

xshell

 

Run ll to check that the upload succeeded:

[root@localhost pp_res]# ll
total 367992
-rw-r--r--. 1 root root   9277365 Nov 15 00:07 apache-tomcat-8.0.36.tar.gz
-rw-r--r--. 1 root root 103847513 Nov 15 00:07 hbase-1.0.3-bin.tar.gz
-rw-r--r--. 1 root root 153512879 Nov 15 00:07 jdk-7u79-linux-x64.tar.gz
-rw-r--r--. 1 root root   6621915 Nov 15 00:07 pinpoint-agent-1.5.2.tar.gz
-rw-r--r--. 1 root root  31339914 Nov 15 00:07 pinpoint-collector-1.5.2.war
-rw-r--r--. 1 root root  54505168 Nov 15 00:07 pinpoint-web-1.5.2.war
-rw-r--r--. 1 root root      3084 Nov 15 00:07 pp-collector.init
-rw-r--r--. 1 root root      3072 Nov 15 00:07 pp-web.init
-rw-r--r--. 1 root root  17699306 Nov 15 00:07 zookeeper-3.4.6.tar.gz

1.2 Configure JDK 1.7

This APM stack is deployed on JDK 1.7, so first set up the JDK environment variables:

cd /home/pp_res/
tar -zxvf jdk-7u79-linux-x64.tar.gz
mkdir /usr/java
mv jdk1.7.0_79/ /usr/java/jdk17

Configure the Java environment variables:

vi /etc/profile

Append the following at the end of profile:

export JAVA_HOME=/usr/java/jdk17
export PATH=$PATH:$JAVA_HOME/bin

Apply the changes:

source /etc/profile

Verify that the Java environment variables are set:

[root@localhost pp_res]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

2. Install HBase

Pinpoint stores the collected monitoring data in HBase, which lets it keep large volumes of data for more detailed analysis.

2.1 Extract HBase and move it into place

cd /home/pp_res/
tar -zxvf hbase-1.0.3-bin.tar.gz
mkdir -p /data/service
mv hbase-1.0.3/ /data/service/hbase

2.2 Point JAVA_HOME in hbase-env.sh to the JDK

cd /data/service/hbase/conf/
vi hbase-env.sh

Around line 27, change the setting as follows:

export JAVA_HOME=/usr/java/jdk17/

2.3 Modify the HBase configuration

vi hbase-site.xml

Change the end of the file as follows. HBase stores its data on the local filesystem here; in production it is recommended to keep the data in HDFS instead (see the sketch after the snippet).

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///data/hbase</value>
  </property>
</configuration>
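
As a hedged sketch of the production variant mentioned above (the NameNode address namenode.example.com:9000 is a placeholder, not part of this setup), hbase.rootdir would point at an HDFS URI instead:

# Hypothetical production variant: store HBase data in HDFS instead of the
# local filesystem. namenode.example.com:9000 is a placeholder.
cat > /data/service/hbase/conf/hbase-site.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.com:9000/hbase</value>
  </property>
</configuration>
EOF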

2.4 Start HBase

cd /data/service/hbase/bin
./start-hbase.sh

Check whether HBase started successfully; if so, an "HMaster" process appears:

[root@localhost bin]# jps
12075 Jps
11784 HMaster

2.5 Initialize the Pinpoint tables in HBase

Run the HBase initialization script that ships with Pinpoint; it takes a moment:

./hbase shell /home/pp_res/hbase-create.hbase

When it finishes, open the HBase shell:

./hbase shell

Inside the shell you can see the HBase version and some related information:

2016-11-15 01:55:44,861 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using built
in-java classes where applicableHBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016
 
hbase(main):001:0>

Enter "status 'detailed'" to check whether the tables created by the initialization exist:

hbase(main):001:0> status 'detailed'
version 1.0.3
0 regionsInTransition
master coprocessors: []
1 live servers
    localhost:50887 1478538574709
        requestsPerSecond=0.0, numberOfOnlineRegions=498, usedHeapMB=24, maxHeapMB=237, numberOfStores=626, numberOfStorefiles=0, storefileUncom
pressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=7714, writeRequestsCount=996, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[MultiRowMutationEndpoint]        “AgentEvent,,1478539104778.aa1b3b14d0b48d83cbf4705b75cb35b7.”
            numberOfStores=1, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0

You can also check in the HBase web UI whether the data was initialized successfully:

HbaseWeb: http://192.168.245.136:16010/master-status

HbaseWeb

3. Install pinpoint-collector

 

3.1 Deploy the WAR

Extract Tomcat and move it to the target location:

cd /home/pp_res/
tar -zxvf apache-tomcat-8.0.36.tar.gz
mv apache-tomcat-8.0.36/ /data/service/pp-col

 

Modify the pp-col Tomcat configuration, mainly the ports, to avoid clashing with the pp-web Tomcat. I prefixed each default port with a 1; the shell replacements are below.

[Note] The last command exposes Tomcat on the machine's private IP by replacing localhost with the host's IP address. My NIC is the default one; if yours is not eth0, adjust the command, or simply open server.xml with vi and replace localhost by hand.

cd /data/service/pp-col/conf/
sed -i 's/port="8005"/port="18005"/g' server.xml
sed -i 's/port="8080"/port="18080"/g' server.xml
sed -i 's/port="8443"/port="18443"/g' server.xml
sed -i 's/port="8009"/port="18009"/g' server.xml
sed -i 's/redirectPort="8443"/redirectPort="18443"/g' server.xml
sed -i "s/localhost/`ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | awk -F: '{print $2}'`/g" server.xml
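
A quick sanity check (my addition, not part of the original steps) before starting Tomcat: the first command should print the four rewritten port lines, and the second should report 0 if localhost was fully replaced:

grep -E 'port="1(8005|8080|8443|8009)"' server.xml
grep -c 'localhost' server.xml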

 

Deploy the pinpoint-collector.war package:

[Note: if the unzip command is missing, install it with "yum install unzip".]

cd /home/pp_res/
rm -rf /data/service/pp-col/webapps/*
unzip pinpoint-collector-1.5.2.war -d /data/service/pp-col/webapps/ROOT

 

Start Tomcat:

cd /data/service/pp-col/bin/
./startup.sh

 

Watch the log to check that it started:

tail -f ../logs/catalina.out

3.2 Configure quick start

Quick start requires fixing the paths in pp-collector.init (included in the Baidu Pan share). You can open it with vi; the paths are around lines 18, 24 and 27. For convenience I used sed replacements; if your paths differ from mine, change them to your own.

cd /home/pp_res
sed -i "s/JAVA_HOME=\/usr\/java\/default\//JAVA_HOME=\/usr\/java\/jdk17\//g" pp-collector.init
sed -i "s/CATALINA_HOME=\/data\/service\/pinpoint-collector\//CATALINA_HOME=\/data\/service\/pp-col\//g" pp-collector.init
sed -i "s/CATALINA_BASE=\/data\/service\/pinpoint-collector\//CATALINA_BASE=\/data\/service\/pp-col\//g" pp-collector.init

 

Make the file executable and move it into init.d; from then on you can restart quickly:

chmod 711 pp-collector.init
mv pp-collector.init /etc/init.d/pp-col
 
 
# Test restart
[root@localhost pp_res]# /etc/init.d/pp-col restart
Stoping Tomcat
Using CATALINA_BASE:   /data/service/pp-col/
Using CATALINA_HOME:   /data/service/pp-col/
Using CATALINA_TMPDIR: /data/service/pp-col//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-col//bin/bootstrap.jar:/data/service/pp-col//bin/tomcat-juli.jar
 
waiting for processes to exitStarting tomcat
Using CATALINA_BASE:   /data/service/pp-col/
Using CATALINA_HOME:   /data/service/pp-col/
Using CATALINA_TMPDIR: /data/service/pp-col//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-col//bin/bootstrap.jar:/data/service/pp-col//bin/tomcat-juli.jar
Tomcat started.
Tomcat is running with pid: 22824

4. Install pinpoint-web

4.1 Deploy the WAR

Extract Tomcat and move it to the target location:

cd /home/pp_res/
tar -zxvf apache-tomcat-8.0.36.tar.gz
mv apache-tomcat-8.0.36/ /data/service/pp-web

 

Modify the pp-web Tomcat configuration, mainly the ports, to avoid clashing with the pp-col Tomcat. This time I prefixed each default port with a 2; the shell replacements are below.

[Note] The last command exposes Tomcat on the machine's private IP by replacing localhost with the host's IP address. My NIC is the default one; if yours is not eth0, adjust the command, or simply open server.xml with vi and replace localhost by hand.

cd /data/service/pp-web/conf/
sed -i 's/port="8005"/port="28005"/g' server.xml
sed -i 's/port="8080"/port="28080"/g' server.xml
sed -i 's/port="8443"/port="28443"/g' server.xml
sed -i 's/port="8009"/port="28009"/g' server.xml
sed -i 's/redirectPort="8443"/redirectPort="28443"/g' server.xml
sed -i "s/localhost/`ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | awk -F: '{print $2}'`/g" server.xml

 

Deploy the pinpoint-web.war package:

[Note: if the unzip command is missing, install it with "yum install unzip".]

cd /home/pp_res/
rm -rf /data/service/pp-web/webapps/*
unzip pinpoint-web-1.5.2.war -d /data/service/pp-web/webapps/ROOT

 

Check that the WAR extracted successfully:

[root@localhost conf]# ll /data/service/pp-web/webapps/ROOT/WEB-INF/classes/
total 88
-rw-rw-r--. 1 root root 2164 Apr  7  2016 applicationContext-cache.xml
-rw-rw-r--. 1 root root 3649 Apr  7  2016 applicationContext-dao-config.xml
-rw-rw-r--. 1 root root 1490 Apr  7  2016 applicationContext-datasource.xml
-rw-rw-r--. 1 root root 6680 Apr  7  2016 applicationContext-hbase.xml
-rw-rw-r--. 1 root root 1610 Apr  7  2016 applicationContext-websocket.xml
-rw-rw-r--. 1 root root 6576 Apr  7  2016 applicationContext-web.xml
drwxrwxr-x. 2 root root 4096 Apr  7  2016 batch
-rw-rw-r--. 1 root root  106 Apr  7  2016 batch.properties
drwxrwxr-x. 3 root root 4096 Apr  7  2016 com
-rw-rw-r--. 1 root root  682 Apr  7  2016 ehcache.xml
-rw-rw-r--. 1 root root 1001 Apr  7  2016 hbase.properties
-rw-rw-r--. 1 root root  153 Apr  7  2016 jdbc.properties
-rw-rw-r--. 1 root root 3338 Apr  7  2016 log4j.xml
drwxrwxr-x. 2 root root 4096 Apr  7  2016 mapper
-rw-rw-r--. 1 root root 1420 Apr  7  2016 mybatis-config.xml
drwxrwxr-x. 3 root root 4096 Apr  7  2016 org
-rw-rw-r--. 1 root root  630 Apr  7  2016 pinpoint-web.properties
-rw-rw-r--. 1 root root  141 Apr  7  2016 project.properties
-rw-rw-r--. 1 root root 3872 Apr  7  2016 servlet-context.xml
drwxrwxr-x. 2 root root 4096 Apr  7  2016 sql

A few notes on these files:

  • hbase.properties: tells pp-web which data source to pull the collected data from; we only point it at HBase's ZooKeeper address here (a sketch follows this list).
  • jdbc.properties: credentials pp-web uses to connect to its own MySQL database.
  • sql directory: pp-web keeps some of its own data in MySQL; these scripts initialize the table schema.
  • pinpoint-web.properties: configuration for a pp-web cluster, if you need one.
  • applicationContext-*.xml: used in later tuning work.
  • log4j.xml: logging configuration.
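
For reference, a minimal hbase.properties would look like the sketch below. The hbase.client.host/hbase.client.port keys are what I believe pp-web 1.5.x uses; verify against the file shipped inside your WAR. Since HBase runs on the same machine here, localhost works:

# Sketch of WEB-INF/classes/hbase.properties (key names assumed from pp 1.5.x);
# point it at the ZooKeeper used by HBase, localhost in this single-box setup.
cat /data/service/pp-web/webapps/ROOT/WEB-INF/classes/hbase.properties
# hbase.client.host=localhost
# hbase.client.port=2181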

 

Start Tomcat:

cd /data/service/pp-web/bin/
./startup.sh

 

Watch the log to see whether Tomcat started:

tail -f ../logs/catalina.out

 

The following line in the log means startup succeeded:

org.apache.catalina.startup.Catalina.start Server startup in 79531 ms

Now open "http://192.168.245.136:28080" in a browser and the main page appears.

If it is unreachable, stop the firewall:

[root@localhost conf]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]

pp-web

4.2 Configure quick start

Edit "pp-web.init" following the same steps as above:

cd /home/pp_res
sed -i "s/JAVA_HOME=\/usr\/java\/default\//JAVA_HOME=\/usr\/java\/jdk17\//g" pp-web.init
sed -i "s/CATALINA_HOME=\/data\/service\/pinpoint-web\//CATALINA_HOME=\/data\/service\/pp-web\//g" pp-web.init
sed -i "s/CATALINA_BASE=\/data\/service\/pinpoint-web\//CATALINA_BASE=\/data\/service\/pp-web\//g" pp-web.init

 

Make the file executable and move it into init.d; from then on you can restart quickly:

chmod 711 pp-web.init
mv pp-web.init /etc/init.d/pp-web
 
 
# Test restart
[root@localhost pp_res]# /etc/init.d/pp-web restart
Stoping Tomcat
Using CATALINA_BASE:   /data/service/pp-web/
Using CATALINA_HOME:   /data/service/pp-web/
Using CATALINA_TMPDIR: /data/service/pp-web//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-web//bin/bootstrap.jar:/data/service/pp-web//bin/tomcat-juli.jar
 
waiting for processes to exitStarting tomcat
Using CATALINA_BASE:   /data/service/pp-web/
Using CATALINA_HOME:   /data/service/pp-web/
Using CATALINA_TMPDIR: /data/service/pp-web//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-web//bin/bootstrap.jar:/data/service/pp-web//bin/tomcat-juli.jar
Tomcat started.
Tomcat is running with pid: 22703

5. Deploy pp-agent to collect monitoring data

5.1 Deploy pp-agent on the system under test

 

Deploying the agent is simple: only three lines need to be added. I set up a test Tomcat to simulate the deployment.

First, create a directory for the test packages:

mkdir /home/pp_test
cd /home/pp_test

 

Pull the pp-agent packages needed for the test onto the server:

pp-test

Check the upload:

[root@localhost pp_test]# ll
total 16820
-rw-r--r--. 1 root root 9277365 Nov  9 02:25 apache-tomcat-8.0.36.tar.gz
-rw-r--r--. 1 root root 6621915 Nov  9 02:25 pinpoint-agent-1.5.2.tar.gz
-rw-r--r--. 1 root root 1320206 Nov  9 02:25 test.war

5.2 Set up the simulated Tomcat test environment

To make observation easier, set up a dummy application. Extract Tomcat and move it into the target directory:

cd /home/pp_test
mkdir /data
tar -zxvf apache-tomcat-8.0.36.tar.gz
mv apache-tomcat-8.0.36/ /data/pp-test

 

Replace localhost in the configuration so the application can be reached from outside:

cd /data/pp-test/conf/
sed -i "s/localhost/`ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | awk -F: '{print $2}'`/g" server.xml

 

Extract the test WAR:

cd /home/pp_test/
rm -rf /data/pp-test/webapps/*
unzip test.war -d /data/pp-test/webapps/ROOT

5.3 Configure the pp-agent collector

Extract pp-agent:

cd /home/pp_test
tar -zxvf pinpoint-agent-1.5.2.tar.gz
mv pinpoint-agent-1.5.2 /data/pp-agent

 

Edit the configuration file:

cd /data/pp-agent/
vi pinpoint.config

 

The main change is the IP: just point it at the machine running pp-col. Once pp-col starts, it automatically listens on ports 9994, 9995 and 9996, so nothing else is needed; if you require different ports, change them in the pp-col configuration file ("pp-col/webapps/ROOT/WEB-INF/classes/pinpoint-collector.properties").

profiler.collector.ip=192.168.245.136
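
In keeping with the sed style used elsewhere in this post, the edit can also be scripted. This assumes the shipped default is 127.0.0.1; check your pinpoint.config before running:

# Point the agent at the collector host (assumes the shipped default line is
# profiler.collector.ip=127.0.0.1; verify first).
sed -i 's/profiler.collector.ip=127.0.0.1/profiler.collector.ip=192.168.245.136/g' /data/pp-agent/pinpoint.config
grep 'profiler.collector.ip' /data/pp-agent/pinpoint.config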

 

Edit "catalina.sh", the Tomcat startup script of the test project, to add the probe; this is what makes the test Tomcat monitorable:

cd /data/pp-test/bin
vi catalina.sh

 

Add the following around line 20:

  1. The first line is the location of the pp-agent JAR.
  2. The second line is the agent ID, which must be unique; I used pp plus today's date. Any value works as long as it does not clash with another project's ID.
  3. The third line is the name of the application being collected; pick anything, as long as projects do not share names.

CATALINA_OPTS="$CATALINA_OPTS -javaagent:/data/pp-agent/pinpoint-bootstrap-1.5.2.jar"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=pp20161122"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=MyTestPP"

5.4 Monitor Tomcat

 

Everything is configured. Start the test Tomcat server to begin monitoring:

cd /data/pp-test/bin/
./startup.sh

 

Check the startup log to confirm that Tomcat started:

tail -f ../logs/catalina.out

 

Once it is up, visit the test environment:

test

test1

When we open pp-web again, a new app has appeared in its drop-down list:

pp-testApp

pp-testView

 

Because I visited twice, two request records show up; drag a box in the chart at the upper right to inspect the details.

[Note] Click the arrow area at the upper right, hold the left mouse button, and drag a selection box. This tripped me up badly, so I am spelling it out here.

pp-detail

 

A new page pops up. You can see that I visited the home page once and the test servlet once, and the details are recorded in the table below.

pp-code

Summary

That completes the whole deployment. Things worth noting:

  1. If HBase is not installed on the same machine as pp-web and pp-col, you need to install ZooKeeper; a plain install is enough, just confirm that port 2181 is listening.
  2. If ZooKeeper runs on its own machine, you must update the pp-collector and pp-web configuration files pinpoint-collector.properties and pinpoint-web.properties accordingly, otherwise both modules fail to start (a sketch follows this list).
  3. Pinpoint still has gaps: it cannot trace asynchronous operations. For example, I wrote a multi-threaded worker sending HttpClient4 requests, and Pinpoint did not capture them, even though it claims HttpClient4 support. Today's distributed systems fetch data asynchronously all the time, so missing async tracing is a serious limitation; let's see whether pp 1.6 fixes it.
  4. When deploying pp 1.6, the default HBase schema has extra columns; without them, very little data is captured.
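
For note 2, the setting to look for in both files is the cluster ZooKeeper address. A sketch, with the property name and its localhost default assumed from the pp 1.5.x configuration files (verify in your WARs) and zk.example.com standing in for your standalone ZooKeeper host:

# Hypothetical standalone-ZooKeeper setup; zk.example.com is a placeholder.
sed -i 's/cluster.zookeeper.address=localhost/cluster.zookeeper.address=zk.example.com/' /data/service/pp-col/webapps/ROOT/WEB-INF/classes/pinpoint-collector.properties
sed -i 's/cluster.zookeeper.address=localhost/cluster.zookeeper.address=zk.example.com/' /data/service/pp-web/webapps/ROOT/WEB-INF/classes/pinpoint-web.properties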

A Microservice Tech Stack for Small and Medium Teams

In recent years Spring Cloud has become the mainstream technology stack for microservice development and is extremely popular in the Chinese developer community. I have been practicing microservice architecture at first-tier internet companies (Ctrip, PPDAI and others), and based on that first-hand experience plus my own evaluation of Spring Cloud, I believe some of its components are still some distance from production grade. For example, Spring Cloud Config and Spring Cloud Sleuth are both Pivotal's home-grown products that have not yet seen large-scale enterprise production use, and many enterprise features are missing (details later in this article).

The Spring Cloud ecosystem also lacks some key microservice infrastructure components, such as metrics monitoring, health checks and alerting. So, using the Spring Cloud stack as a reference and combining my own hands-on experience with the open-source practice of first-tier internet companies at home and abroad (Netflix, Dianping, Ctrip, Zalando and others), I propose a lightweight microservice reference stack that fits the engineering culture of Chinese companies more closely. I hope it gives front-line architects (and startups) solid guidance, sparing them detours and letting them land a microservice architecture quickly.

The reference stack and overall architecture are shown in the figure below:

It consists of 11 core components:

Core supporting components

  1. Service gateway: Zuul
  2. Service registration and discovery: Eureka + Ribbon
  3. Service configuration center: Apollo
  4. Authentication and authorization center: Spring Security OAuth2
  5. Service framework: Spring MVC/Boot

Monitoring and feedback components

  1. Data bus: Kafka
  2. Log monitoring: ELK
  3. Call-chain monitoring: CAT
  4. Metrics monitoring: KairosDB
  5. Health checks and alerting: ZMon
  6. Rate limiting, circuit breaking and stream aggregation: Hystrix/Turbine

The following sections briefly introduce these 11 components. In the course "2018 Bobo's Microservice Infrastructure and Practice" I dissect the architecture and production practice of 8 of the core components in depth; scan the QR code below for details if you are interested.

Core Supporting Components
Service Gateway: Zuul

Around 2013, InfoQ interviewed former Netflix architecture director Adrian Cockcroft [appendix 1]. Asked "Netflix has open-sourced so many projects; which one do you consider the MOST indispensable?", Adrian answered: "Among the NetflixOSS projects there is one that is easily overlooked yet is one of Netflix's most powerful foundation services: the Zuul gateway. Zuul is mainly used for intelligent routing, and it also supports authentication, region-aware and content-aware routing, and aggregating multiple backend services into a unified external API. A highlight of Zuul is that it is dynamically programmable, with configuration taking effect within seconds." Adrian's answer conveys how important the Zuul gateway is to a microservice infrastructure.

Zuul is a monster in English (the Zerg in StarCraft also have a Zuul); Netflix named its gateway Zuul after this gatekeeping beast.

The Zuul gateway is production-proven at Netflix, and since joining the Spring Cloud ecosystem it has seen many successful community deployments. It has also landed successfully at Ctrip (over 5 billion requests per day) and PPDAI, making it the first choice for the gateway slot in a microservice infrastructure. Other open-source products such as Kong or Nginx can also be adapted to act as gateways, but they are more complex and have a higher barrier to entry.

Zuul is not fully asynchronous, but its synchronous model keeps it simple, lightweight, and easy to program and extend. Of course, the synchronous model requires proper rate limiting and circuit breaking (pairing it with the Hystrix component); otherwise resources can be exhausted, even triggering cascading failure.

Service Registration and Discovery: Eureka + Ribbon

For microservice registration and discovery, the only open-source product in the community validated under production-grade heavy traffic is currently Netflix Eureka. It is part of the Spring Cloud ecosystem and has many successful community deployments; for example, Ctrip's Apollo configuration center uses Eureka for soft load balancing. Other products such as Zookeeper/Etcd/Consul are general-purpose building blocks that still need additional wrapping and customization before production-grade use. Eureka supports cross-datacenter high availability, but note that it is an AP, eventually consistent system, not a strongly consistent one.

Ribbon is the client-side soft load balancing library that pairs with Eureka; together they support flexible dynamic routing and load balancing strategies. Internal microservice-to-microservice calls can go directly through the Ribbon client-side load balancer, and Ribbon can also be deployed on the gateway, which then acts as a super-client with routing and soft load balancing capabilities.

Ribbon means a bow ribbon in English.

Service Configuration Center: Apollo

Spring Cloud includes Spring Cloud Config, but its functionality is far from production grade; it is only usable at small scale, and I do not recommend it for medium or large enterprise scenarios. Apollo, open-sourced by Ctrip's framework R&D division, is a product proven in production at Ctrip and many other internet companies. Open source for over two years, it has more than 4k stars on GitHub, and its thorough documentation is another highlight; I recommend it as the enterprise-grade configuration center.

Apollo offers a full management UI and supports multiple environments, real-time configuration changes, permissions, configuration audits, and other production-grade features. It works both for ordinary settings such as connection strings and for advanced scenarios such as release switches (feature flags) and business configuration. The second module of the course "2018 Bobo's Microservice Infrastructure and Practice" covers configuration centers and dissects Apollo's architecture and practice in depth; it is expected in June.

Apollo is the sun god in Greek mythology.

Authentication and Authorization Center: Spring Security OAuth2

The open-source community does not yet have a fully mature microservice authentication and authorization product. The medium and large internet companies I have worked at (Ctrip, Vipshop and so on) mostly built their own, but custom development is a high bar for the average company. OAuth2 is a token-based authorization framework backed by the major players (Google, Facebook, Twitter, Microsoft and more) and can be considered the de facto microservice security standard. It fits open-platform federated login; modern microservice security (single-page browser apps, native mobile apps and server-side web apps calling microservices, plus microservice-to-microservice calls); and in-house IAM/SSO scenarios.

Spring Security OAuth2 is an extension built on Spring Security that supports the four main OAuth2 flows, and it is broadly my recommended base for a microservice authentication and authorization center. It is, however, a framework rather than an end-to-end out-of-the-box product: enterprises still need to build on it, adding a web management UI, integrating the internal user login system, caching tokens, wiring it to the microservice gateway and so on, before it is production ready. The first module of the course "2018 Bobo's Microservice Infrastructure and Practice" covers microservice security architecture and practice, dissecting OAuth2 and Spring Security OAuth2 in depth.

Spring Security OAuth2 is an extension of the Spring Security framework.

Service Framework: Spring/Boot

Spring is arguably one of the most successful web app/API development frameworks ever; it absorbed years of best practice from the Java community, and although it is almost 15 years old, its community is still growing. Spring Boot packages Spring further, with starter projects, self-contained startup, automatic dependency management, code-based configuration and other features that lower the barrier to entry, plus production-grade monitoring via the Actuator, supporting a DevOps development model. It is the recommended first choice of microservice development framework.

The Swagger REST contract standard integrates well with Spring, so Spring also supports a contract-driven development (CDD) model. For medium and large enterprises with complex business and many teams, considering interoperability and integration costs, I recommend contract-driven development: first define the Swagger contract, generate the server-side interfaces and clients from the contract, then implement the server-side business logic. This standardizes interfaces and lowers cross-system integration costs, which matters greatly for parallel development across teams.
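
As a sketch of what that looks like in practice (the tool invocation is my illustration, not from the original text): with the contract written first, swagger-codegen can generate Spring server stubs and a Java client before any business logic exists. api.yaml and the output directories are placeholders:

# Generate server stubs and a client from the contract (swagger-codegen 2.x;
# api.yaml and the output paths are placeholders).
java -jar swagger-codegen-cli.jar generate -i api.yaml -l spring -o generated-server
java -jar swagger-codegen-cli.jar generate -i api.yaml -l java -o generated-client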

Monitoring and Feedback Components
Data Bus: Kafka

Kafka, first developed and massively deployed inside LinkedIn and then open-sourced under Apache, is the industry standard for the data bus slot; you can find Kafka in almost every internet company. Kafka is a classic open-source success story: after leaving LinkedIn, its founding team even started confluent, an enterprise software company offering products and services around Kafka. For monitoring, logs, metrics and similar data can be collected, stored and forwarded through Kafka, which acts as a large buffer in the middle and can absorb massive log volumes. Beyond monitoring data collection, Kafka is widely used in business big-data analysis, IoT and other scenarios, and with suitable customization it can even serve traditional message-middleware scenarios.

Kafka's characteristics are large capacity, high throughput, high availability, re-consumable data, horizontal scalability, consumer groups, and so on. It is especially suited to big-data log scenarios that do not demand strict real-time delivery or zero data loss.

Kafka's three co-creators left LinkedIn and founded Confluent, a startup built on Kafka.

Log Monitoring: ELK

ELK (ElasticSearch/Logstash/Kibana) is the standard stack for log monitoring; it, too, can be found in almost every internet company. Ctrip is said to be the largest ELK user in China, with a daily incremental log volume of 80~90 TB. ELK is very mature and essentially works out of the box; the remaining work is operations, governance and tuning.

ELK is normally paired with Kafka: tokenizing logs is fairly expensive, so Kafka acts as a front buffer that smooths traffic peaks and absorbs the mismatch between log bursts and consumption (tokenizing and indexing). Once the inverted index is built, log search is extremely fast; fast, flexible retrieval is ElasticSearch's biggest selling point. ELK also has enterprise features such as large capacity, high throughput, high availability and horizontal scalability.

For a startup in its early stage, with limited time and resources, call-chain monitoring and metrics monitoring need not be the first priority, but an ELK deployment is a must: application log data must be collected and indexed, since it covers most troubleshooting scenarios (business issues, performance, program bugs and so on).

The key to using ELK well is governance: set rules (for example, collect only logs at Warn level and above) and monitor each application's log volume; otherwise developers will abuse it, dumping junk data into ELK, wasting space and in severe cases causing performance and availability problems.
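
As one concrete form such governance can take (a sketch of my own, with placeholder paths): a cron-able check of per-application daily log volume makes it obvious which teams are dumping excessive data into ELK:

# List the 20 largest application log files for today; paths are placeholders
# for however your log directories are laid out.
du -sh /data/logs/*/$(date +%Y-%m-%d)*.log | sort -rh | head -20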

Reference deployment architecture for ELK + Kafka

Call-Chain Monitoring: CAT

Spring Cloud supports call-chain monitoring via Zipkin, but based on my practical experience Zipkin does not yet qualify as an enterprise-grade call-chain product; at best it is half-finished, missing many important enterprise features. Zipkin was originally built by Twitter after digesting Google's Dapper paper and was applied with some success inside Twitter, but when it was open-sourced, many of the important statistical reporting features were stripped out (they depended on fairly heavyweight big-data analysis platforms). What was released is a half-finished product that can query and visualize call chains in a basic way; the fine-grained call performance reports were never open-sourced.

Google started building its Dapper call-chain monitoring system around 2007, but well before that (around 2002) eBay already had its own: CAL (Centralized Application Logging). Google's and eBay's designs are broadly similar, with some differences. CAL was hugely successful at eBay and is counted among eBay's four signature systems (the other three being DAL, Messaging and SOA).

Wu Qimin, the author of the open-source call-chain system CAT (we were once colleagues; I usually call him Lao Wu), worked at eBay for nearly ten years and deeply absorbed CAL's design. After leaving eBay for Dianping in 2011, he spent three years rebuilding a call-chain product there: CAT (Centralized Application Tracking). CAT carries CAL's genes and its shadow, while folding in Lao Wu's exploration and innovation at Dianping.

CAT is a more complete enterprise-grade call-chain product, close to a full APM (Application Performance Management) product. It supports not only call-chain query and visualization but also fine-grained call performance statistics reports, which is the essential difference between CAT and the other open-source call-chain products on the market. In practice, developers spend most of their CAT time on the performance reports (mainly the Transaction and Problem reports); these reports are a ruler that lets developers measure and continuously improve application performance on their own. CAT also supports an application error dashboard, self-service alerting and other features that are very practical for enterprise monitoring.

CAT has successful deployments at Dianping, Ctrip, Lufax, PPDAI and others. As a home-grown product, its UI and features fit Chinese companies well and it is easier to adopt locally; I recommend CAT as the first choice for microservice call-chain monitoring.

As for the invasiveness concern some people in the community raise about CAT, I see two sides: there is coupling, but performance is better. In a typical company the infrastructure team instruments the base components with CAT once, so application developers rarely instrument anything themselves; and once a company adopts a call-chain product it generally never swaps it out, so developers simply get used to it. Invasiveness is not a big problem.

CAT's Transaction report

Metrics Monitoring: KairosDB

Besides logs and call chains, metrics are another key focus of application monitoring. Internet applications promote metrics-driven development: developers should not only implement features and unit-test them (TDD), but also instrument the business layer (registrations, logins, orders and so on) and the application layer (call counts, call latency and so on). This also reflects DevOps thinking: developers must care about operational needs, and monitoring instrumentation is a production-grade operational need.

Metrics monitoring products sit on top of a time-series database (TSDB). The recent hot open-source options are Prometheus and InfluxDB; community adoption and feedback are good and both are viable, but their distributed capabilities are weak and customization is relatively hard, so I generally recommend them for companies that are just starting out with modest volume.

Once business and team size reach a certain scale, consider a time-series monitoring product with distributed capabilities, such as KairosDB or OpenTSDB; I have hands-on experience with both. KairosDB is built on Cassandra and is comparatively lighter weight, so I suggest it for medium and large companies; if you already run Hadoop/HBase, OpenTSDB is also a fine choice.

KairosDB is also normally paired with Kafka as a front buffer. Note that when writing datapoints to KairosDB, tag values must not be too high-cardinality, or queries suffer performance problems; this follows from KairosDB's underlying storage layout. Grafana is the standard for metrics display and integrates seamlessly with KairosDB.
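
For a feel of the ingestion side, here is a hedged sketch against KairosDB's REST API (host, metric name and tags are placeholders); note the tag values stay low-cardinality, per the caveat above:

# Push one datapoint (timestamp in milliseconds) to KairosDB's REST endpoint.
# Keep tag values low-cardinality to avoid the query problems described above.
curl -X POST http://kairosdb.example.com:8080/api/v1/datapoints \
  -H 'Content-Type: application/json' \
  -d '[{"name": "app.request.latency", "datapoints": [[1530000000000, 42]], "tags": {"host": "web-1", "app": "order-service"}}]'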

Grafana is the standard for metrics display and integrates with all mainstream time-series databases

Health Checks and Alerting: ZMon

On top of the monitoring above, we still need health checks and alerting as complementary tools. ZMon, open-sourced by the German e-commerce company Zalando, is a health-check and alerting platform with powerful, flexible capabilities. At heart, ZMon is a distributed monitoring-task scheduling platform: it ships many check scripts (you can also write your own) that periodically health-check all kinds of hardware resources or target services (HTTP ports, Spring Actuator endpoints, metrics in KairosDB, error logs in ELK and so on) and raise alerts. Its alert logic and policies are written in Python scripts, so developers can set up alerting self-service. ZMon works at the system, application, business, and even end-user-experience level of monitoring and alerting.

ZMon's distributed monitoring and alerting architecture, built on the KairosDB time-series database

Rate Limiting, Circuit Breaking and Stream Aggregation: Hystrix + Turbine

Around 2010, Netflix too was plagued by cascading failures in its distributed microservice system, and it launched a dedicated resilience-engineering project to solve them; Hystrix is the product that project eventually delivered. After Hystrix was rolled out broadly across Netflix's microservices, cascading failures were essentially eliminated and the whole system became far more resilient.

Netflix then open-sourced Hystrix and quickly received plenty of positive community feedback; Hystrix now has over 13k stars on GitHub, and the system supporting Obama's presidential campaign reportedly used Hystrix for rate limiting and circuit breaking [see appendix 2]. Rate limiting and circuit breaking are clearly hard requirements for distributed system stability; Netflix captured that need well and delivered a production-proven answer. Hystrix is part of the Spring Cloud ecosystem and is the first choice for a rate-limiting and circuit-breaking component in the Java community (I see no better second option at present).

Turbine is the stream aggregation service that pairs with Hystrix: it aggregates Hystrix monitoring data streams so that cluster-wide traffic and performance can be viewed on the Hystrix Dashboard.

Hystrix means porcupine in English; a porcupine protects itself with its quills, and Netflix named its rate-limiting and circuit-breaking component Hystrix to suggest that it protects microservice calls.

Exposing Spinnaker to End Users

One of the things we are commonly asked in the Spinnaker chat room is “How do I open Spinnaker up to end users?”

The answer depends on how you deploy Spinnaker — with a Local or Distributed environment.

This post focuses on a Local environment. To prevent inadvertently exposing your cloud infrastructure to the whole world, Halyard installs Spinnaker in its most locked-down form. This means all services only bind to localhost, which only accepts connections from inside the same server.

On the other hand, Distributed environment services bind to 0.0.0.0, which allows them to receive requests from services running on different hosts. This is essential to scaling Spinnaker to large enterprise deployments as a high-availability service. The diagram below outlines Spinnaker's micro-service architecture.

All Spinnaker micro-services can run on a single host in a Local environment, or each on its own host in a Distributed environment.

Starting Point

Because we’re focusing on the Local environment, we must first have a VM with all of Spinnaker installed. The “Halyard on GCE Quickstart” guide is a great way to get a Halyard-enabled instance up and running.

(Semi-Optional) Load Balancer and DNS Entries

It is generally a good practice to front a service with its own load balancer, so that you can change the backing implementation (say, when a new version of Spinnaker is released) without changing the way clients connect to it. A load balancer will have a (usually static) IP address that we can bind to a DNS name. More details on configuring static IPs and connecting them to DNS can be found here.

This step is only semi-optional because most users are going to want to hook up an authentication mechanism like OAuth 2.0, which doesn’t work with raw IP addresses on some OAuth providers.

Opening Gate and Deck

With DNS entries configured, we can now open Gate and Deck for external access. To do this for a Local environment, we need to hook into the custom service settings feature of Halyard.

We’ll specify the 0.0.0.0 host in both gate.yml and deck.yml in our default Halyard deployment with this command:

echo "host: 0.0.0.0" | tee \
    ~/.hal/default/service-settings/gate.yml \
    ~/.hal/default/service-settings/deck.yml
sudo hal deploy apply

You can test this out by navigating to the instance’s public IP address on port 9000 in your browser.
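
A quick way to check both services from the shell (my sketch, not from the original post; /health is the standard Spring Boot actuator path, which I believe Gate exposes in this version):

IP= # put your instance's public IP here
curl http://$IP:9000         # Deck should return the UI's HTML
curl http://$IP:8084/health  # Gate's health endpoint (assumed actuator path)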

Note: You may need to take further action by editing your Security Groups in order to access your instance. For example, a Google Compute Engine instance needs a firewall rule that allows ports 8084 and 9000 through. You can create these with the following commands:

INSTANCE= # put your instance's name here
TAGNAME=blogpost
gcloud compute firewall-rules create $TAGNAME-1 \
    --allow=tcp:8084,tcp:9000 \
    --target-tags $TAGNAME
gcloud compute instances add-tags $INSTANCE --tags=$TAGNAME

Map to your DNS address

Gate and Deck are now listening for all connections, and the security groups are permitting access to ports 8084 and 9000. The last thing is configuring Gate and Deck to talk to each other over their DNS names instead of IP addresses. This is accomplished with the following commands:

hal config security ui edit \
    --override-base-url http://spinnaker.mydomain.org:9000

hal config security api edit \
    --override-base-url http://spinnaker.mydomain.org:8084
hal deploy apply

Test it out

Your Spinnaker instance should now be available and mapped to your DNS entry:

Now that Spinnaker is exposed for your end users, you should explore our different authentication and authorization mechanisms in the Security documentation.

Xiaohongshu's CD Practice in a Container Environment

Since their debut, containers have brought contagious excitement and innovation to software development, winning enormous support across industries and roles, from large enterprises to startups, from R&D to all kinds of IT staff. At the well-known cross-border e-commerce company Xiaohongshu, as the business expanded, the number of deployment units online grew rapidly, and the old deployment model of Jenkins calling file-push scripts could no longer keep up. This article describes how Xiaohongshu achieved containerized, image-based deployment quickly, with minimal investment and minimal development effort, and what it gained by doing so.

Xiaohongshu

Figure 1

Xiaohongshu is a cross-border e-commerce company that grew out of a community. Users like posting about life, fitness, shopping experiences, travel and more on our platform. We now have 50 million users and 10 million posts, with 100 million note impressions per day, spanning makeup, skincare, fitness, travel and many other areas.

Xiaohongshu was among the first companies in China to practice the community-driven e-commerce business model and win market recognition. We funnel traffic from the community into the store, which now carries on the order of 100,000 SKUs. We generate tags from the notes users create in the community and link them to related products, and product pages in turn surface the community notes about that product.

Xiaohongshu is still at the startup stage. Our engineering team is not large, and operations is itself a small team within it; small as it is, it has to cover everything operations-related. A small company has limited resources: limited headcount, and a business that keeps pushing forward. Our strategy for pragmatically landing CI/CD is open source first: prefer open-source products, and only build on top of them where they fall short of our needs.

The previous release process

Figure 2

Figure 2 shows the previous release process. Development asked operations for however many servers it needed; operations initialized them per the request and handed them over. We have an operations platform through which all server provisioning is done: the platform calls the Tencent Cloud API to create servers, initializes the environment, configures monitoring and alerting, and hands development a standardized server.

Figure 3

With servers in hand, developers released by triggering Jenkins scripts: the Jenkins scripts ran the tests and pushed the code. Whenever a server was added or removed, the release script had to be edited. The release flow was roughly: the Jenkins script deploys to the beta environment first, the developer self-tests there, and if all is well, deploys to everything.

We hit plenty of cases where the developer's self-test passed, then the production rollout (always all-at-once) blew up. And rollback? We could only run the whole flow again: the developer reverts to the old code and re-runs the Jenkins script. The full cycle could take over ten minutes, during which the production outage persisted, so efficiency was poor.

This is in fact the status quo at most companies, but it no longer worked for us. Our whole stack is being modernized and environments are growing more complex; keeping the existing release model would clearly risk losing control, and things like automated capacity management are very hard to build on that foundation.

Problems & Requirements

First, our engineering headcount is growing and the tech stack is changing. The environment used to be pure Python; now different teams are trying Java, Go and Node. We are also mid-flight on a microservice split: the old monolith is being decomposed into microservices at speed, so the number of applications has grown a lot and teams have become more specialized. At the same time we are splitting frontend from backend: many app pages used to be rendered server-side, and now the backend serves APIs while the frontend renders pages, so the dependencies between applications keep multiplying. On top of that, e-commerce runs big annual promotions, and scaling out under the old model was slow and labor-intensive. The old model was basically no longer viable or sustainable.

The team started thinking about these problems two or three months ago: how to make the production environment and code releases better. Essentially we needed to:

■ rebuild the code-to-production pipeline;

■ support canary release strategies, with fine-grained traffic control;

■ be able to roll back fast;

■ practice automated testing, with an environment where the automated tests can run;

■ make servers and other resources transparent: developers should not care which server an application runs on, since that is meaningless to them; they should only care about development;

■ make scaling out and scaling in easy.

Approach

Figure 4

We went for containerization from the start, with Kubernetes as the container management framework. Why containers? Because containers and microservices are best friends, and they keep the development and production environments essentially identical. Why Kubernetes? That relates to where we run and deploy: we are a heavy Tencent Cloud user, and Tencent Cloud has solid native support for Kubernetes. By native support I mean several things. First, networking: on bare metal, Kubernetes needs an overlay network or an SDN environment, while Tencent Cloud is software-defined networking to begin with, so containers get networking as fast as the native network with no performance sacrifice. Second, on Tencent Cloud the load balancer can be bound to Kubernetes services, so creating a Kubernetes service maintains the cloud's L4 load balancer. Third, Tencent Cloud's network disks can be managed by Kubernetes, enabling PVCs and so on. And of course, Kubernetes' own feature set is sufficient for our needs.

As I said, as a startup we go open source first, and in the new environment we adopted these open-source technologies (Figure 4): Jenkins, GitLab, Prometheus and Spinnaker. Jenkins and GitLab need no introduction, and Prometheus and Docker are mainstream too.

Let me highlight two newer but currently very hot open-source projects. The first is Traefik, which replaces Nginx as the reverse proxy in our environment; Traefik is reverse-proxy software written in Go. The second is Spinnaker, which I personally consider an outstanding open-source release system. Netflix open-sourced it last year, based on the release system Netflix had long used internally, so it can be called Netflix's best practice in CD. The community is very active, and its Kubernetes support is excellent. I will cover these two in more detail below.

Spinnaker

■ Netflix open-source project

■ Openness and integration capability

■ Strong Pipeline expressiveness

■ Powerful expressions

■ Friendly UI

■ Multiple cloud platforms supported

As mentioned, Spinnaker is an open-source project from Netflix, and Netflix's open-source projects have a solid reputation in the community. It is built for open integration: it natively supports Jenkins and GitLab integration, and it supports webhooks, so in any environment where a downstream resource controller exposes an API, it is easy to wire into Spinnaker.

Its Pipelines are also very capable: they can get arbitrarily complex and can be chained to one another, and a strong expression feature can replace static parameters and values at any point. Variables produced when a Pipeline starts are accessible to every stage: when the Pipeline started, what parameters triggered it, whether a given step succeeded or failed, which image is being deployed this time, what version is currently live; all of it is reachable through variables. It also has a rather friendly UI and, importantly, supports multiple cloud platforms, currently Kubernetes, OpenStack, and Amazon's container platform.

Figure 5

Figure 5 shows Spinnaker's architecture, itself a microservice architecture. It includes the UI, Deck; the API gateway, Gate, which can be opened to the outside for deep integration with other tools; Rosco, the image-baking component, which you can also skip in favor of building images elsewhere; Orca, the core of the system, i.e. the flow engine; Echo, the notification system; Igor, which integrates CI systems such as Jenkins; Front50, the storage management service; Cloud driver, which adapts to the different cloud platforms, with dedicated drivers for Kubernetes and for Amazon's container cloud; and Fiat, the authorization component.

Figure 6

Figure 6 is the UI. At first glance it looks cluttered, but it is actually quite logical. Each block uses one of three colors to show the current state of an instance in the Kubernetes environment; green means alive. On the right is the instance's information: its YML configuration, the cluster it belongs to, its state, and related events.

Figure 7

Figure 7 is the Pipeline UI. First, I find the interface clean and good-looking. Second, Pipelines can be extremely flexible: run the first few steps, wait until they all finish, then run another; make a step a manual approval by some user, then run one of three branches, then another stage; or branch on release versus rollback, each going down its own flow. In short, whatever Pipeline behavior you expect is available, and failing that, webhook mode lets you integrate external systems easily.

Figure 8

Figure 8 shows the Pipeline stage types. At top left, Check Preconditions runs a stage only when its preconditions hold, for example only when all instances of the previous release are alive, or when a previous stage has reached some state. Deploy creates a ReplicaSet in the Kubernetes environment, and within deploy you can also update a server group, disable a cluster, or scale a cluster's capacity down or up. You can also run a script inside a container; sometimes there is a need for this, for example a Java service that cannot take traffic right after boot because it must first run a job that loads data from the database and does some initialization before it can accept traffic.

Figure 9

Pipeline expressions are powerful. They are written in Groovy, a dynamic language, and any Groovy syntax can be used wherever a string appears. So a step's parameters can come from an expression; a step can run conditionally, say only when deploying to production; and a stage can have preconditions that gate whether the flow continues.

Figure 10

Figure 11

Figure 10 shows the various expression types; as far as I can tell they cover essentially all our needs. Pipelines can also be triggered automatically (Figure 11): daily, weekly, monthly or yearly, or on a given day, for things like scheduled automatic releases, or whenever a new image tag is pushed to the image registry.

The relationship between Spinnaker and Kubernetes

Figure 12

How do Spinnaker and Kubernetes relate? Many concepts map one-to-one. Spinnaker has an Account, and an Account maps to a Kubernetes cluster; our environment has three Kubernetes clusters, for development, testing and production, mapped to three Spinnaker Accounts. An Instance maps to a Kubernetes Pod, one running unit. A Server Group maps to a ReplicaSet or a Deployment. And what Spinnaker calls a Load Balancer is, in Kubernetes, a Service.

Traefik

Figure 13

Traefik highlights:

■ Hot configuration reload, no restart needed

■ Built-in circuit breaking

-traefik.backend.circuitbreaker: NetworkErrorRatio() > 0.5

■ Dynamically weighted round-robin

-traefik.backend.loadbalancer.method: drr

Why do we use Traefik rather than Nginx as the reverse proxy? First, Traefik hot-reloads its configuration: with Nginx, updating routing rules or taking backend servers in and out of rotation requires a reload, but Traefik does not. Traefik has built-in circuit breaking: you can define, say, that a backend instance whose error ratio exceeds 50% is proactively broken out of rotation and receives no more requests. It also has a dynamic load balancing strategy: it tracks every backend instance's response time or connection count over the last 5 seconds, and if one backend responds particularly slowly, its weight is lowered for the next 5 seconds until it recovers to normal performance, in a continuous adjustment loop. That is a feature we need: once on containers, it is hard to guarantee that all instances of an application land on nodes with the same processing power; cloud providers buy servers in batches, no two batches are identical, and node performance is never uniform.

Figure 14

Figure 14 is Traefik's built-in dashboard: the rules we define and the state of the backend instances are shown in real time.

Why Traefik in the Kubernetes environment?

■ It is an Ingress Controller in the Kubernetes cluster

■ It dynamically loads Ingress updates into its routing rules

■ It dynamically tracks backend Pods from the Service definition

■ It adjusts the set of available Pods from their liveness check results

■ Requests go straight to the Pods

How do Traefik and Kubernetes fit together, and why pick Traefik there? Traefik runs in Kubernetes as an Ingress Controller. Kubernetes introduced the Ingress concept in 1.4 (a minimal example follows below). Before that, Kubernetes only had Services for service discovery and load balancing; a Service is an L4 load balancer and cannot do rule-based forwarding. In Kubernetes, Ingress is the L7 HTTP layer; Kubernetes itself does not do L7 load balancing but delegates it to an Ingress Controller, and Traefik is one such controller. It dynamically loads the routing rules defined by Ingress objects in Kubernetes; an Ingress rule also references a Service, the Service maps to concrete Pods, and from that Traefik sends requests directly to the target Pods, bypassing the iptables forwarding the Service maintains. The Pod list is adjusted dynamically from each Pod's liveness and readiness state.
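
Here is a minimal sketch of such an Ingress (all names and the hostname are placeholders, not from the talk); once applied, Traefik picks the rule up without a restart:

# Minimal Ingress that Traefik watches and turns into a route
# (era-appropriate extensions/v1beta1 API; all names/hosts are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-a
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: app1-a.dev.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-a
          servicePort: 80
EOF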

Figure 15

Figure 15 shows the new release flow, which is also the development flow. There are three stages: development, integration testing, and production.

In the development stage, the developer creates a feature branch at the start of an iteration and pushes every update to that branch on GitLab. A webhook configured in GitLab triggers a Jenkins job, which runs the unit tests and builds an image for the feature branch with a specific tag. The new image then triggers a Spinnaker deployment, which happens only in the development environment.

How does the developer reach the environment just deployed? For an HTTP application named, say, APP1 with branch name A, the developer opens APP1-A.dev.xiaohongshu.com to hit the code of feature A. Iteration continues throughout the cycle, and when the developer considers the feature done, the code is pushed to release. A push to release triggers another build, much like the previous one, followed by automated tests: the test team provides the automated test tooling, Spinnaker invokes it, and the results are checked.

When we feel confident enough to ship to production, we create a tag on Git, say 0.1.1 for today's 0.1.1 release, and that likewise triggers an image build. The three stages build images with different tags, and for each new tag, Spinnaker triggers a different Pipeline based on the tag naming rules, deploying to the corresponding environment.

Canary

Most importantly, we have a canary release process, a mechanism we built on top of Spinnaker. Canary is like beta except that it takes real traffic: online users are split into two groups, stable-version users and canary users who use the new version first. Concretely, our policy is to start with the company, the people in our own office. If that gray release looks fine, user feedback is OK and the monitoring data looks clean, we continue the gray rollout over randomly selected online users in stages of 1%-10%-20%-50%-100%. Monitoring data is available throughout, and at any point an anomaly can be rolled back through Spinnaker.

Figure 16

This is the canary schematic: online users are split into two groups, most users hit the old version while specific users are routed by the load balancer to the specific new version, and backend monitoring data makes it easy to compare the two versions.

Figure 17

Figure 17 is the canary architecture we built in the container environment. A user request comes in from the front and first hits Traefik; if no canary is in progress, Traefik sends it straight to the stable group of instances. To release a new version, an HTTP API controls the canary proxy service and decides what traffic may reach the new version. The policy might be office users (recognized by source IP), or online Android users, or 1% of online Android users; all of this is definable.

Figure 18

Figure 18 shows the real production rollout flow. First a canary policy is set; it can be fully random or based on specific user origins, say office users, or all users in Shanghai, with tunable parameters such as 1% of Shanghai users or all of them. Then the service is deployed. Next the canary instances are scaled out, because capacity must be ready before traffic arrives. Then the traffic is redirected: instead of going straight to the backend Pods, it goes to the canary proxy service, which distributes it further according to the policy and the user's origin. The whole process iterates, from 1% of online users gradually up to 100%. At 100%, a red/black strategy replaces all old versions: all new-version instances are created first, and once they all pass health checks and are online, the old versions are taken down in bulk, completing the gray release. If a problem is found mid-way and the rollout cannot continue, rollback is immediate: rolling back simply means sending the traffic back to the live stable version.

Figure 19

The picture above (Figure 19) is our canary policy, a mechanism we implemented ourselves. The example canaries half of the iPhone users coming from a specified network range. User grouping can use other dimensions too; we currently support fully random, specific IPs, and specific device types, and these rules can be combined.

Our user grouping guarantees consistency: once a user is assigned to a group, the assignment does not change for the duration of the gray release, since anything else would hurt the user experience.

Next steps

■ ACA: automated canary analysis

■ Automated capacity management

We plan to do two things next. First, automated canary analysis, or ACA. AIOps is a hot concept now, and I personally consider automated canary analysis a concrete AIOps landing: during a gray release, a human judges whether the new version is healthy, but if log collection is complete enough, that judgment can be made by a machine, which scores the new version from all the data; the release system then automatically continues the release, or stops it and rolls back, based on the score. Second, further out, automated capacity management, on top of Kubernetes of course, to make better use of compute resources.

To wrap up: a good CD system should keep release risk under control; and with limited headcount we lean toward solving problems with open source, building adapting features ourselves only where open source falls short.

Getting Started with Mesos Resource Reservation & Marathon Watchdog – A “Hello World” Example

2016-12-14-01_22_32

Today, we will introduce Apache Mesos, an open source distributed computing system whose goal is to let applications run on a computer cluster as if they were running on a single computer. On top of a Mesos cluster, we will run Mesosphere Marathon, an open source container orchestration platform. Similar to a watchdog, Marathon helps run and maintain long-running applications. However, unlike a mere watchdog, Marathon runs the applications in containers, and it provides a modern web portal and a modern RESTful API.

With the help of Marathon, we will

  • run several instances of a simple “Hello World” script on the cluster (within and outside of Docker containers);
  • see, what happens, if an application dies unexpectedly;
  • see, what happens, if an application reservation request exceeds the available resources.

For simplicity and quick installation purposes, all components of the Mesos architecture will be run within Docker containers.

What is Mesos?

Mesos is an open source framework and provides a distributed computer system. Mesos provides applications (e.g. Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenters and cloud environments.

 

2016-12-14-02_28_36-mesos-marathon-architecture-google-slides

The Mesos agents advertise their available resources (CPU, DRAM, …) to the master, which relays those offers to frameworks like Marathon, Hadoop, Jenkins, and many more. A framework may reserve all or part of the offered resources and run its application on the Mesos agents (slaves).

What is Marathon?

Mesosphere, the owner of Marathon calls Marathon “a production-grade container orchestration platform for Mesosphere’s Datacenter Operating System (DC/OS) and Apache Mesos.”

Among others, it offers:

  • active-standby redundancy for increased availability;
  • starting containers on the Mesos agents; both Mesos containers (using cgroups) and Docker containers are supported;
  • a powerful GUI;
  • a REST API for easier integration (a minimal example follows below).

A more complete feature list can be found here.
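
As a taste of that REST API (a hedged sketch; the host is a placeholder, and Marathon's default port 8080 is assumed, though later in this post it is moved to 7070 to avoid a ZooKeeper clash): a long-running "Hello World" app can be submitted with a single POST:

# Submit a trivial long-running app to Marathon (POST /v2/apps).
# marathon.example.com and the app id are placeholders.
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{"id": "hello-world", "cmd": "while true; do echo Hello World; sleep 5; done", "cpus": 0.1, "mem": 32, "instances": 2}'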

Compared to other schedulers like Apache Aurora used by Twitter, Marathon seems to be much easier to handle. On the other side, Aurora offers elaborate prioritization and preemption features. Those may be important if the same resources are shared between production and development: if a production workload does not find any resources on a Mesos slave, Aurora will kill off less important applications in order to free up resources.

A good comparison between Mesosphere Marathon and Apache Aurora can be found on this Hootsuite Development’s web page.

Target Configuration for this Blog Post

In this “Hello World” example, we will create a simple Mesos configuration with a Marathon framework, a single Zookeeper, a single master and a single agent (slave):

2016-12-14-02_14_43-mesos-marathon-architecture-google-slides

We will also show what happens, if we kill a Marathon app.

Versions & Tools used

Prerequisites:

  • >~4 GB RAM: after starting a Mesos master, a Mesos agent (slave), a ZooKeeper and a Marathon Docker container, I have observed a DRAM usage of ~3.8 GB

Step 1: Install a Docker Host via Vagrant and Connect to the Host via SSH

If you are using an existing docker host, make sure that your host has enough free memory.

We will run the applications in Docker containers in order to allow for maximum interoperability. This way, we can always use the latest versions without needing to control the Java version in use.

If you are new to Docker, you might want to read this blog post.

Installing Docker on Windows and Mac can be a real challenge, but no worries: we will show an easy way here that is much quicker than the one described in Docker’s official documentation:

Prerequisites of this step:

  • I recommend having direct access to the Internet: via firewall, but without an HTTP proxy. However, if you cannot get rid of your HTTP proxy, read this blog post.
  • Administration rights on your computer.

Steps to install a Docker Host VirtualBox VM:

1. Download and install VirtualBox (if the installation fails with error message "<to be completed>", see Appendix A of this blog post: Virtualbox Installation Workaround)

2. Download and install Vagrant (requires a reboot)

3. Download the Vagrant box containing an Ubuntu-based Docker host and create a VirtualBox VM as follows:

basesystem# mkdir ubuntu-trusty64-docker ; cd ubuntu-trusty64-docker
basesystem# vagrant init williamyeh/ubuntu-trusty64-docker
basesystem# vagrant up
basesystem# vagrant ssh

Now you are logged into the Docker host and we are ready for the next step: downloading the Docker images.

Note: I have experienced problems with the vi editor when running vagrant ssh in a Windows terminal. In case of Windows, consider following Appendix C of this blog post and using putty instead.

Step 2 (recommended): Download Docker Images

This step is optional, since the download will be done automatically with each docker run command if the image is not available on the Docker host. However, I recommend downloading the images in advance, so that when you run the applications, you can observe the logs and other feedback (syntax errors) immediately.

Step 2.1 Download Zookeeper

Looking at the Mesosphere GitHub documentation, it seems like a Zookeeper (Exhibitor) is always needed. The only tag available as of today is 1.5.2. Let us download it first:

(dockerhost)$ sudo docker pull netflixoss/exhibitor:1.5.2
1.5.2: Pulling from netflixoss/exhibitor

a3ed95caeb02: Pull complete
831a6feb5ab2: Pull complete
b32559aac4de: Pull complete
5e99535a7b44: Pull complete
aa076096cff1: Pull complete
423664404a49: Pull complete
929c1efe4d14: Pull complete
387bf8857f2e: Pull complete
5efe9ea3de0d: Pull complete
a53f74fd9d17: Pull complete
78b42a885be7: Pull complete
684d8691844e: Pull complete
Digest: sha256:9b384a431d2e231f0bd3fcda5eff20d5eabd5ba1e3361764a4834d3401fbc4d4
Status: Downloaded newer image for netflixoss/exhibitor:1.5.2

Step 2.2 Download Mesos Master

It is quite confusing how many distributions of Mesos exist. The Docker Hub distribution from Mesoscloud seems to have the most stars:

$ sudo docker search mesos
NAME                                DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
mesoscloud/mesos-master             Mesos Master                                    50                   [OK]
mesoscloud/mesos-slave              Mesos Slave                                     31                   [OK]
...

However, if I search for mesos-master, I find another image with even more stars and downloads:

$ sudo docker search mesos-master
NAME                                         DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
mesosphere/mesos-master                      Mesos-Master in Docker                          71
mesoscloud/mesos-master                      Mesos Master                                    50                   [OK]
...

Mesosphere also offers a two-in-one solution with master and slave combined in a single container: mesosphere/mesos. However, the recommended way of running Mesos is to run a master on one Docker host and slaves on other Docker hosts.

Let us download the images mesosphere/mesos-master and mesosphere/mesos-slave. Note that the "latest" tag does not exist, so we need to specify the tag explicitly. Let us try with the latest version I have found to exist for both master and slave: i.e. 1.1.01.1.0-2.0.107.ubuntu1404.

(dockerhost)$ docker pull mesosphere/mesos-master:1.1.01.1.0-2.0.107.ubuntu1404
1.1.01.1.0-2.0.107.ubuntu1404: Pulling from mesosphere/mesos-master

bf5d46315322: Already exists
9f13e0ac480c: Already exists
e8988b5b3097: Already exists
40af181810e7: Already exists
e6f7c7e5c03e: Already exists
a3ed95caeb02: Already exists
01a862c74d96: Already exists
651b06ceb77e: Already exists
Digest: sha256:a011e002d641c6ba8361c542bd9429af721b7c7434598a9615cbd5b05511af7f
Status: Downloaded newer image for mesosphere/mesos-master:1.1.01.1.0-2.0.107.ubuntu1404

The version of the downloaded Mesos Master image can be checked with the following command:

(dockerhost)$ sudo docker run -it --rm --name mesos-master mesosphere/mesos-master:1.1.01.1.0-2.0.107.ubuntu1404 --version
mesos 1.1.0

We are using version 1.1.0, currently.

Step 2.3 Download Mesos Slave

(dockerhost)$ docker pull mesosphere/mesos-slave:1.1.01.1.0-2.0.107.ubuntu1404
1.1.01.1.0-2.0.107.ubuntu1404: Pulling from mesosphere/mesos-slave

bf5d46315322: Already exists
9f13e0ac480c: Already exists
e8988b5b3097: Already exists
40af181810e7: Already exists
e6f7c7e5c03e: Already exists
a3ed95caeb02: Already exists
01a862c74d96: Already exists
651b06ceb77e: Already exists
Digest: sha256:bb75cc78c6880a2faa5307e3d8caa806105c673e9002429e60e3ae858d162999
Status: Downloaded newer image for mesosphere/mesos-slave:1.1.01.1.0-2.0.107.ubuntu1404

Step 2.4 Download Marathon

While Mesos will offer computing resources, Marathon is a framework that will ask for those resources.

(dockerhost)$ sudo docker pull mesosphere/marathon
Using default tag: latest
latest: Pulling from mesosphere/marathon

43c265008fae: Pull complete
af36d2c7a148: Pull complete
143e9d501644: Pull complete
bfc4cdbc8d81: Pull complete
38c6fc3e9968: Pull complete
0bfa8d5153bb: Pull complete
05bc8d0fffca: Pull complete
f1266a2a7ecb: Pull complete
f505e7ed4b7e: Pull complete
219f8c7fc022: Pull complete
Digest: sha256:9c881ff6f46a0da69f622a19a1677f1424a12ef37d076ec439854f15b97179fa
Status: Downloaded newer image for mesosphere/marathon:latest

Marathon does not offer a --version option or alike. The Marathon version can only be seen in the log when running Marathon:

(dockerhost)$ sudo docker run -it --net=host -v `pwd`:/work_dir --entrypoint=bash mesosphere/marathon
root@openshift-installer:/marathon# ./bin/start --master local --zk zk://localhost:2181/marathon --http_port=7070
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[2016-12-12 21:01:46,268] INFO Starting Marathon 1.3.6/unknown with --master local --zk zk://localhost:2181/marathon --http_port=7070 (mesosphere.marathon.Main$:main)

Step 3: Run Mesos

In this step, we will run the Mesos master interactively (with the -it switch instead of the -d switch) to better see what is happening. In a production environment, you would use the detached mode -d instead of the interactive terminal mode -it.

We have found out by analyzing the Dockerfiles of mesos-master and mesos-slave images that both are based on mesosphere/mesos with different entrypoints and commands:

  • mesosphere/mesos-master: entrypoint: mesos-master with default option --registry=in_memory
  • mesosphere/mesos-slave: entrypoint: mesos-slave with no default options

Interestingly, the Dockerfiles have no exposed ports specified. The reason is that the Docker images are supposed to run in host network mode, thereby sharing the interface, including IP address(es) and ports, with the Docker host.

The usage of the docker images is documented on Github. It seems like a zookeeper is needed:

Step 3.1: Run Zookeeper (Exhibitor) interactively in a Container

In order to better see what is happening, we will run the Zookeeper in interactive terminal (-it) mode instead of detached mode (-d), as described in the documentation. With the --net=host option, we share the network with the Docker host, so we do not need to explicitly expose the used TCP ports:

(dockerhost)$ sudo docker run -it --net=host --name=zookeeper netflixoss/exhibitor:1.5.2
v1.0
INFO com.netflix.exhibitor.core.activity.ActivityLog Exhibitor started [main]
Dec 12, 2016 5:05:28 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
INFO org.mortbay.log Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog [main]
INFO org.mortbay.log jetty-1.0 [main]
Dec 12, 2016 5:05:29 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9.1 09/14/2011 02:36 PM'
INFO org.mortbay.log Started SocketConnector@0.0.0.0:8080 [main]
INFO com.netflix.exhibitor.core.activity.ActivityLog State: down [ActivityQueue-0]
Dec 12, 2016 5:05:30 PM java.util.prefs.FileSystemPreferences$6 run
WARNING: Prefs file removed in background /root/.java/.userPrefs/prefs.xml
INFO com.netflix.exhibitor.core.activity.ActivityLog Attempting to stop instance [ActivityQueue-0]
INFO com.netflix.exhibitor.core.activity.ActivityLog Attempting to start/restart ZooKeeper [ActivityQueue-0]
INFO com.netflix.exhibitor.core.activity.ActivityLog jps didn't find instance - assuming ZK is not running [ActivityQueue-0]
INFO com.netflix.exhibitor.core.activity.ActivityLog Starting in standalone mode [ActivityQueue-0]
ERROR com.netflix.exhibitor.core.activity.ActivityLog ZooKeeper Server: JMX enabled by default [pool-2-thread-1]
INFO com.netflix.exhibitor.core.activity.ActivityLog Process started via: /zookeeper/bin/zkServer.sh [ActivityQueue-0]
ERROR com.netflix.exhibitor.core.activity.ActivityLog ZooKeeper Server: Using config: /zookeeper/bin/../conf/zoo.cfg [pool-2-thread-1]
INFO com.netflix.exhibitor.core.activity.ActivityLog ZooKeeper Server: Starting zookeeper ... STARTED [pool-2-thread-2]
INFO com.netflix.exhibitor.core.activity.ActivityLog State: serving [ActivityQueue-0]

In the log, we see that the ZooKeeper is using port 8080. Therefore, we will see in Step 3.3 that the TCP port for Marathon needs to be changed in order to avoid a port resource collision.

Step 3.2: Run Master interactively in a Container

Here again, we are following the documentation of the images, but we are starting the Mesos master in interactive mode in a different terminal in order to see all logs, and we use the latest version 1.1.0 instead of 0.28.0:

(dockerhost)$ sudo docker run -it --net=host \
  --name mesos-master \
  -e MESOS_PORT=5050 \
  -e MESOS_ZK=zk://127.0.0.1:2181/mesos \
  -e MESOS_QUORUM=1 \
  -e MESOS_REGISTRY=in_memory \
  -e MESOS_LOG_DIR=/var/log/mesos \
  -e MESOS_WORK_DIR=/var/tmp/mesos \
  -v "$(pwd)/log/mesos:/var/log/mesos" \
  -v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
  mesosphere/mesos-master:1.1.01.1.0-2.0.107.ubuntu1404
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1212 17:10:44.898630 1 main.cpp:263] Build: 2016-11-16 01:30:23 by ubuntu
I1212 17:10:44.898900 1 main.cpp:264] Version: 1.1.0
I1212 17:10:44.898916 1 main.cpp:267] Git tag: 1.1.0
I1212 17:10:44.898927 1 main.cpp:271] Git SHA: a44b077ea0df54b77f05550979e1e97f39b15873
I1212 17:10:44.903816 1 logging.cpp:194] INFO level logging started!
I1212 17:10:44.904436 1 main.cpp:370] Using 'HierarchicalDRF' allocator
2016-12-12 17:10:44,905:1(0x7ff4a8164700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-12-12 17:10:44,905:1(0x7ff4a8164700):ZOO_INFO@log_env@730: Client environment:host.name=openshift-installer
2016-12-12 17:10:44,905:1(0x7ff4a8164700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-12-12 17:10:44,905:1(0x7ff4a8164700):ZOO_INFO@log_env@738: Client environment:os.arch=4.2.0-42-generic
2016-12-12 17:10:44,905:1(0x7ff4a8164700):ZOO_INFO@log_env@739: Client environment:os.version=#49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016
2016-12-12 17:10:44,907:1(0x7ff4a6961700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@log_env@730: Client environment:host.name=openshift-installer
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@log_env@738: Client environment:os.arch=4.2.0-42-generic
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@log_env@739: Client environment:os.version=#49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@log_env@767: Client environment:user.dir=/
2016-12-12 17:10:44,910:1(0x7ff4a6961700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=127.0.0.1:2181 sessionTimeout=10000 watcher=0x7ff4b16c6200 sessionId=0 sessionPasswd=<null> context=0x7ff490000930 flags=0
I1212 17:10:44.909833 11 master.cpp:380] Master 917a95ab-7b77-4316-8e52-1431a8043af3 (openshift-installer-native-docker-compose) started on 127.0.0.1:5050
I1212 17:10:44.912077 11 master.cpp:382] Flags at startup: --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate_agents="false" --authenticate_frameworks="false" --authenticate_http_frameworks="false" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticators="crammd5" --authorizers="local" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --port="5050" --quiet="false" --quorum="1" --recovery_agent_removal_limit="100%" --registry="in_memory" --registry_fetch_timeout="1mins" --registry_gc_interval="15mins" --registry_max_agent_age="2weeks" --registry_max_agent_count="102400" --registry_store_timeout="20secs" --registry_strict="false" --root_submissions="true" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/var/tmp/mesos" --zk="zk://127.0.0.1:2181/mesos" --zk_session_timeout="10secs"
2016-12-12 17:10:44,909:1(0x7ff4a8164700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
2016-12-12 17:10:44,913:1(0x7ff4a8164700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-12-12 17:10:44,913:1(0x7ff4a8164700):ZOO_INFO@log_env@767: Client environment:user.dir=/
2016-12-12 17:10:44,913:1(0x7ff4a8164700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=127.0.0.1:2181 sessionTimeout=10000 watcher=0x7ff4b16c6200 sessionId=0 sessionPasswd=<null> context=0x7ff4980038a0 flags=0
W1212 17:10:44.913079 11 master.cpp:385]
**************************************************
Master bound to loopback interface! Cannot communicate with remote schedulers or agents. You might want to set '--ip' flag to a routable IP address.
**************************************************
I1212 17:10:44.914429 11 master.cpp:434] Master allowing unauthenticated frameworks to register
I1212 17:10:44.914448 11 master.cpp:448] Master allowing unauthenticated agents to register
I1212 17:10:44.914455 11 master.cpp:462] Master allowing HTTP frameworks to register without authentication
I1212 17:10:44.914474 11 master.cpp:504] Using default 'crammd5' authenticator
W1212 17:10:44.914487 11 authenticator.cpp:512] No credentials provided, authentication requests will be refused
I1212 17:10:44.914687 11 authenticator.cpp:519] Initializing server SASL
2016-12-12 17:10:44,922:1(0x7ff497fff700):ZOO_INFO@check_events@1728: initiated connection to server [127.0.0.1:2181]
2016-12-12 17:10:44,923:1(0x7ff497fff700):ZOO_INFO@check_events@1775: session establishment complete on server [127.0.0.1:2181], sessionId=0x158f400557d0003, negotiated timeout=10000
2016-12-12 17:10:44,923:1(0x7ff4977fe700):ZOO_INFO@check_events@1728: initiated connection to server [127.0.0.1:2181]
2016-12-12 17:10:44,924:1(0x7ff4977fe700):ZOO_INFO@check_events@1775: session establishment complete on server [127.0.0.1:2181], sessionId=0x158f400557d0002, negotiated timeout=10000
I1212 17:10:44.924991 8 group.cpp:340] Group process (zookeeper-group(2)@127.0.0.1:5050) connected to ZooKeeper
I1212 17:10:44.925303 8 group.cpp:828] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1212 17:10:44.925424 8 group.cpp:418] Trying to create path '/mesos' in ZooKeeper
I1212 17:10:44.925204 5 group.cpp:340] Group process (zookeeper-group(1)@127.0.0.1:5050) connected to ZooKeeper
I1212 17:10:44.925606 5 group.cpp:828] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1212 17:10:44.925617 5 group.cpp:418] Trying to create path '/mesos' in ZooKeeper
I1212 17:10:44.931725 8 contender.cpp:152] Joining the ZK group
I1212 17:10:44.932301 11 master.cpp:1951] Successfully attached file '/var/log/mesos/mesos-master.INFO'
I1212 17:10:44.936324 9 contender.cpp:268] New candidate (id='1') has entered the contest for leadership
I1212 17:10:44.937194 5 detector.cpp:152] Detected a new leader: (id='1')
I1212 17:10:44.937408 5 group.cpp:697] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I1212 17:10:44.939241 5 zookeeper.cpp:259] A new leading master (UPID=master@127.0.0.1:5050) is detected
I1212 17:10:44.939414 5 master.cpp:2017] Elected as the leading master!
I1212 17:10:44.939437 5 master.cpp:1560] Recovering from registrar
I1212 17:10:44.941402 5 registrar.cpp:362] Successfully fetched the registry (0B) in 1.7152ms
I1212 17:10:44.941573 5 registrar.cpp:461] Applied 1 operations in 5462ns; attempting to update the registry
I1212 17:10:44.946907 5 registrar.cpp:506] Successfully updated the registry in 5.135104ms
I1212 17:10:44.947170 5 registrar.cpp:392] Successfully recovered registrar
I1212 17:10:44.947314 5 master.cpp:1676] Recovered 0 agents from the registry (184B); allowing 10mins for agents to re-register
2016-12-12 17:11:11,640:1(0x7ff497fff700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 14ms
2016-12-12 17:11:11,641:1(0x7ff4977fe700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 15ms
2016-12-12 17:11:35,045:1(0x7ff497fff700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 50ms
2016-12-12 17:11:35,046:1(0x7ff4977fe700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 51ms

Step 3.3: Run Slave interactively in a Container

Here again, we are following the documentation of the images, but we are starting the Mesos slave in interactive mode (-it) in a third terminal in order to see all logs, and we use the latest version 1.1.0 instead of 0.28.0:

(dockerhost)$ sudo docker run -it --net=host --privileged \
  -e MESOS_PORT=5051 \
  -e MESOS_MASTER=zk://127.0.0.1:2181/mesos \
  -e MESOS_SWITCH_USER=0 \
  -e MESOS_CONTAINERIZERS=docker,mesos \
  -e MESOS_LOG_DIR=/var/log/mesos \
  -e MESOS_WORK_DIR=/var/tmp/mesos \
  -v "$(pwd)/log/mesos:/var/log/mesos" \
  -v "$(pwd)/tmp/mesos:/var/tmp/mesos" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /cgroup:/cgroup \
  -v /sys:/sys \
  -v /usr/local/bin/docker:/usr/local/bin/docker \
  mesosphere/mesos-slave:1.1.01.1.0-2.0.107.ubuntu1404
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1212 17:18:39.260704 1 main.cpp:243] Build: 2016-11-16 01:30:23 by ubuntu
I1212 17:18:39.261031 1 main.cpp:244] Version: 1.1.0
I1212 17:18:39.261075 1 main.cpp:247] Git tag: 1.1.0
I1212 17:18:39.261108 1 main.cpp:251] Git SHA: a44b077ea0df54b77f05550979e1e97f39b15873
I1212 17:18:39.265000 1 logging.cpp:194] INFO level logging started!
I1212 17:18:39.400902 1 containerizer.cpp:200] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
I1212 17:18:39.429229 1 linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
2016-12-12 17:18:39,438:1(0x7ff8f7601700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-12-12 17:18:39,439:1(0x7ff8f7601700):ZOO_INFO@log_env@730: Client environment:host.name=openshift-installer
2016-12-12 17:18:39,439:1(0x7ff8f7601700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-12-12 17:18:39,439:1(0x7ff8f7601700):ZOO_INFO@log_env@738: Client environment:os.arch=4.2.0-42-generic
2016-12-12 17:18:39,439:1(0x7ff8f7601700):ZOO_INFO@log_env@739: Client environment:os.version=#49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016
I1212 17:18:39.438886 1 slave.cpp:208] Mesos agent started on (1)@127.0.0.1:5051
2016-12-12 17:18:39,439:1(0x7ff8f7601700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
I1212 17:18:39.439324 1 slave.cpp:209] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="docker,mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="linux" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://127.0.0.1:2181/mesos" --max_completed_executors_per_framework="150" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --runtime_dir="/var/run/mesos" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="false" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/var/tmp/mesos"
2016-12-12 17:18:39,441:1(0x7ff8f7601700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-12-12 17:18:39,442:1(0x7ff8f7601700):ZOO_INFO@log_env@767: Client environment:user.dir=/
2016-12-12 17:18:39,442:1(0x7ff8f7601700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=127.0.0.1:2181 sessionTimeout=10000 watcher=0x7ff9023a0200 sessionId=0 sessionPasswd=<null> context=0x7ff8d4001e00 flags=0
2016-12-12 17:18:39,445:1(0x7ff8f4dfc700):ZOO_INFO@check_events@1728: initiated connection to server [127.0.0.1:2181]
2016-12-12 17:18:39,446:1(0x7ff8f4dfc700):ZOO_INFO@check_events@1775: session establishment complete on server [127.0.0.1:2181], sessionId=0x158f400557d0004, negotiated timeout=10000
W1212 17:18:39.440688 1 slave.cpp:212]
**************************************************
Agent bound to loopback interface! Cannot communicate with remote master(s). You might want to set '--ip' flag to a routable IP address.
**************************************************
I1212 17:18:39.448009 8 group.cpp:340] Group process (zookeeper-group(1)@127.0.0.1:5051) connected to ZooKeeper
I1212 17:18:39.448274 8 group.cpp:828] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1212 17:18:39.448418 8 group.cpp:418] Trying to create path '/mesos' in ZooKeeper
I1212 17:18:39.450460 1 slave.cpp:533] Agent resources: cpus(*):2; mem(*):2928; disk(*):1902607; ports(*):[31000-32000]
I1212 17:18:39.452038 8 detector.cpp:152] Detected a new leader: (id='1')
I1212 17:18:39.452385 8 group.cpp:697] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I1212 17:18:39.452198 1 slave.cpp:541] Agent attributes: [ ]
I1212 17:18:39.454398 1 slave.cpp:546] Agent hostname: openshift-installer-native-docker-compose
I1212 17:18:39.459702 8 zookeeper.cpp:259] A new leading master (UPID=master@127.0.0.1:5050) is detected
I1212 17:18:39.462762 9 state.cpp:57] Recovering state from '/var/tmp/mesos/meta'
I1212 17:18:39.464331 9 status_update_manager.cpp:203] Recovering status update manager
I1212 17:18:39.465109 9 docker.cpp:764] Recovering Docker containers
I1212 17:18:39.465148 11 containerizer.cpp:555] Recovering containerizer
I1212 17:18:39.472216 6 provisioner.cpp:253] Provisioner recovery complete
I1212 17:18:39.917292 8 slave.cpp:5281] Finished recovery
I1212 17:18:39.938516 8 slave.cpp:915] New master detected at master@127.0.0.1:5050
I1212 17:18:39.939077 8 slave.cpp:936] No credentials provided. Attempting to register without authentication
I1212 17:18:39.939728 8 slave.cpp:947] Detecting new master
I1212 17:18:39.938575 9 status_update_manager.cpp:177] Pausing sending status updates
E1212 17:18:40.622269 8 process.cpp:2154] Failed to shutdown socket with fd 12: Transport endpoint is not connected
I1212 17:18:40.631978 8 slave.cpp:4179] Got exited event for master@127.0.0.1:5050
W1212 17:18:40.634527 8 slave.cpp:4184] Master disconnected! Waiting for a new master to be elected
E1212 17:18:40.632072 13 process.cpp:2154] Failed to shutdown socket with fd 12: Transport endpoint is not connected
E1212 17:18:42.052165 13 process.cpp:2154] Failed to shutdown socket with fd 12: Transport endpoint is not connected
I1212 17:18:42.052312 7 slave.cpp:4179] Got exited event for master@127.0.0.1:5050
W1212 17:18:42.057737 7 slave.cpp:4184] Master disconnected! Waiting for a new master to be elected
I1212 17:18:45.093793 6 slave.cpp:4179] Got exited event for master@127.0.0.1:5050
W1212 17:18:45.094339 6 slave.cpp:4184] Master disconnected! Waiting for a new master to be elected
E1212 17:18:45.093952 13 process.cpp:2154] Failed to shutdown socket with fd 12: Transport endpoint is not connected
I1212 17:18:48.748410 9 slave.cpp:4179] Got exited event for master@127.0.0.1:5050
W1212 17:18:48.748883 9 slave.cpp:4184] Master disconnected! Waiting for a new master to be elected
E1212 17:18:48.748569 13 process.cpp:2154] Failed to shutdown socket with fd 12: Transport endpoint is not connected
2016-12-12 17:18:49,470:1(0x7ff8f4dfc700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 12ms
I1212 17:18:50.030248 8 detector.cpp:152] Detected a new leader: None
I1212 17:18:50.030894 8 slave.cpp:908] Lost leading master
I1212 17:18:50.031404 8 slave.cpp:947] Detecting new master
I1212 17:18:50.031327 7 status_update_manager.cpp:177] Pausing sending status updates
2016-12-12 17:18:53,405:1(0x7ff8f4dfc700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 43ms
I1212 17:19:02.637687 7 slave.cpp:1324] Skipping registration because no master present
I1212 17:19:22.959288 8 detector.cpp:152] Detected a new leader: (id='2')
I1212 17:19:22.961132 8 group.cpp:697] Trying to get '/mesos/json.info_0000000002' in ZooKeeper
I1212 17:19:22.965332 8 zookeeper.cpp:259] A new leading master (UPID=master@127.0.0.1:5050) is detected
I1212 17:19:22.971281 8 slave.cpp:915] New master detected at master@127.0.0.1:5050
I1212 17:19:22.972019 8 slave.cpp:936] No credentials provided. Attempting to register without authentication
I1212 17:19:22.975116 8 slave.cpp:947] Detecting new master
I1212 17:19:22.971410 9 status_update_manager.cpp:177] Pausing sending status updates
I1212 17:19:23.927111 11 slave.cpp:1115] Registered with master master@127.0.0.1:5050; given agent ID a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0
I1212 17:19:23.931748 12 status_update_manager.cpp:184] Resuming sending status updates
I1212 17:19:24.781162 11 slave.cpp:1175] Forwarding total oversubscribed resources {}
I1212 17:19:39.456789 10 slave.cpp:5044] Current disk usage 62.36%. Max allowed age: 1.934608332272824days
2016-12-12 17:19:46,345:1(0x7ff8f4dfc700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 20ms
2016-12-12 17:20:19,750:1(0x7ff8f4dfc700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 41ms

Step 4: Connect to Mesos Web Portal

Mesos’ web portal runs on port 5050 by default. In my case, I am running Mesos in Docker containers, which are hosted by a VirtualBox Docker host VM. Since the VM only has a NATed virtual Ethernet interface, I need to specify a port forwarding rule in VirtualBox, so I can reach port 5050 of the Docker host:

2016-12-12-18_41_57-regel-fur-port-weiterleitung
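
The same forwarding rule can alternatively be created on the command line of the system running VirtualBox (a sketch only; the VM name “dockerhost” is an assumption and must be replaced by the name of your Docker host VM):

$ VBoxManage controlvm "dockerhost" natpf1 "mesos,tcp,127.0.0.1,5050,,5050"

For a VM that is powered off, VBoxManage modifyvm "dockerhost" --natpf1 "mesos,tcp,127.0.0.1,5050,,5050" achieves the same.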

After that, we can access the Mesos Master Portal from a Browser on the base system:

2016-12-12-18_48_01-mesos

Note: if you see the error “Failed to connect to 127.0.0.1:5050!”, try reloading (instead of retrying) the page via the browser reload function. See Appendix A.1 for details.

We can see the single Mesos slave (i.e. agent) connected to the master on the left:

2016-12-12-19_02_19-mesos
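
The same information is also available in JSON format via the master’s HTTP API, which is handy for scripting (a quick check from the Docker host; pretty-printing with python is an assumption about the available tooling):

(dockerhost)$ curl -s http://localhost:5050/master/state | python -m json.tool

The slaves array of the returned document lists the connected agent together with its resources and attributes.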

The details of the agents (slaves) can be seen by clicking on the Agents link in the menu:

2016-12-16-20_34_44-mesos

So, what is next? It looks like we need a framework like Marathon in order to start applications on Mesos. Let us start Marathon now.

Step 5: Start Marathon

Let us start Marathon in a container as described here. However, we will use port 7070 instead of 8080, because 8080 collides with a port used by the ZooKeeper container. In addition, we will change the master URI and let it point to the external Mesos master we have started in step 3. Moreover, we need to set the environment variable MESOS_WORK_DIR because of a Mesos bug. See Appendix B for details.

(dockerhost)$ sudo docker run -it --name marathon --rm --net=host -e MESOS_WORK_DIR=/var/lib/mesos --entrypoint=bash mesosphere/marathon
(container)# ./bin/start --master zk://127.0.0.1:2181/mesos --zk zk://localhost:2181/marathon --http_port=7070
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[2016-12-13 14:46:09,798] INFO Starting Marathon 1.3.6/unknown with --master zk://127.0.0.1:2181/mesos --zk zk://localhost:2181/marathon --http_port=7070 (mesosphere.marathon.Main$:main)
[2016-12-13 14:46:10,148] WARN Method [public javax.ws.rs.core.Response mesosphere.marathon.api.MarathonExceptionMapper.toResponse(java.lang.Throwable)] is synthetic and is being intercepted by [mesosphere.marathon.DebugModule$MetricsBehavior@985696]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all. (com.google.inject.internal.ProxyFactory:main)
[2016-12-13 14:46:10,336] INFO Logging initialized @1841ms (org.eclipse.jetty.util.log:main)
[2016-12-13 14:46:10,849] INFO Slf4jLogger started (akka.event.slf4j.Slf4jLogger:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 14:46:11,060] INFO Started TaskTrackerUpdateStepsProcessorImpl with steps:
* continueOnError(notifyHealthCheckManager)
* continueOnError(notifyRateLimiter)
* continueOnError(notifyLaunchQueue)
* continueOnError(emitUpdate)
* continueOnError(postTaskStatusEvent)
* continueOnError(scaleApp) (mesosphere.marathon.core.task.tracker.impl.TaskTrackerUpdateStepProcessorImpl:main)
[2016-12-13 14:46:11,128] INFO Calling reviveOffers is enabled. Use --disable_revive_offers_for_new_apps to disable. (mesosphere.marathon.core.flow.FlowModule:main)
[2016-12-13 14:46:11,195] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authenticator' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 14:46:11,200] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 14:46:11,202] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authorizer' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 14:46:11,203] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 14:46:11,205] INFO Started status update processor (mesosphere.marathon.core.task.update.impl.TaskStatusUpdateProcessorImpl$$EnhancerByGuice$$ca7c928f:main)
[2016-12-13 14:46:11,321] INFO All actors suspended:
* Actor[akka://marathon/user/groupManager#1830533307]
* Actor[akka://marathon/user/launchQueue#-485770292]
* Actor[akka://marathon/user/killOverdueStagedTasks#540935184]
* Actor[akka://marathon/user/offersWantedForReconciliation#-876030651]
* Actor[akka://marathon/user/taskKillServiceActor#1912899513]
* Actor[akka://marathon/user/rateLimiter#1490654495]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#-1930600924]
* Actor[akka://marathon/user/expungeOverdueLostTasks#-405721147]
* Actor[akka://marathon/user/reviveOffersWhenWanted#1467955826]
* Actor[akka://marathon/user/taskTracker#623538008]
* Actor[akka://marathon/user/offerMatcherManager#1760835888] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 14:46:11,417] INFO Adding HTTP support. (mesosphere.chaos.http.HttpModule:main)
[2016-12-13 14:46:11,418] INFO No HTTPS support configured. (mesosphere.chaos.http.HttpModule:main)
[2016-12-13 14:46:11,497] INFO Starting up (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-13 14:46:11,497] INFO Beginning run (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-13 14:46:11,498] INFO Will offer leadership after 500 milliseconds backoff (mesosphere.marathon.core.election.impl.CuratorElectionService:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-13 14:46:11,504] INFO jetty-9.3.z-SNAPSHOT (org.eclipse.jetty.server.Server:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,535] INFO Now standing by. Closing existing handles and rejecting new. (mesosphere.marathon.core.event.impl.stream.HttpEventStreamActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 14:46:11,704] INFO Registering com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,705] INFO Registering mesosphere.marathon.api.MarathonExceptionMapper as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.AppsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.TasksResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.EventSubscriptionsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.QueueResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.GroupsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.InfoResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.LeaderResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.DeploymentsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.ArtifactsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.SchemaResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,706] INFO Registering mesosphere.marathon.api.v2.PluginsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,710] INFO Initiating Jersey application, version 'Jersey: 1.18.1 02/19/2014 03:28 AM' (com.sun.jersey.server.impl.application.WebApplicationImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,765] INFO Binding com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:11,782] INFO Binding mesosphere.marathon.api.MarathonExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,027] INFO Using HA and therefore offering leadership (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,028] INFO Will do leader election through localhost:2181 (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,073] WARN session timeout [10000] is less than connection timeout [15000] (org.apache.curator.CuratorZookeeperClient:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,096] INFO Starting (org.apache.curator.framework.imps.CuratorFrameworkImpl:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:zookeeper.version=3.5.0-alpha-1615249, built on 08/01/2014 22:13 GMT (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:host.name=openshift-installer-native-docker-compose (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:java.version=1.8.0_102 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:java.class.path=./bin/../target/marathon-assembly-1.3.6.jar (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:os.version=4.2.0-42-generic (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:user.dir=/marathon (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:os.memory.free=110MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:os.memory.max=880MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Client environment:os.memory.total=158MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,124] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.apache.curator.ConnectionState@3411e275 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 14:46:12,194] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-13 14:46:12,201] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-13 14:46:12,212] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x158f4c9e1f90014, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-13 14:46:12,217] INFO State change: CONNECTED (org.apache.curator.framework.state.ConnectionStateManager:ForkJoinPool-2-worker-13-EventThread)
[2016-12-13 14:46:12,279] INFO Elected (LeaderLatchListener Interface) (mesosphere.marathon.core.election.impl.CuratorElectionService:pool-1-thread-1)
[2016-12-13 14:46:12,284] INFO As new leader running the driver (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-13 14:46:12,302] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=com.twitter.zk.EventBroker@5301f666 (org.apache.zookeeper.ZooKeeper:pool-1-thread-1)
[2016-12-13 14:46:12,310] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-13 14:46:12,310] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-13 14:46:12,323] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158f4c9e1f90015, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-13 14:46:12,367] INFO Binding mesosphere.marathon.api.v2.AppsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,391] INFO Binding mesosphere.marathon.api.v2.TasksResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,398] INFO Binding mesosphere.marathon.api.v2.EventSubscriptionsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,398] INFO Event notification disabled. (mesosphere.marathon.core.event.EventModule:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,404] INFO Binding mesosphere.marathon.api.v2.QueueResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,425] INFO Binding mesosphere.marathon.api.v2.GroupsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,429] INFO Binding mesosphere.marathon.api.v2.InfoResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,431] INFO Binding mesosphere.marathon.api.v2.LeaderResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,433] INFO Binding mesosphere.marathon.api.v2.DeploymentsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,441] INFO Binding mesosphere.marathon.api.v2.ArtifactsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,443] INFO Binding mesosphere.marathon.api.v2.SchemaResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,445] INFO Binding mesosphere.marathon.api.v2.PluginsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,445] INFO Loading plugins implementing 'mesosphere.marathon.plugin.http.HttpRequestHandler' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,446] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,449] INFO Migration successfully applied for version Version(1, 3, 6) (mesosphere.marathon.state.Migration:ForkJoinPool-2-worker-5)
[2016-12-13 14:46:12,451] INFO Call preDriverStarts callbacks on EntityStoreCache(MarathonStore(app:)), EntityStoreCache(MarathonStore(group:)), EntityStoreCache(MarathonStore(deployment:)), EntityStoreCache(MarathonStore(framework:)), EntityStoreCache(MarathonStore(taskFailure:)), EntityStoreCache(MarathonStore(events:)) (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-13 14:46:12,468] INFO Finished preDriverStarts callbacks (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-13 14:46:12,469] INFO Started o.e.j.s.ServletContextHandler@65817241{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,484] INFO ExpungeOverdueLostTasksActor has started (mesosphere.marathon.core.task.jobs.impl.ExpungeOverdueLostTasksActor:marathon-akka.actor.default-dispatcher-4)
[2016-12-13 14:46:12,487] INFO TaskTrackerActor is starting. Task loading initiated. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 14:46:12,489] INFO started RateLimiterActor (mesosphere.marathon.core.launchqueue.impl.RateLimiterActor:marathon-akka.actor.default-dispatcher-4)
[2016-12-13 14:46:12,492] INFO no interest in offers for reservation reconciliation anymore. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-13 14:46:12,494] INFO Started. Will remain interested in offer reconciliation for 17500 milliseconds when needed. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-13 14:46:12,514] INFO All actors active:
* Actor[akka://marathon/user/groupManager#1830533307]
* Actor[akka://marathon/user/launchQueue#-485770292]
* Actor[akka://marathon/user/killOverdueStagedTasks#540935184]
* Actor[akka://marathon/user/offersWantedForReconciliation#-876030651]
* Actor[akka://marathon/user/taskKillServiceActor#1912899513]
* Actor[akka://marathon/user/rateLimiter#1490654495]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#-1930600924]
* Actor[akka://marathon/user/expungeOverdueLostTasks#-405721147]
* Actor[akka://marathon/user/reviveOffersWhenWanted#1467955826]
* Actor[akka://marathon/user/taskTracker#623538008]
* Actor[akka://marathon/user/offerMatcherManager#1760835888] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 14:46:12,520] INFO Started ServerConnector@7fc3545e{HTTP/1.1,[http/1.1]}{0.0.0.0:7070} (org.eclipse.jetty.server.ServerConnector:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,520] INFO Started @4026ms (org.eclipse.jetty.server.Server:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 14:46:12,521] INFO All services up and running. (mesosphere.marathon.Main$:main)
[2016-12-13 14:46:12,524] INFO About to load 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-5)
[2016-12-13 14:46:12,536] INFO Loaded 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-5)
[2016-12-13 14:46:12,545] INFO Task loading complete. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 14:46:12,555] INFO interested in offers for reservation reconciliation because of becoming leader (until 2016-12-13T14:46:29.995Z) (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-13 14:46:12,588] INFO Received offers WANTED notification (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 14:46:12,589] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 14:46:12,589] INFO 2 further revives still needed. Repeating reviveOffers according to --revive_offers_repetitions 3 (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 14:46:12,589] INFO => Schedule next revive at 2016-12-13T14:46:17.586Z in 4998 milliseconds, adhering to --min_revive_offers_interval 5000 (ms) (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 14:46:12,638] INFO Create new Scheduler Driver with frameworkId: Some(value: "44a35e16-dc32-4f91-afac-33dfff498944-0000"
) and scheduler mesosphere.marathon.core.heartbeat.MesosHeartbeatMonitor$$EnhancerByGuice$$c730b9b8@15d42ccb (mesosphere.marathon.MarathonSchedulerDriver$:pool-1-thread-1)
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1213 14:46:12.777942 192 sched.cpp:1697]
**************************************************
Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address.
**************************************************
I1213 14:46:12.791020 196 sched.cpp:226] Version: 1.0.1
2016-12-13 14:46:12,791:134(0x7f351b0c7700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-12-13 14:46:12,791:134(0x7f351b0c7700):ZOO_INFO@log_env@730: Client environment:host.name=openshift-installer
2016-12-13 14:46:12,792:134(0x7f351b0c7700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-12-13 14:46:12,792:134(0x7f351b0c7700):ZOO_INFO@log_env@738: Client environment:os.arch=4.2.0-42-generic
2016-12-13 14:46:12,792:134(0x7f351b0c7700):ZOO_INFO@log_env@739: Client environment:os.version=#49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016
2016-12-13 14:46:12,792:134(0x7f351b0c7700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
2016-12-13 14:46:12,792:134(0x7f351b0c7700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-12-13 14:46:12,792:134(0x7f351b0c7700):ZOO_INFO@log_env@767: Client environment:user.dir=/marathon
2016-12-13 14:46:12,792:134(0x7f351b0c7700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=127.0.0.1:2181 sessionTimeout=10000 watcher=0x7f3522fb1d80 sessionId=0 sessionPasswd=<null> context=0x7f352403b5c0 flags=0
[2016-12-13 14:46:12,799] INFO Reset offerLeadership backoff (mesosphere.marathon.core.election.impl.ExponentialBackoff:pool-1-thread-1)
[2016-12-13 14:46:12,799] INFO Became active. Accepting event streaming requests. (mesosphere.marathon.core.event.impl.stream.HttpEventStreamActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 14:46:12,800] INFO Starting scheduler actor (mesosphere.marathon.MarathonSchedulerActor:marathon-akka.actor.default-dispatcher-8)
2016-12-13 14:46:12,803:134(0x7f35198c4700):ZOO_INFO@check_events@1728: initiated connection to server [127.0.0.1:2181]
[2016-12-13 14:46:12,811] INFO Scheduler actor ready (mesosphere.marathon.MarathonSchedulerActor:marathon-akka.actor.default-dispatcher-6)
2016-12-13 14:46:12,826:134(0x7f35198c4700):ZOO_INFO@check_events@1775: session establishment complete on server [127.0.0.1:2181], sessionId=0x158f4c9e1f90016, negotiated timeout=10000
I1213 14:46:12.828322 205 group.cpp:349] Group process (group(1)@127.0.0.1:35047) connected to ZooKeeper
I1213 14:46:12.829164 205 group.cpp:837] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1213 14:46:12.829583 205 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I1213 14:46:12.833498 205 detector.cpp:152] Detected a new leader: (id='1')
I1213 14:46:12.834187 205 group.cpp:706] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I1213 14:46:12.836730 205 zookeeper.cpp:259] A new leading master (UPID=master@127.0.0.1:5050) is detected
I1213 14:46:12.837736 205 sched.cpp:330] New master detected at master@127.0.0.1:5050
I1213 14:46:12.838312 205 sched.cpp:341] No credentials provided. Attempting to register without authentication
I1213 14:46:12.843353 205 sched.cpp:743] Framework registered with 44a35e16-dc32-4f91-afac-33dfff498944-0000
[2016-12-13 14:46:12,846] INFO Creating tombstone for old twitter commons leader election (mesosphere.marathon.core.election.impl.CuratorElectionService:pool-1-thread-1)
[2016-12-13 14:46:12,872] INFO Registered as 44a35e16-dc32-4f91-afac-33dfff498944-0000 to master '03f81417-51c9-4055-adbe-c1fb74fc8ab4' (mesosphere.marathon.MarathonScheduler$$EnhancerByGuice$$1ef061b0:Thread-14)
[2016-12-13 14:46:12,872] INFO Store framework id: value: "44a35e16-dc32-4f91-afac-33dfff498944-0000"
 (mesosphere.util.state.FrameworkIdUtil:Thread-14)
[2016-12-13 14:46:12,895] INFO Received reviveOffers notification: SchedulerRegisteredEvent (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-13 14:46:16,658] INFO 10.0.2.2 - - [13/Dec/2016:14:46:16 +0000] "GET //localhost:7070/v2/deployments HTTP/1.1" 200 22 "http://localhost:7070/ui/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36" 14 (mesosphere.chaos.http.ChaosRequestLog$$EnhancerByGuice$$c1e74978:qtp1755811644-35)
[2016-12-13 14:46:16,660] INFO 10.0.2.2 - - [13/Dec/2016:14:46:16 +0000] "GET //localhost:7070/v2/groups?embed=group.groups&embed=group.apps&embed=group.apps.deployments&embed=group.apps.counts&embed=group.apps.readiness HTTP/1.1" 200 95 "http://localhost:7070/ui/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36" 15 (mesosphere.chaos.http.ChaosRequestLog$$EnhancerByGuice$$c1e74978:qtp1755811644-36)
[2016-12-13 14:46:16,666] INFO 10.0.2.2 - - [13/Dec/2016:14:46:16 +0000] "GET //localhost:7070/v2/queue HTTP/1.1" 200 32 "http://localhost:7070/ui/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36" 15 (mesosphere.chaos.http.ChaosRequestLog$$EnhancerByGuice$$c1e74978:qtp1755811644-34)
[2016-12-13 14:46:17,603] INFO Received TimedCheck (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 14:46:17,604] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 14:46:17,609] INFO 2 further revives still needed. Repeating reviveOffers according to --revive_offers_repetitions 3 (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 14:46:17,612] INFO => Schedule next revive at 2016-12-13T14:46:22.602Z in 4998 milliseconds, adhering to --min_revive_offers_interval 5000 (ms) (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-8)
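
Before connecting with a browser, we can quickly verify from the Docker host that Marathon’s REST API is answering (a simple smoke test; /v2/info is a standard Marathon endpoint):

(dockerhost)$ curl -s http://localhost:7070/v2/info

This should return a JSON document containing, among other fields, the framework name “marathon” and the version 1.3.6 we have seen in the startup log above.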

Let us connect to the Marathon portal:

We have chosen Marathon to run on port 7070. In my case, the Docker host VirtualBox VM has a NATed interface only, so I need to open the Marathon port on VirtualBox:

2016-12-13-14_59_36-regel-fur-port-weiterleitung
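
As in Step 4, this rule can alternatively be added on the VirtualBox command line (again assuming the VM name “dockerhost”):

$ VBoxManage controlvm "dockerhost" natpf1 "marathon,tcp,127.0.0.1,7070,,7070"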

Now we can access the Marathon dashboard:

2016-12-13-15_17_35-marathon

Let us see whether Marathon is visible on Mesos:

2016-12-13-15_44_39-mesos

Yes, the Marathon framework can be seen on the Mesos master; perfect.

thumps_up_3

Our topology now looks as follows:
2016-12-14-02_14_43-mesos-marathon-architecture-google-slides

However, we have not started any Marathon application yet. Let us do so now.

Step 6: Start a “Hello World” Application via Marathon Web Portal in a Mesos Container

Let us create an application running in a Mesos container on the Marathon web portal by clicking the Create Application button.

We choose

  • ID: while-loop-hello-world
    • ID needs to consist of lowercase letters, digits, hyphens, “.”, “..”
  • CPUs: 0.1
    • If you keep the default of 1, you might hit resource problems if you do not have 4 CPUs available for the 4 instances. In this case, one or more instances might be found in the “Waiting” state.
  • Memory: 32 MiB
    • 32 MiB is the minimum value supported
  • Disk Space: 0
  • Instances: 4
  • Command: while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
    • we have redirected the output of the script, because STDERR/STDOUT retrieval currently does not work on the Marathon portal (see Appendix A.5).

2016-12-18-12_46_08-marathon
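
Alternatively, the same application could be created without the portal via Marathon’s REST API (a sketch of the equivalent call, assuming Marathon is reachable on localhost:7070 as in Step 5):

(dockerhost)$ curl -X POST -H "Content-Type: application/json" http://localhost:7070/v2/apps -d '{
  "id": "/while-loop-hello-world",
  "cmd": "while true; do echo \"I am a Hello World script\"; sleep 1; done 2>&1 1> this_is_my_output.log",
  "cpus": 0.1,
  "mem": 32,
  "disk": 0,
  "instances": 4
}'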

After less than 5 seconds, we see that all 4 instances are up and running:

2016-12-18-12_46_16-marathon

On the Docker host, we can see the four instances running:

$ ps -ef | grep Hello | grep -v grep
root 14137 14095 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14138 14096 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14141 14116 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14142 14127 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log

We can also find the log files on the Docker host:

$ sudo find / -name this_is_my_output.log
/vagrant/jenkins_home/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.927a5843-c517-11e6-ad85-02422b10e522/runs/0f079dcc-25e7-4ad6-b263-4761370a6173/this_is_my_output.log
/vagrant/jenkins_home/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.9285a2e4-c517-11e6-ad85-02422b10e522/runs/44a5c7dd-59f3-4b59-8d94-33754a3cbdc3/this_is_my_output.log
/vagrant/jenkins_home/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.92861815-c517-11e6-ad85-02422b10e522/runs/5400a1d9-582f-4d59-9b81-a719d21c60a1/this_is_my_output.log
/vagrant/jenkins_home/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.9286db66-c517-11e6-ad85-02422b10e522/runs/f3145a9f-2c88-4f99-a8de-618a4157b9e8/this_is_my_output.log

And we can see the output:

$ sudo tail -F /vagrant/jenkins_home/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.927a5843-c517-11e6-ad85-02422b10e522/runs/0f079dcc-25e7-4ad6-b263-4761370a6173/this_is_my_output.log
I am a Hello World script
I am a Hello World script
I am a Hello World script
I am a Hello World script
I am a Hello World script
...

thumps_up_3

Step 7: Test of Application Resiliency

Step 7.1: Killing a Process and Testing whether it is restarted automatically

Now let us see what happens if one of the processes dies. For that, we will simply kill one of the “Hello World” processes found on the Docker host:

(dockerhost)$ ps -ef | grep Hello | grep -v grep
root 14137 14095 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14138 14096 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14141 14116 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14142 14127 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
(dockerhost)$ sudo kill -9 14137
(dockerhost)$ ps -ef | grep Hello | grep -v grep
root 14138 14096 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14141 14116 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 14142 14127 0 11:46 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log
root 19818 19807 1 12:04 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log

We can clearly see that the process was killed successfully, and a new process was started automatically.

thumps_up_3

Step 7.2: Find Traces of the killed Process in the Logs

The output log of the old process can still be retrieved:

$ sudo cat /vagrant/jenkins_home/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.92861815-c517-11e6-ad85-02422b10e522/runs/5400a1d9-582f-4d59-9b81-a719d21c60a1/this_is_my_output.log
...
I am a Hello World script
I am a Hello World script
I am a Hello World script
I am a Hello World script
(EOF)

However, we cannot see any traces that something went wrong. So how can we troubleshoot if a process dies unexpectedly?

Here we are: the Marathon portal provides us with information on what happened:

2016-12-18-13_20_47-marathon
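
For scripted troubleshooting, the last task failure of an application can also be retrieved via the REST API (a sketch; app.lastTaskFailure is one of the embed parameters documented for the Marathon REST API, and the app ID is the one we chose in Step 6):

(dockerhost)$ curl -s "http://localhost:7070/v2/apps/while-loop-hello-world?embed=app.lastTaskFailure"

The returned JSON should contain a lastTaskFailure section with the task ID, the task state and a timestamp.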

thumps_up_3

Step 8: Test: Resource Over-subscription not supported by Default?

In this test, we observe what happens if the application instances request more resources than the slave offers. For that, we will try to start four instances with 1 CPU core each, while the slave provides only 2 CPUs instead of the required 4 CPUs:

2016-12-16-20_52_42-marathon

We press Create Application again and see that 2 of the 4 instances are started soon:

2016-12-16-20_54_49-marathon

Two of the task instances were started successfully, and two instances are waiting for resources and are “Unscheduled”. This is the expected behavior:

2016-12-16-21_03_00-marathon

thumps_up_3

On the Mesos Dashboard, we can see that the offered 2 CPUs are used by two instances already, leaving no CPU resources for the other two instances:

2016-12-17-16_27_55-mesos

Aurora vs. Marathon: Consider a situation where you want to run high-priority production applications together with low-priority development applications on the same hardware. Mesosphere Marathon does not provide any good answer to such situations. In case you hit such a situation, consider using Apache Aurora instead of Mesosphere Marathon. Apache Aurora allows high-priority applications to preempt low-priority applications.

Mesos: about hard limits of CPU resources: Mesos’ CPU reservation feels similar to a hard assignment of resources (since over-subscription is not supported by default, as we have seen above), but under the hood, Mesos does not apply hard limits on CPU usage unless the Mesos slave has --cgroups_enable_cfs (CFS = Completely Fair Scheduler) enabled. See also the second answer on this StackOverflow question. For more information on over-subscription by Mesos, see this Apache Mesos documentation page.
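
For our containerized slave from Step 3.3, CFS hard limits could be enabled by adding the corresponding environment variable to the docker run command (a sketch; Mesos maps any MESOS_* environment variable to the agent flag of the same name):

(dockerhost)$ sudo docker run -it --net=host --privileged \
  -e MESOS_CGROUPS_ENABLE_CFS=true \
  <all other options as in Step 3.3> \
  mesosphere/mesos-slave:1.1.0-2.0.107.ubuntu1404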

Conclusion: By default, resource over-subscription is not allowed by Mesos. See this Apache Mesos documentation page for more information about over-subscription.

Step 9: Start a “Hello World” Application via Marathon Web Portal in a Docker Container

In this step, we learn how to run an application in a Docker container instead of a Mesos container. We will perform steps similar to those in Step 6.

We choose

  • ID: while-loop-hello-world-container-small
    • ID needs to consist of lowercase letters, digits, hyphens, “.”, “..”
  • CPUs: 0.1
    • If you keep the default of 1, you might hit resource problems if you do not have 4 CPUs available for the 4 instances. In this case, one or more instances might be found in the “Waiting” state.
  • Memory: 32 MiB
    • 32 MiB is the minimum value supported
  • Disk Space: 0
  • Instances: 4
  • Command: while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
    • we have redirected the output of the script, because STDERR/STDOUT retrieval currently does not work on the Marathon portal (see Appendix A.5).
  • Docker: ubuntu:latest
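
These settings correspond to the following REST API call; compared to Step 6, the only addition is the container section selecting the Docker containerizer and image (again a sketch under the assumptions of Step 6):

(dockerhost)$ curl -X POST -H "Content-Type: application/json" http://localhost:7070/v2/apps -d '{
  "id": "/while-loop-hello-world-container-small",
  "cmd": "while true; do echo \"I am a Hello World script in a Docker container\"; sleep 1; done 2>&1 1> this_is_my_container_output.log",
  "cpus": 0.1,
  "mem": 32,
  "instances": 4,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "ubuntu:latest", "network": "HOST" }
  }
}'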

We click “Create Application”

2016-12-17-16_40_36-marathon

Choose a low number of CPUs (0.1 CPUs in my case), the lowest supported memory value, and four instances, together with a while loop as the command:

2016-12-18-15_39_21-marathon

In the Docker Container tab, we choose an Ubuntu image.

2016-12-17-16_51_26-marathon

First, we see “waiting”,

2016-12-17-16_51_48-marathon

then we see that all four instances are “running”:

2016-12-18-09_17_33-marathon

2016-12-18-15_43_26-marathon

On the Docker host, we can see the four Docker containers that have been started:

$ docker ps
CONTAINER ID        IMAGE                                                   COMMAND                  CREATED             STATUS              PORTS               NAMES
78e346ddb3eb        ubuntu:latest                                           "/bin/sh -c 'while tr"   About an hour ago   Up About an hour                        mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.13aeacaa-20d5-4628-8e65-f82a0fc71724
809ad6513236        ubuntu:latest                                           "/bin/sh -c 'while tr"   About an hour ago   Up About an hour                        mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.c42d9e0c-f66e-42be-ac0e-46bc8f44e47b
de7523c00d41        ubuntu:latest                                           "/bin/sh -c 'while tr"   About an hour ago   Up About an hour                        mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.e7fcebfb-f301-4369-83eb-2782832d83a9
d3729dad1bed        ubuntu:latest                                           "/bin/sh -c 'while tr"   About an hour ago   Up About an hour                        mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.e924e32b-521b-4708-b538-bc2973094718

The processes are:

$ ps -ef | grep Hello | grep -v grep
root     19306 19258  0 14:40 ?        00:00:00 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 33554432 -e MARATHON_APP_VERSION=2016-12-18T14:40:40.292Z -e HOST=openshift-installer-native-docker-compose -e MARATHON_APP_RESOURCE_CPUS=0.1 -e MARATHON_APP_RESOURCE_GPUS=0 -e MARATHON_APP_DOCKER_IMAGE=ubuntu:latest -e PORT_10000=31730 -e MESOS_TASK_ID=while-loop-hello-world-container-small.f27ecd8a-c52f-11e6-ad85-02422b10e522 -e PORT=31730 -e MARATHON_APP_RESOURCE_MEM=32.0 -e PORTS=31730 -e MARATHON_APP_RESOURCE_DISK=0.0 -e MARATHON_APP_LABELS= -e MARATHON_APP_ID=/while-loop-hello-world-container-small -e PORT0=31730 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.e924e32b-521b-4708-b538-bc2973094718 -v /var/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world-container-small.f27ecd8a-c52f-11e6-ad85-02422b10e522/runs/e924e32b-521b-4708-b538-bc2973094718:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.e924e32b-521b-4708-b538-bc2973094718 ubuntu:latest -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
root     19319 19280  0 14:40 ?        00:00:00 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 33554432 -e MARATHON_APP_VERSION=2016-12-18T14:40:40.292Z -e HOST=openshift-installer-native-docker-compose -e MARATHON_APP_RESOURCE_CPUS=0.1 -e MARATHON_APP_RESOURCE_GPUS=0 -e MARATHON_APP_DOCKER_IMAGE=ubuntu:latest -e PORT_10000=31873 -e MESOS_TASK_ID=while-loop-hello-world-container-small.f27e5859-c52f-11e6-ad85-02422b10e522 -e PORT=31873 -e MARATHON_APP_RESOURCE_MEM=32.0 -e PORTS=31873 -e MARATHON_APP_RESOURCE_DISK=0.0 -e MARATHON_APP_LABELS= -e MARATHON_APP_ID=/while-loop-hello-world-container-small -e PORT0=31873 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.c42d9e0c-f66e-42be-ac0e-46bc8f44e47b -v /var/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world-container-small.f27e5859-c52f-11e6-ad85-02422b10e522/runs/c42d9e0c-f66e-42be-ac0e-46bc8f44e47b:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.c42d9e0c-f66e-42be-ac0e-46bc8f44e47b ubuntu:latest -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
root     19320 19281  0 14:40 ?        00:00:00 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 33554432 -e MARATHON_APP_VERSION=2016-12-18T14:40:40.292Z -e HOST=openshift-installer-native-docker-compose -e MARATHON_APP_RESOURCE_CPUS=0.1 -e MARATHON_APP_RESOURCE_GPUS=0 -e MARATHON_APP_DOCKER_IMAGE=ubuntu:latest -e PORT_10000=31245 -e MESOS_TASK_ID=while-loop-hello-world-container-small.f2792838-c52f-11e6-ad85-02422b10e522 -e PORT=31245 -e MARATHON_APP_RESOURCE_MEM=32.0 -e PORTS=31245 -e MARATHON_APP_RESOURCE_DISK=0.0 -e MARATHON_APP_LABELS= -e MARATHON_APP_ID=/while-loop-hello-world-container-small -e PORT0=31245 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.e7fcebfb-f301-4369-83eb-2782832d83a9 -v /var/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world-container-small.f2792838-c52f-11e6-ad85-02422b10e522/runs/e7fcebfb-f301-4369-83eb-2782832d83a9:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.e7fcebfb-f301-4369-83eb-2782832d83a9 ubuntu:latest -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
root     19368 19352  0 14:40 ?        00:00:01 /bin/sh -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
root     19379 19334  0 14:40 ?        00:00:00 docker -H unix:///var/run/docker.sock run --cpu-shares 102 --memory 33554432 -e MARATHON_APP_VERSION=2016-12-18T14:40:40.292Z -e HOST=openshift-installer-native-docker-compose -e MARATHON_APP_RESOURCE_CPUS=0.1 -e MARATHON_APP_RESOURCE_GPUS=0 -e MARATHON_APP_DOCKER_IMAGE=ubuntu:latest -e PORT_10000=31928 -e MESOS_TASK_ID=while-loop-hello-world-container-small.f27f90db-c52f-11e6-ad85-02422b10e522 -e PORT=31928 -e MARATHON_APP_RESOURCE_MEM=32.0 -e PORTS=31928 -e MARATHON_APP_RESOURCE_DISK=0.0 -e MARATHON_APP_LABELS= -e MARATHON_APP_ID=/while-loop-hello-world-container-small -e PORT0=31928 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.13aeacaa-20d5-4628-8e65-f82a0fc71724 -v /var/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world-container-small.f27f90db-c52f-11e6-ad85-02422b10e522/runs/13aeacaa-20d5-4628-8e65-f82a0fc71724:/mnt/mesos/sandbox --net host --entrypoint /bin/sh --name mesos-a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0.13aeacaa-20d5-4628-8e65-f82a0fc71724 ubuntu:latest -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
root     19422 19406  0 14:40 ?        00:00:01 /bin/sh -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
root     19443 19427  0 14:40 ?        00:00:01 /bin/sh -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log
root     19473 19457  0 14:40 ?        00:00:01 /bin/sh -c while true; do echo "I am a Hello World script in a Docker container"; sleep 1; done 2>&1 1> this_is_my_container_output.log

We can see that there are four docker run commands and four while loops running. We had assigned 0.1 CPUs to each application instance. This has been translated into a CPU share of 1024/10 = ~102.
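
The configured share can be verified on any of the containers listed by docker ps above (here, the container ID 78e346ddb3eb from my listing):

(dockerhost)$ docker inspect --format '{{.HostConfig.CpuShares}}' 78e346ddb3eb

This should print 102, matching the --cpu-shares value visible in the docker run commands above.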

Note that Docker does not hard-limit CPU usage by default: unless --cgroups_enable_cfs (CFS) is enabled on the Mesos slave, no CPU quota is applied to the docker run command. Instead, only relative CPU shares are configured for the Docker container.

Appendix A: Errors & Caveats

Appendix A.1: Mesos Portal Error: Failed to connect to 127.0.0.1:5050!

Problem:

After starting the Mesos Portal in a Browser, we see following symptom:

2016-12-12-18_49_20-mesos

After clicking “Try now”, the problem shows up again immediately. After clicking the browser’s page reload button, it looks better, but the problem will soon show up again.

Status: Open

I have not found any solution yet. I have not yet tested any older version.

Appendix A.2: Critical Error: Mesos Master: Lost leadership… committing suicide!

  • Mesos master 1.1.0 running in a Docker container (image mesosphere/mesos-master:1.1.0-2.0.107.ubuntu1404)
  • ZooKeeper running in a Docker container (image netflixoss/exhibitor:1.5.2)

Symptoms:

From time to time, we get the following critical error in the Mesos master log:

...
2016-12-18 08:15:59,593:22(0x7f18037fe700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 31198753ms
2016-12-18 08:15:59,595:22(0x7f1820ef8700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 31198755ms
2016-12-18 08:15:59,597:22(0x7f1820ef8700):ZOO_ERROR@handle_socket_error_msg@1735: Socket [127.0.0.1:2181] zk retcode=-4, errno=32(Broken pipe): failed while flushing send queue
2016-12-18 08:15:59,597:22(0x7f18037fe700):ZOO_ERROR@handle_socket_error_msg@1746: Socket [127.0.0.1:2181] zk retcode=-4, errno=112(Host is down): failed while receiving a server response
2016-12-18 08:15:59,599:22(0x7f1820ef8700):ZOO_INFO@check_events@1728: initiated connection to server [127.0.0.1:2181]
I1218 08:15:59.601364 25 group.cpp:451] Lost connection to ZooKeeper, attempting to reconnect ...
I1218 08:15:59.602128 26 group.cpp:451] Lost connection to ZooKeeper, attempting to reconnect ...
2016-12-18 08:15:59,604:22(0x7f1820ef8700):ZOO_ERROR@handle_socket_error_msg@1764: Socket [127.0.0.1:2181] zk retcode=-112, errno=116(Stale file handle): sessionId=0x158f4c9e1f90035 has expired.
I1218 08:15:59.605609 28 group.cpp:510] ZooKeeper session expired
I1218 08:15:59.606210 29 contender.cpp:217] Membership cancelled: 4
2016-12-18 08:15:59,606:22(0x7f1823737700):ZOO_INFO@zookeeper_close@2543: Freeing zookeeper resources for sessionId=0x158f4c9e1f90035

Lost leadership... committing suicide!
2016-12-18 08:15:59,607:22(0x7f182573b700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-12-18 08:15:59,608:22(0x7f182573b700):ZOO_INFO@log_env@730: Client environment:host.name=openshift-installer
2016-12-18 08:15:59,608:22(0x7f182573b700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-12-18 08:15:59,608:22(0x7f182573b700):ZOO_INFO@log_env@738: Client environment:os.arch=4.2.0-42-generic
2016-12-18 08:15:59,608:22(0x7f182573b700):ZOO_INFO@log_env@739: Client environment:os.version=#49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016
2016-12-18 08:15:59,608:22(0x7f182573b700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
2016-12-18 08:15:59,608:22(0x7f182573b700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-12-18 08:15:59,609:22(0x7f182573b700):ZOO_INFO@log_env@767: Client environment:user.dir=/
2016-12-18 08:15:59,609:22(0x7f182573b700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=127.0.0.1:2181 sessionTimeout=10000 watcher=0x7f182e49c200 sessionId=0 sessionPasswd=<null> context=0x7f1810008470 flags=0
2016-12-18 08:15:59,609:22(0x7f1803fff700):ZOO_INFO@check_events@1728: initiated connection to server [127.0.0.1:2181]
(container)#

Resolution: None

I have found a similar issue reported for Marathon here. Both problems seem to be caused by ZooKeeper issues, but it is not clear how to resolve them.
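If the session expiries are merely a matter of timing rather than of ZooKeeper itself, one possible mitigation is to raise the master's ZooKeeper session timeout; the Mesos master flag --zk_session_timeout defaults to 10secs and can be passed to the Docker image as an environment variable. This is only a hedged sketch (the chosen value is an assumption) and does not address the root cause:

(dockerhost)$ sudo docker run -d --name mesos-master --net=host \
      -e MESOS_ZK=zk://127.0.0.1:2181/mesos \
      -e MESOS_ZK_SESSION_TIMEOUT=60secs \
      mesosphere/mesos-master:1.1.0-2.0.107.ubuntu1404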

Appendix A.3: ZooKeeper: continuous warnings ‘Exceeded deadline’ and ‘Current disk usage’

After starting ZooKeeper, I continuously see log messages like the following:

2016-12-18 10:24:14,664:1(0x7fb7abfff700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 77ms
I1218 10:24:58.446185 15 slave.cpp:5044] Current disk usage 62.33%. Max allowed age: 1.936647380411088days
I1218 10:25:58.450482 14 slave.cpp:5044] Current disk usage 62.33%. Max allowed age: 1.936647380411088days
2016-12-18 10:26:21,523:1(0x7fb7abfff700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 26ms
I1218 10:26:58.454617 9 slave.cpp:5044] Current disk usage 62.33%. Max allowed age: 1.936647380411088days
2016-12-18 10:27:58,326:1(0x7fb7abfff700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 15ms
I1218 10:27:58.458858 12 slave.cpp:5044] Current disk usage 62.33%. Max allowed age: 1.936647380411088days
2016-12-18 10:28:01,681:1(0x7fb7abfff700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 20ms

Resolution: None

Since the warnings do not seem to be critical, I have not yet dug into the problem.
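For reference: the “Current disk usage” lines are not emitted by ZooKeeper at all, but by the Mesos agent’s sandbox garbage collector, whose “max allowed age” is derived from the agent’s --gc_delay flag and shrinks as the disk fills up. Should they ever become a nuisance, the relevant agent flags can be tuned; a sketch only (flag values and image tag are assumptions):

(dockerhost)$ sudo docker run -d --name mesos-slave --net=host --privileged \
      -e MESOS_MASTER=zk://127.0.0.1:2181/mesos \
      -e MESOS_GC_DELAY=1days \
      -e MESOS_DISK_WATCH_INTERVAL=2mins \
      mesosphere/mesos-slave:1.1.0-2.0.107.ubuntu1404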

Appendix A.4: Marathon Portal Error: No Reaction upon Restart Request

Symptoms:

If we try to restart an application, there is no reaction whatsoever:

2016-12-17-16_38_48-marathon

2016-12-16-21_07_20-marathon

2016-12-16-21_08_33-marathon

Running ps -ef on the Docker host yields the same process numbers before and after pressing the Restart button:

(dockerhost)$ ps -ef | grep while | grep -v grep
root 13142 29300 0 19:54 ? 00:00:00 mesos-containerizer launch --command={"arguments":["mesos-executor","--launcher_dir=\/usr\/libexec\/mesos"],"shell":false,"value":"\/usr\/libexec\/mesos\/mesos-executor"} --environment={"HOST":"openshift-installer-native-docker-compose","LIBPROCESS_PORT":"0","MARATHON_APP_ID":"\/while-loop-hello-world","MARATHON_APP_LABELS":"","MARATHON_APP_RESOURCE_CPUS":"1.0","MARATHON_APP_RESOURCE_DISK":"0.0","MARATHON_APP_RESOURCE_GPUS":"0","MARATHON_APP_RESOURCE_MEM":"32.0","MARATHON_APP_VERSION":"2016-12-16T19:54:30.939Z","MESOS_AGENT_ENDPOINT":"127.0.0.1:5051","MESOS_CHECKPOINT":"1","MESOS_DIRECTORY":"\/var\/tmp\/mesos\/slaves\/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0\/frameworks\/44a35e16-dc32-4f91-afac-33dfff498944-0000\/executors\/while-loop-hello-world.75d64c28-c3c9-11e6-9f91-02422b10e522\/runs\/a67b861b-c20c-4d94-8db5-e956b3dea8ab","MESOS_EXECUTOR_ID":"while-loop-hello-world.75d64c28-c3c9-11e6-9f91-02422b10e522","MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD":"5secs","MESOS_FRAMEWORK_ID":"44a35e16-dc32-4f91-afac-33dfff498944-0000","MESOS_HTTP_COMMAND_EXECUTOR":"0","MESOS_NATIVE_JAVA_LIBRARY":"\/usr\/lib\/libmesos-1.1.0.so","MESOS_NATIVE_LIBRARY":"\/usr\/lib\/libmesos-1.1.0.so","MESOS_RECOVERY_TIMEOUT":"15mins","MESOS_SANDBOX":"\/var\/tmp\/mesos\/slaves\/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0\/frameworks\/44a35e16-dc32-4f91-afac-33dfff498944-0000\/executors\/while-loop-hello-world.75d64c28-c3c9-11e6-9f91-02422b10e522\/runs\/a67b861b-c20c-4d94-8db5-e956b3dea8ab","MESOS_SLAVE_ID":"a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0","MESOS_SLAVE_PID":"slave(1)@127.0.0.1:5051","MESOS_SUBSCRIPTION_BACKOFF_MAX":"2secs","MESOS_TASK_ID":"while-loop-hello-world.75d64c28-c3c9-11e6-9f91-02422b10e522","PATH":"\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin","PORT":"31690","PORT0":"31690","PORTS":"31690","PORT_10000":"31690"} --help=false --pipe_read=15 --pipe_write=16 --pre_exec_commands=[] --runtime_directory=/var/run/mesos/containers/a67b861b-c20c-4d94-8db5-e956b3dea8ab --unshare_namespace_mnt=false --working_directory=/var/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.75d64c28-c3c9-11e6-9f91-02422b10e522/runs/a67b861b-c20c-4d94-8db5-e956b3dea8ab
root 13144 29300 0 19:54 ? 00:00:00 mesos-containerizer launch --command={"arguments":["mesos-executor","--launcher_dir=\/usr\/libexec\/mesos"],"shell":false,"value":"\/usr\/libexec\/mesos\/mesos-executor"} --environment={"HOST":"openshift-installer-native-docker-compose","LIBPROCESS_PORT":"0","MARATHON_APP_ID":"\/while-loop-hello-world","MARATHON_APP_LABELS":"","MARATHON_APP_RESOURCE_CPUS":"1.0","MARATHON_APP_RESOURCE_DISK":"0.0","MARATHON_APP_RESOURCE_GPUS":"0","MARATHON_APP_RESOURCE_MEM":"32.0","MARATHON_APP_VERSION":"2016-12-16T19:54:30.939Z","MESOS_AGENT_ENDPOINT":"127.0.0.1:5051","MESOS_CHECKPOINT":"1","MESOS_DIRECTORY":"\/var\/tmp\/mesos\/slaves\/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0\/frameworks\/44a35e16-dc32-4f91-afac-33dfff498944-0000\/executors\/while-loop-hello-world.75cab367-c3c9-11e6-9f91-02422b10e522\/runs\/cb6ec83f-a228-47f5-ad06-ee6675134bb0","MESOS_EXECUTOR_ID":"while-loop-hello-world.75cab367-c3c9-11e6-9f91-02422b10e522","MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD":"5secs","MESOS_FRAMEWORK_ID":"44a35e16-dc32-4f91-afac-33dfff498944-0000","MESOS_HTTP_COMMAND_EXECUTOR":"0","MESOS_NATIVE_JAVA_LIBRARY":"\/usr\/lib\/libmesos-1.1.0.so","MESOS_NATIVE_LIBRARY":"\/usr\/lib\/libmesos-1.1.0.so","MESOS_RECOVERY_TIMEOUT":"15mins","MESOS_SANDBOX":"\/var\/tmp\/mesos\/slaves\/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0\/frameworks\/44a35e16-dc32-4f91-afac-33dfff498944-0000\/executors\/while-loop-hello-world.75cab367-c3c9-11e6-9f91-02422b10e522\/runs\/cb6ec83f-a228-47f5-ad06-ee6675134bb0","MESOS_SLAVE_ID":"a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0","MESOS_SLAVE_PID":"slave(1)@127.0.0.1:5051","MESOS_SUBSCRIPTION_BACKOFF_MAX":"2secs","MESOS_TASK_ID":"while-loop-hello-world.75cab367-c3c9-11e6-9f91-02422b10e522","PATH":"\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin","PORT":"31996","PORT0":"31996","PORTS":"31996","PORT_10000":"31996"} --help=false --pipe_read=15 --pipe_write=16 --pre_exec_commands=[] --runtime_directory=/var/run/mesos/containers/cb6ec83f-a228-47f5-ad06-ee6675134bb0 --unshare_namespace_mnt=false --working_directory=/var/tmp/mesos/slaves/a06268a0-4cfd-4d76-94c7-fbdf053be0ba-S0/frameworks/44a35e16-dc32-4f91-afac-33dfff498944-0000/executors/while-loop-hello-world.75cab367-c3c9-11e6-9f91-02422b10e522/runs/cb6ec83f-a228-47f5-ad06-ee6675134bb0
root 13164 13143 0 19:54 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done
root 13165 13154 0 19:54 ? 00:00:00 sh -c while true; do echo "I am a Hello World script"; sleep 1; done

Status: Workaround given

The workaround is to destroy the application and run a new application with the exact same parameters.
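The destroy-and-recreate cycle can also be scripted against Marathon’s REST API. A sketch only: host and port are assumptions (8080 is Marathon’s default), and while-loop-hello-world.json is a hypothetical file containing the same app definition as before:

(dockerhost)$ curl -X DELETE http://localhost:8080/v2/apps/while-loop-hello-world
(dockerhost)$ curl -X POST -H "Content-Type: application/json" \
      --data @while-loop-hello-world.json http://localhost:8080/v2/apps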

Appendix A.5: Marathon Portal Error: Retrieval of STDERR and STDOUT fails

Seen on Marathon 1.3.6 with both a shell script in a Mesos container and a shell script in a Docker container.

Symptoms:

If we try to retrieve the error log or the output of a hello world application on the Marathon portal, we get the error message “Sorry, there was a problem retrieving file. Click to retry.” Retrying does not help.

2016-12-16-21_04_29-marathon

Status: Open (idea for a workaround given below)

A possible workaround is to run the script with its output redirected into a file that can be retrieved later, e.g. we could define a script like

while true; do echo "I am a Hello World script"; sleep 1; done 2>&1 1> this_is_my_output.log

After having found the working directory of Mesos-started scripts, you can retrieve the log file from the slave’s file system.
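A sketch of how the file could then be located on the Docker host, assuming the agent work directory /var/tmp/mesos seen in the ps output above:

(dockerhost)$ sudo find /var/tmp/mesos/slaves -name this_is_my_output.log
(dockerhost)$ sudo tail -f $(sudo find /var/tmp/mesos/slaves -name this_is_my_output.log | head -1)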

Appendix A.6: Critical Marathon Error: Flag 'work_dir' is required, but it was not provided

Marathon 1.3.6 running as a Docker container (mesosphere/marathon:latest; image ID 9d03a8fd0fdd)

Symptoms:

When we try to start Marathon as seen below, the attempt fails with the exception:

Failed to start a local cluster while loading agent flags from the environment: Flag 'work_dir' is required, but it was not provided

Full log:

(dockerhost)$ sudo docker run -it --net=host -v `pwd`:/work_dir --entrypoint=bash mesosphere/marathon
root@openshift-installer:/marathon# ./bin/start --master local --zk zk://localhost:2181/marathon --http_port=7070
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[2016-12-12 21:01:46,268] INFO Starting Marathon 1.3.6/unknown with --master local --zk zk://localhost:2181/marathon --http_port=7070 (mesosphere.marathon.Main$:main)
[2016-12-12 21:01:46,588] WARN Method [public javax.ws.rs.core.Response mesosphere.marathon.api.MarathonExceptionMapper.toResponse(java.lang.Throwable)] is synthetic and is being intercepted by [mesosphere.marathon.DebugModule$MetricsBehavior@985696]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all. (com.google.inject.internal.ProxyFactory:main)
[2016-12-12 21:01:46,899] INFO Logging initialized @1979ms (org.eclipse.jetty.util.log:main)
[2016-12-12 21:01:47,322] INFO Slf4jLogger started (akka.event.slf4j.Slf4jLogger:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:01:47,517] INFO Started TaskTrackerUpdateStepsProcessorImpl with steps:
* continueOnError(notifyHealthCheckManager)
* continueOnError(notifyRateLimiter)
* continueOnError(notifyLaunchQueue)
* continueOnError(emitUpdate)
* continueOnError(postTaskStatusEvent)
* continueOnError(scaleApp) (mesosphere.marathon.core.task.tracker.impl.TaskTrackerUpdateStepProcessorImpl:main)
[2016-12-12 21:01:47,579] INFO Calling reviveOffers is enabled. Use --disable_revive_offers_for_new_apps to disable. (mesosphere.marathon.core.flow.FlowModule:main)
[2016-12-12 21:01:47,657] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authenticator' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:01:47,662] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:01:47,665] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authorizer' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:01:47,666] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:01:47,668] INFO Started status update processor (mesosphere.marathon.core.task.update.impl.TaskStatusUpdateProcessorImpl$$EnhancerByGuice$$ca7c928f:main)
[2016-12-12 21:01:47,779] INFO All actors suspended:
* Actor[akka://marathon/user/expungeOverdueLostTasks#1718670650]
* Actor[akka://marathon/user/rateLimiter#1278378489]
* Actor[akka://marathon/user/groupManager#890512610]
* Actor[akka://marathon/user/taskTracker#-11699813]
* Actor[akka://marathon/user/launchQueue#1496971565]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#-557182315]
* Actor[akka://marathon/user/killOverdueStagedTasks#-1648400379]
* Actor[akka://marathon/user/reviveOffersWhenWanted#1003103868]
* Actor[akka://marathon/user/offerMatcherManager#219115497]
* Actor[akka://marathon/user/offersWantedForReconciliation#-1104494480]
* Actor[akka://marathon/user/taskKillServiceActor#-521724399] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-12 21:01:47,891] INFO Adding HTTP support. (mesosphere.chaos.http.HttpModule:main)
[2016-12-12 21:01:47,892] INFO No HTTPS support configured. (mesosphere.chaos.http.HttpModule:main)
[2016-12-12 21:01:47,971] INFO Starting up (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-12 21:01:47,972] INFO Beginning run (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-12 21:01:47,974] INFO Will offer leadership after 500 milliseconds backoff (mesosphere.marathon.core.election.impl.CuratorElectionService:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-12 21:01:47,983] INFO jetty-9.3.z-SNAPSHOT (org.eclipse.jetty.server.Server:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,018] INFO Now standing by. Closing existing handles and rejecting new. (mesosphere.marathon.core.event.impl.stream.HttpEventStreamActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-12 21:01:48,229] INFO Registering com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,233] INFO Registering mesosphere.marathon.api.MarathonExceptionMapper as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,233] INFO Registering mesosphere.marathon.api.v2.AppsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,233] INFO Registering mesosphere.marathon.api.v2.TasksResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,233] INFO Registering mesosphere.marathon.api.v2.EventSubscriptionsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,233] INFO Registering mesosphere.marathon.api.v2.QueueResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,234] INFO Registering mesosphere.marathon.api.v2.GroupsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,234] INFO Registering mesosphere.marathon.api.v2.InfoResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,234] INFO Registering mesosphere.marathon.api.v2.LeaderResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,234] INFO Registering mesosphere.marathon.api.v2.DeploymentsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,234] INFO Registering mesosphere.marathon.api.v2.ArtifactsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,234] INFO Registering mesosphere.marathon.api.v2.SchemaResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,234] INFO Registering mesosphere.marathon.api.v2.PluginsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,238] INFO Initiating Jersey application, version 'Jersey: 1.18.1 02/19/2014 03:28 AM' (com.sun.jersey.server.impl.application.WebApplicationImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,311] INFO Binding com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,328] INFO Binding mesosphere.marathon.api.MarathonExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:01:48,527] INFO Using HA and therefore offering leadership (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,528] INFO Will do leader election through localhost:2181 (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,553] WARN session timeout [10000] is less than connection timeout [15000] (org.apache.curator.CuratorZookeeperClient:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,565] INFO Starting (org.apache.curator.framework.imps.CuratorFrameworkImpl:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:zookeeper.version=3.5.0-alpha-1615249, built on 08/01/2014 22:13 GMT (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:host.name=openshift-installer-native-docker-compose (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:java.version=1.8.0_102 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:java.class.path=./bin/../target/marathon-assembly-1.3.6.jar (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,576] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,577] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,577] INFO Client environment:os.version=4.2.0-42-generic (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,577] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,577] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,577] INFO Client environment:user.dir=/marathon (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,578] INFO Client environment:os.memory.free=121MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,578] INFO Client environment:os.memory.max=880MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,578] INFO Client environment:os.memory.total=157MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,578] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.apache.curator.ConnectionState@7531678e (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:01:48,623] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-12 21:01:48,633] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-12 21:01:48,653] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x158f4c9e1f90004, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-12 21:01:48,661] INFO State change: CONNECTED (org.apache.curator.framework.state.ConnectionStateManager:ForkJoinPool-2-worker-13-EventThread)
[2016-12-12 21:01:48,722] INFO Elected (LeaderLatchListener Interface) (mesosphere.marathon.core.election.impl.CuratorElectionService:pool-1-thread-1)
[2016-12-12 21:01:48,723] INFO As new leader running the driver (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-12 21:01:48,734] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=com.twitter.zk.EventBroker@5ca52bc0 (org.apache.zookeeper.ZooKeeper:pool-1-thread-1)
[2016-12-12 21:01:48,748] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-12 21:01:48,749] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-12 21:01:48,755] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158f4c9e1f90005, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-12 21:01:48,854] INFO Migration successfully applied for version Version(1, 3, 6) (mesosphere.marathon.state.Migration:ForkJoinPool-2-worker-7)
[2016-12-12 21:01:48,854] INFO Call preDriverStarts callbacks on EntityStoreCache(MarathonStore(app:)), EntityStoreCache(MarathonStore(group:)), EntityStoreCache(MarathonStore(deployment:)), EntityStoreCache(MarathonStore(framework:)), EntityStoreCache(MarathonStore(taskFailure:)), EntityStoreCache(MarathonStore(events:)) (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-12 21:01:48,868] INFO Finished preDriverStarts callbacks (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-12 21:01:48,885] INFO TaskTrackerActor is starting. Task loading initiated. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-12 21:01:48,893] INFO no interest in offers for reservation reconciliation anymore. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-12 21:01:48,897] INFO ExpungeOverdueLostTasksActor has started (mesosphere.marathon.core.task.jobs.impl.ExpungeOverdueLostTasksActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-12 21:01:48,900] INFO About to load 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-3)
[2016-12-12 21:01:48,901] INFO Loaded 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-3)
[2016-12-12 21:01:48,902] INFO Started. Will remain interested in offer reconciliation for 17500 milliseconds when needed. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-12 21:01:48,911] INFO Task loading complete. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-7)
[2016-12-12 21:01:48,914] INFO Create new Scheduler Driver with frameworkId: None and scheduler mesosphere.marathon.core.heartbeat.MesosHeartbeatMonitor$$EnhancerByGuice$$c730b9b8@15d42ccb (mesosphere.marathon.MarathonSchedulerDriver$:pool-1-thread-1)
[2016-12-12 21:01:48,924] INFO started RateLimiterActor (mesosphere.marathon.core.launchqueue.impl.RateLimiterActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-12 21:01:48,925] INFO All actors active:
* Actor[akka://marathon/user/expungeOverdueLostTasks#1718670650]
* Actor[akka://marathon/user/rateLimiter#1278378489]
* Actor[akka://marathon/user/groupManager#890512610]
* Actor[akka://marathon/user/taskTracker#-11699813]
* Actor[akka://marathon/user/launchQueue#1496971565]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#-557182315]
* Actor[akka://marathon/user/killOverdueStagedTasks#-1648400379]
* Actor[akka://marathon/user/reviveOffersWhenWanted#1003103868]
* Actor[akka://marathon/user/offerMatcherManager#219115497]
* Actor[akka://marathon/user/offersWantedForReconciliation#-1104494480]
* Actor[akka://marathon/user/taskKillServiceActor#-521724399] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-12 21:01:48,994] INFO Received offers WANTED notification (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-7)
[2016-12-12 21:01:48,994] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-7)
[2016-12-12 21:01:48,995] INFO interested in offers for reservation reconciliation because of becoming leader (until 2016-12-12T21:02:06.407Z) (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-12 21:01:49,008] INFO 2 further revives still needed. Repeating reviveOffers according to --revive_offers_repetitions 3 (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-12 21:01:49,008] INFO => Schedule next revive at 2016-12-12T21:01:53.993Z in 4986 milliseconds, adhering to --min_revive_offers_interval 5000 (ms) (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-5)
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1212 21:01:49.134382 63 sched.cpp:1697]
**************************************************
Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address.
**************************************************
I1212 21:01:49.179211 63 leveldb.cpp:174] Opened db in 40.639535ms
I1212 21:01:49.188115 63 leveldb.cpp:181] Compacted db in 8.801601ms
I1212 21:01:49.188176 63 leveldb.cpp:196] Created db iterator in 11072ns
I1212 21:01:49.188189 63 leveldb.cpp:202] Seeked to beginning of db in 758ns
I1212 21:01:49.188194 63 leveldb.cpp:271] Iterated through 0 keys in the db in 202ns
I1212 21:01:49.188232 63 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
I1212 21:01:49.189082 70 recover.cpp:451] Starting replica recovery
I1212 21:01:49.189720 70 recover.cpp:477] Replica is in EMPTY status
I1212 21:01:49.189486 73 master.cpp:375] Master 8be08601-c962-42fa-9f78-7beda337b644 (openshift-installer-native-docker-compose) started on 127.0.0.1:35322
I1212 21:01:49.190399 73 master.cpp:377] Flags at startup: --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate_agents="false" --authenticate_frameworks="false" --authenticate_http_frameworks="false" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticators="crammd5" --authorizers="local" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --quiet="false" --recovery_agent_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="20secs" --registry_strict="false" --root_submissions="true" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/tmp/mesos/local/9Y1S6q" --zk_session_timeout="10secs"
W1212 21:01:49.190558 73 master.cpp:380]
**************************************************
Master bound to loopback interface! Cannot communicate with remote schedulers or agents. You might want to set '--ip' flag to a routable IP address.
**************************************************
I1212 21:01:49.190712 73 master.cpp:429] Master allowing unauthenticated frameworks to register
I1212 21:01:49.191285 73 master.cpp:443] Master allowing unauthenticated agents to register
I1212 21:01:49.191337 73 master.cpp:457] Master allowing HTTP frameworks to register without authentication
I1212 21:01:49.191366 73 master.cpp:499] Using default 'crammd5' authenticator
Failed to start a local cluster while loading agent flags from the environment: Flag 'work_dir' is required, but it was not provided
W1212 21:01:49.192327 73 authenticator.cpp:512] No credentials provided, authentication requests will be refused
I1212 21:01:49.192360 73 authenticator.cpp:519] Initializing server SASL
root@openshift-installer:/marathon# ./bin/start --master local --zk zk://localhost:2181/marathon --http_port=7070 --work_dir
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[scallop] Error: Unknown option 'work_dir'
root@openshift-installer:/marathon# ./bin/start --master local --zk zk://localhost:2181/marathon --http_port=7070 -work_dir=/work_dir
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[scallop] Error: Unknown option 'w'
root@openshift-installer:/marathon# ./bin/start --master local --zk zk://localhost:2181/marathon --http_port=7070 --work_dir=/work_dir
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[scallop] Error: Unknown option 'work_dir=/work_dir'
root@openshift-installer:/marathon# ./bin/start --master local --zk zk://localhost:2181/marathon --http_port=7070
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[2016-12-12 21:05:35,546] INFO Starting Marathon 1.3.6/unknown with --master local --zk zk://localhost:2181/marathon --http_port=7070 (mesosphere.marathon.Main$:main)
[2016-12-12 21:05:35,823] WARN Method [public javax.ws.rs.core.Response mesosphere.marathon.api.MarathonExceptionMapper.toResponse(java.lang.Throwable)] is synthetic and is being intercepted by [mesosphere.marathon.DebugModule$MetricsBehavior@985696]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all. (com.google.inject.internal.ProxyFactory:main)
[2016-12-12 21:05:36,115] INFO Logging initialized @1856ms (org.eclipse.jetty.util.log:main)
[2016-12-12 21:05:36,621] INFO Slf4jLogger started (akka.event.slf4j.Slf4jLogger:marathon-akka.actor.default-dispatcher-4)
[2016-12-12 21:05:36,800] INFO Started TaskTrackerUpdateStepsProcessorImpl with steps:
* continueOnError(notifyHealthCheckManager)
* continueOnError(notifyRateLimiter)
* continueOnError(notifyLaunchQueue)
* continueOnError(emitUpdate)
* continueOnError(postTaskStatusEvent)
* continueOnError(scaleApp) (mesosphere.marathon.core.task.tracker.impl.TaskTrackerUpdateStepProcessorImpl:main)
[2016-12-12 21:05:36,864] INFO Calling reviveOffers is enabled. Use --disable_revive_offers_for_new_apps to disable. (mesosphere.marathon.core.flow.FlowModule:main)
[2016-12-12 21:05:36,930] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authenticator' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:05:36,936] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:05:36,940] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authorizer' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:05:36,940] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-12 21:05:36,942] INFO Started status update processor (mesosphere.marathon.core.task.update.impl.TaskStatusUpdateProcessorImpl$$EnhancerByGuice$$ca7c928f:main)
[2016-12-12 21:05:37,047] INFO All actors suspended:
* Actor[akka://marathon/user/taskKillServiceActor#-984511042]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#671912224]
* Actor[akka://marathon/user/killOverdueStagedTasks#-698941106]
* Actor[akka://marathon/user/groupManager#-958454684]
* Actor[akka://marathon/user/reviveOffersWhenWanted#1917290502]
* Actor[akka://marathon/user/offerMatcherManager#2056148696]
* Actor[akka://marathon/user/expungeOverdueLostTasks#-308686880]
* Actor[akka://marathon/user/rateLimiter#-1808696027]
* Actor[akka://marathon/user/offersWantedForReconciliation#362759878]
* Actor[akka://marathon/user/launchQueue#-1787605975]
* Actor[akka://marathon/user/taskTracker#1265821735] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-12 21:05:37,115] INFO Adding HTTP support. (mesosphere.chaos.http.HttpModule:main)
[2016-12-12 21:05:37,116] INFO No HTTPS support configured. (mesosphere.chaos.http.HttpModule:main)
[2016-12-12 21:05:37,174] INFO Starting up (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-12 21:05:37,181] INFO jetty-9.3.z-SNAPSHOT (org.eclipse.jetty.server.Server:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,185] INFO Beginning run (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-12 21:05:37,188] INFO Will offer leadership after 500 milliseconds backoff (mesosphere.marathon.core.election.impl.CuratorElectionService:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-12 21:05:37,219] INFO Now standing by. Closing existing handles and rejecting new. (mesosphere.marathon.core.event.impl.stream.HttpEventStreamActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-12 21:05:37,416] INFO Registering com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,425] INFO Registering mesosphere.marathon.api.MarathonExceptionMapper as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,425] INFO Registering mesosphere.marathon.api.v2.AppsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,426] INFO Registering mesosphere.marathon.api.v2.TasksResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,428] INFO Registering mesosphere.marathon.api.v2.EventSubscriptionsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,429] INFO Registering mesosphere.marathon.api.v2.QueueResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,430] INFO Registering mesosphere.marathon.api.v2.GroupsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,431] INFO Registering mesosphere.marathon.api.v2.InfoResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,432] INFO Registering mesosphere.marathon.api.v2.LeaderResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,432] INFO Registering mesosphere.marathon.api.v2.DeploymentsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,433] INFO Registering mesosphere.marathon.api.v2.ArtifactsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,433] INFO Registering mesosphere.marathon.api.v2.SchemaResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,434] INFO Registering mesosphere.marathon.api.v2.PluginsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,437] INFO Initiating Jersey application, version 'Jersey: 1.18.1 02/19/2014 03:28 AM' (com.sun.jersey.server.impl.application.WebApplicationImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,491] INFO Binding com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,508] INFO Binding mesosphere.marathon.api.MarathonExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:37,717] INFO Using HA and therefore offering leadership (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,719] INFO Will do leader election through localhost:2181 (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,775] WARN session timeout [10000] is less than connection timeout [15000] (org.apache.curator.CuratorZookeeperClient:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,840] INFO Starting (org.apache.curator.framework.imps.CuratorFrameworkImpl:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,857] INFO Client environment:zookeeper.version=3.5.0-alpha-1615249, built on 08/01/2014 22:13 GMT (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,863] INFO Client environment:host.name=openshift-installer-native-docker-compose (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,864] INFO Client environment:java.version=1.8.0_102 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,864] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,865] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,866] INFO Client environment:java.class.path=./bin/../target/marathon-assembly-1.3.6.jar (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,866] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,867] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,867] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,868] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,868] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,869] INFO Client environment:os.version=4.2.0-42-generic (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,869] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,870] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,870] INFO Client environment:user.dir=/marathon (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,870] INFO Client environment:os.memory.free=128MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,871] INFO Client environment:os.memory.max=880MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,871] INFO Client environment:os.memory.total=165MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,872] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.apache.curator.ConnectionState@2fba82d1 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-12 21:05:37,922] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-12 21:05:37,939] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-12 21:05:37,971] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x158f4c9e1f90006, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-12 21:05:37,978] INFO State change: CONNECTED (org.apache.curator.framework.state.ConnectionStateManager:ForkJoinPool-2-worker-13-EventThread)
[2016-12-12 21:05:38,049] INFO Elected (LeaderLatchListener Interface) (mesosphere.marathon.core.election.impl.CuratorElectionService:pool-1-thread-1)
[2016-12-12 21:05:38,052] INFO As new leader running the driver (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-12 21:05:38,075] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=com.twitter.zk.EventBroker@2c6c37f7 (org.apache.zookeeper.ZooKeeper:pool-1-thread-1)
[2016-12-12 21:05:38,082] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-12 21:05:38,084] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-12 21:05:38,088] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158f4c9e1f90007, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-12 21:05:38,174] INFO Migration successfully applied for version Version(1, 3, 6) (mesosphere.marathon.state.Migration:ForkJoinPool-2-worker-7)
[2016-12-12 21:05:38,176] INFO Call preDriverStarts callbacks on EntityStoreCache(MarathonStore(app:)), EntityStoreCache(MarathonStore(group:)), EntityStoreCache(MarathonStore(deployment:)), EntityStoreCache(MarathonStore(framework:)), EntityStoreCache(MarathonStore(taskFailure:)), EntityStoreCache(MarathonStore(events:)) (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-12 21:05:38,188] INFO Finished preDriverStarts callbacks (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-12 21:05:38,195] INFO no interest in offers for reservation reconciliation anymore. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-12 21:05:38,200] INFO Started. Will remain interested in offer reconciliation for 17500 milliseconds when needed. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-12 21:05:38,208] INFO started RateLimiterActor (mesosphere.marathon.core.launchqueue.impl.RateLimiterActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-12 21:05:38,209] INFO ExpungeOverdueLostTasksActor has started (mesosphere.marathon.core.task.jobs.impl.ExpungeOverdueLostTasksActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-12 21:05:38,216] INFO TaskTrackerActor is starting. Task loading initiated. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:05:38,222] INFO Create new Scheduler Driver with frameworkId: None and scheduler mesosphere.marathon.core.heartbeat.MesosHeartbeatMonitor$$EnhancerByGuice$$c730b9b8@46383a78 (mesosphere.marathon.MarathonSchedulerDriver$:pool-1-thread-1)
[2016-12-12 21:05:38,224] INFO All actors active:
* Actor[akka://marathon/user/taskKillServiceActor#-984511042]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#671912224]
* Actor[akka://marathon/user/killOverdueStagedTasks#-698941106]
* Actor[akka://marathon/user/groupManager#-958454684]
* Actor[akka://marathon/user/reviveOffersWhenWanted#1917290502]
* Actor[akka://marathon/user/offerMatcherManager#2056148696]
* Actor[akka://marathon/user/expungeOverdueLostTasks#-308686880]
* Actor[akka://marathon/user/rateLimiter#-1808696027]
* Actor[akka://marathon/user/offersWantedForReconciliation#362759878]
* Actor[akka://marathon/user/launchQueue#-1787605975]
* Actor[akka://marathon/user/taskTracker#1265821735] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-4)
[2016-12-12 21:05:38,230] INFO About to load 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-5)
[2016-12-12 21:05:38,239] INFO Loaded 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-5)
[2016-12-12 21:05:38,257] INFO Task loading complete. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:05:38,284] INFO interested in offers for reservation reconciliation because of becoming leader (until 2016-12-12T21:05:55.712Z) (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-12 21:05:38,322] INFO Binding mesosphere.marathon.api.v2.AppsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,333] INFO Received offers WANTED notification (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:05:38,335] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:05:38,338] INFO 2 further revives still needed. Repeating reviveOffers according to --revive_offers_repetitions 3 (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:05:38,339] INFO => Schedule next revive at 2016-12-12T21:05:43.333Z in 4995 milliseconds, adhering to --min_revive_offers_interval 5000 (ms) (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:05:38,341] INFO Received offers WANTED notification (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-3)
[2016-12-12 21:05:38,356] INFO Binding mesosphere.marathon.api.v2.TasksResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,359] INFO Binding mesosphere.marathon.api.v2.EventSubscriptionsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,361] INFO Event notification disabled. (mesosphere.marathon.core.event.EventModule:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,375] INFO Binding mesosphere.marathon.api.v2.QueueResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,385] INFO Binding mesosphere.marathon.api.v2.GroupsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,390] INFO Binding mesosphere.marathon.api.v2.InfoResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,396] INFO Binding mesosphere.marathon.api.v2.LeaderResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,400] INFO Binding mesosphere.marathon.api.v2.DeploymentsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,418] INFO Binding mesosphere.marathon.api.v2.ArtifactsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,423] INFO Binding mesosphere.marathon.api.v2.SchemaResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,428] INFO Binding mesosphere.marathon.api.v2.PluginsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,429] INFO Loading plugins implementing 'mesosphere.marathon.plugin.http.HttpRequestHandler' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,431] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,452] INFO Started o.e.j.s.ServletContextHandler@33ffb91e{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1212 21:05:38.488497 223 sched.cpp:1697]
**************************************************
Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address.
**************************************************
[2016-12-12 21:05:38,494] INFO Started ServerConnector@23580056{HTTP/1.1,[http/1.1]}{0.0.0.0:7070} (org.eclipse.jetty.server.ServerConnector:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,496] INFO Started @4237ms (org.eclipse.jetty.server.Server:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-12 21:05:38,497] INFO All services up and running. (mesosphere.marathon.Main$:main)
I1212 21:05:38.524034 223 leveldb.cpp:174] Opened db in 33.764878ms
I1212 21:05:38.525073 223 leveldb.cpp:181] Compacted db in 985356ns
I1212 21:05:38.525146 223 leveldb.cpp:196] Created db iterator in 9599ns
I1212 21:05:38.525156 223 leveldb.cpp:202] Seeked to beginning of db in 501ns
I1212 21:05:38.525161 223 leveldb.cpp:271] Iterated through 0 keys in the db in 202ns
I1212 21:05:38.525203 223 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
I1212 21:05:38.526582 229 recover.cpp:451] Starting replica recovery
I1212 21:05:38.527250 229 recover.cpp:477] Replica is in EMPTY status
I1212 21:05:38.528532 229 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (4)@127.0.0.1:33899
I1212 21:05:38.528669 229 recover.cpp:197] Received a recover response from a replica in EMPTY status
I1212 21:05:38.528774 229 recover.cpp:568] Updating replica status to STARTING
I1212 21:05:38.529911 232 master.cpp:375] Master e999af40-d59e-4d95-a77a-05403042ca4f (openshift-installer-native-docker-compose) started on 127.0.0.1:33899
I1212 21:05:38.530508 232 master.cpp:377] Flags at startup: --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate_agents="false" --authenticate_frameworks="false" --authenticate_http_frameworks="false" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticators="crammd5" --authorizers="local" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --quiet="false" --recovery_agent_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="20secs" --registry_strict="false" --root_submissions="true" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/tmp/mesos/local/X38lNg" --zk_session_timeout="10secs"
W1212 21:05:38.531312 232 master.cpp:380]
**************************************************
Master bound to loopback interface! Cannot communicate with remote schedulers or agents. You might want to set '--ip' flag to a routable IP address.
**************************************************
I1212 21:05:38.532498 232 master.cpp:429] Master allowing unauthenticated frameworks to register
I1212 21:05:38.532708 232 master.cpp:443] Master allowing unauthenticated agents to register
I1212 21:05:38.532913 232 master.cpp:457] Master allowing HTTP frameworks to register without authentication
I1212 21:05:38.533169 232 master.cpp:499] Using default 'crammd5' authenticator
W1212 21:05:38.533359 232 authenticator.cpp:512] No credentials provided, authentication requests will be refused
I1212 21:05:38.533457 232 authenticator.cpp:519] Initializing server SASL
I1212 21:05:38.532569 229 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 3.159782ms
I1212 21:05:38.534039 229 replica.cpp:320] Persisted replica status to STARTING
I1212 21:05:38.534257 229 recover.cpp:477] Replica is in STARTING status
I1212 21:05:38.534668 229 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (5)@127.0.0.1:33899
I1212 21:05:38.535151 229 recover.cpp:197] Received a recover response from a replica in STARTING status
I1212 21:05:38.540293 229 recover.cpp:568] Updating replica status to VOTING
Failed to start a local cluster while loading agent flags from the environment: Flag 'work_dir' is required, but it was not provided
(container):/marathon#

 

This error message is described in the documentation of the Docker image we used, as follows:

“Note: Currently the Docker container fails due to strange behavior from the latest Mesos version. There will be an error about work_dir that is still unresolved”

Unfortunately, the image documentation does not provide any workaround or solution for the problem.

Resolution (Workaround):

I have found a workaround here: “It seems like explicitly setting the Mesos work directory by adding ENV MESOS_WORK_DIR /var/lib/mesos to the Dockerfile resolves the issue.”
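
If you prefer to bake the fix into the image instead of setting the variable at run time, a minimal (untested) Dockerfile sketch following the quoted suggestion could look like this:

FROM mesosphere/marathon
# Work directory for the locally spawned Mesos master/agent;
# /var/lib/mesos is the value suggested in the quoted workaround.
ENV MESOS_WORK_DIR /var/lib/mesos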

I.e., we need to set the MESOS_WORK_DIR variable: Mesos reads its flags from MESOS_-prefixed environment variables, so this supplies the missing work_dir flag. Instead of rebuilding the image, the variable can also be set at run time:

(dockerhost)$ sudo docker run -it --name marathon --rm --net=host -e MESOS_WORK_DIR=/var/lib/mesos --entrypoint=bash mesosphere/marathon
(container)# ./bin/start --master local --zk zk://localhost:2181/marathon --http_port=7070
MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to '/usr/lib/libmesos.so'
No start hook file found ($HOOK_MARATHON_START). Proceeding with the start script.
[2016-12-13 13:55:36,831] INFO Starting Marathon 1.3.6/unknown with --master local --zk zk://localhost:2181/marathon --http_port=7070 (mesosphere.marathon.Main$:main)
[2016-12-13 13:55:37,118] WARN Method [public javax.ws.rs.core.Response mesosphere.marathon.api.MarathonExceptionMapper.toResponse(java.lang.Throwable)] is synthetic and is being intercepted by [mesosphere.marathon.DebugModule$MetricsBehavior@985696]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all. (com.google.inject.internal.ProxyFactory:main)
[2016-12-13 13:55:37,371] INFO Logging initialized @1806ms (org.eclipse.jetty.util.log:main)
[2016-12-13 13:55:37,864] INFO Slf4jLogger started (akka.event.slf4j.Slf4jLogger:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 13:55:38,059] INFO Started TaskTrackerUpdateStepsProcessorImpl with steps:
* continueOnError(notifyHealthCheckManager)
* continueOnError(notifyRateLimiter)
* continueOnError(notifyLaunchQueue)
* continueOnError(emitUpdate)
* continueOnError(postTaskStatusEvent)
* continueOnError(scaleApp) (mesosphere.marathon.core.task.tracker.impl.TaskTrackerUpdateStepProcessorImpl:main)
[2016-12-13 13:55:38,119] INFO Calling reviveOffers is enabled. Use --disable_revive_offers_for_new_apps to disable. (mesosphere.marathon.core.flow.FlowModule:main)
[2016-12-13 13:55:38,196] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authenticator' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 13:55:38,200] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 13:55:38,204] INFO Loading plugins implementing 'mesosphere.marathon.plugin.auth.Authorizer' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 13:55:38,204] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:main)
[2016-12-13 13:55:38,208] INFO Started status update processor (mesosphere.marathon.core.task.update.impl.TaskStatusUpdateProcessorImpl$$EnhancerByGuice$$ca7c928f:main)
[2016-12-13 13:55:38,361] INFO All actors suspended:
* Actor[akka://marathon/user/launchQueue#872625406]
* Actor[akka://marathon/user/offerMatcherManager#1742672554]
* Actor[akka://marathon/user/reviveOffersWhenWanted#-965567836]
* Actor[akka://marathon/user/expungeOverdueLostTasks#-1682888773]
* Actor[akka://marathon/user/offersWantedForReconciliation#-1611878367]
* Actor[akka://marathon/user/killOverdueStagedTasks#-1736387483]
* Actor[akka://marathon/user/taskTracker#-1718074132]
* Actor[akka://marathon/user/groupManager#1707955637]
* Actor[akka://marathon/user/taskKillServiceActor#-2061381059]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#-786726035]
* Actor[akka://marathon/user/rateLimiter#-27927539] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 13:55:38,466] INFO Adding HTTP support. (mesosphere.chaos.http.HttpModule:main)
[2016-12-13 13:55:38,466] INFO No HTTPS support configured. (mesosphere.chaos.http.HttpModule:main)
[2016-12-13 13:55:38,518] INFO Starting up (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-13 13:55:38,519] INFO Beginning run (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-13 13:55:38,522] INFO Will offer leadership after 500 milliseconds backoff (mesosphere.marathon.core.election.impl.CuratorElectionService:MarathonSchedulerService$$EnhancerByGuice$$4bb98838)
[2016-12-13 13:55:38,525] INFO jetty-9.3.z-SNAPSHOT (org.eclipse.jetty.server.Server:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,555] INFO Now standing by. Closing existing handles and rejecting new. (mesosphere.marathon.core.event.impl.stream.HttpEventStreamActor:marathon-akka.actor.default-dispatcher-4)
[2016-12-13 13:55:38,727] INFO Registering com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,730] INFO Registering mesosphere.marathon.api.MarathonExceptionMapper as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,730] INFO Registering mesosphere.marathon.api.v2.AppsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,730] INFO Registering mesosphere.marathon.api.v2.TasksResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,730] INFO Registering mesosphere.marathon.api.v2.EventSubscriptionsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,730] INFO Registering mesosphere.marathon.api.v2.QueueResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,731] INFO Registering mesosphere.marathon.api.v2.GroupsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,731] INFO Registering mesosphere.marathon.api.v2.InfoResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,731] INFO Registering mesosphere.marathon.api.v2.LeaderResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,731] INFO Registering mesosphere.marathon.api.v2.DeploymentsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,731] INFO Registering mesosphere.marathon.api.v2.ArtifactsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,731] INFO Registering mesosphere.marathon.api.v2.SchemaResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,733] INFO Registering mesosphere.marathon.api.v2.PluginsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,737] INFO Initiating Jersey application, version 'Jersey: 1.18.1 02/19/2014 03:28 AM' (com.sun.jersey.server.impl.application.WebApplicationImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,785] INFO Binding com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:38,802] INFO Binding mesosphere.marathon.api.MarathonExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,057] INFO Using HA and therefore offering leadership (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,060] INFO Will do leader election through localhost:2181 (mesosphere.marathon.core.election.impl.CuratorElectionService:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,119] WARN session timeout [10000] is less than connection timeout [15000] (org.apache.curator.CuratorZookeeperClient:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,136] INFO Starting (org.apache.curator.framework.imps.CuratorFrameworkImpl:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,153] INFO Client environment:zookeeper.version=3.5.0-alpha-1615249, built on 08/01/2014 22:13 GMT (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:host.name=openshift-installer-native-docker-compose (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:java.version=1.8.0_102 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:java.class.path=./bin/../target/marathon-assembly-1.3.6.jar (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:os.version=4.2.0-42-generic (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:user.dir=/marathon (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:os.memory.free=118MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:os.memory.max=880MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Client environment:os.memory.total=159MB (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,154] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.apache.curator.ConnectionState@3da7f0be (org.apache.zookeeper.ZooKeeper:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,183] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-13 13:55:39,195] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-13 13:55:39,230] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158f4c9e1f9000d, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:ForkJoinPool-2-worker-13-SendThread(localhost:2181))
[2016-12-13 13:55:39,236] INFO State change: CONNECTED (org.apache.curator.framework.state.ConnectionStateManager:ForkJoinPool-2-worker-13-EventThread)
[2016-12-13 13:55:39,309] INFO Binding mesosphere.marathon.api.v2.AppsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,328] INFO Binding mesosphere.marathon.api.v2.TasksResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,331] INFO Binding mesosphere.marathon.api.v2.EventSubscriptionsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,331] INFO Event notification disabled. (mesosphere.marathon.core.event.EventModule:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,334] INFO Elected (LeaderLatchListener Interface) (mesosphere.marathon.core.election.impl.CuratorElectionService:pool-1-thread-1)
[2016-12-13 13:55:39,338] INFO As new leader running the driver (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-13 13:55:39,341] INFO Binding mesosphere.marathon.api.v2.QueueResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,351] INFO Binding mesosphere.marathon.api.v2.GroupsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,356] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=com.twitter.zk.EventBroker@44a67b18 (org.apache.zookeeper.ZooKeeper:pool-1-thread-1)
[2016-12-13 13:55:39,357] INFO Binding mesosphere.marathon.api.v2.InfoResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,360] INFO Binding mesosphere.marathon.api.v2.LeaderResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,364] INFO Binding mesosphere.marathon.api.v2.DeploymentsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,369] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-13 13:55:39,369] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-13 13:55:39,374] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158f4c9e1f9000e, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:pool-1-thread-1-SendThread(localhost:2181))
[2016-12-13 13:55:39,378] INFO Binding mesosphere.marathon.api.v2.ArtifactsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,380] INFO Binding mesosphere.marathon.api.v2.SchemaResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,391] INFO Binding mesosphere.marathon.api.v2.PluginsResource to GuiceManagedComponentProvider with the scope "Singleton" (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,392] INFO Loading plugins implementing 'mesosphere.marathon.plugin.http.HttpRequestHandler' from these urls: [] (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,392] INFO Found 0 plugins. (mesosphere.marathon.core.plugin.impl.PluginManagerImpl:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,407] INFO Started o.e.j.s.ServletContextHandler@72f0fd27{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,419] INFO Started ServerConnector@3c2f8cd8{HTTP/1.1,[http/1.1]}{0.0.0.0:7070} (org.eclipse.jetty.server.ServerConnector:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,419] INFO Started @3855ms (org.eclipse.jetty.server.Server:HttpService$$EnhancerByGuice$$1fd57a30 STARTING)
[2016-12-13 13:55:39,420] INFO All services up and running. (mesosphere.marathon.Main$:main)
[2016-12-13 13:55:39,464] INFO Migration successfully applied for version Version(1, 3, 6) (mesosphere.marathon.state.Migration:ForkJoinPool-2-worker-5)
[2016-12-13 13:55:39,464] INFO Call preDriverStarts callbacks on EntityStoreCache(MarathonStore(app:)), EntityStoreCache(MarathonStore(group:)), EntityStoreCache(MarathonStore(deployment:)), EntityStoreCache(MarathonStore(framework:)), EntityStoreCache(MarathonStore(taskFailure:)), EntityStoreCache(MarathonStore(events:)) (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-13 13:55:39,482] INFO Finished preDriverStarts callbacks (mesosphere.marathon.MarathonSchedulerService$$EnhancerByGuice$$4bb98838:pool-1-thread-1)
[2016-12-13 13:55:39,485] INFO no interest in offers for reservation reconciliation anymore. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 13:55:39,487] INFO Started. Will remain interested in offer reconciliation for 17500 milliseconds when needed. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 13:55:39,498] INFO started RateLimiterActor (mesosphere.marathon.core.launchqueue.impl.RateLimiterActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 13:55:39,501] INFO ExpungeOverdueLostTasksActor has started (mesosphere.marathon.core.task.jobs.impl.ExpungeOverdueLostTasksActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 13:55:39,506] INFO All actors active:
* Actor[akka://marathon/user/launchQueue#872625406]
* Actor[akka://marathon/user/offerMatcherManager#1742672554]
* Actor[akka://marathon/user/reviveOffersWhenWanted#-965567836]
* Actor[akka://marathon/user/expungeOverdueLostTasks#-1682888773]
* Actor[akka://marathon/user/offersWantedForReconciliation#-1611878367]
* Actor[akka://marathon/user/killOverdueStagedTasks#-1736387483]
* Actor[akka://marathon/user/taskTracker#-1718074132]
* Actor[akka://marathon/user/groupManager#1707955637]
* Actor[akka://marathon/user/taskKillServiceActor#-2061381059]
* Actor[akka://marathon/user/offerMatcherLaunchTokens#-786726035]
* Actor[akka://marathon/user/rateLimiter#-27927539] (mesosphere.marathon.core.leadership.impl.LeadershipCoordinatorActor:marathon-akka.actor.default-dispatcher-8)
[2016-12-13 13:55:39,509] INFO TaskTrackerActor is starting. Task loading initiated. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-13 13:55:39,522] INFO About to load 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-13)
[2016-12-13 13:55:39,528] INFO Loaded 0 tasks (mesosphere.marathon.core.task.tracker.impl.TaskLoaderImpl:ForkJoinPool-2-worker-5)
[2016-12-13 13:55:39,537] INFO Task loading complete. (mesosphere.marathon.core.task.tracker.impl.TaskTrackerActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,568] INFO interested in offers for reservation reconciliation because of becoming leader (until 2016-12-13T13:55:57.005Z) (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 13:55:39,589] INFO Received offers WANTED notification (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,589] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,596] INFO 2 further revives still needed. Repeating reviveOffers according to --revive_offers_repetitions 3 (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,597] INFO => Schedule next revive at 2016-12-13T13:55:44.589Z in 4993 milliseconds, adhering to --min_revive_offers_interval 5000 (ms) (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,655] INFO Create new Scheduler Driver with frameworkId: Some(value: "44a35e16-dc32-4f91-afac-33dfff498944-0000"
) and scheduler mesosphere.marathon.core.heartbeat.MesosHeartbeatMonitor$$EnhancerByGuice$$c730b9b8@4eb166a1 (mesosphere.marathon.MarathonSchedulerDriver$:pool-1-thread-1)
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1213 13:55:39.779234 64 sched.cpp:1697]
**************************************************
Scheduler driver bound to loopback interface! Cannot communicate with remote master(s). You might want to set 'LIBPROCESS_IP' environment variable to use a routable IP address.
**************************************************
I1213 13:55:39.809289 64 leveldb.cpp:174] Opened db in 28.534471ms
I1213 13:55:39.811184 64 leveldb.cpp:181] Compacted db in 1.79725ms
I1213 13:55:39.811257 64 leveldb.cpp:196] Created db iterator in 11125ns
I1213 13:55:39.811283 64 leveldb.cpp:202] Seeked to beginning of db in 670ns
I1213 13:55:39.811290 64 leveldb.cpp:271] Iterated through 0 keys in the db in 197ns
I1213 13:55:39.811357 64 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
I1213 13:55:39.812511 77 recover.cpp:451] Starting replica recovery
I1213 13:55:39.812706 77 recover.cpp:477] Replica is in EMPTY status
I1213 13:55:39.813297 77 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (3)@127.0.0.1:40102
I1213 13:55:39.813491 77 recover.cpp:197] Received a recover response from a replica in EMPTY status
I1213 13:55:39.813727 77 recover.cpp:568] Updating replica status to STARTING
I1213 13:55:39.815317 73 master.cpp:375] Master d4536c66-4886-4c88-9a02-7dac4b5f2e6d (openshift-installer-native-docker-compose) started on 127.0.0.1:40102
I1213 13:55:39.815476 73 master.cpp:377] Flags at startup: --agent_ping_timeout="15secs" --agent_reregister_timeout="10mins" --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate_agents="false" --authenticate_frameworks="false" --authenticate_http_frameworks="false" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticators="crammd5" --authorizers="local" --framework_sorter="drf" --help="false" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_agent_ping_timeouts="5" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --quiet="false" --recovery_agent_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="20secs" --registry_strict="false" --root_submissions="true" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/var/lib/mesos" --zk_session_timeout="10secs"
W1213 13:55:39.815767 73 master.cpp:380]
**************************************************
Master bound to loopback interface! Cannot communicate with remote schedulers or agents. You might want to set '--ip' flag to a routable IP address.
**************************************************
I1213 13:55:39.815810 73 master.cpp:429] Master allowing unauthenticated frameworks to register
I1213 13:55:39.815817 73 master.cpp:443] Master allowing unauthenticated agents to register
I1213 13:55:39.815824 73 master.cpp:457] Master allowing HTTP frameworks to register without authentication
I1213 13:55:39.815835 73 master.cpp:499] Using default 'crammd5' authenticator
W1213 13:55:39.815845 73 authenticator.cpp:512] No credentials provided, authentication requests will be refused
I1213 13:55:39.815851 73 authenticator.cpp:519] Initializing server SASL
I1213 13:55:39.817608 77 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 3.763381ms
I1213 13:55:39.817780 77 replica.cpp:320] Persisted replica status to STARTING
I1213 13:55:39.817886 77 recover.cpp:477] Replica is in STARTING status
I1213 13:55:39.818063 77 replica.cpp:673] Replica in STARTING status received a broadcasted recover request from (7)@127.0.0.1:40102
I1213 13:55:39.818200 77 recover.cpp:197] Received a recover response from a replica in STARTING status
I1213 13:55:39.818279 77 recover.cpp:568] Updating replica status to VOTING
I1213 13:55:39.819156 77 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 782170ns
I1213 13:55:39.819229 77 replica.cpp:320] Persisted replica status to VOTING
I1213 13:55:39.819284 77 recover.cpp:582] Successfully joined the Paxos group
I1213 13:55:39.819321 77 recover.cpp:466] Recover process terminated
I1213 13:55:39.820673 64 containerizer.cpp:196] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
I1213 13:55:39.823923 73 master.cpp:1847] The newly elected leader is master@127.0.0.1:40102 with id d4536c66-4886-4c88-9a02-7dac4b5f2e6d
I1213 13:55:39.823953 73 master.cpp:1860] Elected as the leading master!
I1213 13:55:39.823961 73 master.cpp:1547] Recovering from registrar
I1213 13:55:39.824046 73 registrar.cpp:332] Recovering registrar
I1213 13:55:39.824394 73 log.cpp:553] Attempting to start the writer
I1213 13:55:39.824522 73 slave.cpp:198] Agent started on 1)@127.0.0.1:40102
I1213 13:55:39.824532 73 slave.cpp:199] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="true" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --image_provisioner_backend="copy" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/usr/libexec/mesos" --logbufsecs="0" --logging_level="INFO" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/var/lib/mesos/0"
W1213 13:55:39.824811 73 slave.cpp:202]
**************************************************
Agent bound to loopback interface! Cannot communicate with remote master(s). You might want to set '--ip' flag to a routable IP address.
**************************************************
I1213 13:55:39.825176 73 slave.cpp:519] Agent resources: cpus(*):2; mem(*):2928; disk(*):34068; ports(*):[31000-32000]
I1213 13:55:39.825285 73 slave.cpp:527] Agent attributes: [ ]
I1213 13:55:39.825294 73 slave.cpp:532] Agent hostname: openshift-installer-native-docker-compose
I1213 13:55:39.825721 73 state.cpp:57] Recovering state from '/var/lib/mesos/0/meta'
I1213 13:55:39.826089 73 status_update_manager.cpp:200] Recovering status update manager
I1213 13:55:39.826179 73 containerizer.cpp:522] Recovering containerizer
I1213 13:55:39.826676 72 replica.cpp:493] Replica received implicit promise request from (21)@127.0.0.1:40102 with proposal 1
I1213 13:55:39.826875 74 provisioner.cpp:253] Provisioner recovery complete
I1213 13:55:39.828560 74 slave.cpp:4782] Finished recovery
I1213 13:55:39.829459 74 slave.cpp:895] New master detected at master@127.0.0.1:40102
I1213 13:55:39.829998 74 slave.cpp:916] No credentials provided. Attempting to register without authentication
I1213 13:55:39.829576 72 leveldb.cpp:304] Persisting metadata (8 bytes) to leveldb took 1.862869ms
I1213 13:55:39.830559 72 replica.cpp:342] Persisted promised to 1
I1213 13:55:39.830961 72 coordinator.cpp:238] Coordinator attempting to fill missing positions
I1213 13:55:39.829550 75 status_update_manager.cpp:174] Pausing sending status updates
I1213 13:55:39.830510 74 slave.cpp:927] Detecting new master
I1213 13:55:39.831506 77 replica.cpp:388] Replica received explicit promise request from (23)@127.0.0.1:40102 for position 0 with proposal 2
I1213 13:55:39.832626 77 leveldb.cpp:341] Persisting action (8 bytes) to leveldb took 920616ns
I1213 13:55:39.832670 77 replica.cpp:712] Persisted action at 0
I1213 13:55:39.833004 75 replica.cpp:537] Replica received write request for position 0 from (24)@127.0.0.1:40102
I1213 13:55:39.834054 75 leveldb.cpp:436] Reading position from leveldb took 40734ns
I1213 13:55:39.835325 75 leveldb.cpp:341] Persisting action (14 bytes) to leveldb took 722995ns
I1213 13:55:39.835367 75 replica.cpp:712] Persisted action at 0
I1213 13:55:39.835605 75 replica.cpp:691] Replica received learned notice for position 0 from @0.0.0.0:0
I1213 13:55:39.836382 75 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 699339ns
I1213 13:55:39.836410 75 replica.cpp:712] Persisted action at 0
I1213 13:55:39.836419 75 replica.cpp:697] Replica learned NOP action at position 0
I1213 13:55:39.836597 75 log.cpp:569] Writer started with ending position 0
I1213 13:55:39.836915 75 leveldb.cpp:436] Reading position from leveldb took 12450ns
I1213 13:55:39.838395 77 registrar.cpp:365] Successfully fetched the registry (0B) in 14.328064ms
I1213 13:55:39.838459 77 registrar.cpp:464] Applied 1 operations in 4626ns; attempting to update the 'registry'
I1213 13:55:39.840064 77 log.cpp:577] Attempting to append 226 bytes to the log
I1213 13:55:39.840198 77 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 1
I1213 13:55:39.840348 77 replica.cpp:537] Replica received write request for position 1 from (25)@127.0.0.1:40102
I1213 13:55:39.842838 77 leveldb.cpp:341] Persisting action (245 bytes) to leveldb took 2.394634ms
I1213 13:55:39.848156 77 replica.cpp:712] Persisted action at 1
I1213 13:55:39.848462 77 replica.cpp:691] Replica received learned notice for position 1 from @0.0.0.0:0
I1213 13:55:39.849424 77 leveldb.cpp:341] Persisting action (247 bytes) to leveldb took 911778ns
I1213 13:55:39.849508 77 replica.cpp:712] Persisted action at 1
I1213 13:55:39.849520 77 replica.cpp:697] Replica learned APPEND action at position 1
I1213 13:55:39.849964 77 registrar.cpp:509] Successfully updated the 'registry' in 11.484928ms
I1213 13:55:39.850054 77 registrar.cpp:395] Successfully recovered registrar
I1213 13:55:39.850160 77 master.cpp:1655] Recovered 0 agents from the Registry (187B) ; allowing 10mins for agents to re-register
I1213 13:55:39.850020 75 log.cpp:596] Attempting to truncate the log to 1
I1213 13:55:39.850306 75 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 2
I1213 13:55:39.850533 75 replica.cpp:537] Replica received write request for position 2 from (26)@127.0.0.1:40102
I1213 13:55:39.851971 75 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 1.375167ms
I1213 13:55:39.852044 75 replica.cpp:712] Persisted action at 2
I1213 13:55:39.852219 75 replica.cpp:691] Replica received learned notice for position 2 from @0.0.0.0:0
I1213 13:55:39.853096 75 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 825996ns
I1213 13:55:39.853198 75 leveldb.cpp:399] Deleting ~1 keys from leveldb took 55808ns
I1213 13:55:39.853235 75 replica.cpp:712] Persisted action at 2
I1213 13:55:39.853245 75 replica.cpp:697] Replica learned TRUNCATE action at position 2
I1213 13:55:39.855962 68 sched.cpp:226] Version: 1.0.1
I1213 13:55:39.857800 75 sched.cpp:330] New master detected at master@127.0.0.1:40102
I1213 13:55:39.857892 75 sched.cpp:341] No credentials provided. Attempting to register without authentication
I1213 13:55:39.858000 75 master.cpp:2424] Received SUBSCRIBE call for framework 'marathon' at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:39.858047 75 master.cpp:2500] Subscribing framework marathon with checkpointing enabled and capabilities [ ]
I1213 13:55:39.858240 75 hierarchical.cpp:271] Added framework 44a35e16-dc32-4f91-afac-33dfff498944-0000
I1213 13:55:39.858304 74 sched.cpp:743] Framework registered with 44a35e16-dc32-4f91-afac-33dfff498944-0000
[2016-12-13 13:55:39,867] INFO Starting scheduler actor (mesosphere.marathon.MarathonSchedulerActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,875] INFO Scheduler actor ready (mesosphere.marathon.MarathonSchedulerActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,876] INFO Became active. Accepting event streaming requests. (mesosphere.marathon.core.event.impl.stream.HttpEventStreamActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:39,876] INFO Reset offerLeadership backoff (mesosphere.marathon.core.election.impl.ExponentialBackoff:pool-1-thread-1)
[2016-12-13 13:55:39,901] INFO Creating tombstone for old twitter commons leader election (mesosphere.marathon.core.election.impl.CuratorElectionService:pool-1-thread-1)
[2016-12-13 13:55:39,908] INFO Registered as 44a35e16-dc32-4f91-afac-33dfff498944-0000 to master 'd4536c66-4886-4c88-9a02-7dac4b5f2e6d' (mesosphere.marathon.MarathonScheduler$$EnhancerByGuice$$1ef061b0:Thread-14)
[2016-12-13 13:55:39,908] INFO Store framework id: value: "44a35e16-dc32-4f91-afac-33dfff498944-0000"
 (mesosphere.util.state.FrameworkIdUtil:Thread-14)
[2016-12-13 13:55:39,925] INFO Received reviveOffers notification: SchedulerRegisteredEvent (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-4)
I1213 13:55:39.948446 72 master.cpp:4550] Registering agent at slave(1)@127.0.0.1:40102 (openshift-installer-native-docker-compose) with id d4536c66-4886-4c88-9a02-7dac4b5f2e6d-S0
I1213 13:55:39.948563 72 registrar.cpp:464] Applied 1 operations in 10900ns; attempting to update the 'registry'
I1213 13:55:39.949964 77 log.cpp:577] Attempting to append 424 bytes to the log
I1213 13:55:39.950723 77 coordinator.cpp:348] Coordinator attempting to write APPEND action at position 3
I1213 13:55:39.951776 76 replica.cpp:537] Replica received write request for position 3 from (27)@127.0.0.1:40102
I1213 13:55:39.952749 76 leveldb.cpp:341] Persisting action (443 bytes) to leveldb took 891144ns
I1213 13:55:39.952900 76 replica.cpp:712] Persisted action at 3
I1213 13:55:39.953106 76 replica.cpp:691] Replica received learned notice for position 3 from @0.0.0.0:0
I1213 13:55:39.953799 76 leveldb.cpp:341] Persisting action (445 bytes) to leveldb took 501557ns
I1213 13:55:39.953876 76 replica.cpp:712] Persisted action at 3
I1213 13:55:39.953902 76 replica.cpp:697] Replica learned APPEND action at position 3
I1213 13:55:39.954263 76 registrar.cpp:509] Successfully updated the 'registry' in 5.665024ms
I1213 13:55:39.954856 71 hierarchical.cpp:478] Added agent d4536c66-4886-4c88-9a02-7dac4b5f2e6d-S0 (openshift-installer-native-docker-compose) with cpus(*):2; mem(*):2928; disk(*):34068; ports(*):[31000-32000] (allocated: )
I1213 13:55:39.954433 75 log.cpp:596] Attempting to truncate the log to 3
I1213 13:55:39.955282 71 coordinator.cpp:348] Coordinator attempting to write TRUNCATE action at position 4
I1213 13:55:39.955612 71 replica.cpp:537] Replica received write request for position 4 from (28)@127.0.0.1:40102
I1213 13:55:39.955880 76 master.cpp:4619] Registered agent d4536c66-4886-4c88-9a02-7dac4b5f2e6d-S0 at slave(1)@127.0.0.1:40102 (openshift-installer-native-docker-compose) with cpus(*):2; mem(*):2928; disk(*):34068; ports(*):[31000-32000]
I1213 13:55:39.956233 76 master.cpp:5725] Sending 1 offers to framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:39.956048 75 slave.cpp:1095] Registered with master master@127.0.0.1:40102; given agent ID d4536c66-4886-4c88-9a02-7dac4b5f2e6d-S0
I1213 13:55:39.957394 73 status_update_manager.cpp:181] Resuming sending status updates
I1213 13:55:39.957774 75 slave.cpp:1155] Forwarding total oversubscribed resources
I1213 13:55:39.958096 75 master.cpp:5002] Received update of agent d4536c66-4886-4c88-9a02-7dac4b5f2e6d-S0 at slave(1)@127.0.0.1:40102 (openshift-installer-native-docker-compose) with total oversubscribed resources
I1213 13:55:39.958235 75 hierarchical.cpp:542] Agent d4536c66-4886-4c88-9a02-7dac4b5f2e6d-S0 (openshift-installer-native-docker-compose) updated with oversubscribed resources (total: cpus(*):2; mem(*):2928; disk(*):34068; ports(*):[31000-32000], allocated: cpus(*):2; mem(*):2928; disk(*):34068; ports(*):[31000-32000])
I1213 13:55:39.956370 71 leveldb.cpp:341] Persisting action (16 bytes) to leveldb took 663264ns
I1213 13:55:39.958457 71 replica.cpp:712] Persisted action at 4
I1213 13:55:39.958593 71 replica.cpp:691] Replica received learned notice for position 4 from @0.0.0.0:0
I1213 13:55:39.959283 71 leveldb.cpp:341] Persisting action (18 bytes) to leveldb took 622339ns
I1213 13:55:39.959475 71 leveldb.cpp:399] Deleting ~2 keys from leveldb took 25791ns
I1213 13:55:39.959547 71 replica.cpp:712] Persisted action at 4
I1213 13:55:39.959604 71 replica.cpp:697] Replica learned TRUNCATE action at position 4
I1213 13:55:40.008047 73 master.cpp:3951] Processing DECLINE call for offers: [ d4536c66-4886-4c88-9a02-7dac4b5f2e6d-O0 ] for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
[2016-12-13 13:55:44,622] INFO Received TimedCheck (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 13:55:44,622] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-2)
I1213 13:55:44.624449 73 master.cpp:4046] Processing REVIVE call for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:44.626070 73 hierarchical.cpp:1022] Removed offer filters for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000
I1213 13:55:44.626485 73 master.cpp:5725] Sending 1 offers to framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
[2016-12-13 13:55:44,627] INFO 2 further revives still needed. Repeating reviveOffers according to --revive_offers_repetitions 3 (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-7)
[2016-12-13 13:55:44,630] INFO => Schedule next revive at 2016-12-13T13:55:49.621Z in 4995 milliseconds, adhering to --min_revive_offers_interval 5000 (ms) (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-7)
I1213 13:55:44.638694 71 master.cpp:3951] Processing DECLINE call for offers: [ d4536c66-4886-4c88-9a02-7dac4b5f2e6d-O1 ] for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:49.643371 71 master.cpp:4046] Processing REVIVE call for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
[2016-12-13 13:55:49,641] INFO Received TimedCheck (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-13 13:55:49,642] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-13 13:55:49,642] INFO 1 further revives still needed. Repeating reviveOffers according to --revive_offers_repetitions 3 (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-9)
[2016-12-13 13:55:49,643] INFO => Schedule next revive at 2016-12-13T13:55:54.640Z in 4999 milliseconds, adhering to --min_revive_offers_interval 5000 (ms) (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-9)
I1213 13:55:49.649920 71 hierarchical.cpp:1022] Removed offer filters for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000
I1213 13:55:49.652896 71 master.cpp:5725] Sending 1 offers to framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:49.664896 75 master.cpp:3951] Processing DECLINE call for offers: [ d4536c66-4886-4c88-9a02-7dac4b5f2e6d-O2 ] for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:54.672166 77 master.cpp:4046] Processing REVIVE call for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:54.672760 77 hierarchical.cpp:1022] Removed offer filters for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000
[2016-12-13 13:55:54,671] INFO Received TimedCheck (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 13:55:54,672] INFO => revive offers NOW, canceling any scheduled revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-2)
I1213 13:55:54.675570 77 master.cpp:5725] Sending 1 offers to framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:54.688268 75 master.cpp:3951] Processing DECLINE call for offers: [ d4536c66-4886-4c88-9a02-7dac4b5f2e6d-O3 ] for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
[2016-12-13 13:55:54,848] INFO initiate task reconciliation (mesosphere.marathon.MarathonSchedulerActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:54,859] INFO Requesting task reconciliation with the Mesos master (mesosphere.marathon.SchedulerActions:ForkJoinPool-2-worker-7)
I1213 13:55:54.862715 77 master.cpp:5398] Performing implicit task state reconciliation for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
[2016-12-13 13:55:54,862] INFO task reconciliation has finished (mesosphere.marathon.MarathonSchedulerActor:marathon-akka.actor.default-dispatcher-2)
[2016-12-13 13:55:54,871] INFO Message [mesosphere.marathon.MarathonSchedulerActor$TasksReconciled$] from Actor[akka://marathon/user/MarathonScheduler/$a#-694294900] to Actor[akka://marathon/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'. (akka.actor.DeadLetterActorRef:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:57,020] INFO no interest in offers for reservation reconciliation anymore. (mesosphere.marathon.core.matcher.reconcile.impl.OffersWantedForReconciliationActor:marathon-akka.actor.default-dispatcher-5)
[2016-12-13 13:55:57,024] INFO Received offers NOT WANTED notification, canceling 0 revives (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-6)
[2016-12-13 13:55:57,026] INFO => Suppress offers NOW (mesosphere.marathon.core.flow.impl.ReviveOffersActor:marathon-akka.actor.default-dispatcher-6)
I1213 13:55:57.036034 74 master.cpp:2882] Processing SUPPRESS call for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102
I1213 13:55:57.036320 74 hierarchical.cpp:1002] Suppressed offers for framework 44a35e16-dc32-4f91-afac-33dfff498944-0000
I1213 13:56:09.705217 73 master.cpp:5463] Performing explicit task state reconciliation for 1 tasks of framework 44a35e16-dc32-4f91-afac-33dfff498944-0000 (marathon) at scheduler-a6e2761a-1f40-4e86-bc73-63171f1435aa@127.0.0.1:40102

This has resolved the issue: the master’s “Flags at startup” line above now shows --work_dir="/var/lib/mesos", confirming that the environment variable has been picked up.
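
As an additional sanity check (not part of the session above), Marathon’s REST API can be queried from the Docker host; given the --net=host and --http_port=7070 settings of our docker run command, the /v2/info endpoint should return the running Marathon version and the registered framework ID:

(dockerhost)$ curl -s http://localhost:7070/v2/info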

Summary

In this blog post, we have performed the following tasks:

  1. Started a Marathon + Mesos environment with ZooKeeper, Mesos Master, Mesos Agent (Slave) and Mesosphere Marathon.
  2. Started a “Hello World” application via Marathon in a default Mesos container (see the sketch of such an application definition right after this list).
  3. Tested the resiliency by killing a process managed by Marathon.
  4. Started a “Hello World” application via Marathon in a Docker container.
  5. Showed what happens if all CPU resources are reserved.
  6. Described many errors we have observed (see the Appendix).
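
For reference, applications like the “Hello World” examples of tasks 2 and 4 are created through Marathon’s REST API. The following is only an illustrative sketch (the app ID, command and resource values are made up for this example, and we assume Marathon is listening on the --http_port=7070 used throughout this post):

(dockerhost)$ curl -X POST http://localhost:7070/v2/apps \
      -H "Content-Type: application/json" \
      -d '{
            "id": "/hello-world",
            "cmd": "while true; do echo Hello World; sleep 5; done",
            "cpus": 0.1,
            "mem": 32,
            "instances": 1
          }'

For the Docker variant of task 4, the same JSON additionally carries a container section, e.g. "container": {"type": "DOCKER", "docker": {"image": "ubuntu"}}, so that Mesos launches the command inside a Docker container instead of a plain Mesos container.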

Our observations were:

  • Contrary to our expectation that Mesos and Marathon are production-ready, the Mesos applications do not seem to be very robust: we have seen critical errors like “suicides” of the Mesos Master, ZooKeeper and Mesosphere Marathon. I.e., monitoring and watchdogs are key when using Mesos + Marathon in a production environment. Apart from the critical errors, there are some minor problems like “connection lost” messages on the Mesos Web Portal and a non-working “Application Restart” function on Mesosphere Marathon. All problems are discussed in Appendix A.1 to A.6.
  • We have demonstrated in Step 8 that Mesos does not allow over-subscription of resources by default. I.e., if you have 2 CPU cores and reserve 1 CPU core for each of 2 instances, all CPU resources are occupied and no further task instance can be started (it will remain in the “waiting” state). Mesos’ CPU reservation feels similar to a hard assignment of resources (since over-subscription is not supported by default, as we have seen above), but under the hood, Mesos does not apply hard limits on CPU usage unless the Mesos agent (slave) has --cgroups_enable_cfs (CFS = Completely Fair Scheduler) enabled; see the agent start sketch after this list. See also the second answer on this StackOverflow question. For more information on over-subscription in Mesos, see this Apache Mesos Documentation page.
  • Mesos supports processes run in Mesos containers as well as processes run in Docker containers.
  • We could demonstrate that Mesosphere Marathon does what it is designed for: it makes sure that an application is restarted if it has crashed or has been killed.
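
As a concrete illustration of the --cgroups_enable_cfs remark above, the sketch below shows how a (non-Dockerized, Linux) Mesos agent could be started with CFS-based hard CPU limits enabled. The master address and work directory are placeholders, and cgroups isolation requires the agent to run as root:

# requires root; cgroups isolation is Linux-only
(host)$ mesos-slave --master=zk://localhost:2181/mesos \
                    --work_dir=/var/lib/mesos \
                    --isolation=cgroups/cpu,cgroups/mem \
                    --cgroups_enable_cfs=true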