Category Archives: Microservices

kubernetes_on_macOS

Requirements

Minikube requires that VT-x/AMD-v virtualization is enabled in BIOS. To check that this is enabled on OSX / macOS run:

sysctl -a | grep machdep.cpu.features | grep VMX

If there’s output, you’re good!

Prerequisites

  • kubectl
  • docker (for Mac)
  • minikube
  • virtualbox

brew update && brew install kubectl && brew cask install docker minikube virtualbox

Verify

docker --version                # Docker version 17.09.0-ce, build afdb6d4
docker-compose --version        # docker-compose version 1.16.1, build 6d1ac21
docker-machine --version        # docker-machine version 0.12.2, build 9371605
minikube version                # minikube version: v0.22.3
kubectl version --client        # Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-12T00:45:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}      

Start

minikube start

This can take a while. Expected output:

Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.

Great! You now have a running Kubernetes cluster locally. Minikube started a virtual machine for you, and a Kubernetes cluster is now running in that VM.

Check k8s

kubectl get nodes

Should output something like:

NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    40s       v1.7.5

Use minikube’s built-in docker daemon:

eval $(minikube docker-env)

Running docker ps should now output something like:

CONTAINER ID        IMAGE                                         COMMAND                 CREATED             STATUS              PORTS               NAMES
e97128790bf9        gcr.io/google-containers/kube-addon-manager   "/opt/kube-addons.sh"   22 seconds ago      Up 22 seconds                           k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_c654b2f084cf26941c334a2c3d6db53d_0
69707e54d1d0        gcr.io/google_containers/pause-amd64:3.0      "/pause"                33 seconds ago      Up 33 seconds                           k8s_POD_kube-addon-manager-minikube_kube-system_c654b2f084cf26941c334a2c3d6db53d_0
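
When you later want to point your shell back at your regular local Docker daemon, minikube can print the matching unset commands (this assumes your minikube version supports the -u/--unset flag):

eval $(minikube docker-env -u)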

Build, deploy and run an image on your local k8s setup

First, set up a local registry so Kubernetes can pull the image(s) from there:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
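
To sanity-check that the registry is running, you can query its standard v2 catalog endpoint (the list stays empty until an image is pushed):

curl http://localhost:5000/v2/_catalog      # {"repositories":[]}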

Build

First off, store all the files from this gist (Dockerfile, my-app.yml, index.html) locally in a new (empty) directory.

If you want to follow this guide to the letter, build the Dockerfile there by running:

docker build . --tag my-app

You should now have an image named ‘my-app’ locally (or use your own image, of course); check with docker images. You can then tag it for your local docker registry:

docker tag my-app localhost:5000/my-app:0.1.0
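
Tagging only adds a local name; to actually serve the image from the registry started above, you would also push it:

docker push localhost:5000/my-app:0.1.0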

Running docker images should now output something like the following:

REPOSITORY                                             TAG                 IMAGE ID            CREATED             SIZE
my-app                                                 latest              cc949ad8c8d3        44 seconds ago      89.3MB
localhost:5000/my-app                                  0.1.0               cc949ad8c8d3        44 seconds ago      89.3MB
httpd                                                  2.4-alpine          fe26194c0b94        7 days ago          89.3MB

Deploy and run

Store the my-app.yml file below on your system and run the following:

kubectl create -f my-app.yml

You should now see your pod and your service:

kubectl get all

The configuration exposes my-app outside of the cluster. You can get the address to access it by running:

minikube service my-app --url

This should give output like http://192.168.99.100:30304 (the port will most likely differ). Open that address in your favorite browser and you should see “Hello world!”. You just accessed your application from outside of your local Kubernetes cluster!
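
If you prefer the terminal over a browser, the same check can be done with curl (using the service name my-app from the YAML):

curl $(minikube service my-app --url)       # should print the "Hello world!" page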

Kubernetes GUI

minikube dashboard

Delete the my-app deployment and service

kubectl delete deploy my-app
kubectl delete service my-app

You’re now good to go and deploy other images!

Reset everything

minikube stop;
minikube delete;
rm -rf ~/.minikube .kube;
brew uninstall kubectl;
brew cask uninstall docker virtualbox minikube;

TODO

Will try to convert this to xhyve when possible.

Version

Last tested on October 20th, 2017, on macOS Sierra 10.12.6

Running a program with jetty-runner.jar

First, let's spell out how Linux/Unix handles the standard streams:

Normally, every Unix/Linux command opens three files when it runs:
Standard input (stdin): file descriptor 0; by default a Unix program reads its data from stdin.
Standard output (stdout): file descriptor 1; by default a Unix program writes its data to stdout.
Standard error (stderr): file descriptor 2; a Unix program writes error messages to the stderr stream.
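
A minimal illustration of those descriptors, redirecting the two output streams separately so the error message ends up in a different file from the normal output:

ls /etc /no-such-dir 1> out.log 2> err.log   # fd 1 -> out.log, fd 2 -> err.log
cat err.log                                  # the "No such file or directory" error is here, not in out.log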

Sometimes a program floods stdout (the console) with logs right after it starts, which is quite annoying. Based on the points above, I tidied up the scripts today.

Start script

#!/bin/bash
java -Xmx512m -Xms64m -XX:PermSize=256M -XX:MaxPermSize=512M -jar jetty-runner.jar --port 12345 server-0.0.1-SNAPSHOT.war  >/web/program/gw/logs/stdout.log 2>&1 &


echo $! > ./program.pid

Notes on the start script:

1. java -jar plus a few JVM options
2. The jetty-runner.jar arguments
3. Redirect standard output and standard error (file descriptors 1 and 2) to /web/program/gw/logs/stdout.log
4. & starts the process in the background
5. Write the program's PID to program.pid

Stop script

#!/bin/bash
pkill -F ./program.pid
rm ./program.pid

The stop script is simple: just pkill -F against the PID file, then delete the file.

A small takeaway

java -Xmx512m -Xms64m -XX:PermSize=256M -XX:MaxPermSize=512M -jar jetty-runner.jar --port 12345 server-0.0.1-SNAPSHOT.war  >/web/program/gw/logs/stdout.log 2>&1 &

The redirection part, >/web/program/gw/logs/stdout.log 2>&1 (highlighted in red in the original post), was not there before. The problem was that after startup the console kept flooding with logs, which was maddening. Adding it sends both standard output and standard error to that file.
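
If you do not need to keep those startup logs at all, a common variant (not part of the original script) simply discards both streams instead:

java -Xmx512m -Xms64m -XX:PermSize=256M -XX:MaxPermSize=512M -jar jetty-runner.jar --port 12345 server-0.0.1-SNAPSHOT.war > /dev/null 2>&1 &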

Istio, the microservice management framework open-sourced by Google, IBM, and Lyft

This article is based on the official documentation. It covers installing Istio 0.1.5 and deploying the bookinfo microservice sample to exercise Istio's features.

The YAML files used in this article can be found in the manifests/istio directory of kubernetes-handbook. All images have been replaced with addresses from my private image registry; please change them back to the official images as needed.

Environment

  • CentOS 7.3.1611
  • Docker 1.12.6
  • Kubernetes 1.6.0

Installation

1. Download the release package

Download page: https://github.com/istio/istio/releases

Download the latest Linux release:

wget https://github.com/istio/istio/releases/download/0.1.5/istio-0.1.5-linux.tar.gz

2. Unpack
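
Unpack the archive and change into the extracted directory (the directory name is assumed here and may differ):

tar -xzf istio-0.1.5-linux.tar.gz
cd istio-0.1.5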

After unpacking, the directory structure looks like this:

.
├── bin
│   └── istioctl
├── install
│   └── kubernetes
│       ├── addons
│       │   ├── grafana.yaml
│       │   ├── prometheus.yaml
│       │   ├── servicegraph.yaml
│       │   └── zipkin.yaml
│       ├── istio-auth.yaml
│       ├── istio-rbac-alpha.yaml
│       ├── istio-rbac-beta.yaml
│       ├── istio.yaml
│       ├── README.md
│       └── templates
│           ├── istio-auth
│           │   ├── istio-auth-with-cluster-ca.yaml
│           │   ├── istio-cluster-ca.yaml
│           │   ├── istio-egress-auth.yaml
│           │   ├── istio-ingress-auth.yaml
│           │   └── istio-namespace-ca.yaml
│           ├── istio-egress.yaml
│           ├── istio-ingress.yaml
│           ├── istio-manager.yaml
│           └── istio-mixer.yaml
├── istio.VERSION
├── LICENSE
└── samples
    ├── apps
    │   ├── bookinfo
    │   │   ├── bookinfo.yaml
    │   │   ├── cleanup.sh
    │   │   ├── destination-ratings-test-delay.yaml
    │   │   ├── loadbalancing-policy-reviews.yaml
    │   │   ├── mixer-rule-additional-telemetry.yaml
    │   │   ├── mixer-rule-empty-rule.yaml
    │   │   ├── mixer-rule-ratings-denial.yaml
    │   │   ├── mixer-rule-ratings-ratelimit.yaml
    │   │   ├── README.md
    │   │   ├── route-rule-all-v1.yaml
    │   │   ├── route-rule-delay.yaml
    │   │   ├── route-rule-reviews-50-v3.yaml
    │   │   ├── route-rule-reviews-test-v2.yaml
    │   │   ├── route-rule-reviews-v2-v3.yaml
    │   │   └── route-rule-reviews-v3.yaml
    │   ├── httpbin
    │   │   ├── httpbin.yaml
    │   │   └── README.md
    │   └── sleep
    │       ├── README.md
    │       └── sleep.yaml
    └── README.md

11 directories, 41 files

As the file listing shows, the package includes the Kubernetes YAML files, the sample applications, and the installation templates.

3. Install istioctl

Copy ./bin/istioctl to a directory in your $PATH.
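
For example, assuming /usr/local/bin is on your $PATH:

cp ./bin/istioctl /usr/local/bin/
which istioctl    # confirm it is now found on the PATH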

4. Check RBAC

Since the Kubernetes version we installed, 1.6.0, supports RBAC by default, this step can be skipped. If you are using a different version of Kubernetes, please follow the official documentation.

Run the following command; the correct output looks like this:

$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1

5. Create the role bindings

$ kubectl create -f install/kubernetes/istio-rbac-beta.yaml
clusterrole "istio-manager" created
clusterrole "istio-ca" created
clusterrole "istio-sidecar" created
clusterrolebinding "istio-manager-admin-role-binding" created
clusterrolebinding "istio-ca-role-binding" created
clusterrolebinding "istio-ingress-admin-role-binding" created
clusterrolebinding "istio-sidecar-role-binding" created

Note: this file in the official release package contains a RoleBinding error. The bindings should be cluster-level ClusterRoleBindings, but the code in the release uses only plain RoleBindings. See the issue: Istio manager cannot list or create k8s TPR when RBAC enabled #327

6. Install the Istio core components

The images used are:

docker.io/istio/mixer:0.1.5
docker.io/istio/manager:0.1.5
docker.io/istio/proxy_debug:0.1.5

We will not enable Istio Auth for the time being.

Note: in all the YAML files used in this article, type: LoadBalancer has been removed in favor of the default ClusterIP; with a Traefik ingress configured, the services can then be reached from outside the cluster. See: installing Traefik ingress.

kubectl apply -f install/kubernetes/istio.yaml
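
After applying, you can check that the Istio components have come up; with Istio 0.1.5 they are created in the current namespace unless you edited the YAML (names and counts below are only indicative):

kubectl get pods,svc | grep istio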

7. Install the monitoring add-ons

The images used are:

docker.io/istio/grafana:0.1.5
quay.io/coreos/prometheus:v1.1.1
gcr.io/istio-testing/servicegraph:latest
docker.io/openzipkin/zipkin:latest

For easier downloading, I mirrored two of the images to TenxCloud:

index.tenxcloud.com/jimmy/prometheus:v1.1.1
index.tenxcloud.com/jimmy/servicegraph:latest

Install the add-ons:

kubectl apply -f install/kubernetes/addons/prometheus.yaml
kubectl apply -f install/kubernetes/addons/grafana.yaml
kubectl apply -f install/kubernetes/addons/servicegraph.yaml
kubectl apply -f install/kubernetes/addons/zipkin.yaml

Add entries for the services above to the Traefik ingress configuration, along with an entry for istio-ingress:

    - host: grafana.istio.io
      http:
        paths:
        - path: /
          backend:
            serviceName: grafana
            servicePort: 3000
    - host: servicegraph.istio.io
      http:
        paths:
        - path: /
          backend:
            serviceName: servicegraph
            servicePort: 8088
    - host: prometheus.istio.io
      http:
        paths:
        - path: /
          backend:
            serviceName: prometheus
            servicePort: 9090
    - host: zipkin.istio.io
      http:
        paths:
        - path: /
          backend:
            serviceName: zipkin
            servicePort: 9411
    - host: ingress.istio.io
      http:
        paths:
        - path: /
          backend:
            serviceName: istio-ingress
            servicePort: 80

Testing

We will test with the bookinfo microservice sample application provided by Istio.

The images used by this application are:

istio/examples-bookinfo-details-v1
istio/examples-bookinfo-ratings-v1
istio/examples-bookinfo-reviews-v1
istio/examples-bookinfo-reviews-v2
istio/examples-bookinfo-reviews-v3
istio/examples-bookinfo-productpage-v1

The application architecture is shown below:

BookInfo sample application architecture diagram

Deploy the application

kubectl create -f <(istioctl kube-inject -f samples/apps/bookinfo/bookinfo.yaml)

The istioctl kube-inject command adds the Envoy sidecar configuration to bookinfo.yaml. See the Istio documentation.
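
To confirm that the sidecars were actually injected, list the bookinfo pods; each one should show an extra container next to the application container (the exact READY count depends on the Istio version):

kubectl get pods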

Add an entry to your local /etc/hosts mapping the edge node VIP to ingress.istio.io. For the detailed steps, see: edge node configuration.

Open http://ingress.istio.io/productpage in your browser.
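
You can also check reachability from the command line (assuming the /etc/hosts entry above is in place):

curl -I http://ingress.istio.io/productpage    # expect an HTTP 200 once all pods are ready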

Monitoring

Keep refreshing the productpage page and you will see screens like the following in each of the monitoring tools.
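
Instead of refreshing by hand, you can generate a bit of traffic from a shell, for example:

for i in $(seq 1 100); do curl -s -o /dev/null http://ingress.istio.io/productpage; done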

Grafana page

http://grafana.istio.io

Istio Grafana dashboard

Prometheus page

http://prometheus.istio.io

Prometheus page

Zipkin page

http://zipkin.istio.io

Zipkin page

ServiceGraph page

http://servicegraph.istio.io/dotviz

This page can be used to view the dependencies between services.

To get the result in JSON format, visit http://servicegraph.istio.io/graph

ServiceGraph page