Deploying OKD 4.15.0-0, the community edition of the OpenShift Container Platform

Reference: Deploying OKD 4.10.0, the community edition of the OpenShift Container Platform

OpenShift — Deploying OKD 4.5 (51CTO blog)

OKD-4.15--deploying_installer-provisioned_clusters_on_bare_metal-use

Version index: Home | OKD Documentation

Official documentation: Installing a cluster on vSphere

Installing a cluster on vSphere with customizations

coreos-installer subcommands, options, and arguments

1. Introduction to OpenShift

Red Hat OpenShift is a leading enterprise Kubernetes container platform that provides a foundation for on-premises, hybrid, and multi-cloud deployments. With automated operations and streamlined lifecycle management, OpenShift lets development teams build and deploy new applications and helps operations teams provision, manage, and scale the Kubernetes platform. OpenShift also ships a CLI (oc) that supports a superset of the operations provided by the Kubernetes CLI (kubectl).
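
A quick illustration of that superset (a minimal sketch; the login URL, project name, and application name are just examples for the cluster built in this article):

# standard Kubernetes operations, identical to kubectl
oc get nodes
oc get pods -n kube-system

# OpenShift-specific additions that kubectl does not have
oc login https://api.okd4.hpcloud.fun:6443 -u kubeadmin   # authenticate against the cluster API
oc new-project demo                                       # create a project (namespace plus metadata)
oc new-app nginx                                          # deploy an application from an image
oc whoami --show-console                                  # print the web console URL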

1.1 OpenShift comes in several editions; the two main ones are:

OKD (The Origin Community Distribution of Kubernetes, also read as OpenShift Kubernetes Distribution; formerly OpenShift Origin), the open-source community edition. It is the upstream, community-supported version of Red Hat OpenShift Container Platform (OCP).

OCP (Red Hat OpenShift Container Platform), the enterprise edition and OpenShift's private-cloud product. It can be installed and used without purchasing a subscription, but no technical support is provided in that case.

1.2 OpenShift offers two installation methods:

IPI (Installer Provisioned Infrastructure): an installer-provisioned infrastructure cluster. Bootstrapping and provisioning of the infrastructure are delegated to the installer instead of being done by hand; the installer creates all of the networking, machines, and operating systems needed to support the cluster.

UPI (User Provisioned Infrastructure): a user-provisioned infrastructure cluster. The user must supply all cluster infrastructure and resources themselves, including the bootstrap node, networking, load balancing, storage, and every cluster node.

This article creates a set of virtual machines on VMware vSphere 7.0.3 and manually deploys an OpenShift OKD 4.15 cluster on them in UPI mode, i.e. the Bare Metal (UPI) approach described in the official documentation.

1.3 Installation architecture diagram:

1.4 Installation workflow diagram:

2. Installing the OKD community edition

Version index: Home | OKD Documentation

Official documentation: Installing a cluster on vSphere

Installing a cluster on vSphere with customizations

Note: most of the content in this article follows the examples in the official documentation.

2.1 Basic cluster information

Cluster name: okd4
Base domain: hpcloud.fun
Cluster size: 3 master nodes, 2 worker nodes

| Host | Description |
|------|-------------|
| One temporary bootstrap machine | The cluster needs the bootstrap machine to deploy the OKD cluster onto the three control-plane machines; it can be removed once the cluster is installed. |
| Three control-plane machines (master) | The control-plane machines run the Kubernetes and OKD services that make up the control plane. |
| At least two compute machines (worker), also called worker machines | Workloads requested by OKD users run on the compute machines. |
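
The cluster name and base domain above are exactly the values that end up in install-config.yaml when that file is generated later in the installation. A minimal sketch of the mapping, assuming the default OVNKubernetes network type and bare-metal (platform: none) UPI settings; the pullSecret and sshKey values are placeholders:

cat > install-config.yaml <<EOF
apiVersion: v1
baseDomain: hpcloud.fun              # base domain from section 2.1
metadata:
  name: okd4                         # cluster name from section 2.1
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0                        # workers are booted by hand in UPI mode
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                           # bare metal / UPI
pullSecret: '<pull-secret>'          # placeholder
sshKey: '<ssh-public-key>'           # placeholder
EOF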

2.2 Node inventory:

Only the bastion node needs to be created up front. Once the bastion is ready, the remaining nodes are booted and bootstrapped one at a time by hand, so there is no need to create them in advance.

OKD DNS names:

bastion.okd4.hpcloud.fun 192.168.31.20
registry.hpcloud.fun
*.apps.okd4.hpcloud.fun
api-int.okd4.hpcloud.fun
api.okd4.hpcloud.fun

bootstrap.okd4.hpcloud.fun 192.168.31.21
master0.okd4.hpcloud.fun 192.168.31.22
master1.okd4.hpcloud.fun 192.168.31.23
master2.okd4.hpcloud.fun 192.168.31.24
worker0.okd4.hpcloud.fun 192.168.31.25
worker1.okd4.hpcloud.fun 192.168.31.26

Note: the CPU and memory figures below are minimums; with anything less the cluster will not start properly.

| Hostname   | FQDN                       | IP address    | Node type          | CPU | Mem | Disk | OS                 |
|------------|----------------------------|---------------|--------------------|-----|-----|------|--------------------|
| bastion    | bastion.okd4.hpcloud.fun   | 192.168.31.20 | Bastion node       | 2C  | 4G  | 100G | Ubuntu 20.04.4 LTS |
| bootstrap  | bootstrap.okd4.hpcloud.fun | 192.168.31.21 | Bootstrap node     | 4C  | 16G | 100G | Fedora CoreOS 35   |
| master0    | master0.okd4.hpcloud.fun   | 192.168.31.22 | Control-plane node | 4C  | 16G | 100G | Fedora CoreOS 35   |
| master1    | master1.okd4.hpcloud.fun   | 192.168.31.23 | Control-plane node | 4C  | 16G | 100G | Fedora CoreOS 35   |
| master2    | master2.okd4.hpcloud.fun   | 192.168.31.24 | Control-plane node | 4C  | 16G | 100G | Fedora CoreOS 35   |
| worker0    | worker0.okd4.hpcloud.fun   | 192.168.31.25 | Worker node        | 2C  | 8G  | 100G | Fedora CoreOS 35   |
| worker1    | worker1.okd4.hpcloud.fun   | 192.168.31.26 | Worker node        | 2C  | 8G  | 100G | Fedora CoreOS 35   |
| api server | api.okd4.hpcloud.fun       | 192.168.31.20 | Kubernetes API     |     |     |      |                    |
| api-int    | api-int.okd4.hpcloud.fun   | 192.168.31.20 | Kubernetes API     |     |     |      |                    |
| apps       | *.apps.okd4.hpcloud.fun    | 192.168.31.20 | Apps               |     |     |      |                    |
| registry   | registry.hpcloud.fun       | 192.168.31.20 | Image registry     |     |     |      |                    |

2.3 Node types:

Bastion node: the base node (bastion host). It provides the HTTP service and the local registry used for the installation, and all Ignition files and the ssh-rsa key pair required by CoreOS are generated on this node. Any OS type can be used.

Bootstrap node: the bootstrap machine; it can be deleted once bootstrapping is finished. Its OS must be Fedora CoreOS.

Master nodes: the OpenShift control-plane nodes; their OS must be Fedora CoreOS.

Worker nodes: the OpenShift compute nodes; their OS can be Fedora CoreOS or, following the upstream OCP documentation, RHEL 8.4 / RHEL 8.5.

2.4 Components

The following components need to be installed on the bastion node:

| Component         | Description                         |
|-------------------|-------------------------------------|
| Docker            | Container runtime                   |
| Bind9             | DNS server                          |
| Haproxy           | Load balancer                       |
| Nginx             | Web server                          |
| Harbor            | Container image registry            |
| OpenShift CLI     | The oc command-line client          |
| OpenShift-Install | The openshift-install installer     |

2.5 Base resource information

Base resource overview after the deployment is complete:

2.6 OpenShift node information

OpenShift node overview after the deployment is complete:

3. Preparing the bastion environment

First create a bastion node with a static IP address to act as the base deployment node. There is no requirement on its OS type; Ubuntu is used here. Unless stated otherwise, all of the following steps are executed on this node.

3.1 Set the hostname

# used in this deployment
hostnamectl set-hostname bastion-vm-20

# reference
hostnamectl set-hostname bastion.okd4.hpcloud.fun

3.2 Install Docker

# used in this deployment
# install docker
apt install docker.io -y
# uninstall docker (only if it ever needs to be removed)
apt remove docker.io -y

# reference: install via the upstream convenience script instead
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
systemctl status docker
docker version
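
Before the container-based services below are started, it is worth confirming the runtime is healthy (a quick check; pulling hello-world assumes outbound network access):

systemctl enable --now docker        # ensure the service is running and starts on boot
docker info | grep -i 'server version'
docker run --rm hello-world          # pull and run a tiny test image once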

3.3 Check the node's IP address

# current output
root@bastion-vm-20:/opt# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:6d:8f:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.20/24 brd 192.168.31.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 2408:822e:ca2:56b0:20c:29ff:fe6d:8ff1/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 225808sec preferred_lft 139408sec
    inet6 fe80::20c:29ff:fe6d:8ff1/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ac:38:5c:68 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

# ================================================= #
# previous output (from the earlier reference deployment)
root@bastion:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:99:0d:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.20/24 brd 192.168.31.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe99:d57/64 scope link 
       valid_lft forever preferred_lft forever

3.4 Check the OS release

# current output
root@bastion-vm-20:/opt# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.6 LTS
Release:        20.04
Codename:       focal

# previous output (from the earlier reference deployment)
root@bastion:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal

4. Installing Bind -- (skip this section when using a public domain)

In an OKD deployment, the following components require DNS name resolution:

Kubernetes API
The OKD application access entry point
The bootstrap, control-plane, and compute nodes

Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control-plane machines, and the compute nodes. DNS A/AAAA or CNAME records are used for name resolution, and PTR records for reverse resolution. The reverse records matter because Fedora CoreOS (FCOS) uses them to set the hostname of every node unless a hostname is supplied by DHCP; they are also used to generate the certificate signing requests (CSRs) that OKD needs in order to operate.

In each record, <cluster_name> is the cluster name and <base_domain> is the base domain specified in the install-config.yaml file. A complete DNS record takes the form <component>.<cluster_name>.<base_domain>.

4.1 Create the bind configuration directories

mkdir -p /etc/bind
mkdir -p /var/lib/bind
mkdir -p /var/cache/bind

4.2 Create the main bind configuration file

cat >/etc/bind/named.conf<<EOF
options {
        directory "/var/cache/bind";
        listen-on { any; };
        listen-on-v6 { any; };
        allow-query { any; };
        allow-query-cache { any; };
        recursion yes;
        allow-recursion { any; };
        allow-transfer { none; };
        allow-update { none; };
        auth-nxdomain no;
        dnssec-validation no;
        forward first;
        forwarders {
          114.114.114.114;
          8.8.8.8;
        };
};
zone "hpcloud.fun" IN {
  type master;
  file "/var/lib/bind/hpcloud.fun.zone";
};
zone "72.168.192.in-addr.arpa" IN {
  type master;
  file "/var/lib/bind/72.168.192.in-addr.arpa";
};
EOF
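
Before starting the DNS container, the configuration can be syntax-checked with the utilities shipped in the same bind9 image (a minimal sketch, assuming the image contains named-checkconf):

docker run --rm \
  -v /etc/bind:/etc/bind:ro \
  internetsystemsconsortium/bind9:9.18 \
  named-checkconf /etc/bind/named.conf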

4.3 Create the forward zone file

cat >/var/lib/bind/hpcloud.fun.zone<<'EOF'
$TTL 1W
@   IN    SOA    ns1.hpcloud.fun.    root (
                 2019070700        ; serial
                 3H                ; refresh (3 hours)
                 30M               ; retry (30 minutes)
                 2W                ; expiry (2 weeks)
                 1W )              ; minimum (1 week)
    IN    NS     ns1.hpcloud.fun.
    IN    MX 10  smtp.hpcloud.fun.
;
ns1.hpcloud.fun.            IN A 192.168.31.20
smtp.hpcloud.fun.           IN A 192.168.31.20
;
registry.hpcloud.fun.       IN A 192.168.31.20
api.okd4.hpcloud.fun.       IN A 192.168.31.20
api-int.okd4.hpcloud.fun.   IN A 192.168.31.20
;
*.apps.okd4.hpcloud.fun.    IN A 192.168.31.20
;
bastion.okd4.hpcloud.fun.   IN A 192.168.31.20
bootstrap.okd4.hpcloud.fun. IN A 192.168.31.21
;
master0.okd4.hpcloud.fun.   IN A 192.168.31.22
master1.okd4.hpcloud.fun.   IN A 192.168.31.23
master2.okd4.hpcloud.fun.   IN A 192.168.31.24
;
worker0.okd4.hpcloud.fun.   IN A 192.168.31.25
worker1.okd4.hpcloud.fun.   IN A 192.168.31.26
EOF

4.4 Create the reverse zone file

cat >/var/lib/bind/31.168.192.in-addr.arpa<<'EOF'
$TTL 1W
@   IN    SOA      ns1.hpcloud.fun.     root (
                   2019070700        ; serial
                   3H                ; refresh (3 hours)
                   30M               ; retry (30 minutes)
                   2W                ; expiry (2 weeks)
                   1W )              ; minimum (1 week)
    IN    NS       ns1.hpcloud.fun.
;
20.31.168.192.in-addr.arpa. IN PTR api.okd4.hpcloud.fun.
20.31.168.192.in-addr.arpa. IN PTR api-int.okd4.hpcloud.fun.
;
20.31.168.192.in-addr.arpa. IN PTR bastion.okd4.hpcloud.fun.

21.31.168.192.in-addr.arpa. IN PTR bootstrap.okd4.hpcloud.fun.
;
22.31.168.192.in-addr.arpa. IN PTR master0.okd4.hpcloud.fun.
23.31.168.192.in-addr.arpa. IN PTR master1.okd4.hpcloud.fun.
24.31.168.192.in-addr.arpa. IN PTR master2.okd4.hpcloud.fun.
;
25.31.168.192.in-addr.arpa. IN PTR worker0.okd4.hpcloud.fun.
26.31.168.192.in-addr.arpa. IN PTR worker1.okd4.hpcloud.fun.
EOF
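
The two zone files can be validated the same way (again assuming the check utilities are present in the bind9 image):

docker run --rm -v /var/lib/bind:/var/lib/bind:ro \
  internetsystemsconsortium/bind9:9.18 \
  named-checkzone hpcloud.fun /var/lib/bind/hpcloud.fun.zone

docker run --rm -v /var/lib/bind:/var/lib/bind:ro \
  internetsystemsconsortium/bind9:9.18 \
  named-checkzone 31.168.192.in-addr.arpa /var/lib/bind/31.168.192.in-addr.arpa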

4.5 Set permissions on the configuration files so the container can read and write them

chmod -R a+rwx /etc/bind
chmod -R a+rwx /var/lib/bind/
chmod -R a+rwx /var/cache/bind/

4.6 On Ubuntu, DNS is managed by systemd-resolved

Edit the following setting to point DNS at the local DNS server:

root@ubuntu:~# cat /etc/systemd/resolved.conf 
[Resolve]
DNS=192.168.31.20

Restart the systemd-resolved service:

systemctl restart systemd-resolved.service

Create the symlink to resolv.conf:

ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

Check resolv.conf and confirm the output looks like the following:

root@ubuntu:~# cat /etc/resolv.conf 
......
# operation for /etc/resolv.conf.
 
nameserver 192.168.31.20
nameserver 114.114.114.114
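
The configured upstream can also be double-checked with systemd-resolved's own tooling (resolvectl ships with systemd on Ubuntu 20.04):

resolvectl status | grep -A2 'DNS Servers'   # should list 192.168.31.20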

4.7 Start the bind service as a container

Bind it to the host IP so it does not conflict with Ubuntu's default DNS stub listener on port 53:

docker run -d --name bind9 \
  --restart always \
  -e TZ=Asia/Shanghai \
  --publish 192.168.31.20:53:53/udp \
  --publish 192.168.31.20:53:53/tcp \
  --publish 192.168.31.20:953:953/tcp \
  --volume /etc/bind:/etc/bind \
  --volume /var/cache/bind:/var/cache/bind \
  --volume /var/lib/bind:/var/lib/bind \
  --volume /var/log/bind:/var/log \
  internetsystemsconsortium/bind9:9.18
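
Confirm that the container came up and named is listening on the host address before running the dig checks below:

docker logs bind9 | tail -n 5        # named should report that both zones were loaded
ss -lnup | grep ':53 '               # the listener should be bound to 192.168.31.20:53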

4.8 Verify forward DNS resolution with dig

dig +noall +answer @192.168.31.20 registry.hpcloud.fun
dig +noall +answer @192.168.31.20 api.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 api-int.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 console-openshift-console.apps.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 bootstrap.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 master0.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 master1.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 master2.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 worker0.okd4.hpcloud.fun
dig +noall +answer @192.168.31.20 worker1.okd4.hpcloud.fun

The forward lookups should return the following; confirm that every record resolves correctly:

root@bastion:~# dig +noall +answer @192.168.31.20 registry.hpcloud.fun
registry.hpcloud.fun.   604800  IN      A       192.168.31.20
root@bastion:~# dig +noall +answer @192.168.31.20 api.okd4.hpcloud.fun
api.okd4.hpcloud.fun.   604800  IN      A       192.168.31.20
root@bastion:~# dig +noall +answer @192.168.31.20 api-int.okd4.hpcloud.fun
api-int.okd4.hpcloud.fun. 604800 IN     A       192.168.31.20
root@bastion:~# dig +noall +answer @192.168.31.20 console-openshift-console.apps.okd4.hpcloud.fun
console-openshift-console.apps.okd4.hpcloud.fun. 604800 IN A 192.168.31.20
root@bastion:~# dig +noall +answer @192.168.31.20 bootstrap.okd4.hpcloud.fun
bootstrap.okd4.hpcloud.fun. 604800 IN   A       192.168.31.21
root@bastion:~# dig +noall +answer @192.168.31.20 master0.okd4.hpcloud.fun
master0.okd4.hpcloud.fun. 604800 IN     A       192.168.31.22
root@bastion:~# dig +noall +answer @192.168.31.20 master1.okd4.hpcloud.fun
master1.okd4.hpcloud.fun. 604800 IN     A       192.168.31.23
root@bastion:~# dig +noall +answer @192.168.31.20 master2.okd4.hpcloud.fun
master2.okd4.hpcloud.fun. 604800 IN     A       192.168.31.24
root@bastion:~# dig +noall +answer @192.168.31.20 worker0.okd4.hpcloud.fun
worker0.okd4.hpcloud.fun. 604800 IN     A       192.168.31.25
root@bastion:~# dig +noall +answer @192.168.31.20 worker1.okd4.hpcloud.fun
worker1.okd4.hpcloud.fun. 604800 IN     A       192.168.31.26

Verify reverse DNS resolution

dig +noall +answer @192.168.31.20 -x 192.168.31.21
dig +noall +answer @192.168.31.20 -x 192.168.31.22
dig +noall +answer @192.168.31.20 -x 192.168.31.23
dig +noall +answer @192.168.31.20 -x 192.168.31.24
dig +noall +answer @192.168.31.20 -x 192.168.31.25
dig +noall +answer @192.168.31.20 -x 192.168.31.26

The reverse lookups should return the following; again confirm that every record resolves correctly:

root@bastion:~# dig +noall +answer @192.168.31.20 -x 192.168.31.21
21.31.168.192.in-addr.arpa. 604800 IN   PTR     bootstrap.okd4.hpcloud.fun.
root@bastion:~# dig +noall +answer @192.168.31.20 -x 192.168.31.22
22.31.168.192.in-addr.arpa. 604800 IN   PTR     master0.okd4.hpcloud.fun.
root@bastion:~# dig +noall +answer @192.168.31.20 -x 192.168.31.23
23.31.168.192.in-addr.arpa. 604800 IN   PTR     master1.okd4.hpcloud.fun.
root@bastion:~# dig +noall +answer @192.168.31.20 -x 192.168.31.24
24.31.168.192.in-addr.arpa. 604800 IN   PTR     master2.okd4.hpcloud.fun.
root@bastion:~# dig +noall +answer @192.168.31.20 -x 192.168.31.25
25.31.168.192.in-addr.arpa. 604800 IN   PTR     worker0.okd4.hpcloud.fun.
root@bastion:~# dig +noall +answer @192.168.31.20 -x 192.168.31.26
26.31.168.192.in-addr.arpa. 604800 IN   PTR     worker1.okd4.hpcloud.fun.

5. Installing HAProxy

HAProxy is used to build the load balancer in front of the machine-config server, the kube-apiserver, and the cluster Ingress Controller.

5.1 Create the haproxy configuration directory

mkdir -p /etc/haproxy

5.2 Create the haproxy configuration file

cat >/etc/haproxy/haproxy.cfg<<EOF
global
  log         127.0.0.1 local2
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
frontend stats
  bind *:1936
  mode            http
  log             global
  maxconn 10
  stats enable
  stats hide-version
  stats refresh 30s
  stats show-node
  stats show-desc Stats for openshift cluster 
  stats auth admin:openshift
  stats uri /stats
frontend openshift-api-server
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog
backend openshift-api-server
    balance source
    mode tcp
    server bootstrap 192.168.31.21:6443 check 
    server master0 192.168.31.22:6443 check
    server master1 192.168.31.23:6443 check
    server master2 192.168.31.24:6443 check
frontend machine-config-server
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog
backend machine-config-server
    balance source
    mode tcp
    server bootstrap 192.168.31.21:22623 check
    server master0 192.168.31.22:22623 check
    server master1 192.168.31.23:22623 check
    server master2 192.168.31.24:22623 check
frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog
backend ingress-http
    balance source
    mode tcp
    server worker0 192.168.31.25:80 check
    server worker1 192.168.31.26:80 check
frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog
backend ingress-https
    balance source
    mode tcp
    server worker0 192.168.31.25:443 check
    server worker1 192.168.31.26:443 check
EOF
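
The configuration can be syntax-checked with the same image before the service is started (haproxy -c only validates the file, it does not start the proxy):

docker run --rm \
  -v /etc/haproxy:/usr/local/etc/haproxy:ro \
  haproxy:2.5.5-alpine3.15 \
  haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg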

5.3 Start the haproxy service as a container

$. docker pull haproxy:2.5.5-alpine3.15

# docker restart haproxy
$. docker run -d \
  --name haproxy \
  --restart always \
  -p 1936:1936 \
  -p 6443:6443 \
  -p 22623:22623 \
  -p 80:80 -p 443:443 \
  --sysctl net.ipv4.ip_unprivileged_port_start=0 \
  -v /etc/haproxy/:/usr/local/etc/haproxy:ro \
  haproxy:2.5.5-alpine3.15
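
Once the container is up, the listeners and the stats page (credentials admin:openshift, as set in the configuration above) can be checked quickly:

docker ps --filter name=haproxy                      # the container should be Up
ss -lntp | grep -E ':(1936|6443|22623|80|443) '      # all five frontends should be listening
curl -su admin:openshift http://192.168.31.20:1936/stats | head -n 5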

6. Installing Nginx

During cluster deployment the CoreOS image and the Ignition files are downloaded from a web server; nginx is used here to serve those files.

6.1 Create the nginx directories

mkdir -p /etc/nginx/templates
mkdir -p /usr/share/nginx/html/{ignition,install}

6.2 Create the nginx configuration file and enable directory listing (optional)

$. cat >/etc/nginx/templates/default.conf.template<<EOF
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        autoindex on;
        autoindex_exact_size off;
        autoindex_format html;
        autoindex_localtime on;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
EOF

Adjust the file permissions so the container can read and write them:

$. chmod -R a+rwx /etc/nginx/
$. chmod -R a+rwx /usr/share/nginx/

6.3 Start the nginx service as a container

Note: map it to the following port to avoid conflicting with the ports already used by haproxy:

$. docker pull nginx:1.21.6-alpine 

$. docker run -d --name nginx-okd \
  --restart always \
  -p 8088:80 \
  -v /etc/nginx/templates:/etc/nginx/templates \
  -v /usr/share/nginx/html:/usr/share/nginx/html:ro \
  nginx:1.21.6-alpine

Verify access in a browser:
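
Or verify from the command line (the test file name below is just an example):

echo ok > /usr/share/nginx/html/install/healthz.txt
curl http://192.168.31.20:8088/install/healthz.txt   # should print: ok
curl http://192.168.31.20:8088/                       # autoindex listing of ignition/ and install/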

7. Installing the OpenShift CLI

The OpenShift CLI (oc) is used to interact with OKD from the command line; it can be installed on Linux, Windows, or macOS.

Download location:
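
A minimal install sketch for Linux, assuming the openshift-client-linux tarball matching the OKD release has already been downloaded from the OKD project's release page (the file name below is a placeholder):

tar -xzf openshift-client-linux-<version>.tar.gz -C /usr/local/bin oc kubectl
chmod +x /usr/local/bin/oc /usr/local/bin/kubectl
oc version --client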
