Redash installation - 2022 latest version - non-Docker method

环境

Redash version: 11.0.0-dev (dev)
Redis: no strict version requirement; this guide uses 3.2.12 on 192.168.16.36
PostgreSQL: version matters, at least 9.4 is required; this guide ends up installing 10 on 192.168.16.43
Python 3: this guide uses 3.7 on 192.168.16.43
OS: CentOS Linux release 7.9.2009 (Core) on 192.168.16.43
Node.js
Per the official documentation: Python 3, PostgreSQL (9.6 or newer), Redis (3 or newer), Node.js (14.16.1 or newer)

Redis environment

An existing Redis installation is reused, so no new installation steps are needed. Note that the FLUSHALL below wipes every key in the instance; only run it if this Redis is dedicated to Redash.

[bduser@192-168-16-36 bin]$ ./redis-server redis.conf

[bduser@192-168-16-36 bin]$ redis-cli -v
redis-cli 3.2.12

[bduser@192-168-16-36 bin]$ ./redis-cli

127.0.0.1:6379> FLUSHALL
OK
127.0.0.1:6379> keys *
(empty list or set)
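
To confirm that the Redash host can reach this Redis instance over the network, a quick check (a sketch; assumes redis-cli is also available on 192.168.16.43):

redis-cli -h 192.168.16.36 -p 6379 ping
# expected reply: PONG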

Postgres

Installation

sudo yum -y install epel-release

https://www.postgresql.org/download/linux/redhat/

# Install the repository RPM:
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

# Install PostgreSQL:
sudo yum install -y postgresql10-server

# Optionally initialize the database and enable automatic start:
sudo /usr/pgsql-10/bin/postgresql-10-setup initdb
sudo systemctl enable postgresql-10
sudo systemctl start postgresql-10

Configuration

https://www.cnblogs.com/fonks/p/15093572.html#%E7%AC%AC%E5%9B%9B%E6%AD%A5%E5%BC%80%E5%90%AF%E5%A4%96%E7%BD%91%E8%AE%BF%E9%97%AE

Log in and set the password for the postgres user:
sudo su - postgres
-bash-4.2$ psql
postgres=# ALTER USER postgres WITH ENCRYPTED PASSWORD '********';
postgres=# alter role postgres login;

Configuration file changes (under /var/lib/pgsql/10/data):
pg_hba.conf, add:
host	all		all		0.0.0.0/0		md5
host	replication	all		0.0.0.0/0		md5
postgresql.conf, change:
listen_addresses='*'

Restart after the changes:
sudo systemctl restart postgresql-10
sudo systemctl status postgresql-10

psql -h192.168.16.43 -U postgres
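
This install later points Redash at the server's default database; if you prefer a dedicated database for Redash instead, something like the following works (an assumption, not part of the original steps):

sudo -u postgres psql -c "CREATE DATABASE redash;"
# then use postgresql://postgres:<password>@192.168.16.43/redash as the database URL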

Node.js

sudo yum search nodejs --showduplicates
sudo yum install nodejs-16.15.0-3.el7.x86_64
rpm -ql nodejs-16.15.0-3.el7.x86_64
rpm -qa | grep nodejs

Redash installation

References

https://blog.csdn.net/weixin_30580943/article/details/99502387
https://redash.io/help/open-source/dev-guide/setup
https://www.dazdata.com/docs/install/4.html

https://github.com/getredash/setup
https://github.com/getredash/redash/tree/v10.1.0
https://github.com/dazdata/redash/tags

Installation steps

Download

git clone https://github.com/getredash/redash.git

Python virtual environment

python3 -m venv redash-env

source redash-env/bin/activate

pip install --upgrade pip

Install the Python dependencies

python -m pip install -r requirements.txt

python -m pip install -r requirements_dev.txt

python -m pip install -r requirements_all_ds.txt

 
 pip install wheel
 pip3 install importlib_resources==1.5
 pip3 install redis
 yum -y install cyrus-sasl cyrus-sasl-devel cyrus-sasl-lib
 pip install sasl
 
 pip install thrift
 pip install thrift-sasl

The commands above need to be rerun repeatedly:

  1. The network drops frequently, so pip exits with connection errors; just keep rerunning until it succeeds (a retry-loop sketch follows this list).
  2. Some data source (ds) packages fail to build; fix the problem the error message points to, or comment out (#) data sources you do not need.
  3. There are version conflicts between data sources; see the conflict section below and comment out the dql data source.
  4. src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory, see below.
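
One way to automate the reruns caused by the flaky network is a small retry loop, assuming the virtual environment is already activated (a sketch, not part of the original run):

until python -m pip install -r requirements_all_ds.txt; do
  echo "pip failed (likely a network drop), retrying in 10s..." >&2
  sleep 10
done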
Build
npm install 
npm run build 

Both commands also need to be rerun repeatedly, mostly because of network drops; some failures are real build errors, for example:
1. get_ipaddr cannot be found, see below
2. yarn conflict, see below
3. axios-auth-refresh version issue, see below

Configuration
  1. Project root: webpack.config.js

      devServer: {
        host: '0.0.0.0',
    
  2. Project root: package.json

    "scripts": {
        "start": "npm-run-all --parallel watch:viz webpack-dev-server --disableHostCheck=true",
    
  3. Project root: redash/settings/__init__.py

    _REDIS_URL = os.environ.get(
        "REDASH_REDIS_URL", os.environ.get("REDIS_URL", "redis://192.168.16.36:6379/0")
    )
    
    SQLALCHEMY_DATABASE_URI = os.environ.get(
        "REDASH_DATABASE_URL", os.environ.get("DATABASE_URL", "postgresql://用户名:密码@192.168.16.43")
    )
    
    # This key must match the one in .env below; these keys are arbitrary strings you generate yourself (see the one-liner after this list).
    SECRET_KEY = os.environ.get("REDASH_COOKIE_SECRET","py58ChjnjNBkpYRdGB3rEVjZEMdNl2yK") 
    
    
  4. Project root: .env

    PYTHONUNBUFFERED="0"
    REDASH_LOG_LEVEL="INFO"
    REDASH_REDIS_URL="redis://192.168.16.36:6379/0"
    POSTGRES_PASSWORD="<your-password>"
    REDASH_COOKIE_SECRET="py58ChjnjNBkpYRdGB3rEVjZEMdNl2yK"
    REDASH_SECRET_KEY="Iy6byoeUVwK1csz6H6UeA5G42dZAxk7Ydfde"
    REDASH_DATABASE_URL="postgresql://<username>:<password>@192.168.16.43"
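
    The REDASH_COOKIE_SECRET / REDASH_SECRET_KEY values are arbitrary random strings; one way to generate them (a sketch, any random generator will do):

    python -c "import secrets; print(secrets.token_urlsafe(32))"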
    
Check the configuration
bin/run ./manage.py check-settings

Fix each reported error until none remain.
Initialize the database
bin/run ./manage.py database create-tables
Make sure it runs cleanly:
(redash-env) [bduser@192-168-16-43 redash]$ ./manage.py database create-tables
[2022-08-02 18:37:12,382][PID:85874][INFO][alembic.runtime.migration] Context impl PostgresqlImpl.
[2022-08-02 18:37:12,383][PID:85874][INFO][alembic.runtime.migration] Will assume transactional DDL.
[2022-08-02 18:37:12,416][PID:85874][INFO][alembic.runtime.migration] Running stamp_revision  -> fd4fc850d7ea
Start the project
  1. Frontend server

    npm run start
    To run it in the background: nohup npm run start > out.log 2>&1 &
    
    Possible error: ECONNREFUSED, see handling below.
    
  2. Flask server

    ./manage.py runserver --debugger --reload
    
    Possible error: TypeError: Descriptors cannot not be created directly. See handling below.
    
  3. RQ

    ./manage.py rq worker
    
  4. RQ Scheduler

    ./manage.py rq scheduler
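
    To keep the backend processes running after you log out, the same nohup pattern as the frontend can be used (a sketch, not part of the original run):

    nohup ./manage.py runserver --debugger --reload > flask.log 2>&1 &
    nohup ./manage.py rq worker > rq_worker.log 2>&1 &
    nohup ./manage.py rq scheduler > rq_scheduler.log 2>&1 &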
    
Log in
http://192.168.16.43:8080/

Troubleshooting

First PostgreSQL installation attempt failed

https://www.yisu.com/zixun/597242.html

[bduser@192-168-16-43 software]$ sudo yum install postgresql-server

sudo postgresql-setup initdb     # initialize the database for the first time (can only be done once)
systemctl enable postgresql.service  # enable start on boot (optional, otherwise start the service manually each time)
sudo systemctl start postgresql.service     # start the service

Stop the service: systemctl stop postgresql.service
Restart the service: systemctl restart postgresql.service
After the initial install, a database named postgres, a database user named postgres, and a Linux system user named postgres are created by default.

Failure: version 9.2 is too old for Redash, so uninstall it (sudo yum remove postgresql-server) and reinstall a newer version.

The error was:

psycopg2.errors.UndefinedObject: ERROR: type "jsonb" does not exist

yarn conflict
Hadoop's yarn command conflicts with the Node.js yarn; the fix:
sudo mv /bin/yarn /bin/hadoop_yarn
sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
sudo yum install yarn
yarn -version

yarn config set ignore-engines true
yarn --frozen-lockfile
cannot import name 'get_ipaddr' from 'flask_limiter.util'
https://www.cnblogs.com/Du704/p/13281032.html
Find the Python file in question and change get_ipaddr to get_remote_address:
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address
....

limiter = Limiter(app, key_func=get_remote_address)
axios-auth-refresh version issue
sudo npm install axios-auth-refresh@<version>
This fails repeatedly, mainly because of the unstable network; keep rerunning it until it succeeds.
ERROR: Cannot install -r requirements_all_ds.txt
ERROR: Cannot install -r requirements_all_ds.txt (line 10), -r requirements_all_ds.txt (line 11), -r requirements_all_ds.txt (line 13), -r requirements_all_ds.txt (line 22), -r requirements_all_ds.txt (line 35), -r requirements_all_ds.txt (line 5) and -r requirements_all_ds.txt (line 8) because these package versions have conflicting dependencies.

The conflict is caused by:
    influxdb 5.2.3 depends on python-dateutil>=2.6.0
    pyhive 0.6.1 depends on python-dateutil
    vertica-python 0.9.5 depends on python-dateutil>=1.5
    td-client 1.0.0 depends on python-dateutil
    dql 0.5.26 depends on python-dateutil<2.7.0
    atsd-client 3.0.5 depends on python-dateutil
    azure-kusto-data 0.0.35 depends on python-dateutil>=2.8.0

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

As the analysis shows, these data source packages require conflicting versions of python-dateutil.
Drop the dql package, i.e. comment it out with # in requirements_all_ds.txt (see the one-liner below).
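
A one-liner for commenting it out, assuming the dql requirement starts at the beginning of a line in requirements_all_ds.txt:

sed -i 's/^dql/# dql/' requirements_all_ds.txt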

(redash-env) [bduser@192-168-16-43 redash]$ pip3 show python-dateutil
Name: python-dateutil
Version: 2.8.0
Summary: Extensions to the standard Python datetime module
Home-page: https://dateutil.readthedocs.io
Author: Gustavo Niemeyer
Author-email: [email protected]
License: Dual License
Location: /opt/software/redash/redash-env/lib/python3.7/site-packages
Requires: six
Required-by: botocore, croniter, freezegun, PyHive, pysaml2

Install the data source dependencies once more:
pip3 install  -r requirements_all_ds.txt
src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory
Using legacy 'setup.py install' for google-api-python-client, since package 'wheel' is not installed.
Using legacy 'setup.py install' for impyla, since package 'wheel' is not installed.
Using legacy 'setup.py install' for mysqlclient, since package 'wheel' is not installed.
Using legacy 'setup.py install' for pydruid, since package 'wheel' is not installed.
Using legacy 'setup.py install' for phoenixdb, since package 'wheel' is not installed.
Using legacy 'setup.py install' for pyodbc, since package 'wheel' is not installed.
Using legacy 'setup.py install' for sasl, since package 'wheel' is not installed.
Using legacy 'setup.py install' for thrift, since package 'wheel' is not installed.
Using legacy 'setup.py install' for qds-sdk, since package 'wheel' is not installed.
Using legacy 'setup.py install' for geomet, since package 'wheel' is not installed.
Using legacy 'setup.py install' for pure-sasl, since package 'wheel' is not installed.
Using legacy 'setup.py install' for thriftpy2, since package 'wheel' is not installed.
Installing collected packages: rfc3986, pytz, pyodbc, pymssql, pure-sasl, ply, mysqlclient, msgpack, ijson, certifi, boto, bitarray, azure-common, async-property, asn1crypto, xlrd, websocket-client, uritemplate, tzdata, thriftpy2, thrift, sqlparse, sniffio, scramp, sasl, rsa, readerwriterlock, python-rapidjson, pyparsing, pydantic, pycryptodomex, pyasn1-modules, protobuf, oscrypto, isodate, inflection, hyperframe, hpack, h11, grpcio, et-xmlfile, charset-normalizer, cffi, cachetools, backports.zoneinfo, aiorwlock, vertica-python, trino, thrift_sasl, td-client, requests-toolbelt, rdflib, qds-sdk, pytz-deprecation-shim, pyexasol, pydruid, pydgraph, phoenixdb, openpyxl, nzpy, influxdb, impyla, httplib2, h2, gspread, google-auth, anyio, tzlocal, python-arango, oauth2client, nzalchemy, httpcore, google-auth-httplib2, geomet, dynamo3, cmem-cmempy, azure-storage-common, adal, httpx, google-api-python-client, cassandra-driver, azure-storage-blob, azure-kusto-data, atsd_client, snowflake-connector-python, simple_salesforce, requests_aws_sign, firebolt-sdk
  Attempting uninstall: pytz
    Found existing installation: pytz 2022.1
    Uninstalling pytz-2022.1:
      Successfully uninstalled pytz-2022.1
  Running setup.py install for pyodbc ... error
  error: subprocess-exited-with-error

  × Running setup.py install for pyodbc did not run successfully.
  │ exit code: 1
  ╰─> [14 lines of output]
      running install
      running build
      running build_ext
      building 'pyodbc' extension
      creating build
      creating build/temp.linux-x86_64-3.7
      creating build/temp.linux-x86_64-3.7/src
      gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -DPYODBC_VERSION=4.0.28 -I/opt/software/redash/redash-env/include -I/usr/local/python3/include/python3.7m -c src/buffer.cpp -o build/temp.linux-x86_64-3.7/src/buffer.o -Wno-write-strings
      In file included from src/buffer.cpp:12:0:
      src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory
       #include <sql.h>
                        ^
      compilation terminated.
      error: command 'gcc' failed with exit status 1
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> pyodbc

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
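
On CentOS the missing sql.h header comes from unixODBC; if the pyodbc data source is actually needed, installing the devel package before rerunning pip should let the build find it (an assumption; this install simply skipped pyodbc):

sudo yum install -y unixODBC-devel
pip3 install pyodbc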


Pick out the data sources you actually need and install them individually:
(redash-env) [bduser@192-168-16-43 redash]$ pip3 install mysqlclient==1.3.14
Collecting mysqlclient==1.3.14
  Using cached mysqlclient-1.3.14.tar.gz (91 kB)
  Preparing metadata (setup.py) ... done
Using legacy 'setup.py install' for mysqlclient, since package 'wheel' is not installed.
Installing collected packages: mysqlclient
  Attempting uninstall: mysqlclient
    Found existing installation: mysqlclient 2.1.1
    Uninstalling mysqlclient-2.1.1:
      Successfully uninstalled mysqlclient-2.1.1
  Running setup.py install for mysqlclient ... done
Successfully installed mysqlclient-1.3.14
Note that this is a version downgrade; check whether the package is already installed first, and skip the downgrade if it is.

(redash-env) [bduser@192-168-16-43 redash]$ pip3 install pydruid==0.5.7
(redash-env) [bduser@192-168-16-43 redash]$ pip3 install phoenixdb==0.7
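
To install a hand-picked list in one pass and see which packages fail, a small loop works (a sketch using the versions pinned above):

for pkg in mysqlclient==1.3.14 pydruid==0.5.7 phoenixdb==0.7; do
  pip3 install "$pkg" || echo "failed: $pkg" >&2
done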

TypeError: Descriptors cannot not be created directly.
Traceback (most recent call last):
  File "./manage.py", line 6, in 
    from redash.cli import manager
  File "/opt/software/redash/redash/redash/__init__.py", line 56, in 
    import_query_runners(settings.QUERY_RUNNERS)
  File "/opt/software/redash/redash/redash/query_runner/__init__.py", line 438, in import_query_runners
    __import__(runner_import)
  File "/opt/software/redash/redash/redash/query_runner/phoenix.py", line 9, in 
    import phoenixdb
  File "/opt/software/redash/redash-env/lib/python3.7/site-packages/phoenixdb/__init__.py", line 15, in 
    from phoenixdb import errors, types
  File "/opt/software/redash/redash-env/lib/python3.7/site-packages/phoenixdb/types.py", line 19, in 
    from phoenixdb.calcite import common_pb2
  File "/opt/software/redash/redash-env/lib/python3.7/site-packages/phoenixdb/calcite/common_pb2.py", line 36, in 
    type=None),
  File "/opt/software/redash/redash-env/lib/python3.7/site-packages/google/protobuf/descriptor.py", line 755, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
Fix:
(redash-env) [bduser@192-168-16-43 redash]$ pip3 show protobuf
Name: protobuf
Version: 4.21.4
Summary:
Home-page: https://developers.google.com/protocol-buffers/
Author: [email protected]
Author-email: [email protected]
License: 3-Clause BSD License
Location: /opt/software/redash/redash-env/lib/python3.7/site-packages
Requires:
Required-by: phoenixdb

Downgrade protobuf:
(redash-env) [bduser@192-168-16-43 redash]$ pip3 install protobuf==3.20.0
System limit for number of file watchers reached
………………………………
Error from chokidar (/opt/software/redash/redash/node_modules/scheduler): Error: ENOSPC: System limit for number of file watchers reached, watch '/opt/software/redash/redash/node_modules/scheduler/tracing.js'
Error from chokidar (/opt/software/redash/redash/client/app/assets/fonts/roboto): Error: ENOSPC: System limit for number of file watchers reached, watch '/opt/software/redash/redash/client/app/assets/fonts/roboto/Roboto-Thin-webfont.svg'
Error from chokidar (/opt/software/redash/redash/client/app/assets/fonts/roboto): Error: ENOSPC: System limit for number of file watchers reached, watch '/opt/software/redash/redash/client/app/assets/fonts/roboto/Roboto-Thin-webfont.ttf'
Error from chokidar (/opt/software/redash/redash/client/app/assets/fonts/roboto): Error: ENOSPC: System limit for number of file watchers reached, watch '/opt/software/redash/redash/client/app/assets/fonts/roboto/Roboto-Thin-webfont.woff'
………………………………

Cause
Linux uses inotify to watch file system events on individual files and directories.
Because React/Angular hot-reloads and recompiles files on save, every project file needs to be watched.

Fix
The error means the number of files being watched has hit the system limit, so raise the limit (this host is CentOS 7; other distributions are similar).

sudo vim /etc/sysctl.conf

Add a line at the bottom:

fs.inotify.max_user_watches=524288

Save and exit, then apply it:

sudo sysctl -p

Expected output:

sudo sysctl -p
fs.inotify.max_user_watches = 524288

That resolves the error.
Source: https://blog.csdn.net/lxyoucan/article/details/116736501
ECONNREFUSED

These proxy errors mean the webpack dev server cannot reach the Flask backend at http://localhost:5000; once the Flask server (./manage.py runserver ...) is up, they stop.
[HPM] Error occurred while trying to proxy request /api/session from 192.168.16.43:8080 to http://localhost:5000/ (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
[HPM] Error occurred while trying to proxy request /api/organization/status from 192.168.16.43:8080 to http://localhost:5000/ (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
[HPM] Error occurred while trying to proxy request /login?next=/ from 192.168.16.43:8080 to http://localhost:5000/ (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
[HPM] Error occurred while trying to proxy request /login?next=/ from 192.168.16.43:8080 to http://localhost:5000/ (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
[HPM] Error occurred while trying to proxy request /api/session from 192.168.16.43:8080 to http://localhost:5000/ (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
[HPM] Error occurred while trying to proxy request /api/organization/status from 192.168.16.43:8080 to http://localhost:5000/ (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
[HPM] Error occurred while trying to proxy request /login?next=/ from 192.168.16.43:8080 to http://localhost:5000/ (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)


npm run start
(redash-env) [bduser@192-168-16-43 redash]$ npm run start

> [email protected] start
> npm-run-all --parallel watch:viz webpack-dev-server --disableHostCheck=true


> [email protected] webpack-dev-server
> webpack-dev-server


> [email protected] watch:viz
> (cd viz-lib && yarn watch:babel)

yarn run v1.22.19
$ yarn build:babel:base --watch
$ babel src --out-dir lib --source-maps --ignore 'src/**/*.test.js' --copy-files --no-copy-ignored --extensions .ts,.tsx,.js,.jsx --watch
[HPM] Proxy created: [
  '/login',
  '/logout',
  '/invite',
  '/setup',
  '/status.json',
  '/api',
  '/oauth'
]  ->  http://localhost:5000/
[HPM] Proxy created: [Function: context]  ->  http://localhost:5000/
(node:64403) [DEP0111] DeprecationWarning: Access to process.binding('http_parser') is deprecated.
(Use `node --trace-deprecation ...` to show where the warning was created)
ℹ 「wds」: Project is running at http://0.0.0.0:8080/
ℹ 「wds」: webpack output is served from /static/
ℹ 「wds」: 404s will fallback to /static/index.html
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Successfully compiled 169 files with Babel.
ℹ 「wdm」: wait until bundle finished: /static/index.html
Hive data source connection error
Connection Test Failed:
No module named 'sasl'
https://discuss.redash.io/t/using-a-hive-database/95

Install libsasl2-dev (system package).
Install with pip: pyhive==0.1.6, sasl>=0.1.3, thrift>=0.8.0, thrift_sasl>=0.1.0.


 pip install pyhive --upgrade
 pip install wheel
 yum -y install cyrus-sasl cyrus-sasl-devel cyrus-sasl-lib
 pip install sasl

Connection Test Failed:
No module named 'thrift_sasl'
pip install thrift
pip install thrift-sasl
The Hive connection test passes, but no tables show up when querying.
The backend logs show that the Hive metadata refresh has started, but the Hive metastore is so large that the refresh times out. Logs:
    return read_all(sz)
  File "/opt/software/redash/redash-env/lib/python3.7/site-packages/thrift/transport/TTransport.py", line 62, in readAll
    chunk = self.read(sz - have)
  File "/opt/software/redash/redash-env/lib/python3.7/site-packages/thrift/transport/TSocket.py", line 150, in read
    buff = self.handle.recv(sz)
  File "/opt/software/redash/redash-env/lib/python3.7/site-packages/rq/timeouts.py", line 64, in handle_death_penalty
    '({0} seconds)'.format(self._timeout))
rq.timeouts.JobTimeoutException: Task exceeded maximum timeout value (180 seconds)

The timeout needs to be increased (see the note below on locating the setting).
Also, even if the schema metadata has not been refreshed yet, queries can still be executed.
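
The 180-second limit is an RQ job timeout and the exact setting name varies between Redash versions, so one way to find where it comes from is to search the source tree (a sketch):

grep -rn -i "time_limit\|job_timeout\|180" redash/settings redash/tasks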
