Each post covers 10 problems.
Python Problem Summary (1)
Python Problem Summary (2)
cv2.error: /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/highgui/src/window.cpp:304: error: (-215) size.width>0 && size.height>0 in function imshow
The video stream captured by OpenCV is empty, so imshow() receives a frame with zero width/height.
Re-check the input video source: verify the file path or camera index, and confirm that the stream actually delivers frames before calling imshow().
Traceback (most recent call last):
File "socket_server.py", line 24, in
_server.server_test()
File "socket_server.py", line 19, in server_test
c.send("I am xin daqi")
TypeError: a bytes-like object is required, not 'str'
Fix: send() requires bytes, not str:
import socket
c = socket.socket()
# snap
c.send(bytes("I am xin daqi", encoding='utf-8'))  # convert str -> bytes before sending
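A minimal self-contained sketch: socketpair() stands in for a real connected client/server pair, showing the str encoded to bytes on send and decoded back on receive:

```python
import socket

# socketpair() returns two connected sockets, standing in for a
# real client/server connection.
a, b = socket.socketpair()

# In Python 3, send() requires bytes; encode the str first.
a.send("I am xin daqi".encode("utf-8"))

data = b.recv(1024)
print(data.decode("utf-8"))  # decode bytes back to str on the receiving side
a.close()
b.close()
```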
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 8: invalid start byte
(1) Declare the source-file encoding at the top of the script:
# -*- coding: utf-8 -*-
(2) Bytes to string: call decode on the bytes object:
bytes.decode('utf8')
bytes.decode('utf-8')
(3) String to bytes: call encode on the str object:
str.encode('utf-8')
print("你好".encode("utf-8"))
unix:///var/run/supervisor.sock no such file
error: <class 'socket.error'>, [Errno 111] Connection refused: file: /usr/lib/python2.7/socket.py line: 228
unix:///var/run/supervisor.sock refused connection
The supervisord daemon is not running, so supervisorctl cannot reach its unix socket. Start the daemon first:
sudo supervisord -c /etc/supervisor/supervisord.conf
Error: Could not determine IP address for hostname ai-lab-gpu, please try setting an explicit IP address in the "port" setting of your [inet_http_server] section. For example, instead of "port = 9001", try "port = 127.0.0.1:9001."
For help, use /usr/bin/supervisord -h
Error: Cannot open an HTTP server: socket.error reported errno.EADDRNOTAVAIL (99)
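A minimal sketch of the fix, assuming the stock config path /etc/supervisor/supervisord.conf: bind the HTTP server to an explicit loopback address instead of the unresolvable hostname, as the error message itself suggests:

```ini
; /etc/supervisor/supervisord.conf (assumed path)
[inet_http_server]
port = 127.0.0.1:9001
```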
compiled with version: 5.4.0 20160609 on 28 September 2018 15:49:44
os: Linux-4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:42 UTC 2017
nodename: ai-lab-gpu
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 16
current working directory: /home/SP-in-AI/xindq/couplets
detected binary path: /usr/bin/uwsgi-core
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 257559
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
bind(): Cannot assign requested address [core/socket.c line 769]
Fix: bind to an address that actually exists on this machine, e.g.:
http=127.0.0.1:8080
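In an ini-style uWSGI configuration the setting lives under [uwsgi] (the file name below is hypothetical):

```ini
; myapp.ini (hypothetical name): bind the HTTP router to a local address
[uwsgi]
http = 127.0.0.1:8080
```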
libcublas.so.9.0: cannot open shared object file: No such file or directory
Add the CUDA libraries to the dynamic-loader path (append the exports to ~/.bashrc to make them persistent):
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
For details, see the blog post: Deploying cuDNN on a Linux server.
Traceback (most recent call last):
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/spiderloader.py", line 69, in load
return self._spiders[spider_name]
KeyError: 'baidu_search'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/xdq/.local/bin/scrapy", line 11, in
sys.exit(execute())
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/cmdline.py", line 150, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/cmdline.py", line 90, in _run_print_help
func(*a, **kw)
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/cmdline.py", line 157, in _run_command
cmd.run(args, opts)
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/commands/crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 170, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 198, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 202, in _create_crawler
spidercls = self.spider_loader.load(spidercls)
File "/home/xdq/.local/lib/python3.6/site-packages/scrapy/spiderloader.py", line 71, in load
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: baidu_search'
The crawl command must be run from inside the Scrapy project directory (the one containing scrapy.cfg); otherwise the spider cannot be found. Create the project and check the layout:
cd scrapy_html
scrapy startproject firstscrapy
File structure:
|-- scrapy_html
| `-- firstscrapy
| |-- firstscrapy
| | |-- __init__.py
| | |-- __pycache__
| | | |-- __init__.cpython-36.pyc
| | | `-- settings.cpython-36.pyc
| | |-- items.py
| | |-- middlewares.py
| | |-- pipelines.py
| | |-- settings.py
| | `-- spiders
| | |-- __init__.py
| | |-- __pycache__
| `-- scrapy.cfg
string = "E 我是天蓝蓝,你?是 谁?"
# string = string.encode("utf8")
# string = re.sub("[\s+\.\!\/_,$%^*(+\"\']+|[+——!,。?、~@#¥%……&*()]+".encode("utf8"), "".encode("utf8"),string)
string = re.sub("[\s+\.\!\/_,$?%^*(+\"\']+|[+——!,。?、~@#¥%……&*()]+","",string)
string = string[1:]
print("cut symbol: {}".format(string))
cut symbol: 我是天蓝蓝你是谁
DEBUG: Filtered offsite request to 'www2.soopat.com':
Fix: use the parent domain in allowed_domains so that subdomains such as www2.soopat.com are not filtered:
allowed_domains = ["soopat.com"]
if "Image" in request.json:
TypeError: argument of type 'NoneType' is not iterable
if request.json and "Image" in request.json:
    upload_data = request.json["Image"]
For a detailed walkthrough, see Section 2 of the blog post: Flask POST/GET - getting json, form, and file parameters from request.
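The guard pattern can be sketched without Flask itself: a plain dict (or None) stands in for request.json, which is None when the request body is not JSON, and the helper name get_image is illustrative:

```python
def get_image(payload):
    # payload stands in for request.json; it is None when the request
    # body is not JSON, so guard before the membership test.
    if payload and "Image" in payload:
        return payload["Image"]
    return None

print(get_image(None))              # no TypeError on a missing JSON body
print(get_image({"Image": "abc"}))  # abc
```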
When Python 2.x and 3.x coexist on the same system and no environment manager (Anaconda or virtualenv) is in use, invoke the desired interpreter version explicitly:
python2.7 run.py
python3.5 run.py
python3.6 run.py
python3.7 run.py
[References]
[1]https://www.cnblogs.com/xiaojianliu/p/9443874.html
[2]https://www.cnblogs.com/aylin/p/5572104.html
[3]https://blog.csdn.net/ksws0292756/article/details/80034086
[4]https://blog.csdn.net/Xin_101/article/details/87165132