Python Programming: Using a Crawler with dedecms for Fully Automated Collection and Publishing
This article walks through pairing a Python crawler with dedecms to collect and publish articles fully automatically.
For a while I had wanted to build a crawler that collects other sites' articles in real time, rewrites them according to my own rules, and then publishes them automatically. I decided to use dedecms for the news publishing side: it can generate static HTML automatically, localize remote images, and, for security, the front end and the admin back end can be completely separated.
At first I considered using the Scrapy framework, but for this kind of custom development I would only use its basic features and would still have to follow the framework's conventions. Writing it myself meant I could define my own rules for the crawler, the processors, and so on, so in the end I wrote my own demo.
Analyzing the requirements first: Python does the crawling, dedecms does the publishing. I started with the publishing side. The options were to simulate a login or to study dedecms's database design and write to the database directly; in practice I did not take the database route. To simulate a login, the dedecms code has to be changed to drop the captcha, since implementing captcha recognition would be pointless here: the target is my own site and I already have the account, password, and publishing permission. So I modified the dedecms login, added a login interface, and analyzed the HTTP packet dedecms sends when publishing an article. Once that was working I designed the crawler, and the final design ended up feeling quite similar to Scrapy's basic processing mechanisms.
The dedecms login interface is built as follows:
后臺目錄下的config.php 34行找到
/**
//check the user's login status
$cuserLogin = new userLogin();
if($cuserLogin->getUserID()==-1)
{
    header("location:login.php?gotopage=".urlencode($dedeNowurl));
    exit();
}
**/
Change it to the following:
//http://127.0.0.2/dede/index.php?username=admin&password=admin
$cuserLogin = new userLogin();
if($cuserLogin->getUserID()==-1)
{
    if($_REQUEST['username'] != '')
    {
        $res = $cuserLogin->checkUser($_REQUEST['username'], $_REQUEST['password']);
        if($res==1) $cuserLogin->keepUser();
    }
    if($cuserLogin->getUserID()==-1)
    {
        header("location:login.php?gotopage=".urlencode($dedeNowurl));
        exit();
    }
}
With this change, simply requesting http://127.0.0.2/dede/index.php?username=admin&password=admin returns a session id (PHPSESSID), and that session id is all that is needed to publish articles.
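A quick way to verify the interface is to hit it with a requests session and check that the dedecms cookies come back; a minimal sketch, assuming the same local test URL and the admin/admin account used above:

#!/usr/bin/python
#coding:utf8
import requests

# Hypothetical quick check of the patched login interface (URL/credentials as above).
sid = requests.session()
login_url = "http://127.0.0.2/dede/index.php?username=admin&password=admin"
header = {"User-Agent": "Mozilla/5.0", "Referer": "http://127.0.0.2"}
res = sid.get(url=login_url, headers=header)

# On success the session now carries the dedecms cookies, including PHPSESSID,
# which later publish requests will reuse.
print(sid.cookies.get_dict())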
The HTTP packet for publishing an article looks like this:
#http://127.0.0.2/dede/article_add.php
POST /dede/article_add.php HTTP/1.1
Host: 127.0.0.2
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://127.0.0.2/dede/article_add.php?cid=2
Cookie: menuitems=1_1%2C2_1%2C3_1; CNZZDATA1254901833=1497342033-1472891946-%7C1473171059; Hm_lvt_a6454d60bf94f1e40b22b89e9f2986ba=1472892122; ENV_GOBACK_URL=%2Fmd5%2Fcontent_list.php%3Farcrank%3D-1%26cid%3D11; lastCid=11; lastCid__ckMd5=2f82387a2b251324; DedeUserID=1; DedeUserID__ckMd5=74be9ff370c4536f; DedeLoginTime=1473174404; DedeLoginTime__ckMd5=b8edc1b5318a3923; hasshown=1; Hm_lpvt_a6454d60bf94f1e40b22b89e9f2986ba=1473173893; PHPSESSID=m2o3k882tln0ttdi964v5aorn6
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Content-Type: multipart/form-data; boundary=---------------------------2802133914041
Content-Length: 3639
-----------------------------2802133914041
Content-Disposition: form-data; name="channelid"
1
-----------------------------2802133914041
Content-Disposition: form-data; name="dopost"
save
-----------------------------2802133914041
Content-Disposition: form-data; name="title"
2222222222
-----------------------------2802133914041
Content-Disposition: form-data; name="shorttitle"
-----------------------------2802133914041
Content-Disposition: form-data; name="redirecturl"
-----------------------------2802133914041
Content-Disposition: form-data; name="tags"
-----------------------------2802133914041
Content-Disposition: form-data; name="weight"
100
-----------------------------2802133914041
Content-Disposition: form-data; name="picname"
-----------------------------2802133914041
Content-Disposition: form-data; name="litpic"; filename=""
Content-Type: application/octet-stream
-----------------------------2802133914041
Content-Disposition: form-data; name="source"
-----------------------------2802133914041
Content-Disposition: form-data; name="writer"
-----------------------------2802133914041
Content-Disposition: form-data; name="typeid"
2
-----------------------------2802133914041
Content-Disposition: form-data; name="typeid2"
-----------------------------2802133914041
Content-Disposition: form-data; name="keywords"
-----------------------------2802133914041
Content-Disposition: form-data; name="autokey"
1
-----------------------------2802133914041
Content-Disposition: form-data; name="description"
-----------------------------2802133914041
Content-Disposition: form-data; name="dede_addonfields"
-----------------------------2802133914041
Content-Disposition: form-data; name="remote"
1
-----------------------------2802133914041
Content-Disposition: form-data; name="autolitpic"
1
-----------------------------2802133914041
Content-Disposition: form-data; name="needwatermark"
1
-----------------------------2802133914041
Content-Disposition: form-data; name="sptype"
hand
-----------------------------2802133914041
Content-Disposition: form-data; name="spsize"
5
-----------------------------2802133914041
Content-Disposition: form-data; name="body"
2222222222
-----------------------------2802133914041
Content-Disposition: form-data; name="voteid"
-----------------------------2802133914041
Content-Disposition: form-data; name="notpost"
0
-----------------------------2802133914041
Content-Disposition: form-data; name="click"
70
-----------------------------2802133914041
Content-Disposition: form-data; name="sortup"
0
-----------------------------2802133914041
Content-Disposition: form-data; name="color"
-----------------------------2802133914041
Content-Disposition: form-data; name="arcrank"
0
-----------------------------2802133914041
Content-Disposition: form-data; name="money"
0
-----------------------------2802133914041
Content-Disposition: form-data; name="pubdate"
2016-09-06 23:07:52
-----------------------------2802133914041
Content-Disposition: form-data; name="ishtml"
1
-----------------------------2802133914041
Content-Disposition: form-data; name="filename"
-----------------------------2802133914041
Content-Disposition: form-data; name="templet"
-----------------------------2802133914041
Content-Disposition: form-data; name="imageField.x"
41
-----------------------------2802133914041
Content-Disposition: form-data; name="imageField.y"
6
-----------------------------2802133914041--
# Request to regenerate the article's HTML
http://127.0.0.2/dede/task_do.php?typeid=2&aid=109&dopost=makeprenext&nextdo=
GET /dede/task_do.php?typeid=2&aid=109&dopost=makeprenext&nextdo= HTTP/1.1
Host: 127.0.0.2
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://127.0.0.2/dede/article_add.php
Cookie: menuitems=1_1%2C2_1%2C3_1; CNZZDATA1254901833=1497342033-1472891946-%7C1473171059; Hm_lvt_a6454d60bf94f1e40b22b89e9f2986ba=1472892122; ENV_GOBACK_URL=%2Fmd5%2Fcontent_list.php%3Farcrank%3D-1%26cid%3D11; lastCid=11; lastCid__ckMd5=2f82387a2b251324; DedeUserID=1; DedeUserID__ckMd5=74be9ff370c4536f; DedeLoginTime=1473174404; DedeLoginTime__ckMd5=b8edc1b5318a3923; hasshown=1; Hm_lpvt_a6454d60bf94f1e40b22b89e9f2986ba=1473173893; PHPSESSID=m2o3k882tln0ttdi964v5aorn6
Connection: keep-alive
Upgrade-Insecure-Requests: 1
From the packet capture above we can derive the following:
POST http://127.0.0.2/dede/article_add.php
Parameters that need to be set:
channelid:1 # normal article submission
dopost:save # submit action
shorttitle:'' # short title
autokey:1 # auto-generate keywords
remote:1 # no thumbnail specified, fetch the remote thumbnail automatically
autolitpic:1 # use the first image as the thumbnail
sptype:auto # automatic pagination
spsize:5 # paginate every 5 KB
notpost:1 # disable comments
sortup:0 # article sort order, default
arcrank:0 # read permission: open browsing
money:0 # coins charged: 0
ishtml:1 # generate HTML
title:"article title" # article title
source:"article source" # article source
writer:"article author" # article author
typeid:"main category ID, e.g. 2" # main category ID
body:"article body" # article body
click:"click count" # article click count
pubdate:"submit time" # submit time
Then I started testing simulated article publishing against dedecms with the following Python code:
#!/usr/bin/python
#coding:utf8
import requests,random,time

# Hit the login interface and keep the cookies in a session
sid = requests.session()
login_url = "http://127.0.0.2/dede/index.php?username=admin&password=admin"
header = {
    "User-Agent":"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0",
    "Referer" :"http://127.0.0.2"
}

# Log in and obtain the cookies
loadcookies = sid.get(url = login_url, headers = header)

# Open the "add article" page (optional check)
#get_html = sid.get('http://127.0.0.2/dede/article_add.php?channelid=1', headers = header)
#print get_html.content

# Fixed fields
article = {
    'channelid':1,       # normal article submission
    'dopost':'save',     # submit action
    'shorttitle':'',     # short title
    'autokey':1,         # auto-generate keywords
    'remote':1,          # no thumbnail specified, fetch the remote thumbnail automatically
    'autolitpic':1,      # use the first image as the thumbnail
    'sptype':'auto',     # automatic pagination
    'spsize':5,          # paginate every 5 KB
    'notpost':1,         # disable comments
    'sortup':0,          # article sort order, default
    'arcrank':0,         # read permission: open browsing
    'money': 0,          # coins charged: 0
    'ishtml':1,          # generate HTML
    'click':random.randint(10, 300),  # random click count
    'pubdate':time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),  # current submit time
}

# Variable fields
article['source'] = "article source"   # article source
article['writer'] = "article author"   # article author
article['typeid'] = "2"                # main category ID

# Publish request URL
article_request = "http://127.0.0.2/dede/article_add.php"

"""
# Test submission
article['title'] = "test_article title"   # article title
article['body'] = "test_article body"     # article body
# After submitting, dedecms redirects and generates the HTML; HTTP status 200 means success!
res = sid.post(url = article_request, data = article, headers = header)
print res
"""

for i in range(50):
    article['title'] = str(i) + "_article title"  # article title
    article['body'] = str(i) + "_article body"    # article body
    #print article
    res = sid.post(url = article_request, data = article, headers = header)
    print res
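The capture above also showed a task_do.php GET request that regenerates the HTML for a given article id (aid), which the test script never issues explicitly. One hedged way to add it, assuming the aid appears in the redirect URL that article_add.php returns (this may differ between dedecms versions), is a helper like this, called right after each sid.post() in the loop:

import re

# Hypothetical helper: pull the new article id out of the redirect chain
# and fire the same task_do.php request seen in the packet capture above.
def regenerate_html(res, typeid = "2"):
    aid = None
    for r in res.history:   # walk the redirects that led to the final response
        m = re.search(r'aid=(\d+)', r.headers.get('Location', ''))
        if m:
            aid = m.group(1)
            break
    if aid:
        make_url = ("http://127.0.0.2/dede/task_do.php?typeid=%s&aid=%s"
                    "&dopost=makeprenext&nextdo=" % (typeid, aid))
        sid.get(url = make_url, headers = header)
    return aid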
Next comes the crawler requirements analysis:
Pages to collect from:
http://www.tunvan.com/col.jsp?id=115
http://www.zhongkerd.com/news.html
http://www.qianxx.com/news/field/
http://www.ifenguo.com/news/xingyexinwen/
http://www.ifenguo.com/news/gongsixinwen/
Each source page and its rewrite rules are different, and the target category for publishing may also vary, so multiple spiders are needed; a single spider cannot cover everything. The project therefore needs spiders, processors, a configuration file, a shared-function file (to avoid duplicating code), and a database file.
The database mainly stores each article's URL and title, which is used to decide whether an article is new: if it has already been collected and published it is skipped; if it is not in the database it is a new article that needs to be recorded and published. One table with a few fields is enough. I used sqlite3 with the database file db.dll, and the table is created as follows:
CREATE TABLE history (
    id    INTEGER PRIMARY KEY ASC AUTOINCREMENT,
    url   VARCHAR( 100 ),
    title TEXT,
    DATE  DATETIME DEFAULT ( ( datetime( 'now', 'localtime' ) ) )
);
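None of the files below creates this table, so db.dll has to be initialized once, either with the sqlite editor tool listed in the project layout or with a one-off script. A minimal sketch of such an init script, assuming the same file name and schema as above:

#!/usr/bin/python
#coding:utf8
import sqlite3

# One-off initialization of db.dll with the history table described above.
conn = sqlite3.connect("db.dll")
cur = conn.cursor()
cur.execute("""
CREATE TABLE IF NOT EXISTS history (
    id    INTEGER PRIMARY KEY ASC AUTOINCREMENT,
    url   VARCHAR( 100 ),
    title TEXT,
    DATE  DATETIME DEFAULT ( ( datetime( 'now', 'localtime' ) ) )
)
""")
conn.commit()
conn.close()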
The project layout is as follows:
│ db.dll                    # sqlite database
│ dede.py                   # tests the dede login interface
│ function.py               # shared functions
│ run.py                    # entry point that starts the spider set
│ settings.py               # spider configuration
│ spiders.py                # example spider
│ sqlitestudio-2.1.5.exe    # sqlite database editor
│ __init__.py               # package init so the code can be used as a module
dede.py:
#!/usr/bin/python
#coding:utf8
import requests,random,time
import lxml

# Site settings
domain = "http://127.0.0.2/"
admin_dir = "dede/"
houtai = domain + admin_dir
username = "admin"
password = "admin"

# Hit the login interface and keep the cookies in a session
sid = requests.session()
login_url = houtai + "index.php?username=" + username + "&password=" + password
header = {
    "User-Agent":"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0",
    "Referer" : domain
}

# Log in and obtain the cookies
loadcookies = sid.get(url = login_url, headers = header)

# Fixed fields
article = {
    'channelid':1,       # normal article submission
    'dopost':'save',     # submit action
    'shorttitle':'',     # short title
    'autokey':1,         # auto-generate keywords
    'remote':1,          # no thumbnail specified, fetch the remote thumbnail automatically
    'autolitpic':1,      # use the first image as the thumbnail
    'sptype':'auto',     # automatic pagination
    'spsize':5,          # paginate every 5 KB
    'notpost':1,         # disable comments
    'sortup':0,          # article sort order, default
    'arcrank':0,         # read permission: open browsing
    'money': 0,          # coins charged: 0
    'ishtml':1,          # generate HTML
    'click':random.randint(10, 300),  # random click count
    'pubdate':time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),  # current submit time
}

# Variable fields
article['source'] = "article source"   # article source
article['writer'] = "article author"   # article author
article['typeid'] = "2"                # main category ID

# Publish request URL
article_request = houtai + "article_add.php"

"""
# Test submission
article['title'] = "11test_article title"   # article title
article['body'] = "11test_article body"     # article body
# After submitting, dedecms redirects and generates the HTML; HTTP status 200 means success!
res = sid.post(url = article_request, data = article, headers = header)
print res
"""

"""
for i in range(50):
    article['title'] = str(i) + "_article title"  # article title
    article['body'] = str(i) + "_article body"    # article body
    #print article
    res = sid.post(url = article_request, data = article, headers = header)
    print res
"""
function.py:
# coding:utf-8
from settings import *

# Check whether the article already exists in the database: 0 = not present, 1 = present
def res_check(article):
    exec_select = "SELECT count(*) FROM history WHERE url = '%s' AND title = '%s' "
    res_check = cur.execute(exec_select % (article[0], article[1]))
    for res in res_check:
        result = res[0]
    return result

# Insert the article into the database
def res_insert(article):
    exec_insert = "INSERT INTO history (url,title) VALUES ('%s','%s')"
    cur.execute(exec_insert % (article[0], article[1]))
    conn.commit()

# Publish an article through the simulated login
def send_article(title, body, typeid = "2"):
    article['title'] = title     # article title
    article['body'] = body       # article body
    article['typeid'] = typeid   # main category ID
    #print article
    # After submitting, dedecms redirects and generates the HTML; HTTP status 200 means success!
    res = sid.post(url = article_request, data = article, headers = header)
    #print res
    if res.status_code == 200:
        #print u"send mail!"
        send_mail(title = title, body = body)
        print u"success article send!"
    else:
        # handle publishing failures here
        pass

# Send a notification mail: send_mail(title, body)
def send_mail(title, body):
    shoujian = "admin@0535code.com"
    # SMTP server, user name, password and mailbox suffix
    mail_user = "610358898"
    mail_pass = "your mailbox password"
    mail_postfix = "qq.com"
    me = mail_user + "<" + mail_user + "@" + mail_postfix + ">"
    msg = MIMEText(body, 'html', 'utf-8')
    msg['Subject'] = title
    #msg['to'] = shoujian
    try:
        mail = smtplib.SMTP()
        mail.connect("smtp.qq.com")   # SMTP server
        mail.login(mail_user, mail_pass)
        mail.sendmail(me, shoujian, msg.as_string())
        mail.close()
        print u"send mail success!"
    except Exception, e:
        print str(e)
        print u"send mail exit!"
run.py:
# -*- coding: utf-8 -*-
import spiders

# Start the first spider
spiders.start()
settings.py:
# coding:utf-8
import re,sys,os,requests,lxml,string,time,random,logging
from bs4 import BeautifulSoup
from lxml import etree
import smtplib
from email.mime.text import MIMEText
import sqlite3
import HTMLParser

# Reset the default encoding (Python 2)
reload(sys)
sys.setdefaultencoding( "utf-8" )

# Current time
#now = time.strftime( '%Y-%m-%d %X',time.localtime())

# Request headers used when crawling
headers = {
    "User-Agent":"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36",
    "Accept":"*/*",
    "Accept-Language":"zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3",
    "Accept-Encoding":"gzip, deflate",
    "Content-Type":"application/x-www-form-urlencoded; charset=UTF-8",
    "Connection":"keep-alive",
    "X-Requested-With":"XMLHttpRequest",
}

# Anchor used to replace outbound hyperlinks
# (renamed from `domain`, which is reassigned below for the dede site)
link_domain = u"<a href='http://010bjsoft.com'>北京軟件外包</a>".decode("string_escape")
html_parser = HTMLParser.HTMLParser()   # HTML entity unescaper

####################################################### dede settings
# Site settings
domain = "http://127.0.0.2/"
admin_dir = "dede/"
houtai = domain + admin_dir
username = "admin"
password = "admin"

# Hit the login interface and keep the cookies in a session
sid = requests.session()
login_url = houtai + "index.php?username=" + username + "&password=" + password
header = {
    "User-Agent":"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0",
    "Referer" : domain
}

# Log in and obtain the cookies
loadcookies = sid.get(url = login_url, headers = header)

# Fixed fields
article = {
    'channelid':1,       # normal article submission
    'dopost':'save',     # submit action
    'shorttitle':'',     # short title
    'autokey':1,         # auto-generate keywords
    'remote':1,          # no thumbnail specified, fetch the remote thumbnail automatically
    'autolitpic':1,      # use the first image as the thumbnail
    'sptype':'auto',     # automatic pagination
    'spsize':5,          # paginate every 5 KB
    'notpost':1,         # disable comments
    'sortup':0,          # article sort order, default
    'arcrank':0,         # read permission: open browsing
    'money': 0,          # coins charged: 0
    'ishtml':1,          # generate HTML
    'click':random.randint(10, 300),  # random click count
    'pubdate':time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),  # current submit time
}

# Variable fields
article['source'] = "article source"   # article source
article['writer'] = "article author"   # article author

# Publish request URL
article_request = houtai + "article_add.php"

####################################################### database settings
# Open the database connection
conn = sqlite3.connect("db.dll")
# Create a cursor
cur = conn.cursor()
spiders.py:
# coding:utf-8
from settings import *
from function import *

# Fetch an article body: article url plus the xpath rule for the content node
def get_content(url = "http://www.zhongkerd.com/news/content-1389.html",
                xpath_rule = "//html/body/div[3]/div/div[2]/div/div[2]/div/div[1]/div/div/dl/dd"):
    html = requests.get(url, headers = headers).content
    tree = etree.HTML(html)
    res = tree.xpath(xpath_rule)[0]
    res_content = etree.tostring(res)                   # convert the node to a string
    res_content = html_parser.unescape(res_content)     # unescape HTML entities
    res_content = res_content.replace('\t','').replace('\n','')   # strip tabs and newlines
    return res_content

# Fetch the list of article urls from the list page
def get_article_list(url = "http://www.zhongkerd.com/news.html"):
    body_html = requests.get(url, headers = headers).content
    #print body_html
    soup = BeautifulSoup(body_html, 'lxml')
    page_div = soup.find_all(name = "a", href = re.compile("content"), class_="w-bloglist-entry-link")
    #print page_div
    list_url = []
    for a in page_div:
        #print a
        #print a.get('href')
        #print a.string
        list_url.append((a.get('href'), a.string))
        #print get_content(a.get('href'))
    #print list_url
    return list_url

# Clean up a collected page
def res_content(url):
    content = get_content(url)
    #print content
    info = re.findall(r'<dd>(.*?)</dd>', content, re.S)[0]   # strip the dd wrapper
    # re.S is set at compile time so (.*?) can match across line breaks
    re_zhushi = re.compile(r'<!--[^>]*-->')                                    # HTML comments
    re_href = re.compile(r'<\s*a[^>]*>[^<](.*?)*<\s*/\s*a\s*>', re.S)          # hyperlinks, to be replaced
    re_js = re.compile(r'<\s*script[^>]*>[^<](.*?)*<\s*/\s*script\s*>', re.S)  # javascript blocks
    re_copyright = re.compile(r'<p\s*align=\"left\">(.*?)</p>', re.S)          # copyright paragraph
    info = re_zhushi.sub('', info)
    info = re_href.sub(link_domain, info)   # link_domain is the replacement anchor from settings.py
    #print content
    #exit()
    info = re_copyright.sub(u"", info)
    info = info.replace(u'\xa0', u' ')      # avoid gbk-to-utf8 output errors
    #print info
    return info

# Process the results
def caiji_result():
    article_list = get_article_list()
    #print article_list
    # Check whether each article is already in the database and write it if not
    for article in article_list:
        #print res_check(article)   # decide whether it needs to be written
        if not res_check(article):
            #print "no"   # not present yet, write it
            res_insert(article)
            # After writing, publish the article
            body = res_content(article[0])
            send_article(title = article[1], body = body)
        else:
            #print "yes"  # already present, skip
            pass

# Entry point for this spider
def start():
    caiji_result()
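spiders.py only covers http://www.zhongkerd.com/news.html; each of the other collection pages listed earlier would get its own spider built from the same pieces. A hedged sketch of what a second spider module might look like (the list URL filter and body xpath for http://www.qianxx.com/news/field/ are hypothetical and would have to be checked against the real pages):

# coding:utf-8
# spiders_qianxx.py -- hypothetical second spider reusing settings.py, function.py
# and the generic get_content() from spiders.py
from settings import *
from function import *
from spiders import get_content

LIST_URL = "http://www.qianxx.com/news/field/"
# Assumption: xpath of the article body on this site; verify against the real HTML.
BODY_XPATH = "//div[@class='news-detail']"

def get_article_list(url = LIST_URL):
    body_html = requests.get(url, headers = headers).content
    soup = BeautifulSoup(body_html, 'lxml')
    list_url = []
    # Assumption: article links contain "/news/" in their href -- adjust the filter per site.
    for a in soup.find_all(name = "a", href = re.compile("/news/")):
        if a.string:
            list_url.append((a.get('href'), a.string))
    return list_url

def start(typeid = "2"):
    for article in get_article_list():
        if not res_check(article):        # skip articles already published
            res_insert(article)
            body = get_content(article[0], xpath_rule = BODY_XPATH)
            send_article(title = article[1], body = body, typeid = typeid)

run.py would then simply import this module alongside spiders and call its start() as well.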
__init__.py is there so the code can be packaged and used as a module.
Now that it is finished, doesn't it look a bit like Scrapy's basic functionality...
Author: 網癡