Personal Assignment: Implementing Top-Conference Hot Words (Stage One)
After setting up the environment, the first step is to collect the data: a Python web scraper fetches the papers and stores them in a database.
Here is a small warm-up example:
import requests

# Warm-up: download a single image and save it to disk
url = 'https://tse1-mm.cn.bing.net/th/id/OIP-C.RxlEiBrRi5qLhxXqa88TNwHaMV?w=190&h=317&c=7&r=0&o=5&dpr=1.25&pid=1.7'
resp = requests.get(url)
with open('小姐姐.png', 'wb') as f:
    f.write(resp.content)
And this is the actual data collection:
import requests
import pymysql
from bs4 import BeautifulSoup

# Connect to the local MySQL database that will store the papers
db = pymysql.connect(host='localhost',
                     user='root',
                     password='your_password',   # your database password
                     db='your_database',         # the database you use
                     charset='utf8')
cursor = db.cursor()

headers = {
    "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
}

# Listing page of ECCV 2018 papers on the CVF open access site
url = "https://openaccess.thecvf.com/ECCV2018.py"
html = requests.get(url, headers=headers)
soup = BeautifulSoup(html.content, 'html.parser')

# Each paper entry carries an <a> link whose text is "pdf"
pdfs = soup.find_all(name="a", string="pdf")

lis = []
for i, pdf in enumerate(pdfs):
    # e.g. ".../Some_Title_ECCV_2018_paper.pdf" -> "Some_Title"
    pdf_name = pdf["href"].split('/')[-1]
    name = pdf_name.split('.')[0].replace("_ECCV_2018_paper", "")
    # Abstract page of the paper on the same site
    link = "https://openaccess.thecvf.com/content_ECCV_2018/html/" + name + "_ECCV_2018_paper.html"
    html1 = requests.get(link, headers=headers)
    soup1 = BeautifulSoup(html1.content, 'html.parser')

    # The abstract lives in <div id="abstract">
    jianjie = ""
    weizhi = soup1.find('div', attrs={'id': 'abstract'})
    if weizhi:
        jianjie = weizhi.get_text()
    print("Processing record " + str(i))

    # Use the words of the title (separated by "_") as keywords
    keywords = ','.join(str(name).split('_'))

    info = {}
    info['title'] = name
    info['link'] = link
    info['abstract'] = jianjie
    info['keywords'] = keywords
    lis.append(info)

# Insert every collected record into table `lun`
for item in lis:
    cols = ", ".join('`{}`'.format(k) for k in item.keys())        # `title`, `link`, `abstract`, `keywords`
    val_cols = ', '.join('%({})s'.format(k) for k in item.keys())  # %(title)s, %(link)s, ...
    sql = "insert into lun(%s) values(%s)" % (cols, val_cols)
    cursor.execute(sql, item)  # the dictionary supplies the values
db.commit()
db.close()
print("ok")
This takes quite a while to run, so be patient.
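The insert statements above assume that a table named `lun` already exists. The original post does not show its schema, so the sketch below is only one possible layout: the column names follow the dictionary keys used by the crawler, while the types and the auto-increment id are my own assumption.

import pymysql

# Create the target table once, before running the crawler.
# Column names match the dict keys (title, link, abstract, keywords);
# the column types and the id column are assumptions.
db = pymysql.connect(host='localhost', user='root',
                     password='your_password', db='your_database', charset='utf8')
with db.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS lun (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            title    VARCHAR(255),
            link     VARCHAR(255),
            abstract TEXT,
            keywords VARCHAR(512)
        ) DEFAULT CHARSET = utf8
    """)
db.commit()
db.close()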
Once the data is collected, it needs to be queried. I use a fuzzy (LIKE) query here, and since I am not very familiar with Ajax, the results are rendered with JSP.
Idea: run the fuzzy query through a JdbcTemplate backed by the Druid connection pool, collect the results into a list, and store the list in the request scope.
// DAO method: fuzzy (LIKE) search on title and keywords.
// `template` is a Spring JdbcTemplate backed by the Druid connection pool.
// Placeholders are used instead of string concatenation to avoid SQL injection.
public List<Lunwen> findAll2(String title, String keywords) {
    String sql = "select * from lun where title like ? and keywords like ?";
    return template.query(sql, new BeanPropertyRowMapper<>(Lunwen.class),
            "%" + title + "%", "%" + keywords + "%");
}
// In the servlet: read the search parameters, run the query, and forward to the JSP.
String title = request.getParameter("title");
String keywords = request.getParameter("keywords");
findService service = new findServiceImpl();
List<Lunwen> lunwens = service.findAll2(title, keywords);
request.setAttribute("lunwens", lunwens);
request.getRequestDispatcher("/List.jsp").forward(request, response);
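List.jsp itself is not shown in this post; the sketch below is one way the `lunwens` attribute could be rendered with JSTL. The table layout and the bean property names (title, keywords, link) are assumptions based on the columns queried above.

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<table border="1">
    <tr><th>Title</th><th>Keywords</th><th>Link</th></tr>
    <%-- Iterate over the list placed in the request scope by the servlet --%>
    <c:forEach items="${lunwens}" var="lw">
        <tr>
            <td>${lw.title}</td>
            <td>${lw.keywords}</td>
            <td><a href="${lw.link}">abstract</a></td>
        </tr>
    </c:forEach>
</table>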
That wraps up stage one. The next post covers stage two: generating the word cloud.