Commit 4791328
Commit message: update talk details
1 parent a8c0966 commit 4791328

3 files changed
Lines changed: 116 additions & 18 deletions

eventsapp/data/tracks.json

Lines changed: 90 additions & 2 deletions
@@ -6,7 +6,7 @@
     "type": "default",
     "speaker": {
       "name": "PyCon India Registeration Team",
-      "info": "Some Info about workshops here",
+      "info": "Bring your Tickets up to the counters and get registered.",
       "photo": "https://images.yourstory.com/2016/06/registration.jpg?auto=compressedu/ICCPM/ImgBanner1/register.jpg"
     }
   },
@@ -79,7 +79,7 @@
     "description": "Geospatial representation are so prevalent in day to day life, such as even in simple travel related conversation to maps, aerial/satellite images etc. In digital era, geospatial data is extensively produced and consumed in ever growing proportion. Python with its free and open source libraries are giving wide variety yet simple and effective set of tools to visualise and analyse geospatial data. The current workshop is directed for beginners of Python programming language, who have basic understanding on computing and data formats. The primary objective of the workshop is to introduce and give hands on training on selected list of FOSS libraries for geospatial analysis. The workshop as a do it yourself fashion tries to solve two real world problems in Geographical Information System (GIS) and its geospatial data sources.",
     "type": "workshop",
     "speaker": {
-      "name": "Nisha KA",
+      "name": "Nishad KA",
       "info": "I am a research associate at UrbanEmissions.info, PhD in Environmental Sciences from Sálim Ali Centre for Ornithology and Natural History, Bharathiar University, Coimbatore. My thesis was on `Particulate air pollution data for Coimbatore, India: real time monitoring and modeling with data-interoperability measures`, which emphasised need of open and standardised real time data for better management of air pollution in the country. I got exposure into programming and open source software related to my research studies. I used free and open source libraries of Python for most of my studies, especially on geospatial data compilation, analysis and visualization. I have conducted workshop similar to the current one in Kovaipy, Scipy-India 2017.",
       "photo": "https://yt3.ggpht.com/a-/AN66SAxdUNwCVJZ9YuYlGvMBwOTifUEYgUEPONisyA=s900-mo-c-c0xffffffff-rj-k-no",
       "social": {
@@ -192,6 +192,94 @@
       }
     }
   },
+  "15": {
+    "title": "How Helpshift built machine learning platform using Python at large scale",
+    "description": "The purpose of this talk is to describe how helpshift has leveraged python ecosystem to build a machine learning platform without using any third party framework, and how you can build one too.\n\nIn particular, You can learn how to build the following components of machine learning platform using python from this talk.",
+    "type": "talk",
+    "speaker": {
+      "name": "Shyam Shinde",
+      "info": "Hello, I am shyam shinde, actively developing machine learning platform at helpshift.\n\n\nI have diverse experience in developing backend systems, designing and developing system to handle big data.\n\nDeveloped production systems using Java, Clojure and Python. Currently, interested in deploying machine learning services at scale. As side projects, I learn machine learning concepts and try to implement them.\n\nApart from that, I like trekking, reading books and watching movies.",
+      "photo": "https://yt3.ggpht.com/a-/AN66SAxdUNwCVJZ9YuYlGvMBwOTifUEYgUEPONisyA=s900-mo-c-c0xffffffff-rj-k-no",
+      "social": {
+        "linkedin": "http://linkedin.com/in/shyam-shinde-4a867224",
+        "github": "https://github.com/shindesh1",
+        "proposal": "https://in.pycon.org/cfp/2018/proposals/how-helpshift-built-machine-learning-platform-using-python-at-large-scale~e0mLa/"
+      }
+    }
+  },
+  "16": {
+    "title": "Large scale web crawling using Python",
+    "description": "The purpose of this talk is to describe how helpshift has leveraged python ecosystem to build a machine learning platform without using any third party framework, and how you can build one too.\n\nIn particular, You can learn how to build the following components of machine learning platform using python from this talk.",
+    "type": "talk",
+    "speaker": {
+      "name": "Anand B Pillai & Noufal Ibrahim",
+      "info": "Anand B Pillai is a technology professional with 20 years of software development, design and architecture. He has worked in a number of companies over the years in fields ranging from Security, Search Engines, Large Scale Web Portals and Big Data. He is the founder of the Bangalore Python User's Group and the author of Software Architecture with Python (PacktPub, April 2017). Anand has a lot of experience in web-crawling having written the original Python web-crawler HarvestMan in 2005 and developing a number of custom crawlers for various startups solving various problems. Anand is an Independent Software Professional.\n\nNoufal Ibrahim is the CEO and Founder of Hamon Technologies at Calicut, Kerala. He was key to starting the very first PyCon India conference in 2009 and has since been involved in the conference closely throughout the years. Noufal was the keynote speaker of PyCon India 2017. Noufal has made a name not just by his Python community activities, but also by his creative Python introductory talks he has conducted in various universities and institutions in Kerala. He is also a professional trainer in Python and git.\n\nBoth Noufal and Anand are Fellows of the Python Software Foundation (PSF).",
+      "photo": "http://paste.opensuse.org/view/raw/10686651",
+      "social": {
+        "twitter": "https://twitter.com/skeptichacker",
+        "website": "http://hamon.in/",
+        "proposal": "https://in.pycon.org/cfp/2018/proposals/large-scale-web-crawling-using-python~bkl6d/"
+      }
+    }
+  },
+  "17": {
+    "title": "Creating 3rd generation Web APIs using Hydra and Hydrus",
+    "description": "3rd generation Web APIs enables the creation of truly RESTful services with all its benefits in terms of scalability, maintainability, and evolvability. This allows creating Generic Consoles and loosely coupled clients. The main objective of this talk is to provide an overview of Hydra and Hydrus and how we can create such APIs using Hydrus.",
+    "type": "talk",
+    "speaker": {
+      "name": "Akshay Dahiya",
+      "info": "Hi, my name is Akshay Dahiya. I'm a Mentor and Organization Admin for Python Hydra in Google Summer of Code 2018 and I love working on Semantic Web and Artificial Intelligence-related projects.\nI also mentor a couple of students across various Udacity Nanodegree programs (FullStack Nanodegree, React Nanodegree and Deep Learning Nanodegree) in my free time.",
+      "photo": "https://media.licdn.com/dms/image/C4E03AQHXk2hEjl0z0w/profile-displayphoto-shrink_200_200/0?e=1544054400&v=beta&t=3wVnzwg76YwPGXOXGPXVQsNd2fZn8yQr7h_ZoOQkEcg",
+      "social": {
+        "github": "https://github.com/xadahiya/",
+        "website": "http://www.xadahiya.me/",
+        "linkedin": "https://www.linkedin.com/in/xadahiya/",
+        "proposal": "https://in.pycon.org/cfp/2018/proposals/creating-3rd-generation-web-apis-using-hydra-and-hydrus~dBpYa/"
+      }
+    }
+  },
+  "18": {
+    "title": "Python Project Workflows - Continuous Deployment Friendly",
+    "description": "Have conflicting dependencies (unpleasantly) surprised you? (Darn: It worked on my laptop!)\nDo deterministic builds matter?\nWhat about those run-time errors, which were a typo while accessing an attribute of a class?\nHas the codebase already started smelling a bit?\nUnit tests and what about Dockerization?\nTypically, when your Python project grows beyond a few modules and your team size is more than a couple of developers, having the right tools built into your project development workflow saves one from a lot of surprises (and perhaps late night calls). In this talk, we start with challenges typically seen in Python Projects and look at ways of overcoming them, so that the velocity of code deployment increases. Specifically we are going to be looking at tools that are out there that allow you to -",
+    "type": "talk",
+    "speaker": {
+      "name": "Abhijit Gadgil",
+      "info": "Running a Consulting Company 'hyphenOs Software Labs' in Pune, India.\n\nPython/Go programmer - Mostly for things that pay the bills and ideas that I want to try out.\n\nDatacenter Networking Enthusiast (hacking a yet another Container Networking technology, borrowing ideas from different Projects)\nEternally grateful to whoever wrote tcpdump and the new Wireshark. Number of problems solved using these tools could run into triple digits.\nHates trailing white spaces in a file.",
+      "photo": "https://avatars3.githubusercontent.com/u/387214?s=400&v=4",
+      "social": {
+        "github": "https://github.com/gabhijit",
+        "linkedin": "https://www.linkedin.com/in/amgadgil/",
+        "proposal": "https://in.pycon.org/cfp/2018/proposals/python-project-workflows-continuous-deployment-friendly~bq8ya/"
+      }
+    }
+  },
+  "19": {
+    "title": "Sponsor Talk - DEShaw\nPyflyby: Automatic imports for Python",
+    "description": "",
+    "type": "talk",
+    "speaker": {
+      "name": "Karl Chen",
+      "info": "Vice President at D. E. Shaw & Co. New York, New York",
+      "photo": "https://media.licdn.com/dms/image/C4E03AQH_W9JeSHICAQ/profile-displayphoto-shrink_800_800/0?e=1544054400&v=beta&t=6dNjlOAuCvHsIuryGeQFUlcLCA8FP2Mv72H-jfmvYv8",
+      "social": {
+        "linkedin": "https://www.linkedin.com/in/quarl/"
+      }
+    }
+  },
+  "20": {
+    "title": "Language Model (Text Analysis) using Python from scratch",
+    "description": "What is Language Model?\n\nLanguage Model is basically a way to determine how likely a certain sentence is in the language. 'You are reading my LM write up now ' is more likely to be said than `Now you are my LM reading write up`, even though both sentences contain only correct English words; and the sentence `I had ice-cream with a` is more likely to end with `spoon` than with `banana`. LM helps impart this understanding of a language to machines.\n\nWhat’s the need? `Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination.` (Albert Einstein)",
+    "type": "talk",
+    "speaker": {
+      "name": "Divya Choudhary",
+      "info": "Data Scientist with ~4 years of experience. For more info, please pay a visit to my LinkedIn.",
+      "photo": "https://media.licdn.com/dms/image/C5603AQFbbWt9_kRVzw/profile-displayphoto-shrink_800_800/0?e=1544054400&v=beta&t=Yl14rl2E9CNpBtgC0YLdfO7w6iWc6l5k3Iroo2xNyUY",
+      "social": {
+        "linkedin": "https://www.linkedin.com/in/divyachoudhary28/",
+        "proposal": "https://in.pycon.org/cfp/2018/proposals/python-project-workflows-continuous-deployment-friendly~bq8ya/"
+      }
+    }
+  },
   "34": {
     "title": "Keynote",
     "description": "",
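All of the new entries follow the speaker schema already used by the older tracks, so the app can read them with the same code path. A minimal sketch (not the app's actual loading code; the inline JSON below is a trimmed stand-in for `eventsapp/data/tracks.json`):

```python
import json

# Two of the entries added by this commit, reduced to a few fields.
# In the app, this JSON would come from eventsapp/data/tracks.json.
tracks_json = '''
{
  "15": {"title": "How Helpshift built machine learning platform using Python at large scale",
         "type": "talk",
         "speaker": {"name": "Shyam Shinde"}},
  "17": {"title": "Creating 3rd generation Web APIs using Hydra and Hydrus",
         "type": "talk",
         "speaker": {"name": "Akshay Dahiya"}}
}
'''

tracks = json.loads(tracks_json)
for key in sorted(tracks, key=int):
    entry = tracks[key]
    # Every entry exposes the same shape: title, type, and a speaker object.
    print(key, entry["type"], "-", entry["title"], "by", entry["speaker"]["name"])
```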

eventsapp/main.py

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@
 # This way you avoid first loading kivy default images and .kv then
 # loading your data files on top.
 os.environ['KIVY_DATA_DIR'] = abspath(dirname(__file__)) + '/data'
-os.environ["PYCONF_OFFLINE_MODE"] = "1"
+#os.environ["PYCONF_OFFLINE_MODE"] = "1"

 # import App this is the main Class that manages UI's event loop
 from kivy.app import App
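Commenting out `PYCONF_OFFLINE_MODE` re-enables remote fetching, because `get_data` in `eventsapp/network/__init__.py` only skips the network when the variable equals `"1"`. A small sketch of that check (the helper name `offline_mode` is mine, not the app's):

```python
import os

def offline_mode():
    # Mirrors the check in get_data(): offline mode is active only when
    # the variable is set to exactly "1".
    return os.environ.get("PYCONF_OFFLINE_MODE", None) == '1'

os.environ["PYCONF_OFFLINE_MODE"] = "1"   # the old, uncommented line in main.py
was_offline = offline_mode()              # network fetch skipped

del os.environ["PYCONF_OFFLINE_MODE"]     # the commented-out line leaves it unset
is_offline = offline_mode()               # remote fetch is triggered again
```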

eventsapp/network/__init__.py

Lines changed: 25 additions & 15 deletions
@@ -5,6 +5,7 @@
 import os
 import json
 import time
+from functools import partial

 app = App.get_running_app()

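The new `functools.partial` import supports the callback rework below: the pre-bound arguments (`oldata`, `endpoint`) move to the front of each handler's signature, and `partial` fixes them so the resulting callable accepts just the trailing arguments supplied by the HTTP layer. A self-contained sketch with stand-in values (not the app's real request objects):

```python
from functools import partial

calls = []

def on_success(oldata, endpoint, req, result):
    # Matches the reordered signature in this commit: pre-bound data first,
    # then the two arguments the caller (Kivy's UrlRequest) supplies.
    calls.append((oldata, endpoint, req, result))

# Pre-bind oldata and endpoint, as the commit does when constructing UrlRequest.
cb = partial(on_success, '{"old": true}', 'tracks')

# The HTTP layer later invokes the callback with only two arguments.
cb('<fake request>', '<fake result>')
```

Compared to the lambdas it replaces, `partial` binds the current loop values eagerly, which matters once the fetch runs inside a `for` loop: a `lambda` closing over `endpoint` would see the loop variable's final value by the time the callback fires.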
@@ -22,8 +23,9 @@ def write_oldata(fpath, data):
         f.write(data)


-def on_success(req, oldata, endpoint):
+def on_success(oldata, endpoint, req, bl):
     # got new data, update the schedule
+    print 'success', endpoint
     ndata = None
     with open(req.file_path) as f:
         ndata = f.read()
@@ -40,7 +42,8 @@ def on_success(req, oldata, endpoint):
            'sponsors': 'screensponsor',
            'about': 'screenabout',
            'venue': 'screenvenue',
-           'community': 'screencommunity'}[endpoint]
+           'community': 'screencommunity',
+           'event': 'screenschedule'}[endpoint]
     getattr(app, scr).on_enter(onsuccess=True)

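The dictionary above dispatches each refreshed endpoint to its screen attribute on the running app, and the commit adds the `'event'` → `'screenschedule'` mapping so schedule data refreshes too. A standalone sketch of the pattern (the `FakeApp`/`FakeScreen` classes are stand-ins, not Kivy objects):

```python
class FakeScreen:
    """Stand-in for a Kivy screen with an on_enter hook."""
    def __init__(self):
        self.entered = False

    def on_enter(self, onsuccess=False):
        self.entered = onsuccess

class FakeApp:
    pass

app = FakeApp()
app.screenschedule = FakeScreen()

# Endpoint-to-screen dispatch as in on_success; 'event' is the new mapping.
scr = {'sponsors': 'screensponsor',
       'about': 'screenabout',
       'venue': 'screenvenue',
       'community': 'screencommunity',
       'event': 'screenschedule'}['event']

# Look up the screen by attribute name and ask it to re-render.
getattr(app, scr).on_enter(onsuccess=True)
```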
@@ -55,27 +58,34 @@ def _check_data(req, oldata):
         write_oldata(req.file_path, oldata)


-def on_failure(req, oldata, endpoint):
+def on_failure(oldata, endpoint, req, bl):
+    print 'failure', endpoint, req.file_path
     _check_data(req, oldata)


-def on_error(req, oldata, endpoint):
+def on_error(oldata, endpoint, req, bl):
+    print 'error', endpoint, req.file_path
     _check_data(req, oldata)


 def fetch_remote_data(dt):
     '''Fetch remote data from the endpoint
     '''
-    endpoint, filepath, oldata = fetch_remote_data._args
-    UrlRequest(
-        #FIXME: initial url should be abstracted out too.
-        'https://raw.githubusercontent.com/pythonindia/PyCon_Mobile_App/\
-blob/master/eventsapp/data/' + endpoint + '.json',
-        file_path=filepath,
-        on_success=lambda req, r2: on_success(req, oldata, endpoint),
-        on_error=lambda req, r2: on_error(req, oldata, endpoint),
-        on_failure=lambda req, r2: on_failure(req, oldata, endpoint),
-        timeout=15)
+    for args in fetch_remote_data._args:
+        endpoint, filepath, oldata = args
+        print 'fetch', endpoint, filepath
+        UrlRequest(
+            #FIXME: initial url should be abstracted out too.
+            'https://raw.githubusercontent.com/pythonindia/'
+            'PyCon-Mobile-App/master/eventsapp/data/{}.json'.format(endpoint),
+            file_path=filepath,
+            on_success=partial(on_success, oldata, endpoint),
+            on_error=partial(on_error, oldata, endpoint),
+            on_failure=partial(on_failure, oldata, endpoint),
+            timeout=15)
+    fetch_remote_data._args = []
+
+fetch_remote_data._args = []

 trigger_fetch_remote_data = Clock.create_trigger(fetch_remote_data, 9)
 '''Trigger fetching of data only once every 9 seconds
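The rewritten `fetch_remote_data` drains a queue of `(endpoint, filepath, oldata)` entries instead of handling a single pending request, then resets the queue so the next trigger starts empty. The pattern can be sketched without Kivy (the `started` list stands in for issuing `UrlRequest`s):

```python
started = []

def fetch_remote_data(dt=None):
    # Drain every queued request, as the commit's version does.
    for endpoint, filepath, oldata in fetch_remote_data._args:
        started.append(endpoint)        # stands in for issuing a UrlRequest
    fetch_remote_data._args = []        # reset the queue after dispatching

fetch_remote_data._args = []

# Several get_data() calls can queue work before the 9-second trigger fires:
fetch_remote_data._args.append(['tracks', '/tmp/tracks.json', '{}'])
fetch_remote_data._args.append(['event', '/tmp/event.json', '{}'])

fetch_remote_data(0)                    # the Clock trigger would call this
```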
@@ -93,7 +103,7 @@ def get_data(endpoint, onsuccess=False):
     if os.environ.get("PYCONF_OFFLINE_MODE", None) == '1':
         onsuccess = True
     if not onsuccess:
-        fetch_remote_data._args = endpoint, filepath, oldata
+        fetch_remote_data._args.append([endpoint, filepath, oldata])
         trigger_fetch_remote_data()

     jsondata = json.loads(oldata)
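The switch from tuple assignment to `append` matters because the trigger fires at most once every 9 seconds: with assignment, a second `get_data` call before the trigger fired would silently overwrite the first pending fetch. A minimal illustration (variable names are stand-ins, not the app's):

```python
# Old behaviour: each call overwrote the single pending request.
pending = None
for endpoint in ('tracks', 'event'):
    pending = (endpoint, '/tmp/{}.json'.format(endpoint), '{}')
# Only the last request ('event') survives; 'tracks' is never fetched.

# New behaviour: every request is queued and all of them get fetched.
queue = []
for endpoint in ('tracks', 'event'):
    queue.append([endpoint, '/tmp/{}.json'.format(endpoint), '{}'])
```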
