examples/api_example.ipynb: 9 additions & 9 deletions
@@ -64,7 +64,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "There is a function that formats search API rules into valid json queries called `gen_rule_payload`. It has sensible defaults, such as pulling more tweets per call than the default 100 (but note that a sandbox environment can only have a max of 100 here, so if you get errors, please check this) not including dates, and defaulting to hourly counts when using the counts api. Discussing the finer points of generating search rules is out of scope for these examples; I encourage you to see the docs to learn the nuances within, but for now let's see what a rule looks like."
+ "There is a function that formats search API rules into valid json queries called `gen_rule_payload`. It has sensible defaults, such as pulling more Tweets per call than the default 100 (but note that a sandbox environment can only have a max of 100 here, so if you get errors, please check this) not including dates, and defaulting to hourly counts when using the counts api. Discussing the finer points of generating search rules is out of scope for these examples; I encourage you to see the docs to learn the nuances within, but for now let's see what a rule looks like."
  ]
 },
 {
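The `gen_rule_payload` helper described in this cell can be illustrated with a minimal sketch. The function name and payload keys below (`query`, `maxResults`, `fromDate`, `toDate`, `bucket`) mirror the premium Search API request shape as an assumption; this is not the library's actual implementation.

```python
import json

# Hypothetical sketch of what gen_rule_payload produces: a JSON string
# holding the rule plus optional date and count parameters. Key names
# are an assumption based on the premium Search API request format.
def make_rule_payload(rule, results_per_call=500, from_date=None,
                      to_date=None, count_bucket=None):
    payload = {"query": rule, "maxResults": results_per_call}
    if from_date:
        payload["fromDate"] = from_date
    if to_date:
        payload["toDate"] = to_date
    if count_bucket:
        # counts requests take a bucket size instead of a result cap
        payload["bucket"] = count_bucket
        del payload["maxResults"]
    return json.dumps(payload)

print(make_rule_payload("beyonce", results_per_call=100))
```

Note how the counts variant swaps `maxResults` for a `bucket`; the real helper applies the same kind of defaulting described in the cell above.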
@@ -96,12 +96,12 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "From this point, there are two ways to interact with the API. There is a quick method to collect smaller amounts of tweets to memory that requires less thought and knowledge, and interaction with the `ResultStream` object which will be introduced later.\n",
+ "From this point, there are two ways to interact with the API. There is a quick method to collect smaller amounts of Tweets to memory that requires less thought and knowledge, and interaction with the `ResultStream` object which will be introduced later.\n",
  "\n",
  "\n",
  "## Fast Way\n",
  "\n",
- "We'll use the `search_args` variable to power the configuration point for the API. The object also takes a valid PowerTrack rule and has options to cutoff search when hitting limits on both number of tweets and API calls.\n",
+ "We'll use the `search_args` variable to power the configuration point for the API. The object also takes a valid PowerTrack rule and has options to cutoff search when hitting limits on both number of Tweets and API calls.\n",
  "\n",
  "We'll be using the `collect_results` function, which has three parameters.\n",
  "\n",
@@ -144,7 +144,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "By default, tweet payloads are lazily parsed into a `Tweet` object. An overwhelming number of tweet attributes are made available directly, as such:"
+ "By default, Tweet payloads are lazily parsed into a `Tweet` [object](https://twitterdev.github.io/tweet_parser/). An overwhelming number of Tweet attributes are made available directly, as such:"
  ]
 },
 {
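"Lazily parsed" here means attributes are derived from the raw payload only when accessed. A toy sketch of that pattern, not the Tweet Parser's actual class, with hypothetical field names:

```python
# Sketch of lazy payload parsing: the wrapper keeps the raw JSON dict
# and computes each attribute on access via properties, rather than
# parsing every field up front.
class LazyTweet(dict):
    @property
    def text(self):
        return self.get("text", "")

    @property
    def screen_name(self):
        # nested lookup happens only when the attribute is read
        return self.get("user", {}).get("screen_name")

t = LazyTweet({"text": "hi", "user": {"screen_name": "example"}})
print(t.screen_name)  # example
```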
@@ -241,7 +241,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "Voila, we have some tweets. For interactive environments and other cases where you don't care about collecting your data in a single load or don't need to operate on the stream of tweets or counts directly, I recommend using this convenience function.\n",
+ "Voila, we have some Tweets. For interactive environments and other cases where you don't care about collecting your data in a single load or don't need to operate on the stream of Tweets or counts directly, I recommend using this convenience function.\n",
  "\n",
  "\n",
  "## Working with the ResultStream\n",
@@ -285,7 +285,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "There is a function, `.stream`, that seamlessly handles requests and pagination for a given query. It returns a generator, and to grab our 500 tweets that mention `beyonce` we can do this:"
+ "There is a function, `.stream`, that seamlessly handles requests and pagination for a given query. It returns a generator, and to grab our 500 Tweets that mention `beyonce` we can do this:"
  ]
 },
 {
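The request-and-pagination loop that `.stream` hides can be sketched as a generator that yields results and follows a continuation token. `fetch_page` and the `"next"` key below are stand-ins for the real HTTP call and pagination token, assumed for illustration.

```python
# Sketch of generator-based pagination: yield each result from the
# current page, then follow the "next" token until the cap is hit or
# the pages run out.
def stream(fetch_page, max_results=500):
    token, served = None, 0
    while served < max_results:
        page = fetch_page(token)
        for item in page["results"]:
            yield item
            served += 1
            if served >= max_results:
                return
        token = page.get("next")
        if token is None:  # no more pages
            return

# two fake pages linked by a "next" token
pages = {None: {"results": [1, 2], "next": "t1"},
         "t1": {"results": [3, 4]}}
print(list(stream(pages.__getitem__, max_results=3)))  # [1, 2, 3]
```

Because it is a generator, consumers can stop early (as `max_results` does here) without fetching pages they never need.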
@@ -303,7 +303,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "Tweets are lazily parsed using our Tweet Parser, so tweet data is very easily extractable."
+ "Tweets are lazily parsed using our [Tweet Parser](https://twitterdev.github.io/tweet_parser/), so tweet data is very easily extractable."
  ]
 },
 {
@@ -341,9 +341,9 @@
  "source": [
  "## Counts Endpoint\n",
  "\n",
- "We can also use the Search API Counts endpoint to get counts of tweets that match our rule. Each request will return up to *30* results, and each count request can be done on a minutely, hourly, or daily basis. The underlying `ResultStream` object will handle converting your endpoint to the count endpoint, and you have to specify the `count_bucket` argument when making a rule to use it.\n",
+ "We can also use the Search API Counts endpoint to get counts of Tweets that match our rule. Each request will return up to *30* results, and each count request can be done on a minutely, hourly, or daily basis. The underlying `ResultStream` object will handle converting your endpoint to the count endpoint, and you have to specify the `count_bucket` argument when making a rule to use it.\n",
  "\n",
- "The process is very similar to grabbing tweets, but has some minor differences.\n",
+ "The process is very similar to grabbing Tweets, but has some minor differences.\n",
  "\n",
  "\n",
  "_Caveat - premium sandbox environments do NOT have access to the Search API counts endpoint._"
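A counts request returns per-bucket totals rather than Tweet payloads. The response shape below (a `results` list of `timePeriod`/`count` buckets) follows the premium Search API counts format as an assumption; the data values are fabricated for illustration.

```python
# Sketch of a counts-endpoint response and a simple aggregation over
# its hourly buckets. The field names are an assumption about the
# premium counts response shape; the numbers are made up.
response = {"results": [
    {"timePeriod": "201801010000", "count": 32},
    {"timePeriod": "201801010100", "count": 45},
]}

total = sum(bucket["count"] for bucket in response["results"])
print(total)  # 77
```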