Hyperopt with SparkTrials will automatically track trials in MLflow. To view the MLflow experiment associated with the notebook, click the 'Runs' icon in the notebook context bar on the upper right. There, you can view all runs.
To view logs from trials, please check the Spark executor logs. To view executor logs, expand 'Spark Jobs' above until you see the (i) icon next to the stage from the trial job. Click it and find the list of tasks; Task 0 is the first trial attempt, and subsequent Tasks are retries. Click the 'stderr' link for a task to view trial logs.
100%|██████████| 80/80 [02:15<00:00, 1.91s/trial, best loss: 3.540445199381716]
Total Trials: 80: 80 succeeded, 0 failed, 0 cancelled.
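The progress output above comes from a hyperparameter search driven by Hyperopt's fmin running under a SparkTrials backend. As a rough sketch of the kind of call that produces it (the objective function, search space, and parallelism shown here are illustrative assumptions, not the notebook's actual tuning code):

```python
from hyperopt import fmin, tpe, hp, SparkTrials, STATUS_OK

def objective(params):
    # Hypothetical objective: in practice this would train a model with
    # `params` and return a validation loss; a toy quadratic stands in here.
    loss = (params["alpha"] - 0.1) ** 2 + 0.01 * params["max_depth"]
    return {"loss": loss, "status": STATUS_OK}

# Assumed search space for illustration only.
search_space = {
    "alpha": hp.loguniform("alpha", -5, 0),
    "max_depth": hp.quniform("max_depth", 2, 10, 1),
}

# SparkTrials distributes trial evaluations across Spark executors.
spark_trials = SparkTrials(parallelism=4)

best_params = fmin(
    fn=objective,
    space=search_space,
    algo=tpe.suggest,
    max_evals=80,          # matches the 80 trials shown above
    trials=spark_trials,
)
```

Because the trials object is a SparkTrials instance, each evaluation of the objective runs as a Spark task on an executor, which is why individual trial logs appear in the executor stderr rather than in the notebook output.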
Calculating the Probability of Future Customer Engagement
In non-subscription retail models, customers come and go with no long-term commitments, making it very difficult to determine whether a given customer will return in the future. Estimating the probability that a customer will re-engage is critical to the design of effective marketing campaigns: customers who have likely lapsed may need different messaging and promotions to draw them back to our stores, while engaged customers may be more responsive to marketing that encourages them to expand the breadth and scale of their purchases with us. Understanding where each customer falls with regard to the probability of future engagement is essential to tailoring our marketing efforts to them.
The Buy 'til You Die (BTYD) models popularized by Peter Fader and others leverage two basic customer metrics, namely the recency of a customer's last engagement and the frequency of repeat transactions over a customer's lifetime, to derive a probability of future re-engagement. This is done by fitting customer history to curves describing the distribution of purchase frequencies and of engagement drop-off following a prior purchase. The math behind these models is fairly complex, but thankfully it has been encapsulated in the lifetimes library, making it much easier for traditional enterprises to employ. The purpose of this notebook is to examine how these models may be applied to customer transaction history and how they may be deployed for integration into marketing processes.
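As a rough sketch of how the lifetimes library is typically exercised for this kind of analysis (shown here on the sample CDNow summary data bundled with the library rather than this notebook's own customer transaction history):

```python
from lifetimes import BetaGeoFitter
from lifetimes.datasets import load_cdnow_summary

# Bundled CDNow sample with the frequency / recency / T summary that BTYD
# models consume; a real workload would build this from its own transaction
# log (e.g. with lifetimes.utils.summary_data_from_transaction_data).
summary = load_cdnow_summary(index_col=[0])

# Fit the BG/NBD curves describing purchase frequency and engagement drop-off.
bgf = BetaGeoFitter(penalizer_coef=0.001)
bgf.fit(summary["frequency"], summary["recency"], summary["T"])

# Probability that each customer is still "alive", i.e. will re-engage.
summary["p_alive"] = bgf.conditional_probability_alive(
    summary["frequency"], summary["recency"], summary["T"])

# Expected number of purchases from each customer over the next 90 days
# (the 90-day horizon is an arbitrary illustrative choice).
summary["pred_90d"] = bgf.conditional_expected_number_of_purchases_up_to_time(
    90, summary["frequency"], summary["recency"], summary["T"])

print(summary.sort_values("p_alive", ascending=False).head())
```

The p_alive column is the per-customer probability of future re-engagement discussed above, which is what downstream marketing processes would consume.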