Beer and Diapers: Association Rules Explained in One Article


When it comes to big-data applications in retail, one classic marketing story is hard to skip: beer and diapers. In a Walmart store in the United States, beer and diapers were displayed on the shelf together, and, oddly enough, this move increased the sales of both. The story has been retold by marketers ever since.
The popular explanation is that when men went to the supermarket to buy diapers, they grabbed some beer on the way. But from a data and algorithmic point of view, why does putting beer and diapers together increase sales?

1. Association Rules

Behind the beer-and-diapers story is the association rule algorithm. Association analysis involves three metrics:

1. Support
Support is the probability that the itemset {X, Y} appears in the full transaction set, i.e. the probability that X and Y occur together among all transactions I:

          Support(X→Y) = P(X∩Y) = num(X∩Y) / num(I)

where I is the full transaction set and num(·) counts the transactions that contain a given itemset.

2. Confidence
Confidence is the probability that Y occurs given that the antecedent X has occurred, i.e. how reliably the rule "X→Y" holds: the number of transactions containing both X and Y, divided by the number containing X:

           Confidence(X→Y) = P(Y|X) = P(X∩Y) / P(X) = num(X∩Y) / num(X)

3. Lift
Lift is the probability of Y given X, divided by the unconditional probability of Y. It measures the correlation between X and Y in the rule: lift > 1 means positive correlation (the larger, the stronger), lift < 1 means negative correlation (the smaller, the stronger), and lift = 1 means X and Y are independent.

           Lift(X→Y) = P(Y|X) / P(Y)

Here is a concrete example of the three metrics:

Suppose there are 100 orders in total (I); 10 orders contain milk (A), 20 orders contain bread (B), and 5 orders contain both milk and bread:
Support(A→B) = num(A∩B) / num(I) = 5/100 = 0.05
Confidence(A→B) = P(B|A) = num(A∩B) / num(A) = 5/10 = 0.5
Lift(A→B) = P(B|A) / P(B) = 0.5 / 0.2 = 2.5
Since Lift > 1, milk and bread are positively correlated.
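The arithmetic above is easy to check in a few lines of Python, using the counts from the milk-and-bread example:

```python
# 100 orders in total; 10 contain milk (A), 20 contain bread (B), 5 contain both.
num_I, num_A, num_B, num_AB = 100, 10, 20, 5

support = num_AB / num_I             # P(A ∩ B)
confidence = num_AB / num_A          # P(B | A)
lift = confidence / (num_B / num_I)  # P(B | A) / P(B)

print(support, confidence, lift)  # 0.05 0.5 2.5
```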

2. The Apriori Algorithm

2.1 Basic Concepts

  • Item and itemset: let itemset = {item_1, item_2, …, item_m} be the set of all items, where each item_k (k = 1, 2, …, m) is called an item. A set of items is called an itemset, and an itemset containing k items is a k-itemset.
  • Transaction and transaction set: a transaction T is an itemset, a subset of itemset, associated with a unique identifier Tid. The transactions together form the transaction set D, the transaction database from which association rules are mined.
  • Association rule: an association rule is an implication of the form A ⇒ B, where A and B are non-empty subsets of itemset and A ∩ B = ∅.
  • Frequent itemset: an itemset I is frequent if its relative support meets a predefined minimum support threshold (equivalently, its support count exceeds the corresponding minimum count).
  • Strong association rule: a rule that satisfies both the minimum support and the minimum confidence thresholds; these are the rules we want to mine.
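These definitions translate directly into Python: transactions and both sides of a rule are just itemsets, which frozenset models well because it is hashable and so can be stored in sets or used as a dictionary key. A minimal sketch (the item names are illustrative):

```python
# A transaction is an itemset.
transaction = frozenset(['l1', 'l2', 'l5'])

# A rule A => B: both sides must be non-empty and disjoint.
A, B = frozenset(['l1']), frozenset(['l2'])
assert A and B and A.isdisjoint(B)

# This transaction supports the rule: it contains both A and B.
print((A | B).issubset(transaction))  # True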

2.2 Mining Steps

In general, mining association rules is a two-step process. Take the following data set as an example:

[['l1', 'l2', 'l5'], 
['l2', 'l4'], 
['l2', 'l3'],
['l1', 'l2', 'l4'], 
['l1', 'l3'], 
['l2', 'l3'],
['l1', 'l3'], 
['l1', 'l2', 'l3', 'l5'], 
['l1', 'l2', 'l3']]
  1. Find all frequent itemsets
    Scan the data set and first find the frequent 1-itemsets: ['l1'], ['l2'], ['l3'], ['l4'], ['l5'].
    Then generate the candidate 2-itemsets and keep the frequent ones: ['l1','l2'], ['l1','l3'], ['l1','l4'], ['l1','l5'], ['l2','l3'], ['l2','l4'], ['l2','l5'], …, ['l3','l4'].
    Then the frequent 3-itemsets: ['l1','l2','l5'], ['l1','l2','l4'], …
  2. Generate strong association rules from the frequent itemsets
    Compare each candidate rule's support and confidence against the thresholds, and keep the rules that pass both as strong association rules.
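For 1-itemsets, step 1 can be sketched with the standard library alone. This toy version uses collections.Counter on the data set above with a 0.2 minimum support; it is a simplified sketch of the first scan, not the full Apriori loop:

```python
from collections import Counter

data_set = [['l1', 'l2', 'l5'], ['l2', 'l4'], ['l2', 'l3'],
            ['l1', 'l2', 'l4'], ['l1', 'l3'], ['l2', 'l3'],
            ['l1', 'l3'], ['l1', 'l2', 'l3', 'l5'], ['l1', 'l2', 'l3']]
min_support = 0.2

# Count how many transactions contain each item (set() deduplicates within one).
counts = Counter(item for t in data_set for item in set(t))
n = len(data_set)
frequent_1 = sorted(item for item, c in counts.items() if c / n >= min_support)
print(frequent_1)  # ['l1', 'l2', 'l3', 'l4', 'l5']
```

Even 'l4' and 'l5', which appear in only 2 of the 9 transactions, survive here because 2/9 ≈ 0.22 ≥ 0.2.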

2.3 Code Implementation

"""
# Python 2.7
# Filename: apriori.py
# Author: llhthinker
# Email: hangliu56[AT]gmail[DOT]com
# Blog: http://www.cnblogs.com/llhthinker/p/6719779.html
# Date: 2017-04-16
"""


def load_data_set():
    """
    Load a sample data set (from Data Mining: Concepts and Techniques, 3rd Edition)
    Returns: 
        A data set: A list of transactions. Each transaction contains several items.
    """
    data_set = [['l1', 'l2', 'l5'], ['l2', 'l4'], ['l2', 'l3'],
            ['l1', 'l2', 'l4'], ['l1', 'l3'], ['l2', 'l3'],
            ['l1', 'l3'], ['l1', 'l2', 'l3', 'l5'], ['l1', 'l2', 'l3']]
    return data_set


def create_C1(data_set):
    """
    Create frequent candidate 1-itemset C1 by scanning the data set.
    Args:
        data_set: A list of transactions. Each transaction contains several items.
    Returns:
        C1: A set which contains all frequent candidate 1-itemsets
    """
    C1 = set()
    for t in data_set:
        for item in t:
            item_set = frozenset([item])
            C1.add(item_set)
    return C1


def is_apriori(Ck_item, Lksub1):
    """
    Judge whether a frequent candidate k-itemset satisfies the Apriori property.
    Args:
        Ck_item: a frequent candidate k-itemset in Ck which contains all frequent
                 candidate k-itemsets.
        Lksub1: Lk-1, a set which contains all frequent candidate (k-1)-itemsets.
    Returns:
        True: satisfying Apriori property.
        False: Not satisfying Apriori property.
    """
    for item in Ck_item:
        sub_Ck = Ck_item - frozenset([item])
        if sub_Ck not in Lksub1:
            return False
    return True


def create_Ck(Lksub1, k):
    """
    Create Ck, a set which contains all frequent candidate k-itemsets,
    by joining Lk-1 with itself.
    Args:
        Lksub1: Lk-1, a set which contains all frequent candidate (k-1)-itemsets.
        k: the item number of a frequent itemset.
    Return:
        Ck: a set which contains all frequent candidate k-itemsets.
    """
    Ck = set()
    len_Lksub1 = len(Lksub1)
    list_Lksub1 = list(Lksub1)
    for i in range(len_Lksub1):
        for j in range(i + 1, len_Lksub1):  # i < j: each unordered pair once
            l1 = list(list_Lksub1[i])
            l2 = list(list_Lksub1[j])
            l1.sort()
            l2.sort()
            if l1[0:k-2] == l2[0:k-2]:
                Ck_item = list_Lksub1[i] | list_Lksub1[j]
                # pruning
                if is_apriori(Ck_item, Lksub1):
                    Ck.add(Ck_item)
    return Ck


def generate_Lk_by_Ck(data_set, Ck, min_support, support_data):
    """
    Generate Lk by filtering Ck against the minimum support.
    Args:
        data_set: A list of transactions. Each transaction contains several items.
        Ck: A set which contains all frequent candidate k-itemsets.
        min_support: The minimum support.
        support_data: A dictionary. The key is a frequent itemset and the value is its support.
    Returns:
        Lk: A set which contains all frequent k-itemsets.
    """
    Lk = set()
    item_count = {}
    for t in data_set:
        for item in Ck:
            if item.issubset(t):
                if item not in item_count:
                    item_count[item] = 1
                else:
                    item_count[item] += 1
    t_num = float(len(data_set))
    for item in item_count:
        if (item_count[item] / t_num) >= min_support:
            Lk.add(item)
            support_data[item] = item_count[item] / t_num
    return Lk


def generate_L(data_set, k, min_support):
    """
    Generate all frequent itemsets.
    Args:
        data_set: A list of transactions. Each transaction contains several items.
        k: Maximum number of items for all frequent itemsets.
        min_support: The minimum support.
    Returns:
        L: The list of Lk.
        support_data: A dictionary. The key is frequent itemset and the value is support.
    """
    support_data = {}
    C1 = create_C1(data_set)
    L1 = generate_Lk_by_Ck(data_set, C1, min_support, support_data)
    Lksub1 = L1.copy()
    L = []
    L.append(Lksub1)
    for i in range(2, k+1):
        Ci = create_Ck(Lksub1, i)
        Li = generate_Lk_by_Ck(data_set, Ci, min_support, support_data)
        Lksub1 = Li.copy()
        L.append(Lksub1)
    return L, support_data


def generate_big_rules(L, support_data, min_conf):
    """
    Generate big rules from frequent itemsets.
    Args:
        L: The list of Lk.
        support_data: A dictionary. The key is frequent itemset and the value is support.
        min_conf: Minimal confidence.
    Returns:
        big_rule_list: A list which contains all big rules. Each big rule is represented
                       as a 3-tuple.
    """
    big_rule_list = []
    sub_set_list = []
    for i in range(0, len(L)):
        for freq_set in L[i]:
            for sub_set in sub_set_list:
                if sub_set.issubset(freq_set):
                    conf = support_data[freq_set] / support_data[freq_set - sub_set]
                    big_rule = (freq_set - sub_set, sub_set, conf)
                    if conf >= min_conf and big_rule not in big_rule_list:
                        # print(freq_set - sub_set, "=>", sub_set, "conf:", conf)
                        big_rule_list.append(big_rule)
            sub_set_list.append(freq_set)
    return big_rule_list


if __name__ == "__main__":
    """
    Test
    """
    data_set = load_data_set()
    L, support_data = generate_L(data_set, k=3, min_support=0.2)
    big_rules_list = generate_big_rules(L, support_data, min_conf=0.7)
    for Lk in L:
        print "="*50
        print "frequent " + str(len(list(Lk)[0])) + "-itemsets\t\tsupport"
        print "="*50
        for freq_set in Lk:
            print freq_set, support_data[freq_set]
    print
    print "Big Rules"
    for item in big_rules_list:
        print item[0], "=>", item[1], "conf: ", item[2]

A screenshot of the running result:
[screenshot unavailable]

Reference: https://www.cnblogs.com/llhthinker/p/6719779.html

Reposted from: blog.csdn.net/ljzology/article/details/119863045