In automated planning, learning and exploiting additional knowledge within a domain model, in order to improve the performance of domain-independent planners, has attracted much research. Reformulation techniques such as those based on macro-operators or entanglements are particularly promising because they are, to some extent, independent of both the domain model and the planning engine. Despite the significant amount of work on designing techniques for extracting such additional knowledge, no methodological analysis has been carried out to better understand the learning process involved. In this paper, we study the learnability of entanglements in planning, in terms of how the learning process is influenced by the quantity and quality of the training data. Specifically, we investigate whether a small number of training planning problems is sufficient for learning a good-quality set of (compatible) entanglements. Quality of the training data refers to situations where (suboptimal) training plans contain 'flaws' (e.g. unnecessary actions); we therefore investigate how the current entanglement learning approach handles such flaws. Finally, we investigate whether training plans generated by different planners lead to different outcomes of the learning process.