Apache Spark SQL Analyzer Resolves Order-by Column

The Apache Spark SQL component has several sub-components, including Analyzer, which plays an important role in making sure that the logical plan is fully resolved at the end of the analysis phase. Analyzer takes a parsed logical plan as input and makes sure all of the table references, attribute/column references, and function references are resolved by looking up metadata in the catalogs. It works by applying a set of rules to the logical plan, transforming it at each stage to resolve specific portions of the plan.
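For example, the plan before and after analysis can be inspected directly from a Spark shell. The snippet below is only a small illustration; it assumes an existing SparkSession named spark and an already-registered table named tab.

// Assumes a SparkSession `spark` and a table or view named `tab` already exist.
val qe = spark.sql("select a from tab").queryExecution
println(qe.logical)   // the parsed (unresolved) logical plan handed to Analyzer
println(qe.analyzed)  // the same plan after Analyzer has resolved tables, columns, and functions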

We’ll examine the workings of Analyzer by taking an example defect and describing how we addressed the problem.

Example Query:

select a as a1, c as a2, count(a) as a3 from tab group by a, c order by a1, c
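For reference, a minimal way to reproduce the example in a Spark shell might look like the sketch below. The table name tab and the five integer columns a through e are assumptions chosen to match the LocalRelation in the plans that follow; on Spark versions without the fix described later, the query fails during analysis instead of printing its plans.

// Hypothetical setup matching LocalRelation [a, b, c, d, e] in the plans below.
import spark.implicits._
Seq((1, 2, 3, 4, 5)).toDF("a", "b", "c", "d", "e").createOrReplaceTempView("tab")
spark.sql("select a as a1, c as a2, count(a) as a3 from tab group by a, c order by a1, c")
  .explain(true)   // prints the parsed, analyzed, optimized, and physical plans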

Problem Description:

In this case, Analyzer was unable to resolve the attributes referenced in the ORDER BY clause. To see why, let’s look at the underlying parsed logical plan.

Parsed Logical plan:

'Sort ['a1 ASC,'c ASC], true
+- 'Aggregate ['a,'c], ['a AS a1#17,'c AS a2#18, (count('a),mode=Complete,isDistinct=false) AS a3#19]
   +- LocalRelation [a#1,b#2,c#3,d#4,e#5]

In this plan, only the LocalRelation is resolved. None of the other operators are resolved, because the attributes they refer to are not yet resolved. Note that the Sort operator sits above the Aggregate operator, and the attributes referenced by the Sort are resolved from the output of its child, the Aggregate. In the plan above, the Aggregate’s output is a1#17, a2#18, and a3#19, which is missing the attribute c#3 referenced by the Sort operator. That causes the analysis to fail, which in turn makes the query fail.

In order to properly resolve the Sort operator, we need to make sure that (a simplified sketch follows the list):

  • 'a1 in the Sort is resolved from its immediate child (the Aggregate)
  • 'c in the Sort is resolved from its grandchild (the LocalRelation)
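To illustrate the idea, here is a much-simplified, hypothetical rule written against Catalyst’s logical operators. It is not the actual ResolveAggregateFunctions code; it only shows the shape of the transformation: ordering columns missing from the Aggregate’s output are pulled up from the Aggregate’s child, and a Project on top hides them again so the query’s original output is preserved.

import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan, Project, Sort}
import org.apache.spark.sql.catalyst.rules.Rule

// Simplified, illustrative sketch only -- NOT the real ResolveAggregateFunctions rule.
object ResolveSortOverAggregateSketch extends Rule[LogicalPlan] {
  def apply(plan: LogicalPlan): LogicalPlan = plan transformUp {
    case Sort(order, global, agg: Aggregate) =>
      // Ordering columns that the Aggregate's output does not expose by name ...
      val missing = order.flatMap(_.references)
        .filterNot(attr => agg.output.exists(_.name == attr.name))
        // ... but that the Aggregate's child (the grandchild of the Sort) can provide.
        .flatMap(attr => agg.child.output.find(_.name == attr.name))
        .distinct
      if (missing.isEmpty) {
        Sort(order, global, agg)
      } else {
        // Widen the Aggregate so the Sort can see the missing columns, then add a
        // Project on top to restore the query's original output.
        val widened = agg.copy(aggregateExpressions = agg.aggregateExpressions ++ missing)
        Project(agg.output, Sort(order, global, widened))
      }
  }
}

The real rule is considerably more involved; among other things, it also handles aggregate functions appearing in the ORDER BY clause and ordering columns that already match an existing aggregate alias (as 'c matches a2 here). The resulting plan shape, however, is the same: a Sort that can see everything it needs, wrapped in a Project that preserves the original output.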

In Spark Analyzer, the ResolveAggregateFunctions rule was modified to resolve the Sort operator properly, and after the fix the query produces the following analyzed logical plan. Note the Project operator added at the top, which restores the query’s original output of a1, a2, and a3 after the Sort.

Project [a1#14,a2#15,a3#16L]
+- Sort [a1#14 ASC,a2#15 ASC], true
   +- Aggregate [a#1,c#3], [a#1 AS a1#14,c#3 AS a2#15, (count(a#1),mode=Complete,isDistinct=false) AS a3#16L]
      +- LocalRelation [a#1,b#2,c#3,d#4,e#5]


Hopefully this blog gives some brief insight into the workings of Analyzer. We’ll post a more detailed description of Analyzer in the future. In general, handling Analyzer issues requires a deep understanding of Spark logical plans.

About the Author:

Dilip Biswal is a senior software engineer at the Spark Technology Center at IBM. He is an active Apache Spark contributor and works in the open source community. He is experienced in relational databases, distributed computing, and big data analytics, and has worked extensively on SQL engines such as Informix, Derby, and Big SQL.
