I want to add a column to a DataFrame with some arbitrary value (the same for each row). I get an error when I use withColumn as follows:
    dt.withColumn('new_column', 10).head(5)

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-50-a6d0257ca2be> in <module>()
          1 dt = (messages
          2     .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt")))
    ----> 3 dt.withColumn('new_column', 10).head(5)

    /Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col)
       1166         [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)]
       1167         """
    -> 1168         return self.select('*', col.alias(colName))
       1169
       1170     @ignore_unicode_prefix

    AttributeError: 'int' object has no attribute 'alias'
It seems I can trick the function into working the way I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case):
    dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5)

    [Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10),
     Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10),
     Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10),
     Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10),
     Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)]
This is supremely hacky, right? I assume there is a more legitimate way to do this?
Spark 2.2+

Spark 2.2 introduces typedLit to support Seq, Map, and Tuples (SPARK-19254), and the following calls should be supported (Scala):
    import org.apache.spark.sql.functions.typedLit

    df.withColumn("some_array", typedLit(Seq(1, 2, 3)))
    df.withColumn("some_struct", typedLit(("foo", 1, 0.3)))
    df.withColumn("some_map", typedLit(Map("key1" -> 1, "key2" -> 2)))
Spark 1.3+ (lit), 1.4+ (array, struct), 2.0+ (map):
The second argument for DataFrame.withColumn should be a Column, so you have to use a literal:
    from pyspark.sql.functions import lit

    df.withColumn('new_column', lit(10))
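Applied to the DataFrame from the question, this replaces the arithmetic trick directly and produces the same rows (a minimal sketch, assuming dt is defined as above):

    # lit(10) wraps the constant in a Column, so withColumn accepts it
    dt.withColumn('new_column', lit(10)).head(5)
    # [Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10), ...]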
If you need complex columns you can build these using blocks like array:
    from pyspark.sql.functions import array, create_map, struct

    df.withColumn("some_array", array(lit(1), lit(2), lit(3)))
    df.withColumn("some_struct", struct(lit("foo"), lit(1), lit(.3)))
    df.withColumn("some_map", create_map(lit("key1"), lit(1), lit("key2"), lit(2)))
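Once built, these constant complex columns behave like any other column. A minimal sketch (assuming some DataFrame df exists) showing that you can index the array or look up a map key afterwards:

    from pyspark.sql.functions import array, col, create_map, lit

    # Illustration only: query the constant columns like regular ones,
    # by array index or by map key.
    df.withColumn("some_array", array(lit(1), lit(2), lit(3))) \
      .select(col("some_array")[0])        # first element -> 1
    df.withColumn("some_map", create_map(lit("key1"), lit(1))) \
      .select(col("some_map")["key1"])     # value for "key1" -> 1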
Exactly the same methods can be used in Scala.
    import org.apache.spark.sql.functions.{array, lit, map, struct}

    df.withColumn("new_column", lit(10))
    df.withColumn("map", map(lit("key1"), lit(1), lit("key2"), lit(2)))
It is also possible, although slower, to use a UDF.
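For completeness, a minimal sketch of that slower alternative (the name add_ten is hypothetical): a zero-argument UDF that always returns the constant.

    from pyspark.sql.functions import udf
    from pyspark.sql.types import IntegerType

    # Hypothetical example: a zero-argument UDF returning a constant.
    # Every row is routed through Python, so prefer lit(10) in practice.
    add_ten = udf(lambda: 10, IntegerType())
    df.withColumn('new_column', add_ten())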